
Computer Vision

CS-E4850, 5 study credits


Lecturer: Juho Kannala
Lecture 3: Image features
• Low-level features (edges, corners, texture patches) are
needed for many tasks:
– Image matching and registration
– Structure-from-motion and image-based 3D modeling
– Image segmentation
– Image retrieval

• Relevant reading:
– Szeliski’s book (1st edition Chapter 4 or 2nd edition Chapter 7)
– David Lowe’s article (2004)
http://www.cs.ubc.ca/~lowe/keypoints/

Acknowledgement: many slides from Svetlana Lazebnik, Steve Seitz, David Lowe,
Kristen Grauman, and others (detailed credits on individual slides)
Edge detection
• An edge is a place of rapid change in the
image intensity function

(figure: image, its intensity function along a horizontal scanline, and the
first derivative; edges correspond to extrema of the derivative)
Source: S. Lazebnik
Derivatives with convolution
For 2D function f(x,y), the partial derivative is:
$$\frac{\partial f(x,y)}{\partial x} = \lim_{\varepsilon \to 0} \frac{f(x+\varepsilon, y) - f(x, y)}{\varepsilon}$$
For discrete data, we can approximate using finite
differences:
$$\frac{\partial f(x,y)}{\partial x} \approx \frac{f(x+1, y) - f(x, y)}{1}$$
To implement the above as convolution, what would be
the associated filter?

Source: K. Grauman
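As a concrete answer, a minimal sketch (assuming NumPy/SciPy; the tiny test image is illustrative): the forward difference corresponds to cross-correlation with the kernel [-1, 1], i.e. convolution with the flipped kernel [1, -1].

```python
# Minimal sketch (assumes NumPy/SciPy): the forward difference
# f(x+1, y) - f(x, y) is cross-correlation with the kernel [-1, 1]
# (equivalently, convolution with the flipped kernel [1, -1]).
import numpy as np
from scipy.signal import correlate2d

image = np.array([[0., 0., 1., 1., 1.],
                  [0., 0., 1., 1., 1.]])

kernel = np.array([[-1., 1.]])               # horizontal difference filter
dx = correlate2d(image, kernel, mode='same')
print(dx)  # strong response at the vertical intensity step
```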
Partial derivatives of an image

$$\frac{\partial f(x,y)}{\partial x}: \begin{bmatrix} -1 & 1 \end{bmatrix} \qquad\qquad \frac{\partial f(x,y)}{\partial y}: \begin{bmatrix} -1 \\ 1 \end{bmatrix} \text{ or } \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
Which shows changes with respect to x?
Source: S. Lazebnik
Image gradient

The gradient of an image: $\nabla f = \left[ \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right]$

The gradient points in the direction of most rapid increase in intensity
• How does this direction relate to the direction of the edge?

The gradient direction is given by $\theta = \tan^{-1}\!\left( \frac{\partial f}{\partial y} \big/ \frac{\partial f}{\partial x} \right)$

The edge strength is given by the gradient magnitude $\|\nabla f\| = \sqrt{ \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 }$

Source: Steve Seitz
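Both quantities follow directly from the two partial derivatives; a small sketch, assuming NumPy/SciPy and using Gaussian-derivative filters as one common way to obtain the derivatives:

```python
# Sketch of the gradient computation (assumes NumPy/SciPy; sigma is an
# illustrative smoothing choice, not mandated by the slides).
import numpy as np
from scipy import ndimage

def image_gradient(img, sigma=1.0):
    ix = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # df/dx
    iy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # df/dy
    magnitude = np.hypot(ix, iy)      # edge strength
    direction = np.arctan2(iy, ix)    # direction of most rapid increase
    return magnitude, direction
```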


Effects of noise
Consider a single row or column of the image

Where is the edge?


Source: S. Seitz
Solution: smooth first

f ∗ g (smoothed signal)

d/dx (f ∗ g) (derivative of the smoothed signal)

• To find edges, look for peaks in d/dx (f ∗ g)
Source: S. Seitz
Derivative theorem of convolution
• Differentiation is convolution, and convolution is associative:
  d/dx (f ∗ g) = f ∗ (d/dx g)
• This saves us one operation: convolve the signal directly with the
  derivative of the Gaussian, d/dx g, to obtain f ∗ (d/dx g)
Source: S. Seitz
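A quick numerical check of the theorem, assuming NumPy/SciPy (the sigma and tolerance are illustrative):

```python
# Numerical check of d/dx (f * g) = f * (d/dx g) on a step signal.
import numpy as np
from scipy import ndimage

f = np.zeros(101)
f[50:] = 1.0                                           # step "edge"
a = ndimage.gaussian_filter1d(f, sigma=3.0, order=1)   # f * (d/dx g)
b = np.gradient(ndimage.gaussian_filter1d(f, sigma=3.0))  # d/dx (f * g)
print(np.allclose(a, b, atol=1e-2))   # True, up to discretization error
```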
Derivative of Gaussian filters

x-direction y-direction

Which one finds horizontal/vertical edges?


Source: S. Lazebnik
Derivative of Gaussian filters

x-direction y-direction

These filters are separable


Source: S. Lazebnik
Scale of Gaussian derivative filter

1 pixel 3 pixels 7 pixels

Smoothed derivative removes noise, but blurs the edge. It also finds edges at
different “scales”.
Source: D. Forsyth
Review: Smoothing vs. derivative filters
Smoothing filters
• Gaussian: remove “high-frequency” components;
“low-pass” filter
• Can the values of a smoothing filter be negative?
• What should the values sum to?
– One: constant regions are not affected by the filter

Derivative filters
• Derivatives of Gaussian
• Can the values of a derivative filter be negative?
• What should the values sum to?
– Zero: no response in constant regions

Source: S. Lazebnik
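A quick check of the two sum properties, assuming NumPy/SciPy (filtering an impulse recovers the kernel itself):

```python
# A Gaussian smoothing kernel sums to one; a Gaussian-derivative
# kernel sums to zero (no response in constant regions).
import numpy as np
from scipy import ndimage

impulse = np.zeros(51)
impulse[25] = 1.0                  # filtering an impulse recovers the kernel
g = ndimage.gaussian_filter1d(impulse, sigma=3.0)            # smoothing kernel
dg = ndimage.gaussian_filter1d(impulse, sigma=3.0, order=1)  # derivative kernel
print(np.isclose(g.sum(), 1.0), np.isclose(dg.sum(), 0.0))   # True True
```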
Keypoint extraction: Corners

9300 Harris Corners Pkwy, Charlotte, NC

Source: S. Lazebnik
Why extract keypoints?
• Motivation: panorama stitching
• We have two images – how do we combine them?

Source: S. Lazebnik
Why extract keypoints?
• Motivation: panorama stitching
• We have two images – how do we combine them?

Step 1: extract keypoints


Step 2: match keypoint features

Source: S. Lazebnik
Why extract keypoints?
• Motivation: panorama stitching
• We have two images – how do we combine them?

Step 1: extract keypoints


Step 2: match keypoint features
Step 3: align images
Source: S. Lazebnik
Characteristics of good keypoints

• Repeatability
  – The same keypoint can be found in several images despite geometric
    and photometric transformations
• Saliency
  – Each keypoint is distinctive
• Compactness and efficiency
  – Many fewer keypoints than image pixels
• Locality
  – A keypoint occupies a relatively small area of the image; robust to
    clutter and occlusion
Source: S. Lazebnik
Applications
Keypoints are used for:
• Image alignment
• 3D reconstruction
• Motion tracking
• Robot navigation
• Indexing and database retrieval
• Object recognition

Source: S. Lazebnik
Corner Detection: Basic Idea
• We should easily recognize the point by
looking through a small window
• Shifting a window in any direction should
give a large change in intensity

• “Flat” region: no change in all directions
• “Edge”: no change along the edge direction
• “Corner”: significant change in all directions
Source: S. Lazebnik
Corner Detection: Mathematics

Change in appearance of window W for the shift [u,v]:

$$E(u, v) = \sum_{(x,y)\in W} [I(x+u, y+v) - I(x, y)]^2$$

(figure: image I(x, y) and the error surface E(u, v), highlighting E(3, 2))

Source: S. Lazebnik
Corner Detection: Mathematics

Change in appearance of window W for the shift [u,v]:

$$E(u, v) = \sum_{(x,y)\in W} [I(x+u, y+v) - I(x, y)]^2$$

(figure: image I(x, y) and the error surface E(u, v), highlighting E(0, 0))

Source: S. Lazebnik
Corner Detection: Mathematics

Change in appearance of window W for the shift [u,v]:

$$E(u, v) = \sum_{(x,y)\in W} [I(x+u, y+v) - I(x, y)]^2$$

We want to find out how this function behaves for


small shifts
(figure: the error surface E(u, v))

Source: S. Lazebnik
Corner Detection: Mathematics
• First-order Taylor approximation for small
motions [u, v]:

$$I(x+u, y+v) \approx I(x, y) + I_x u + I_y v$$
• Let’s plug this into E(u,v):

$$E(u, v) = \sum_{(x,y)\in W} [I(x+u, y+v) - I(x, y)]^2$$
$$\approx \sum_{(x,y)\in W} [I(x, y) + I_x u + I_y v - I(x, y)]^2$$
$$= \sum_{(x,y)\in W} [I_x u + I_y v]^2 = \sum_{(x,y)\in W} \left( I_x^2 u^2 + 2 I_x I_y uv + I_y^2 v^2 \right)$$
Source: S. Lazebnik

Corner Detection: Mathematics


The quadratic approximation can be written as

$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

where M is a second moment matrix computed from image derivatives:

$$M = \begin{bmatrix} \sum_{x,y} I_x^2 & \sum_{x,y} I_x I_y \\ \sum_{x,y} I_x I_y & \sum_{x,y} I_y^2 \end{bmatrix}$$

(the sums are over all the pixels in the window W)


Interpreting the second moment matrix
• The surface E(u,v) is locally approximated by a
quadratic form. Let’s try to understand its shape.
• Specifically, in which directions E(u, v)
does it have the smallest/greatest
change?
$$E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \begin{bmatrix} \sum_{x,y} I_x^2 & \sum_{x,y} I_x I_y \\ \sum_{x,y} I_x I_y & \sum_{x,y} I_y^2 \end{bmatrix}$$
Interpreting the second moment matrix
Consider a horizontal “slice” of E(u, v):  $\begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}$
This is the equation of an ellipse.

Source: S. Lazebnik
Interpreting the second moment matrix
Consider a horizontal “slice” of E(u, v):  $\begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}$

This is the equation of an ellipse.

Diagonalization of M:  $M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R$

The axis lengths of the ellipse are determined by the eigenvalues and the
orientation is determined by R: the semi-axes are $(\lambda_{\max})^{-1/2}$
(direction of the fastest change) and $(\lambda_{\min})^{-1/2}$ (direction of
the slowest change).

Source: S. Lazebnik
Interpreting the second moment matrix

Consider the axis-aligned case (gradients are either horizontal or vertical):

$$M = \begin{bmatrix} \sum_{x,y} I_x^2 & \sum_{x,y} I_x I_y \\ \sum_{x,y} I_x I_y & \sum_{x,y} I_y^2 \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$$

If either a or b is close to 0, then this is not a corner, so look for
locations where both are large.

Source: S. Lazebnik
Visualization of second moment matrices

Source: S. Lazebnik
Visualization of second moment matrices

Source: S. Lazebnik
Interpreting the eigenvalues
Classification of image points using the eigenvalues of M:

• “Corner”: λ₁ and λ₂ are large, λ₁ ~ λ₂; E increases in all directions
• “Edge”: λ₁ >> λ₂ (or λ₂ >> λ₁)
• “Flat” region: λ₁ and λ₂ are small; E is almost constant in all directions
Source: S. Lazebnik
Corner response function
$$R = \det(M) - \alpha\, \operatorname{trace}(M)^2 = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2$$

α: constant (0.04 to 0.06)

• “Corner”: R > 0
• “Edge”: R < 0
• “Flat” region: |R| small

Source: S. Lazebnik

The Harris corner detector

1. Compute partial derivatives at each pixel


2. Compute second moment matrix M in a
Gaussian window around each pixel:

$$M = \begin{bmatrix} \sum_{x,y} w(x,y)\, I_x^2 & \sum_{x,y} w(x,y)\, I_x I_y \\ \sum_{x,y} w(x,y)\, I_x I_y & \sum_{x,y} w(x,y)\, I_y^2 \end{bmatrix}$$

C. Harris and M. Stephens. “A Combined Corner and Edge Detector.”
Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.
Source: S. Lazebnik

The Harris corner detector

1. Compute partial derivatives at each pixel


2. Compute second moment matrix M in a
Gaussian window around each pixel
3. Compute corner response function R

C. Harris and M. Stephens. “A Combined Corner and Edge Detector.”
Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.
Source: S. Lazebnik

Harris Detector: Steps


Source: S. Lazebnik

Harris Detector: Steps


Compute corner response R
Source: S. Lazebnik

The Harris corner detector

1. Compute partial derivatives at each pixel


2. Compute second moment matrix M in a
Gaussian window around each pixel
3. Compute corner response function R
4. Threshold R
5. Find local maxima of response function
(nonmaximum suppression)

C. Harris and M. Stephens. “A Combined Corner and Edge Detector.”
Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.
Source: S. Lazebnik
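The five steps map almost line-for-line onto code. A compact sketch, assuming NumPy/SciPy; the parameters sigma_d, sigma_i, alpha, threshold, and radius are typical choices, not values prescribed by the slides:

```python
# Sketch of the Harris detector steps listed above (assumes NumPy/SciPy).
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05,
                   threshold=1e-6, radius=2):
    # 1. Partial derivatives at each pixel (Gaussian derivatives).
    ix = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))
    iy = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))
    # 2. Entries of M, summed with a Gaussian window w(x, y).
    sxx = ndimage.gaussian_filter(ix * ix, sigma_i)
    sxy = ndimage.gaussian_filter(ix * iy, sigma_i)
    syy = ndimage.gaussian_filter(iy * iy, sigma_i)
    # 3. Corner response R = det(M) - alpha * trace(M)^2.
    r = sxx * syy - sxy ** 2 - alpha * (sxx + syy) ** 2
    # 4. Threshold R.  5. Non-maximum suppression in a local window.
    rmax = ndimage.maximum_filter(r, size=2 * radius + 1)
    return np.argwhere((r == rmax) & (r > threshold))  # (row, col) corners
```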

Harris Detector: Steps


Find points with large corner response: R > threshold
Source: S. Lazebnik

Harris Detector: Steps


Take only the points of local maxima of R
Source: S. Lazebnik

Harris Detector: Steps


Source: S. Lazebnik

Robustness of corner features


• What happens to corner features when the image undergoes
geometric or photometric transformations?
Source: S. Lazebnik

Affine intensity change

I → a I + b

• Only derivatives are used ⇒ invariance to intensity shift I → I + b
• Intensity scaling I → a I rescales the response R, so detections relative
  to a fixed threshold can change
  (figure: R along an image scanline before and after scaling, against a
  fixed threshold)

Partially invariant to affine intensity change


Source: S. Lazebnik

Image translation

• Derivatives and window function are shift-invariant

Corner location is covariant w.r.t. translation


Source: S. Lazebnik

Image rotation

Second moment ellipse rotates but its shape (i.e. its eigenvalues) remains
the same

Corner location is covariant w.r.t. rotation


Source: S. Lazebnik

Scaling

A corner at one scale, when the image is magnified, is no longer detected:
all points along the curve are classified as edges.

Corner location is not covariant to scaling!


Source: S. Lazebnik

SIFT keypoint detection

D. Lowe, Distinctive image features from scale-invariant keypoints,


IJCV 60 (2), pp. 91-110, 2004.
Source: S. Lazebnik

Keypoint detection with scale selection


• We want to extract keypoints with
characteristic scale that is covariant with the
image transformation
Source: S. Lazebnik

Basic idea
• Convolve the image with a “blob filter” at
multiple scales and look for extrema of filter
response in the resulting scale space

T. Lindeberg. Feature detection with automatic scale selection.


IJCV 30(2), pp 77-116, 1998.
Source: S. Lazebnik

Blob detection

(figure: image ∗ blob filter = response map with minima and maxima)

Find maxima and minima of the blob filter response in space and scale
Source: N. Snavely
Source: S. Lazebnik

Blob filter
Laplacian of Gaussian: Circularly symmetric
operator for blob detection in 2D

$$\nabla^2 g = \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2}$$
Source: S. Lazebnik

Recall: Edge detection

f (signal with an edge)

d/dx g (derivative of Gaussian)

f ∗ (d/dx g): edge = maximum of derivative

Source: S. Seitz
Source: S. Lazebnik

Edge detection, Take 2

f (signal with an edge)

d²/dx² g (second derivative of Gaussian, i.e. the Laplacian)

f ∗ (d²/dx² g): edge = zero crossing of second derivative

Source: S. Seitz
Source: S. Lazebnik

From edges to blobs


• Edge = ripple
• Blob = superposition of two ripples


Spatial selection: the magnitude of the Laplacian


response will achieve a maximum at the center of
the blob, provided the scale of the Laplacian is
“matched” to the scale of the blob
Source: S. Lazebnik

Scale selection
• We want to find the characteristic scale of the
blob by convolving it with Laplacians at several
scales and looking for the maximum response
• However, Laplacian response decays as scale
increases:

(figure: original signal, radius = 8, and Laplacian responses for
increasing σ)
Source: S. Lazebnik

Scale normalization
• The response of a derivative of Gaussian
filter to a perfect step edge decreases as σ
increases

(the peak response is $\frac{1}{\sigma\sqrt{2\pi}}$, which shrinks as σ grows)
Source: S. Lazebnik

Scale normalization
• The response of a derivative of Gaussian
filter to a perfect step edge decreases as σ
increases
• To keep response the same (scale-invariant),
must multiply Gaussian derivative by σ
• Laplacian is the second Gaussian derivative,
so it must be multiplied by σ2
Source: S. Lazebnik

Effect of scale normalization


(figure: original signal, unnormalized Laplacian response, and
scale-normalized Laplacian response, whose maximum picks out the blob)
Source: S. Lazebnik

Blob detection in 2D
• Scale-normalized Laplacian of Gaussian:

$$\nabla^2_{\text{norm}}\, g = \sigma^2 \left( \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2} \right)$$
Source: S. Lazebnik

Blob detection in 2D
• At what scale does the Laplacian achieve a maximum
response to a binary circle of radius r?

(figure: binary circle image and its Laplacian)
Source: S. Lazebnik

Blob detection in 2D
• At what scale does the Laplacian achieve a maximum
response to a binary circle of radius r?
• To get maximum response, the zeros of the Laplacian have to be aligned
with the circle
• The Laplacian is given by (up to scale):
$$(x^2 + y^2 - 2\sigma^2)\, e^{-(x^2 + y^2)/2\sigma^2}$$
• Its zeros lie on $x^2 + y^2 = 2\sigma^2$; aligning them with the circle
$x^2 + y^2 = r^2$ gives $2\sigma^2 = r^2$, so the maximum response occurs
at $\sigma = r/\sqrt{2}$

(figure: circle of radius r overlaid on the Laplacian profile)
Source: S. Lazebnik

Scale-space blob detector


1. Convolve image with scale-normalized
Laplacian at several scales
Scale-space blob detector: Example

Source: S. Lazebnik
Scale-space blob detector: Example

Source: S. Lazebnik
Scale-space blob detector
1. Convolve image with scale-normalized
Laplacian at several scales
2. Find maxima of squared Laplacian response
in scale-space

Source: S. Lazebnik
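A short sketch of these two steps, assuming NumPy/SciPy (the sigma list and threshold are illustrative):

```python
# Scale-space blob detector: scale-normalized Laplacian at several
# scales, then extrema of the squared response in (scale, row, col).
import numpy as np
from scipy import ndimage

def blob_detector(img, sigmas=(2, 4, 8, 16), threshold=1e-3):
    # 1. Scale-normalized Laplacian at several scales: sigma^2 * LoG.
    stack = np.stack([(s ** 2) * ndimage.gaussian_laplace(img, s)
                      for s in sigmas])
    # 2. Maxima of the squared response over space and scale
    #    (3x3x3 neighborhood in the scale-space volume).
    sq = stack ** 2
    peaks = (sq == ndimage.maximum_filter(sq, size=3)) & (sq > threshold)
    return [(r, c, sigmas[s]) for s, r, c in np.argwhere(peaks)]
```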
Scale-space blob detector: Example

Source: S. Lazebnik
Eliminating edge responses
• Laplacian has strong response along edge

Source: S. Lazebnik
Eliminating edge responses
• Laplacian has strong response along edge

• Solution: filter based on the Harris response function over
  neighborhoods containing the “blobs”
Source: S. Lazebnik
Efficient implementation
• Approximating the Laplacian with a
difference of Gaussians:

$$L = \sigma^2 \left( G_{xx}(x, y, \sigma) + G_{yy}(x, y, \sigma) \right) \quad \text{(Laplacian)}$$

$$DoG = G(x, y, k\sigma) - G(x, y, \sigma) \quad \text{(Difference of Gaussians)}$$

Source: S. Lazebnik
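A sketch of the approximation, assuming NumPy/SciPy; the ratio k between the two Gaussian widths is an illustrative choice:

```python
# Difference of Gaussians as a cheap stand-in for the scale-normalized
# Laplacian; per Lowe (2004), G(k*sigma) - G(sigma) ~ (k - 1) * sigma^2 * LoG.
from scipy import ndimage

def difference_of_gaussians(img, sigma, k=1.6):
    return (ndimage.gaussian_filter(img, k * sigma)
            - ndimage.gaussian_filter(img, sigma))
```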
Efficient implementation

David G. Lowe.
"Distinctive image features from scale-invariant keypoints.” IJCV 60
(2), pp. 91-110, 2004.
Source: S. Lazebnik
From feature detection to feature description
• Scaled and rotated versions of the same
neighborhood will give rise to blobs that are related by
the same transformation
• What to do if we want to compare the appearance of
these image regions?
• Normalization: transform these regions into same-size circles
• Problem: rotational ambiguity

Source: S. Lazebnik
Eliminating rotation ambiguity
• To assign a unique orientation to circular
image windows:
• Create histogram of local gradient directions in the patch
• Assign canonical orientation at peak of smoothed histogram

(histogram over gradient orientation, from 0 to 2π)

Source: S. Lazebnik
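A sketch of this orientation histogram, assuming NumPy; 36 bins matches Lowe's choice, while the histogram smoothing and peak interpolation are omitted:

```python
# Canonical-orientation assignment from a magnitude-weighted histogram
# of local gradient directions.
import numpy as np

def dominant_orientation(ix, iy, num_bins=36):
    """ix, iy: gradient components over the keypoint's patch."""
    angles = np.arctan2(iy, ix) % (2 * np.pi)       # directions in [0, 2*pi)
    weights = np.hypot(ix, iy)                      # weight by magnitude
    hist, edges = np.histogram(angles, bins=num_bins,
                               range=(0.0, 2 * np.pi), weights=weights)
    peak = np.argmax(hist)                          # peak of the histogram
    return 0.5 * (edges[peak] + edges[peak + 1])    # center of the peak bin
```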
SIFT features
• Detected features with characteristic scales
and orientations:

David G. Lowe.
"Distinctive image features from scale-invariant keypoints.” IJCV 60
(2), pp. 91-110, 2004. Source: S. Lazebnik
From feature detection to feature description

Detection is covariant:
features(transform(image)) = transform(features(image))
Description is invariant:
features(transform(image)) = features(image)
Source: S. Lazebnik
SIFT descriptors
• Inspiration: complex neurons in the primary
visual cortex

D. Lowe. Distinctive image features from scale-invariant keypoints.


IJCV 60 (2), pp. 91-110, 2004.
Source: S. Lazebnik
SIFT descriptor computation
• Use the normalized region about the keypoint
• Compute gradient magnitude and orientation at each point in the region
• Weight them by a Gaussian window overlaid on the circle
• Create an orientation histogram over the 4×4 subregions of the window
• 4×4 descriptors over a 16×16 sample array were used in practice;
4×4 histograms times 8 directions gives a vector of 128 values
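A heavily simplified sketch of the descriptor layout, assuming NumPy; the Gaussian weighting, trilinear interpolation, and clipping of the full method are omitted, and the helper name is hypothetical:

```python
# 16x16 sample array split into 4x4 subregions, 8 orientation bins each,
# giving 4*4*8 = 128 values.
import numpy as np

def sift_like_descriptor(ix, iy):
    """ix, iy: 16x16 gradients of the normalized, rotated region."""
    angles = np.arctan2(iy, ix) % (2 * np.pi)
    mags = np.hypot(ix, iy)
    desc = []
    for r in range(0, 16, 4):
        for c in range(0, 16, 4):                   # one 4x4 subregion
            h, _ = np.histogram(angles[r:r+4, c:c+4], bins=8,
                                range=(0.0, 2 * np.pi),
                                weights=mags[r:r+4, c:c+4])
            desc.extend(h)
    desc = np.asarray(desc)
    return desc / (np.linalg.norm(desc) + 1e-12)    # unit-normalized
```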
Summary of SIFT feature detection and description

1. Scale-space extrema detection


Search over multiple scales and image locations.

2. Keypoint localization
Fit a model to determine location and scale.
Select keypoints based on a measure of stability.
3. Orientation assignment
Compute best orientation(s) for each keypoint region.

4. Keypoint description
Use local image gradients at selected scale and rotation
to describe each keypoint region.
Source: D. Hoiem

Matching SIFT Descriptors

Simply match nearest neighbors in descriptor space (Euclidean distance),

or

keep only the matches for which the ratio of the distance to the nearest
descriptor over the distance to the second-nearest descriptor is small;
this yields more uniquely matching regions (more details in Lecture 5)

Lowe IJCV 2004
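Both strategies in a short sketch, assuming NumPy; the 0.8 ratio threshold is the value suggested in Lowe's paper:

```python
# Nearest-neighbor matching with Lowe's ratio test.
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """desc1: (N, 128), desc2: (M, 128); returns (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]              # nearest, 2nd nearest
        if dists[j1] < ratio * dists[j2]:           # distinctive match?
            matches.append((i, int(j1)))
    return matches
```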


Properties of SIFT
Extraordinarily robust detection and description technique
• Can handle changes in viewpoint
  – Up to about a 60-degree out-of-plane rotation
• Can handle significant changes in illumination
  – Sometimes even day vs. night
• Fast and efficient: can run in real time
• Lots of code available

Source: N. Snavely
A hard keypoint matching problem

NASA Mars Rover images

Source: S. Lazebnik
Answer below (look for tiny colored squares…)

NASA Mars Rover images


with SIFT feature matches
Figure by Noah Snavely
Source: S. Lazebnik
Thank you

Next lecture: Model estimation
