Localized Feature Extraction

1. The document discusses two main approaches to localized feature extraction: estimating curvature via corners and edges, and more modern region or patch-based analysis.
2. It describes computing curvature as the rate of change in edge direction, with peaks in curvature indicating corners. Early techniques measured the difference in edge direction between connected pixels.
3. More advanced methods like SIFT aim to make feature extraction scale and rotation invariant, and partially illumination invariant, by applying the difference of Gaussians operator and analyzing location, scale, and orientation of features.


Localized feature extraction

(Ref: Feature Extraction and Image Processing for Computer Vision, Section 4.8, page 152)
Two main areas

i) The main target has been to estimate curvature: peaks of local curvature are corners,
and analysing an image by its corners is especially suited to images of artificial
objects.
ii) The second area includes more modern approaches that improve performance by
using region or patch-based analysis.

Detecting image curvature (corner extraction)

One important feature is curvature.

Definition of curvature
Curvature is the rate of change in edge direction.

This rate of change characterizes the points in a curve; points where the edge direction
changes rapidly are corners, whereas points where there is little change in edge direction
correspond to straight lines.
Curvature is normally defined by considering a parametric form of a planar curve. The parametric contour is

    v(t) = (x(t), y(t))

Changes in the position vector are given by the tangent vector function of the curve, v'(t). That is,

    v'(t) = (x'(t), y'(t))

where primes denote derivatives with respect to t. At any moment, the point moves with a speed |v'(t)| = sqrt(x'(t)^2 + y'(t)^2) in the direction φ(t) = arctan(y'(t)/x'(t)).

The curvature at a point v(t) describes the change in the direction φ(t) with respect to changes in arc length:

    κ(t) = dφ(t)/ds    (1)

where s is arc length along the edge itself. Here φ(t) is the angle of the tangent to the curve. That is, φ = θ ± 90°, where θ is the gradient direction.
Curvature is given with respect to arc length because a curve parameterized by arc length maintains a constant speed of motion. Thus, curvature represents changes in direction for constant displacements along the curve. By considering the chain rule,

    κ(t) = (dφ(t)/dt) (dt/ds)

The differential ds/dt defines the change in arc length with respect to the parameter t. If we again consider the curve as the motion of a point, this differential defines the instantaneous change in distance with respect to time, that is, the instantaneous speed. Thus,

    ds/dt = |v'(t)| = sqrt(x'(t)^2 + y'(t)^2)

and the curvature at a point v(t) is given by

    κ(t) = (dφ(t)/dt) / |v'(t)| = (x'(t) y''(t) − y'(t) x''(t)) / (x'(t)^2 + y'(t)^2)^(3/2)

This relationship is called the curvature function, and it is the standard measure of curvature for planar curves.
An important feature of curvature is that it relates the derivative of a tangential vector to a normal vector. We can express the tangential vector in polar form as

    v'(t) = |v'(t)| (cos(φ(t)), sin(φ(t)))

If the curve is parameterized by arc length, then |v'(t)| is constant. Thus, the derivative of the tangential vector is simply given by

    v''(t) = |v'(t)| (dφ(t)/dt) (−sin(φ(t)), cos(φ(t)))

Since dφ(t)/dt = κ(t) |v'(t)|, this can be written as

    v''(t) = κ(t) |v'(t)|^2 n(t),  where n(t) = (−sin(φ(t)), cos(φ(t)))

Therefore, for each point on the curve there is a pair of orthogonal vectors v'(t) and n(t) whose moduli are proportionally related by the curvature.
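As a quick numerical check of the curvature function above (an illustrative example, not from the book): a circle of radius r, parameterized as v(t) = (r cos t, r sin t), should have constant curvature 1/r at every point.

```python
import numpy as np

def curvature(xd, yd, xdd, ydd):
    """Curvature of a planar parametric curve from its first and second derivatives."""
    return (xd * ydd - yd * xdd) / (xd**2 + yd**2) ** 1.5

r = 5.0
t = np.linspace(0.0, 2.0 * np.pi, 100)
xd, yd = -r * np.sin(t), r * np.cos(t)      # first derivatives of (r cos t, r sin t)
xdd, ydd = -r * np.cos(t), -r * np.sin(t)   # second derivatives

kappa = curvature(xd, yd, xdd, ydd)
print(np.allclose(kappa, 1.0 / r))  # True: constant curvature 1/r
```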

Computing differences in edge direction


One way to compute curvature in digital images is to measure the angular change along the curve's path. This approach was considered in early corner detection techniques, and it merely computes the difference in edge direction between connected pixels forming a discrete curve. As such, curvature is simply given by

    κ(t) = φ(t+1) − φ(t−1)

Unfortunately, edge direction in a digital image is quantized into a small set of discrete values, so the computed curvature is very ragged. This can be smoothed out by considering the difference in mean angular direction of n pixels on the leading and trailing curve segments. That is,

    κ(t) = (1/n) Σ_{i=1..n} φ(t+i) − (1/n) Σ_{i=1..n} φ(t−i)

The average also gives some immunity to noise, and it can be replaced by a weighted average if Gaussian smoothing is required. The number of pixels considered, the value of n, defines a compromise between accuracy and noise sensitivity.

Implementation for curvature detection

First, edges and their magnitudes are determined. Curvature is only detected at edge points, so non-maximum suppression is applied. The function Cont returns a matrix containing the connected neighbour pixels of each edge; each edge pixel is connected to one or two neighbours. The matrix Next stores only the direction of consecutive pixels in an edge, with a value of −1 indicating that there is no connected neighbour. The function NextPixel obtains the position of a neighbouring pixel by taking the position of a pixel and the direction of its neighbour. The curvature is computed as the difference in gradient direction of connected neighbour pixels.
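The angular-difference measure above can be sketched as follows. This is a simplified illustration, not the book's actual implementation (which tracks connected edge pixels via Cont, Next and NextPixel): it assumes the edge directions have already been extracted into an ordered array phi, and the function name and synthetic curve are invented for the example.

```python
import numpy as np

def mean_angle_curvature(phi, n):
    """kappa(t) = mean of n leading directions minus mean of n trailing directions."""
    phi = np.asarray(phi, dtype=float)
    kappa = np.zeros_like(phi)
    for t in range(n, len(phi) - n):
        lead = phi[t + 1 : t + n + 1].mean()   # mean direction of the leading segment
        trail = phi[t - n : t].mean()          # mean direction of the trailing segment
        kappa[t] = lead - trail
    return kappa

# Synthetic discrete curve: a straight segment followed by a 90-degree corner,
# encoded as the edge direction at each successive pixel.
phi = [0.0] * 10 + [np.pi / 2] * 10    # edge direction jumps at the corner
kappa = mean_angle_curvature(phi, n=3)
print(np.argmax(np.abs(kappa)))         # curvature peaks at the corner
```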

Measuring curvature by changes in intensity (differentiation)

As an alternative way of measuring curvature, we can derive the curvature as a function of changes in image intensity. This derivation can be based on the measure of angular changes in the discrete image. We can represent the direction at each image point as the function φ'(x, y). Thus, according to the definition of curvature, we should compute the change in these direction values normal to the image edge (i.e., along the curves in an image). The curve at an edge can be locally approximated by the points given by the parametric line defined by

    x(t) = x + t cos(φ'(x, y)),   y(t) = y + t sin(φ'(x, y))

Thus, the curvature is given by the change in the function φ'(x, y) with respect to t. That is,

    κ_{φ'}(x, y) = dφ'(x, y)/dt = (∂φ'(x, y)/∂x) cos(φ'(x, y)) + (∂φ'(x, y)/∂y) sin(φ'(x, y))

Two further measures can be obtained by considering the forward and backward differentials along the normal. If we consider that curves are more than one pixel wide, differentiation along the edge will measure the difference between the gradient angles of the interior and exterior borders of a wide curve.
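A rough sketch of this intensity-based measure, assuming simple central-difference gradients (np.gradient); the book derives the result analytically from the image gradients, and a practical version would also handle angle wrap-around, which this sketch ignores.

```python
import numpy as np

def intensity_curvature(P):
    """Directional derivative of the edge-direction field phi'(x, y) along the edge."""
    My, Mx = np.gradient(P.astype(float))           # image gradients (rows = y, cols = x)
    phi = np.arctan2(My, Mx) + np.pi / 2            # edge direction: gradient angle + 90 deg
    dpdy, dpdx = np.gradient(phi)                   # spatial derivatives of the direction field
    return dpdx * np.cos(phi) + dpdy * np.sin(phi)  # change of direction along the curve

# Sanity check: an intensity ramp has straight edges, so curvature should vanish.
ramp = np.tile(np.arange(16.0), (16, 1))
print(np.allclose(intensity_curvature(ramp), 0.0))  # True
```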
Moravec and Harris detectors

A measure of curvature can be obtained by considering changes along a particular direction in the image P itself. This is the basic idea of Moravec's corner detection operator. This operator computes the average change in image intensity when a window is shifted in several directions. That is, for a pixel with coordinates (x, y) and a window size of 2w+1, we have

    E_{u,v}(x, y) = Σ_{i=−w..w} Σ_{j=−w..w} [P(x+i+u, y+j+v) − P(x+i, y+j)]^2

This equation approximates the autocorrelation function in the direction (u, v).
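The measure E_{u,v}(x, y) can be sketched directly (an illustration; the function name, shift set, and test image are invented here). The corner response at a pixel is conventionally the minimum of E over the tested shifts, which stays low in flat regions and along straight edges but is high at corners.

```python
import numpy as np

def moravec(P, x, y, w=1, shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Minimum over shifts (u, v) of the windowed sum of squared differences."""
    P = P.astype(float)
    E = []
    for u, v in shifts:
        diff = 0.0
        for i in range(-w, w + 1):
            for j in range(-w, w + 1):
                diff += (P[y + j + v, x + i + u] - P[y + j, x + i]) ** 2
        E.append(diff)
    return min(E)  # low in flat regions and along edges, high only at corners

# A white square on black: the corner pixel responds, a flat interior pixel does not.
img = np.zeros((12, 12))
img[4:9, 4:9] = 1.0
print(moravec(img, 4, 4) > moravec(img, 6, 6))  # True: corner beats interior
```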

Harris corner detector
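This section names the Harris detector without detail. As background (the standard Harris and Stephens formulation, not spelled out in this document): the discrete shifts of Moravec's operator are replaced by a smoothed structure tensor M of gradient products, scored per pixel as R = det(M) − k·trace(M)^2. The sketch below uses pure NumPy with a simple 3x3 box window where a Gaussian weighting is more usual; the parameter k = 0.04 is a conventional choice.

```python
import numpy as np

def harris_response(P, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 at every pixel."""
    P = P.astype(float)
    Iy, Ix = np.gradient(P)                 # image gradients (rows = y, cols = x)

    def box(A):
        # Unnormalized 3x3 box smoothing (a Gaussian window is the usual choice).
        S = np.zeros_like(A)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                S += np.roll(np.roll(A, dy, axis=0), dx, axis=1)
        return S

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy) ** 2

# Response peaks near a square's corners, not along its straight edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[4:7, 4:7].max() > R[4:7, 9:12].max())  # True: corner region beats edge region
```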

Region/Patch Analysis

Scale invariant feature transform

The scale invariant feature transform (SIFT) aims to resolve many of the practical
problems in low-level feature extraction and their use in matching images.
SIFT involves two stages: feature extraction and description. The description stage
concerns use of the low-level features in object matching, and this will be considered later. Low-
level feature extraction within the SIFT approach selects salient features in a manner invariant to
image scale (feature size) and rotation, and with partial invariance to change in illumination.
Further, the formulation reduces the probability of poor extraction due to occlusion, clutter and noise. It also shows how many of the techniques considered previously can be combined and capitalized on, to good effect.
First, the difference of Gaussians operator is applied to an image to identify features of
potential interest. The formulation aims to ensure that feature selection does not depend on
feature size (scale) or orientation. The features are then analysed to determine location and scale
before the orientation is determined by local gradient direction. Finally, the features are
transformed into a representation that can handle variation in illumination and local shape
distortion. Essentially, the operator uses local information to refine the information delivered by
standard operators.
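The difference of Gaussians step that opens this pipeline can be sketched as follows. This is a minimal illustration: the sigma schedule and helper names are invented for the example, not Lowe's exact SIFT parameters, and a real implementation builds a full multi-octave scale space.

```python
import numpy as np

def gaussian_blur(P, sigma):
    """Separable Gaussian blur via 1-D convolution along each axis."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, P.astype(float))
    return np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, out)

def dog_stack(P, sigmas=(1.0, 1.6, 2.56, 4.1)):
    """Differences between consecutive Gaussian-blurred copies of the image."""
    blurred = [gaussian_blur(P, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0        # a small bright blob
dogs = dog_stack(img)
print(len(dogs))                # 3 difference-of-Gaussian layers
```

Extrema of these layers across position and scale are the candidate features that SIFT then localizes, orients, and describes.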
