Unit 2 CV
EDGE DETECTION
Additive noise means the corrupted image is modeled as the original image
plus an independent noise term, rather than noise that depends on the
image content. Stationary noise means that the noise characteristics, such
as its mean and variance, remain constant throughout the image.
Gaussian noise follows a Gaussian distribution, also known as a normal
distribution, which is a continuous probability distribution characterized by
its mean and standard deviation.
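As an illustration, additive stationary Gaussian noise can be simulated with NumPy. The image size, gray level, and `sigma` below are arbitrary choices, not values from the text:

```python
import numpy as np

# Additive, stationary Gaussian noise: same mean and variance at every pixel.
rng = np.random.default_rng(0)
image = np.full((256, 256), 128.0)       # flat gray test image (illustrative)
sigma = 10.0                             # noise standard deviation (illustrative)
noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
noisy = np.clip(image + noise, 0, 255)   # noise is added to the original image
```

Because the noise is zero-mean, the noisy image keeps the original mean intensity; only its variance increases.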
The main reason why finite differences respond strongly to noise is that
differentiation amplifies high-frequency content: noise fluctuates from
pixel to pixel, so even small random variations produce large differences
between neighboring values.
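This effect is easy to demonstrate with a minimal NumPy sketch (signal shape and noise level are illustrative): differencing a clean ramp gives a constant derivative, while differencing a slightly noisy copy gives a derivative whose variance is twice the noise variance.

```python
import numpy as np

rng = np.random.default_rng(1)
# A smooth 1-D intensity ramp: its true derivative is a constant 1.0 everywhere.
signal = np.arange(100, dtype=float)
noise = rng.normal(0.0, 0.5, size=signal.shape)   # small zero-mean noise

clean_diff = np.diff(signal)          # finite difference of the clean signal
noisy_diff = np.diff(signal + noise)  # finite difference of the noisy signal

# Differencing two noisy samples adds their variances, so the derivative's
# standard deviation is about sqrt(2) * 0.5, far above the clean value of 0.
```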
Note that each derivative equals the value of the Gaussian at that point
multiplied by the negative of the corresponding spatial variable (x or y)
divided by the square of the scale parameter: Gx = -(x/σ^2)·G(x, y) and
Gy = -(y/σ^2)·G(x, y).
These derivatives represent the gradients of the Gaussian filter in the x and
y directions, respectively. They indicate the rate of change of the Gaussian
function at each point in the image.
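A minimal sketch of sampling these derivative kernels in NumPy (the kernel radius and `sigma` values are illustrative choices):

```python
import numpy as np

def gaussian_derivative_kernels(sigma, radius):
    """Sampled x- and y-derivatives of a 2-D Gaussian (illustrative sketch)."""
    ax = np.arange(-radius, radius + 1, dtype=float)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    gx = -(x / sigma**2) * g   # Gx = -(x / sigma^2) * G(x, y)
    gy = -(y / sigma**2) * g   # Gy = -(y / sigma^2) * G(x, y)
    return gx, gy

gx, gy = gaussian_derivative_kernels(sigma=1.0, radius=3)
```

Each kernel is antisymmetric about its center and sums to zero, so it responds to intensity changes but not to constant regions.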
4. Texture analysis: Smoothing filters can also aid in texture analysis tasks.
By reducing high-frequency noise and fine details, smoothing can make it
easier to analyze and characterize textures in an image. It can help in tasks
like texture classification,
segmentation, or pattern recognition, where the overall texture information
is more important than fine-scale details.
1. Gaussian filter: The Gaussian filter is widely used for smoothing because
it effectively reduces noise while preserving edges. It applies a weighted
average to the neighboring pixels, with the weights determined by a
Gaussian function. The size of the filter kernel (i.e., the window used for
averaging) and the standard deviation of the Gaussian determine the
amount of smoothing applied.
2. Box filter: The box filter, also known as the average filter, applies a
simple averaging operation to the neighboring pixels within a rectangular
window. It provides a uniform
smoothing effect across the image but may not preserve edges as well as
the Gaussian filter. The size of the filter kernel determines the extent of
smoothing.
3. Bilateral filter: The bilateral filter considers both spatial distance and
intensity similarity when smoothing an image. It preserves edges while
reducing noise by applying a weighted average to neighboring pixels based
on their spatial proximity and intensity differences. The filter parameters
include the spatial and intensity standard deviations, which control the size
of the neighborhood and the strength of the filtering, respectively.
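The box and Gaussian filters above can be sketched with plain NumPy. The kernel sizes, `sigma`, and the test image below are illustrative, the bilateral filter is omitted for brevity, and the naive convolution uses 'valid' boundaries:

```python
import numpy as np

def box_kernel(size):
    """Uniform averaging kernel (box filter); all weights equal."""
    return np.full((size, size), 1.0 / size**2)

def gaussian_kernel(size, sigma):
    """Kernel with weights from a 2-D Gaussian, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    x, y = np.meshgrid(ax, ax)
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(image, kernel):
    """Naive 'valid' convolution, enough to demonstrate the filters."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(2)
noisy = 100.0 + rng.normal(0.0, 20.0, size=(64, 64))
smoothed = convolve2d(noisy, gaussian_kernel(5, sigma=1.0))
```

Both kernels sum to 1, so flat regions keep their mean intensity; the weighted average reduces the pixel-to-pixel noise variance.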
Here are the steps to perform edge detection using the Laplacian operator:
3. Apply the Laplacian operator: Convolve the preprocessed image with the
Laplacian filter. The Laplacian filter is a discrete approximation of the
second derivative of the image intensity. It is a 3x3 or 5x5 kernel with
specific values that capture the second-order differences in the image.
```
 0  1  0
 1 -4  1
 0  1  0
```
Larger kernels, such as 5x5, or other variants of the Laplacian can also be
used.
4. Adjust the threshold: The output of the Laplacian operator will contain
both positive and negative values. To obtain a binary edge map, you need
to apply a threshold to determine which values are considered edges and
which are not. You can set a fixed threshold value or use adaptive methods
based on the local intensity gradient.
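The steps above can be sketched in NumPy, using the 3x3 kernel from step 3 and a fixed threshold as in step 4. The test image and threshold value are illustrative:

```python
import numpy as np

# 3x3 Laplacian kernel: second-order differences in x and y combined.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_edges(image, threshold):
    """Convolve with the Laplacian, then threshold |response| to a binary map."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * LAPLACIAN)
    # Fixed threshold on the magnitude (step 4); adaptive schemes also exist.
    return np.abs(out) > threshold

# A step edge: left half dark, right half bright.
img = np.zeros((10, 10))
img[:, 5:] = 100.0
edges = laplacian_edges(img, threshold=50.0)
```

On this step edge the Laplacian fires on both sides of the transition (a positive and a negative lobe), which is why the operator tends to produce thick edges, as noted below.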
It's important to note that the Laplacian operator can be sensitive to noise
and produce thick edges. To mitigate these issues, alternative edge
detection methods like the Canny edge detector, which combines multiple
steps including smoothing, gradient computation, non-maximum
suppression, and hysteresis thresholding, are commonly used in practice.
```
-1  0  1
-2  0  2
-1  0  1
```
```
-1 -2 -1
 0  0  0
 1  2  1
```
To detect edges, the Sobel operator convolves these kernels with the input
image. The result is two gradient images representing the approximate
derivatives of the image in the horizontal and vertical directions. These
gradient images are then combined to compute the edge strength and
orientation.
The edge strength is typically calculated as the magnitude of the gradient
vector at each pixel, given by:
```
|G| = sqrt(Gx^2 + Gy^2)
```
where Gx and Gy are the gradients computed in the horizontal and vertical
directions, respectively. The edge orientation can be computed as
θ = atan2(Gy, Gx).
Once the edge strength and orientation are obtained, further processing
steps can be applied, such as thresholding to identify significant edges and
non-maximum suppression to thin out the detected edges.
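A minimal NumPy sketch of this procedure, using the two Sobel kernels shown earlier. The test image is an illustrative vertical step edge, and the naive loop computes cross-correlation, which for these antisymmetric kernels differs from true convolution only in the sign of the gradient:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T   # the vertical kernel is the transpose of the horizontal one

def correlate2d(image, kernel):
    """Naive 'valid' cross-correlation (no kernel flip), enough for this demo."""
    h, w = image.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

img = np.zeros((10, 10))
img[:, 5:] = 1.0                              # vertical step edge
gx = correlate2d(img, SOBEL_X)                # horizontal gradient
gy = correlate2d(img, SOBEL_Y)                # vertical gradient
magnitude = np.sqrt(gx**2 + gy**2)            # edge strength |G|
orientation = np.arctan2(gy, gx)              # edge orientation in radians
```

For this vertical edge, gy is zero everywhere and the magnitude peaks in the two columns adjacent to the step, illustrating why thinning steps such as non-maximum suppression are applied afterwards.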
Gradient-based edge detectors are widely used due to their simplicity and
effectiveness in capturing image structures. However, they are sensitive to
noise, and their performance can be limited in the presence of low-contrast
edges or in complex scenes. Therefore, more advanced edge detection
techniques, such as the Canny edge detector, have been developed to
overcome these limitations by incorporating additional processing steps.
1.9 Technique: Orientation Representations and Corners.
Orientation Representations:
Corners:
Corners are distinctive features in an image that have high local variations
in intensity or color. They represent the intersection of edges and can be
used for tasks like image registration, tracking, and stereo vision. Corner
detection algorithms aim to identify these points in an image. Some popular
corner detection methods include:
1. Harris Corner Detector: The Harris corner detector is one of the earliest
and most widely used methods for corner detection. It calculates the corner
response function by analyzing the local intensity variations in different
directions. Corners are identified as points with large corner response
values.
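A simplified NumPy sketch of the Harris response, R = det(M) - k·trace(M)^2 over the structure tensor M of the image gradients. A uniform 3x3 window is used here instead of the more common Gaussian weighting, and `k` and the test image are illustrative:

```python
import numpy as np

def harris_response(image, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2 (sketch)."""
    # np.gradient returns derivatives along axis 0 (rows -> y), then axis 1 (cols -> x).
    iy, ix = np.gradient(image)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def window_sum(a):
        # Sum gradient products over a 3x3 window (uniform weights; border left 0).
        out = np.zeros_like(a)
        h, w = a.shape
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                out[i, j] = a[i - 1:i + 2, j - 1:j + 2].sum()
        return out

    sxx, syy, sxy = window_sum(ixx), window_sum(iyy), window_sum(ixy)
    return sxx * syy - sxy**2 - k * (sxx + syy)**2

# A bright square on a dark background: its corners should score highest,
# while points on straight edges get negative responses.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
```

Corners have large intensity variation in both directions (two large eigenvalues of M, so R is large and positive), while straight edges vary in only one direction (one large eigenvalue, so R is negative).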