Image Analytics, Unit-3
Color Fundamentals
Colors are seen as variable combinations of the primary colors of light: red (R), green (G), and
blue (B). The primary colors can be mixed to produce the secondary colors: magenta (red+blue),
cyan (green+blue), and yellow (red+green). Mixing the three primaries, or a secondary with
its opposite primary color, produces white light.
RGB colors are used for color TV, monitors, and video cameras.
However, the primary colors of pigments are cyan (C), magenta (M), and yellow (Y), and the
secondary colors are red, green, and blue. A proper combination of the three pigment primaries,
or a secondary with its opposite primary, produces black.
CMY colors are used for color printing.
Color characteristics
The characteristics used to distinguish one color from another are:
Brightness: the achromatic notion of intensity (how light or dark the color appears).
Hue: the dominant color (dominant wavelength) as perceived by an observer.
Saturation: the relative purity of a color, i.e. the amount of white light mixed with a hue.
Color Models
The purpose of a color model is to facilitate the specification of colors in some standard way. A
color model is a specification of a coordinate system and a subspace within that system where
each color is represented by a single point. Color models most commonly used in image
processing are:
RGB (red, green, blue) model for color monitors and cameras
CMY and CMYK (cyan, magenta, yellow, black) models for color printing
HSI (hue, saturation, intensity) model
Colour images contain more information than grayscale images because each pixel has multiple
values (usually Red, Green, and Blue components). Processing such images requires specialized
techniques.
Each colour image can be treated as three separate grayscale images (the R, G, and B channels).
A common per-channel approach treats each channel independently and processes them separately.
Steps:
Apply the operation (for example, edge detection) separately on the R, G, and B channels, then
combine the three results into a single output image.
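The per-channel idea above can be sketched in Python with NumPy. This is a minimal illustration, not a full edge detector: it uses a simple finite-difference gradient magnitude on each channel and combines the three edge maps by a per-pixel maximum (one of several reasonable combination rules).

```python
import numpy as np

def edge_magnitude(channel):
    # finite-difference gradient magnitude for one grayscale channel
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def color_edges(rgb):
    # apply edge detection to the R, G, and B channels separately,
    # then combine the three edge maps by taking the per-pixel maximum
    return np.max([edge_magnitude(rgb[..., c]) for c in range(3)], axis=0)
```

In practice a Sobel or Canny operator would replace the plain gradient, but the channel-wise structure stays the same.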
COLOR TRANSFORMATION
Color transformation refers to the conversion of an image from one color space to another in
order to simplify or enhance processing, analysis, or display of color information.
Each color space has a specific use depending on the application (e.g., RGB for display, HSV for
segmentation, YCbCr for compression).
Level Sets:
Principle: Level set methods represent object boundaries implicitly as the zero level set of a
higher-dimensional function. These methods allow for topological changes in the evolving
contours.
Algorithm:
1. Initialization:
Start with an initial level set function, which represents the initial contour.
2. Propagation:
Evolve the level set function over time using partial differential equations (PDEs).
The PDEs drive the zero level set towards object boundaries while maintaining
smoothness.
3. Evolution:
The level set function evolves by solving the PDEs, causing the zero level set to
propagate and deform.
4. Convergence:
The process continues until the level set stabilizes, converging to the object boundaries.
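The propagation in steps 2-3 is usually written as the standard level set evolution equation (Osher-Sethian form):

```latex
\frac{\partial \phi}{\partial t} + F \,\lvert \nabla \phi \rvert = 0
```

where $\phi(x, y, t)$ is the level set function, $F$ is the speed function driving the front (often depending on image gradients and curvature), and the contour at time $t$ is the zero level set $C(t) = \{(x, y) : \phi(x, y, t) = 0\}$.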
Advantages:
Handles topological changes naturally, such as object splitting and merging.
Less sensitive to parameter initialization compared to snakes.
Disadvantages:
Computationally intensive, especially for 3D images.
Can be slower than snakes due to numerical computations involved.
Applications:
Medical Imaging: Segmenting organs, tumors, and structures from MRI, CT, or
ultrasound images.
Biomedical Analysis: Identifying cells, nuclei, or structures in microscopy images.
Object Tracking: Tracking moving objects in videos or sequences of images.
Computer Vision: Extracting objects from natural scenes or detecting boundaries for
recognition tasks.
Thresholding in Segmentation:
Thresholding is one of the simplest and most widely used techniques in image segmentation. It
involves dividing an image into foreground (object) and background regions based on pixel
intensity values.
Global Thresholding:
In global thresholding, a single threshold value is applied to the entire image.
Pixels with intensities greater than or equal to the threshold value are classified as
foreground (usually assigned a value of 1 or 255), while pixels with intensities below the
threshold are classified as background (usually assigned a value of 0).
The threshold value can be manually chosen based on domain knowledge or
automatically determined using techniques like Otsu's method, which maximizes the
separability of the foreground and background intensity distributions.
Otsu's method provides an automatic way to determine an optimal threshold value without the
need for manual intervention. It's particularly useful when there's a clear separation between
foreground and background intensities in the image. However, it may not perform well in cases
of uneven illumination or complex backgrounds.
Multiple Thresholding:
Multiple thresholding offers a flexible approach to segmenting images with complex intensity
distributions, allowing for the identification of multiple regions with different characteristics. By
dividing the image into multiple segments, it provides finer detail and granularity in
segmentation results compared to simple binary thresholding.
Variable Thresholding:
Variable thresholding, also known as adaptive thresholding, is a segmentation technique where
the threshold value is determined locally for each pixel in the image, based on its neighborhood.
This method is particularly useful for images with varying illumination or background intensity.
Here's how variable thresholding works:
1. Divide Image into Patches:
The input image is divided into smaller patches or windows of fixed size.
2. Calculate Threshold for Each Patch:
For each patch, compute a threshold value that is adapted to the local characteristics
of that patch.
Common methods for calculating the local threshold include:
Mean Thresholding: Compute the mean intensity of the pixel values within the patch and use it
as the threshold.
Gaussian Thresholding: Compute the weighted average of pixel intensities using a Gaussian-
weighted kernel.
Median Thresholding: Use the median intensity value of the pixel values within the patch as
the threshold.
3. Apply Thresholding:
Apply the computed local threshold to each pixel in the corresponding patch.
Pixels with intensities greater than or equal to the local threshold are classified as
foreground, while pixels below the threshold are classified as background.
Applications:
Document Processing: Adaptive thresholding is commonly used in OCR (Optical
Character Recognition) and document analysis to segment text from the background,
especially in scanned documents with uneven lighting.
Biomedical Imaging: In medical imaging, adaptive thresholding can help segment
structures like bones or organs from X-ray images with varying intensity.
Challenges:
Parameter Selection: Choosing the appropriate size of the patch and the method for
calculating the local threshold can be critical and may depend on the characteristics of
the image.
Computational Cost: Adaptive thresholding methods can be computationally
expensive, especially for large images or when using complex thresholding techniques.
Segmentation by Region Growing, Splitting and Merging:
Segmentation by region growing, splitting, and merging are techniques used to partition an
image into regions or segments based on pixel similarity. These methods iteratively group pixels
into regions that exhibit similar characteristics, such as intensity, color, or texture.
Here's an explanation of each technique:
1. Region Growing:
Principle: Region growing starts with a set of seed points or seeds and iteratively adds
neighboring pixels to the region if they satisfy certain similarity criteria.
Algorithm:
1. Select seed points either manually or automatically based on predefined
criteria.
2. Initialize an empty region for each seed point.
3. For each seed point, check neighboring pixels.
4. If the neighboring pixel satisfies the similarity criteria (e.g., intensity
difference, color similarity), add it to the corresponding region.
5. Repeat step 4 for all neighboring pixels until no more pixels can be added.
Termination Condition: Typically, region growing stops when no more pixels meet the
similarity criteria or when the region reaches a predefined size.
Advantages:
Simple to implement.
Works well for regions with uniform characteristics.
Disadvantages:
Sensitive to seed selection and initial conditions.
2. Region Splitting:
Principle: Region splitting starts with the entire image as one region and iteratively splits
regions into smaller ones based on certain criteria until the segmentation is satisfactory.
Algorithm:
1. Start with the entire image as one region.
2. Evaluate each region to determine if it should be split based on predefined
splitting criteria (e.g., variance, texture complexity).
3. If a region meets the splitting criteria, divide it into smaller regions.
4. Repeat steps 2-3 for each region until no more regions meet the splitting
criteria.
Termination Condition: Region splitting stops when no more regions meet the splitting criteria.
Advantages:
Can handle images with varying characteristics.
Less sensitive to initial conditions compared to region growing.
Disadvantages:
May lead to over-segmentation if splitting criteria are too relaxed.
Computationally expensive for large images.
3. Region Merging:
Principle: Region merging starts with each pixel as a separate region and iteratively merges
neighboring regions based on certain criteria until the segmentation is satisfactory.
Algorithm:
1. Start with each pixel as a separate region.
2. Evaluate pairs of neighboring regions to determine if they should be merged
based on predefined merging criteria (e.g., similarity in intensity, color,
texture).
3. If a pair of regions meets the merging criteria, merge them into a single region.
4. Repeat steps 2-3 until no more regions meet the merging criteria.
Termination Condition: Region merging stops when no more regions meet the merging criteria.
Advantages:
Can handle over-segmented images.
Less sensitive to initial conditions compared to region growing.
Applications:
Medical Imaging: Segmenting organs or tissues from medical images like MRI or CT
scans.
Satellite and Aerial Imaging: Identifying land cover types, urban areas, or vegetation
from satellite images.