DIP unit 4

Unit IV covers various aspects of color image processing, including segmentation, enhancement, and edge detection. It explains techniques like color segmentation for isolating objects based on color, and discusses common color spaces such as RGB and HSV. The document also addresses challenges and applications of color processing in fields like medical imaging and object detection.

UNIT 4

Bhavana Tiwari
Unit IV
Color Image Processing, Color Models and Representation, Laws of Color Matching, Chromaticity Diagram, Color
Enhancement, Color Image Segmentation, Color Edge Detection
Color Image Processing
Color Models
DIGITAL IMAGE PROCESSING

COLOR IMAGE SEGMENTATION

INTRODUCTION

• Image segmentation divides an image into regions that are connected, are similar within themselves, and differ from adjacent regions.

• Image segmentation in Digital Image Processing (DIP) is the process of dividing an image into different parts or regions. Each part represents something meaningful, like objects, boundaries, or specific areas of interest.

• The goal is usually to find an individual object in an image.

• It helps identify and isolate objects in an image for further analysis.

• For example, think of it like cutting a picture into different pieces, where each piece shows a different object, like a person, tree, or car. This helps in analyzing or processing only specific parts of the image.

For example, in a photo of a cat, image segmentation can separate the cat from the background, making it easier to focus on the cat alone. The same applies to an image of a cow.
What is Color Segmentation?
• Focuses on separating different parts of the image based on color.
• Groups pixels with similar color values into regions.
• It's like dividing an image into color groups, making it easier to understand or manipulate.
Color segmentation is a technique used in image processing to separate different parts of an
image based on their color. It helps in identifying and isolating objects or areas in an image
that share similar colors.
• Why Use Color Segmentation?
• We use color segmentation to make it easier to work with images by grouping
together parts that have the same color. It helps us quickly find objects or areas
based on their color.
• Simplifies complex images by breaking them into color-based regions.
• Useful in object detection, medical imaging, and traffic systems.
• Provides a more efficient way to analyze an image.

For example, if you have to find all the red apples in a picture, color segmentation helps you focus only on the red parts. This makes tasks like detecting objects, sorting items, or analyzing pictures faster and simpler.
• COMMON COLOR SPACES:

• Here are some common color spaces used in color segmentation:


1. RGB (Red, Green, Blue): This is the most common color space, where colors are made by mixing different amounts of red, green, and blue light. It's easy to understand because it matches how screens display colors.
2. HSV (Hue, Saturation, Value): This color space is better for humans to work with because it separates color (hue), vividness (saturation), and brightness (value). For example, you can find all red objects by looking at hue, no matter how bright or dark they are.
3. LAB: This is a more advanced color space that tries to mimic how humans see color. It's used when we need very accurate color segmentation.
4. YUV: This separates color from brightness, making it useful for video processing or when lighting conditions change.
• How Does Color Segmentation Work?
1. Choose a Color Space:
1. The image is transformed into a specific color space. Common color spaces include:
1. RGB (Red, Green, Blue): Colors are made by mixing different intensities of red, green, and blue.
2. HSV (Hue, Saturation, Value): Hue represents color, saturation shows how vivid the color is, and value indicates brightness.
3. LAB: Mimics human vision and is more accurate in representing colors.
4. YUV: Separates brightness from color and is useful in video processing.
2. Set Color Ranges:
1. Define the range of colors you want to segment. For example, if you're looking for red objects, you
specify the red color range.
3. Apply Segmentation:
1. The program checks each pixel in the image to see if it falls within the specified color range. If it does, the
pixel is kept; otherwise, it is masked (hidden or turned into a different color like black).
4. Result:
1. After segmentation, the image only shows areas with the selected color, making it easier to work with
specific objects or regions.
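The steps above can be sketched in plain Python. In practice a library such as OpenCV (with `cv2.inRange`) would do this far faster; the image, colour range, and mask value below are illustrative assumptions:

```python
def in_range(pixel, lower, upper):
    """Step 2: check whether every channel lies within the chosen colour range."""
    return all(lo <= p <= hi for p, lo, hi in zip(pixel, lower, upper))

def segment(image, lower, upper, mask_value=(0, 0, 0)):
    """Steps 3-4: keep pixels inside the colour range, mask the rest with black."""
    return [[px if in_range(px, lower, upper) else mask_value for px in row]
            for row in image]

# Hypothetical 2x2 RGB image: two reddish pixels, one green, one dark grey.
img = [[(200, 20, 20), (10, 200, 10)],
       [(30, 30, 30), (220, 40, 35)]]

# Segment "red": high R, low G and B (range chosen for illustration).
red_only = segment(img, lower=(150, 0, 0), upper=(255, 80, 80))
```

After segmentation, `red_only` keeps only the two reddish pixels; the green and grey pixels are replaced by black, matching step 4.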
• Types of Color Segmentation
• Here are the types of color segmentation explained:
1. Manual Thresholding: You set a specific range of colors by hand (like choosing a shade of red), and the program picks all the parts of the image that match this range. It's like telling the program, "Find everything that looks this red."
2. Automatic Segmentation:
   • K-means Clustering: The program automatically groups pixels that have similar colors. It's like organizing crayons of similar colors into separate groups.
   • Region Growing: The program starts with a small patch of color and "grows" that patch by adding nearby pixels with similar colors.
3. Edge Detection-based Segmentation: The program finds the edges (boundaries) between different colors and uses that to separate parts of the image. It's like drawing lines between different-colored areas in a coloring book.
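As a toy illustration of the k-means idea (grouping similar values and updating cluster centres), here is a minimal one-dimensional sketch on scalar intensities. Real colour segmentation would cluster 3-D colour vectors, typically with a library such as scikit-learn; the values here are made up:

```python
def kmeans_1d(values, k, iters=20):
    """Toy k-means on scalar intensities: alternate assignment and update."""
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]  # spread initial centres over the range
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest centre.
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each centre to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious intensity groups: dark (~11) and bright (~201).
centers, clusters = kmeans_1d([10, 12, 11, 200, 205, 198], k=2)
```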
• APPLICATIONS OF COLOR SEGMENTATION:

• Object Detection: Helps find and separate objects in an image based on their color. For example, detecting ripe fruits (like red apples) from other fruits in a picture.
•Medical Imaging: Used to highlight important parts in medical images,
like spotting tumors or blood vessels by their color differences.
•Traffic Sign Recognition: Helps cars or traffic systems identify road
signs and traffic lights based on their colors (red, green, yellow).
•Face Detection: Can be used to segment skin tones in photos or videos
to help detect faces.
•Sorting Items: In factories, machines can use color segmentation to sort
items (like different-colored products) automatically.
• Challenges in Color Segmentation:
• Lighting Conditions: Affects how colors appear.
• Shadows and Reflections: Can mislead color detection.
• Similar Colors: Objects with similar colors can be wrongly grouped.

Tools for Color Segmentation:
• OpenCV: Python/C++ library for image processing.
• MATLAB: Popular for academic use.
• Scikit-Image: Easy-to-use Python library.
Color Edge Detection in Digital Image Processing

What is Color Edge Detection?
Edges in an image are areas where there is a sharp change in intensity or color. These changes usually correspond to boundaries of objects or regions in an image.

Unlike grayscale edge detection, which works on intensity values, color edge detection considers all three color channels (Red, Green, and Blue) to detect edges.
Principles of Color Edge Detection

1. Gradient Calculation
The gradient of an image is a vector that points in the direction of the steepest change in intensity (or color) and whose magnitude represents how rapid the change is.
∇I(x, y) = (∂I/∂x, ∂I/∂y)

2. Thresholding
Thresholding is a fundamental technique in Digital Image Processing (DIP) used to convert a grayscale image into a binary image (black and white) by segmenting pixels based on intensity values.
I(x, y) = 1 if I(x, y) > T, otherwise 0

3. Edge Linking
Edge linking in Digital Image Processing (DIP) refers to the process of connecting edge pixels in an image to form continuous edges or boundaries of objects. After edge detection, edge pixels often appear as disconnected points due to noise or variations in intensity, so edge linking ensures that these points form coherent, smooth edges.
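The thresholding rule can be applied directly; a minimal sketch, where the patch values and the threshold T = 128 are illustrative:

```python
def threshold(image, T):
    """Apply the rule: I(x, y) = 1 if I(x, y) > T, otherwise 0."""
    return [[1 if px > T else 0 for px in row] for row in image]

# A small grayscale patch, thresholded at T = 128.
binary = threshold([[12, 200], [130, 40]], T=128)
# binary is [[0, 1], [1, 0]]
```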
Types of Color Edge Detection

• Gradient-based: Sobel, Prewitt, Canny operators.
• Region-based: Segmentation and region-growing techniques.
• Transform-based: Fourier and wavelet transforms.
Gradient-based
Color Image Representation: Color images have multiple channels (e.g., RGB), with each pixel containing three values representing color information.
Calculate Gradients for Each Channel: Apply a gradient operator (e.g., Sobel, Prewitt, or Roberts) independently to each color channel (Red, Green, Blue). Compute the horizontal (Gx) and vertical (Gy) gradients for each channel.
Region-based
Region-based edge detection methods focus on detecting edges by analyzing differences in pixel properties (like intensity, texture, or color) within different regions of an image, rather than using gradient or intensity changes. These methods segment the image into regions and detect edges where regions meet or have significant differences.
Transform-based
• Transform-based edge detection in Digital Image Processing (DIP) refers to methods that transform an image into a different domain (usually frequency or parameter space) to detect edges more efficiently. The idea is to extract features such as edges using transforms that highlight edge characteristics in a way that traditional gradient-based or region-based methods may not.
Basic code for color edge detection
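A minimal pure-Python sketch of gradient-based colour edge detection: Sobel gradients are computed per channel and the channels are combined by taking the maximum magnitude. In practice OpenCV's `cv2.Sobel` would be applied per channel instead; the 3x3 test image is an illustrative assumption:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def conv3(chan, x, y, kernel):
    """3x3 convolution of one channel at pixel (x, y)."""
    return sum(kernel[j][i] * chan[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def color_edge_magnitude(image):
    """Sobel Gx, Gy per channel; combine channels by maximum magnitude."""
    h, w = len(image), len(image[0])
    channels = [[[px[c] for px in row] for row in image] for c in range(3)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):          # borders are left at zero
        for x in range(1, w - 1):
            out[y][x] = max(
                (conv3(ch, x, y, SOBEL_X) ** 2
                 + conv3(ch, x, y, SOBEL_Y) ** 2) ** 0.5
                for ch in channels)
    return out

# 3x3 image with a vertical edge in the red channel only.
img = [[(0, 0, 0), (0, 0, 0), (255, 0, 0)]] * 3
edges = color_edge_magnitude(img)
```

Only the centre pixel gets a gradient here (the border is skipped), and the response comes entirely from the red channel, which is exactly why colour edge detection can find edges a grayscale detector might miss.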
Challenges and Limitations

1. Noise Sensitivity: Noise can cause false edge detection.
2. Blurriness: Edges in blurry images are harder to detect.
3. Computational Cost: Some algorithms can be computationally expensive.
Advantages of Color Edge Detection

• Accuracy: Precise edge location, even in complex scenes.
• Robustness: Insensitive to noise and minor variations.
• Efficiency: Computationally efficient algorithms available.
Applications of Color Edge Detection

• Medical Imaging: Analyzing medical scans.
• Robotics: Object recognition for robots.
• Autonomous Driving: Lane detection and object recognition.
COLOUR ENHANCEMENT IN DIGITAL IMAGE PROCESSING

INTRODUCTION TO COLOUR ENHANCEMENT

Colour enhancement is a critical aspect of digital image processing. It involves adjusting the brightness, contrast, and saturation of images to improve their visual appeal and usefulness.
UNDERSTANDING COLOUR SPACES

Different colour spaces like RGB, CMYK, and HSV play a significant role in image processing. Each space has its own way of representing colours, which impacts how images are enhanced. Understanding these spaces is essential for effective colour manipulation.
HISTOGRAM EQUALIZATION

Histogram equalization is a powerful technique used to improve the contrast of images. By redistributing the intensity values, it enhances the visibility of features in an image. This method is especially effective for images with poor contrast.
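A minimal sketch of the redistribution, assuming 8-bit grayscale input (for colour images the same idea is usually applied to the value or luminance channel only); the low-contrast patch below is illustrative:

```python
def equalize(image, levels=256):
    """Remap intensities through the normalized cumulative histogram."""
    flat = [px for row in image for px in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # smallest non-zero CDF value
    # Stretch the CDF over the full intensity range.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min
           else 0 for c in cdf]
    return [[lut[px] for px in row] for row in image]

# Low-contrast patch (values 50-200) stretched across the full 0-255 range.
out = equalize([[50, 50], [100, 200]])
```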
SATURATION ADJUSTMENT TECHNIQUES

Adjusting saturation can make images more vibrant and eye-catching. Techniques such as Selective Colouring and Vibrance Adjustment allow for targeted enhancement without oversaturating the entire image, preserving natural skin tones and textures.
SHARPENING TECHNIQUES

Image sharpening enhances the edges and fine details, making an image appear clearer and more defined.

CONTRAST TECHNIQUES

The term contrast refers to the amount of color or grayscale differentiation that exists between various image features in both analog and digital images.
NOISE TECHNIQUES

In digital image processing, noise is a random variation in the brightness or color information of an image that reduces its quality.

BRIGHTNESS TECHNIQUES

In image processing, brightness is the overall lightness of an image, which can be adjusted by changing the value of each pixel.
Some Other Techniques

• Shadows
• Hue
• Temperature
• Warmth
• Grain
• Vignette
• Gain
• Gamma & Lift
Applications

1. Image Editing
2. Image Compression
3. Computer Vision
4. Medical Imaging
CHALLENGES

Computational complexity: Color transforms can be computationally expensive, especially for high-resolution images or real-time applications. This can limit their practical use, particularly in resource-constrained environments.

Robustness to noise and variation: Color transforms can be sensitive to variations in lighting, image noise, and other sources of variation. This can reduce their accuracy and reliability, especially for applications such as medical imaging or computer vision.

Subjectivity of color perception: Color perception can vary across individuals and cultures, which can make it difficult to establish consistent color standards or to achieve specific color effects.
CONCLUSION AND FUTURE DIRECTIONS

• In conclusion, colour enhancement is vital for improving the quality of digital images. As technology evolves, new techniques and tools will continue to emerge. Embracing these advancements will enable us to create even more vibrant visuals in our work.
Laws of Color Matching

The Laws of Color Matching are foundational principles in color science


that guide how we perceive colors and combine them to create new ones.
They are derived from experiments in human color perception and have
significant implications in digital color representation, especially in image
processing.
Tristimulus Theory (Color Matching Function):
The Tristimulus Theory states that any color can be represented as a combination of three primary colors.
In digital systems, colors are often described using the RGB (Red, Green, Blue) color space. These
primary colors form the basis of color matching, as each color in an image can be represented as a
weighted sum of these primaries. In terms of image processing, each pixel in an RGB image holds values
for red, green, and blue intensities, creating a wide range of colors when combined.

Additive and Subtractive Color Mixing:


Additive Color Mixing
is used in digital displays, where colors are created by adding different amounts of red, green, and blue light. When all
three colors are combined at their full intensity, they create white light. This principle is directly applied in image
processing for screens, where RGB values are added to display the intended color.
Subtractive Color Mixing
involves combining colors in a way that filters out (or subtracts) certain wavelengths. This is more relevant in printing
with CMY (Cyan, Magenta, Yellow) where colors are produced by subtracting red, green, and blue light. In digital image
processing, subtractive mixing isn’t as directly applied but is important for preparing digital images for print.
Grassmann's Laws: Grassmann’s laws describe how colors combine and are perceived,
and they form the mathematical foundation for color mixing:
•Scalar Law: If the intensity of a color is scaled, the color perception scales
proportionally. This is used in image processing when adjusting brightness levels.
•Additive Law: Colors can be combined by adding their respective values, as in RGB
color processing. This is a common method in digital blending techniques.
•Associative Law: Colors can be combined in any sequence to get the same resultant
color, which helps in algorithms where color combinations are manipulated iteratively.
•Transposability: This states that a mixture of colors is independent of the order in which
they are combined.
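Under the tristimulus model these laws reduce to simple vector arithmetic on (R, G, B) triples. A sketch, with illustrative colour values:

```python
def scale(color, k):
    """Scalar law: scaling a colour's intensity scales each component."""
    return tuple(k * c for c in color)

def mix(c1, c2):
    """Additive law: mixing lights adds tristimulus values component-wise."""
    return tuple(a + b for a, b in zip(c1, c2))

red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

yellow = mix(red, green)              # additive mixing of red and green light
half_red = scale(red, 0.5)            # scalar law: half-intensity red
white_a = mix(mix(red, green), blue)  # associativity / transposability:
white_b = mix(red, mix(green, blue))  # the order of mixing does not matter
```

Both mixing orders give the same white, which is exactly the associative/transposability behaviour that iterative blending algorithms rely on.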
Chromaticity Diagram

• The Chromaticity Diagram is a 2D representation of color, often based on the CIE 1931 color
space, which maps color perception in terms of human vision. It’s used in digital image
processing to describe and analyze colors in an image, especially for applications in color
correction, display calibration, and color gamut mapping.
CIE XYZ Color Space:
The chromaticity diagram is typically derived from the CIE XYZ color space, where X, Y, and Z
are tristimulus values representing how a color is perceived. This color space is device-independent
and is designed to mimic human color perception.
Chromaticity Coordinates (x, y):
The chromaticity diagram is a 2D plane that plots color in terms of chromaticity coordinates x and y, which are derived from the tristimulus values as follows:
x = X / (X + Y + Z), y = Y / (X + Y + Z)
This mapping excludes the brightness component (Y), focusing solely on hue and saturation. Digital image processing systems often use chromaticity diagrams to define the gamut of display devices or color transformations between color spaces.
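The projection from XYZ to (x, y) can be computed directly; the tristimulus values below are the standard published ones for the CIE D65 illuminant, normalized so that Y = 100:

```python
def chromaticity(X, Y, Z):
    """Project tristimulus values onto the x-y plane: brightness drops out."""
    s = X + Y + Z
    return X / s, Y / s

# CIE standard illuminant D65 (normalized so Y = 100).
x, y = chromaticity(95.047, 100.0, 108.883)
# x ~ 0.3127, y ~ 0.3290: the familiar D65 white point on the diagram.
```

Note that scaling X, Y, and Z by any common factor leaves (x, y) unchanged, which is why the diagram captures hue and saturation but not brightness.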

Visible Spectrum and Color Gamut:


The horseshoe-shaped area on the chromaticity diagram represents the spectrum of
visible colors. Any color within this boundary is visible to the human eye. In digital
image processing, the RGB color gamut is usually represented as a triangle within this
area, indicating the range of colors a device can display. For instance, sRGB, Adobe
RGB, and DCI-P3 gamuts represent different triangles on this diagram, with larger
triangles covering more colors. Image processing algorithms frequently utilize the
chromaticity diagram to convert colors between different color gamuts, essential for
ensuring color accuracy across devices.
Applications in Image Processing:
1. Color Balancing: Chromaticity values help adjust the color balance of images by altering chromaticity coordinates, which can neutralize color casts and enhance the natural appearance of an image.
2. Color Gamut Mapping: When images are viewed on different devices, the chromaticity diagram helps remap colors to fit within the display's color gamut, avoiding colors outside the gamut.
3. Color Correction and Calibration: Color calibration uses chromaticity diagrams to adjust color output to match reference standards (e.g., sRGB), which is essential for accurate display and printing.
