Module 2 IVP

Image enhancement is necessary to address issues such as poor contrast, noise, and distortion in images. Various methods exist for enhancement, including spatial domain methods, frequency domain methods, and combination methods, each employing different techniques to manipulate pixel values. Key techniques discussed include intensity transformations, spatial filtering, and sharpening filters, which aim to improve image quality by enhancing details and reducing noise.

Module 3

Image Enhancement
Why we need image enhancement?
⚫ Images may suffer from the following degradations:
⚫ Poor contrast due to poor illumination or the finite sensitivity of the imaging device
⚫ Electronic sensor noise or atmospheric disturbances leading to broadband noise
⚫ Aliasing effects due to inadequate sampling
⚫ Finite aperture effects or motion leading to spatial distortion

Image Enhancement Methods
⚫ Spatial Domain Methods (Image Plane)
Techniques are based on direct manipulation of pixels in an image.

⚫ Frequency Domain Methods


Techniques are based on modifying the Fourier transform of the image.

⚫ Combination Methods
Some enhancement techniques are based on various combinations of methods from the first two categories.
Image Enhancement in the Spatial Domain

Image Enhancement:
⚫ To “improve” the usefulness of an image by applying some transformation to the image.
⚫ To make it “better” looking, e.g., by increasing the intensity or contrast.
⚫ A mathematical representation of spatial domain enhancement:
⚫ g(x, y) = T[f(x, y)]

⚫ where f(x, y): the input image


⚫ g(x, y): the processed image
⚫ T: an operator on f, defined over some neighborhood of (x, y)
Spatial filtering

⚫ Point processing
⚫ Neighborhood processing
Some Basic Intensity (Gray-level) Transformation Functions (point processing functions)

⚫ These are the simplest of all image enhancement techniques.


⚫ The value of pixels, before and after processing, will be denoted by r
and s, respectively. These values are related by the expression of the
form:
s = T (r)
where T is a transformation that maps a pixel value r into a pixel value
s.
Some Basic Intensity (Gray-level) Transformation Functions
continued..
⚫ The three basic types of functions used frequently for image
enhancement:
⚫ Linear Functions:
⚫ Negative Transformation
⚫ Identity Transformation

⚫ Logarithmic Functions:
⚫ Log Transformation
⚫ Inverse-log Transformation
⚫ Power-Law Functions:
⚫ nth power transformation
⚫ nth root transformation
Linear Functions

⚫ Identity Function

⚫ Output intensities are identical to input intensities


⚫ This function doesn’t have an effect on an image; it is included in the graph only for completeness
⚫ Its expression:
s=r
Linear Functions
⚫ Image Negatives (Negative Transformation)
⚫ Gray levels lie in the range [0, L-1], where L is the number of gray levels; the negative transformation’s expression:
⚫ s = L – 1 – r
⚫ Reverses the intensity levels of an input image,
⚫ producing the equivalent of a photographic negative.
⚫ Useful for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
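The transformation s = L - 1 - r can be sketched in a few lines of Python; the 8-bit range (L = 256) and the sample image are assumptions for illustration:

```python
# Negative transformation s = L - 1 - r for an assumed 8-bit image (L = 256).
L = 256

def negative(image):
    """Invert intensities: dark detail becomes bright and vice versa."""
    return [[L - 1 - r for r in row] for row in image]

img = [[0, 64], [192, 255]]
print(negative(img))  # [[255, 191], [63, 0]]
```

Applying the transformation twice returns the original image, since s = L - 1 - (L - 1 - r) = r.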
Logarithmic Transformations
⚫ Log Transformation

⚫ The general form of the log transformation:

⚫ s = c log (1+r)
Where c is a constant, and r ≥ 0
⚫ Log curve maps a narrow range of low gray-level values in the input
image into a wider range of the output levels.
⚫ Used to expand the values of dark pixels in an image while compressing
the higher-level values.
⚫ It compresses the dynamic range of images with large variations in pixel
values.
The inverse-log curve expands the values of bright pixels in an image while compressing the darker levels; the log curve expands the values of dark pixels while compressing the higher levels.
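A minimal sketch of the log transformation for 8-bit data; choosing c = (L-1)/log(L) so the full input range maps onto the full output range is a common convention, assumed here:

```python
import math

# Log transformation s = c*log(1 + r). The constant c is chosen (assumption)
# so that the maximum input L-1 maps to the maximum output L-1.
L = 256
c = (L - 1) / math.log(1 + (L - 1))

def log_transform(r):
    return round(c * math.log(1 + r))

# Dark values are expanded, bright values compressed:
print(log_transform(10), log_transform(128), log_transform(255))
```

Note that every input except 0 and L-1 is pushed upward, which is exactly the "expand dark, compress bright" behaviour described above.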
Power-Law Transformations
⚫ Power-law transformations have the basic form:
⚫ s = c·r^γ
⚫ where c and γ are positive constants

• γ > 1: compresses dark values and expands bright values
• γ < 1: expands dark values and compresses bright values
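The two cases can be checked numerically; c = 1, an 8-bit range, and the sample gamma values are assumptions for this sketch (intensities are normalized to [0, 1] before applying the power, a common convention):

```python
# Power-law (gamma) transformation s = c * r**gamma on normalized intensities.
L = 256

def gamma_transform(r, gamma, c=1.0):
    s = c * (r / (L - 1)) ** gamma   # work on [0, 1]
    return round(s * (L - 1))        # scale back to [0, L-1]

# gamma < 1 expands dark values; gamma > 1 compresses them:
print(gamma_transform(64, 0.4), gamma_transform(64, 2.5))
```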
Power-Law Transformations
⚫ A variety of devices used for image capture, printing and display respond
according to a power law.
⚫ By convention, the exponent in the power-law equation is referred to as gamma.
⚫ The process used to correct these power-law response phenomena is called gamma correction.
⚫ E.g. CRT devices have an intensity-to-voltage response that is a power
function, with exponent varying from 1.8 to 2.5
⚫ With reference to the curve for gamma=2.5, the display system would tend
to produce images that are darker than intended.
Power-Law Transformation
Example 1: Gamma Correction
Example 2: Gamma Correction
Example 3: Gamma Correction
Piecewise-Linear Transformation Functions

⚫ Advantage:
⚫ the form of piecewise functions can be arbitrarily complex compared with the previous functions

⚫ Disadvantage: requires considerably more user input.


Contrast stretching
⚫ Low-contrast images result from:
1) Poor illumination
2) Lack of dynamic range in the imaging sensor
3) Wrong setting of the lens aperture during image acquisition

• One of the simplest piecewise-linear functions
• Increases the dynamic range of the gray levels in the image
• The locations of (r1, s1) and (r2, s2) control the shape of the transformation function
• A typical thresholding transformation: r1 = r2, s1 = 0, and s2 = L-1
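The three-segment stretching function through (r1, s1) and (r2, s2) can be sketched as follows; the control points and the assumption 0 < r1 < r2 < L-1 (so all slopes are well defined) are illustrative choices:

```python
# Piecewise-linear contrast stretching through (r1, s1) and (r2, s2).
# Assumes 0 < r1 < r2 < L-1 so the three segment slopes are well defined.
L = 256

def stretch(r, r1, s1, r2, s2):
    if r <= r1:                       # lower segment: (0, 0) -> (r1, s1)
        return round(s1 * r / r1)
    if r <= r2:                       # middle segment: (r1, s1) -> (r2, s2)
        return round(s1 + (s2 - s1) * (r - r1) / (r2 - r1))
    # upper segment: (r2, s2) -> (L-1, L-1)
    return round(s2 + (L - 1 - s2) * (r - r2) / (L - 1 - r2))

# Stretch the mid-range [70, 180] onto nearly the full output range:
print([stretch(r, 70, 10, 180, 245) for r in (0, 70, 125, 180, 255)])
# [0, 10, 128, 245, 255]
```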
Piecewise-Linear Transformation Functions
Case 1: Contrast Stretching
Case 2: Gray-level Slicing OR Intensity-level Slicing
⚫ Purpose: highlight a specific range of gray values.
⚫ Two approaches:
1) Display a high value for the range of interest and a low value elsewhere (‘discard background’)
2) Display a high value for the range of interest and the original value elsewhere (‘preserve background’)
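Both approaches can be sketched directly; L = 256 and the range [100, 150] are assumptions for illustration:

```python
# Intensity-level slicing: highlight gray values in [a, b]. L = 256 assumed.
L = 256

def slice_discard(r, a, b):
    """High value inside the range of interest, low value elsewhere."""
    return L - 1 if a <= r <= b else 0

def slice_preserve(r, a, b):
    """High value inside the range of interest, original value elsewhere."""
    return L - 1 if a <= r <= b else r

row = [10, 120, 130, 200]
print([slice_discard(r, 100, 150) for r in row])   # [0, 255, 255, 0]
print([slice_preserve(r, 100, 150) for r in row])  # [10, 255, 255, 200]
```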
Case 3: Bit-plane Slicing
⚫ Highlights the contribution made to total image appearance by specific bits.
⚫ Each pixel in the image is represented by 8 bits.
⚫ The image is composed of eight 1-bit planes.
⚫ Slicing extracts the information of a single bit plane (the figure shows bit planes 0, 5, and 7).
Advantages of bit-plane slicing
⚫ It is useful for analyzing the relative importance of each bit in the image, a process that aids in determining the adequacy of the number of bits used to quantize the image.
⚫ This type of decomposition is useful for image compression, in which fewer than all planes are used in reconstructing an image.
⚫ Reconstruction:
⚫ Reconstruction is done by multiplying the pixels of the nth plane by the constant 2^(n-1) and adding all the planes.
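Extraction and reconstruction can be sketched with bit operations. Note the code numbers planes from 0 (LSB) to 7 (MSB), so plane n is weighted by 2^n, which is the same as the text's 2^(n-1) with 1-based plane numbering; the sample pixel values are made up:

```python
# Bit-plane slicing for 8-bit pixels, plus reconstruction from chosen planes.

def bit_plane(pixels, n):
    """Extract plane n (0 = LSB, 7 = MSB) as 0/1 values."""
    return [(p >> n) & 1 for p in pixels]

def reconstruct(pixels, planes):
    """Rebuild an approximation using only the listed planes,
    weighting plane n by 2**n."""
    return [sum(((p >> n) & 1) << n for n in planes) for p in pixels]

pixels = [131, 200, 53]             # 10000011, 11001000, 00110101
print(bit_plane(pixels, 7))         # [1, 1, 0]
print(reconstruct(pixels, [7, 6]))  # [128, 192, 0]
```

Using all eight planes reproduces the image exactly; using only the top planes gives the compressed approximation the text describes.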
Neighborhood processing / Spatial Filtering
⚫ Spatial filtering can be either linear or non-linear.
⚫ For each output pixel, some neighborhood of input pixels is used in the computation.
⚫ In general, linear filtering of an image f of size M × N with a filter mask of size m × n is given by

g(x, y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t)

where a = (m-1)/2 and b = (n-1)/2
Basics of Spatial Filtering
The Spatial Filtering Process

Original image, 3×3 neighbourhood of pixel e:      3×3 filter w:
a b c                                              j k l
d e f                                              m n o
g h i                                              p q r

e_processed = n*e + j*a + k*b + l*c + m*d + o*f + p*g + q*h + r*i

The above is repeated for every pixel in the original image to generate the filtered image.
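The multiply-and-sum step in the diagram can be sketched for one output pixel; the small test image and the identity mask are assumptions for illustration:

```python
# Linear spatial filtering of one output pixel: overlay an m x n mask w on
# the neighbourhood of (x, y), multiply element-wise, and sum.

def filter_pixel(img, w, x, y):
    a, b = len(w) // 2, len(w[0]) // 2
    total = 0
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            total += w[s + a][t + b] * img[x + s][y + t]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
identity = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
print(filter_pixel(img, identity, 1, 1))  # 5 -- identity mask returns the pixel
```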
Strange Things Happen At The Edges!
At the edges of an image we are missing pixels to form a complete neighbourhood.
Strange Things Happen At The Edges! (cont…)

⚫There are a few approaches to dealing with missing edge pixels:


⚫ Omit missing pixels
⚫ Only works with some filters
⚫ Can add extra code and slow down processing

⚫ Pad the image


⚫ Typically with either all white or all black pixels
⚫ Replicate border pixels
⚫ Truncate the image
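One of these options, replicating border pixels, can be sketched as a 1-pixel pad (sufficient for a 3×3 mask); the tiny test image is an assumption:

```python
# Replicate border pixels so every pixel has a full 3x3 neighbourhood.

def pad_replicate(img):
    padded = []
    for row in img:
        padded.append([row[0]] + row + [row[-1]])   # pad left/right columns
    return [padded[0][:]] + padded + [padded[-1][:]]  # pad top/bottom rows

img = [[1, 2],
       [3, 4]]
for row in pad_replicate(img):
    print(row)
```

A 2×2 image becomes 4×4, with each border value copied outward.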
Correlation and Convolution
⚫ Filter masks are sometimes called convolution
masks or convolution kernels.
Smoothing Spatial Filters
⚫ Averaging filters (low-pass filters)
⚫ Smoothing linear filters

⚫ Purpose: for blurring and random noise reduction.


⚫ Blurring: removal of small details from an image

⚫ Averaging results in an image with reduced sharp transitions in intensities.

⚫ Edges- characterized by sharp intensity transitions.

⚫ Reduction of irrelevant details of an image


Smoothing Spatial Filters
⚫ Box filter
⚫ Weighted average filter

Box filter (average):      Weighted average:
1/9 1/9 1/9                1/16 2/16 1/16
1/9 1/9 1/9                2/16 4/16 2/16
1/9 1/9 1/9                1/16 2/16 1/16
Weighted Smoothing Filters
⚫ More effective smoothing filters can be generated by allowing different pixels in the neighbourhood different weights in the averaging function
⚫ Pixels closer to the central pixel are more important
⚫ Often referred to as a weighted averaging filter

Weighted averaging filter:
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
Smoothing Spatial Filtering

Original image, 3×3 neighbourhood of e:      3×3 smoothing filter:
104 100 108                                  1/9 1/9 1/9
 99 106  98                                  1/9 1/9 1/9
 95  90  85                                  1/9 1/9 1/9

e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85
  = 98.3333

The above is repeated for every pixel in the original image to generate the smoothed image.
Smoothing Spatial Filters
⚫ The general implementation for filtering an M × N image with a weighted averaging filter of size m × n is given by

g(x, y) = [Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t) f(x+s, y+t)] / [Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s, t)]

where a = (m-1)/2 and b = (n-1)/2
Image smoothing with masks of various sizes
Another Smoothing Example
⚫ By smoothing the original image we get rid of lots of the finer detail, which leaves only the gross features for thresholding.

Original Image / Smoothed Image / Thresholded Image
(Images taken from Gonzalez & Woods, Digital Image Processing (2002); source image taken from the Hubble Space Telescope)


Limitations of the averaging filter
⚫ 1) The averaging operation leads to blurring of an image. Blurring affects feature localization.
⚫ 2) If the averaging operation is applied to an image corrupted by impulse noise, the impulse noise is attenuated and diffused but not removed.
Order-Statistic Filters
⚫ Nonlinear spatial filters.
⚫ The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter,
⚫ replacing the value of the center pixel with the value determined by the ranking result.
Median filtering example
⚫ Order-statistic filters:
⚫ Median filter: used to reduce impulse noise (salt-and-pepper noise)

Original Image with Noise / Image After Averaging Filter / Image After Median Filter

Sometimes a median filter works better than an averaging filter.
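Why the median removes an impulse where the average merely dilutes it can be shown in a few lines; the noisy 3×3 patch is an assumption for illustration:

```python
# Median filter over a 3x3 neighbourhood: sort the nine pixels and take the
# middle one. An isolated impulse ("salt" pixel) is removed, not just diluted.

def median3x3(img, x, y):
    values = sorted(img[x + s][y + t] for s in (-1, 0, 1) for t in (-1, 0, 1))
    return values[4]                  # 5th of 9 sorted values

noisy = [[10, 10, 10],
         [10, 255, 10],               # impulse at the centre
         [10, 10, 10]]
print(median3x3(noisy, 1, 1))         # 10 -- the impulse vanishes
```

An averaging filter on the same patch would output about 37, spreading the impulse into its neighbours instead of removing it.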


Sharpening Spatial Filters/ High pass filter
⚫Previously we have looked at smoothing filters which remove fine detail

⚫Sharpening spatial filters seek to highlight fine detail


⚫ Remove blurring from images
⚫ Highlight edges
⚫Sharpening filters are based on spatial differentiation.

⚫ Fundamentally, the strength of the response of a derivative operator is proportional to the degree of intensity discontinuity of the image at the point at which the operator is applied. Thus it enhances edges and other discontinuities and deemphasizes areas with slowly varying intensities.
⚫ Sharpening filters are based on computing spatial derivatives of an image.
⚫ The derivatives of a digital function are defined in terms of differences.
⚫ First derivative: 1) must be zero in areas of constant intensity
⚫ 2) must be non-zero at the onset of an intensity step or ramp
⚫ 3) must be non-zero along ramps.
⚫ Second derivative: 1) must be zero in areas of constant intensity
⚫ 2) must be non-zero at the onset and end of an intensity step or ramp
⚫ 3) must be zero along ramps of constant slope.
Sharpening Spatial Filters
⚫ The derivatives of a digital function are defined in terms of differences.
⚫ The first-order derivative of a one-dimensional function f(x) is
∂f/∂x = f(x+1) - f(x)
⚫ The second-order derivative of a one-dimensional function f(x) is
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
Spatial Differentiation
(Figure: two image regions A and B, with profiles of f(x), f’(x), and f’’(x))
1st Derivative (cont…)

f(x):  5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
f’(x):   0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0
2nd Derivative (cont…)

f(x):   5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
f’’(x):  -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0
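The two profiles above can be reproduced directly from the difference definitions, which makes the behaviour on ramps, steps, and the isolated spike easy to verify:

```python
# First and second derivatives of the scan line above, using
# f'(x) = f(x+1) - f(x) and f''(x) = f(x+1) + f(x-1) - 2 f(x).

f = [5, 5, 4, 3, 2, 1, 0, 0, 0, 6, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 7, 7, 7, 7]

first = [f[x + 1] - f[x] for x in range(len(f) - 1)]
second = [f[x + 1] + f[x - 1] - 2 * f[x] for x in range(1, len(f) - 1)]

print(first)   # constant -1 along the ramp, zero on the flat runs
print(second)  # nonzero only at ramp ends, the spike, and the step
```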
Using Second Derivatives For Image sharpening
⚫The 2nd derivative is more useful for image enhancement than the 1st derivative
⚫ Stronger response to fine detail
⚫ Simpler implementation

⚫The first sharpening filter is the Laplacian

⚫ Approach: define a discrete formulation of the second-order derivative and then construct a filter mask based on that formulation.

⚫ Isotropic filters: filters whose response is independent of the direction of the discontinuities in the image, i.e.
⚫ isotropic filters are rotation invariant.

The simplest isotropic derivative operator is the Laplacian.
⚫ Development of the Laplacian method
⚫ The two-dimensional Laplacian operator for continuous functions:
∇²f = ∂²f/∂x² + ∂²f/∂y²
⚫ The Laplacian is a linear operator.

The Laplacian (cont…)
⚫ So, using the discrete second derivatives in x and y, the Laplacian can be given as follows:
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
⚫ We can easily build a filter based on this:

0  1  0
1 -4  1
0  1  0
Use of Second Derivatives for Enhancement: The Laplacian
The Laplacian (cont…)
⚫ Applying the Laplacian to an image, we get a new image that highlights edges and other discontinuities and deemphasizes regions with slowly varying intensities.
⚫ It produces images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.

Original Image / Laplacian Filtered Image / Laplacian Filtered Image Scaled for Display
But That Is Not Very Enhanced!
⚫ The result of Laplacian filtering is not an enhanced image by itself.
⚫ Subtract the Laplacian result from the original image to generate the final sharpened, enhanced image.
(Figure: Laplacian filtered image scaled for display)
Laplacian Image Enhancement

Original Image - Laplacian Filtered Image = Sharpened Image

⚫ In the final sharpened image, edges and fine detail are much more obvious.
Laplacian Image Enhancement
Simplified Image Enhancement
⚫ The entire enhancement can be combined into a single filtering operation.

Simplified Image Enhancement (cont…)
⚫ This gives us a new filter which does the whole job for us in one step:

 0 -1  0
-1  5 -1
 0 -1  0
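The one-step mask (original minus the 4-neighbour Laplacian) can be sketched at a single pixel; the small step-edge image is an assumption, and in practice results outside [0, L-1] would be clipped:

```python
# One-step sharpening with the composite mask: 5 at the centre, -1 at the
# four neighbours, i.e. g = f - Laplacian(f).

def sharpen_pixel(img, x, y):
    return (5 * img[x][y]
            - img[x - 1][y] - img[x + 1][y]
            - img[x][y - 1] - img[x][y + 1])

# A step edge: sharpening undershoots on the dark side and overshoots on
# the bright side, which is what makes the edge look crisper.
img = [[10, 10, 90, 90],
       [10, 10, 90, 90],
       [10, 10, 90, 90]]
print([sharpen_pixel(img, 1, y) for y in (1, 2)])  # [-70, 170]
```

In constant regions the mask leaves the pixel unchanged (5·v - 4·v = v), so only discontinuities are boosted.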
Simplified Image Enhancement (cont…)

Variants On The Simple Laplacian
⚫ There are lots of slightly different versions of the Laplacian that can be used:

Simple Laplacian:      Variant of Laplacian:
0  1  0                1  1  1
1 -4  1                1 -8  1
0  1  0                1  1  1

Composite sharpening mask (original minus the variant Laplacian):
-1 -1 -1
-1  9 -1
-1 -1 -1
Unsharp Masking & Highboost Filtering
⚫ Uses a sequence of linear spatial filters in order to get a sharpening effect:
- Blur the original image
- Subtract the blurred version from the original (the result is called the mask)
- Add k times the resulting mask to the original image

Unsharp Masking & Highboost Filtering
(Diagram: original → average → subtract from original → mask → weighted add back to the original)
k < 1 de-emphasizes the contribution of the unsharp mask; k = 1 gives standard unsharp masking; k > 1 gives highboost filtering.
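The three steps can be sketched in 1-D; the 3-sample blur and the test signal are assumptions for illustration:

```python
# Unsharp masking / highboost: blur, subtract to form the mask, add k times
# the mask back. k = 1 is plain unsharp masking; k > 1 is highboost.

def blur3(signal):
    """1-D average of each sample and its two neighbours (borders kept)."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
    return out

def unsharp(signal, k=1.0):
    blurred = blur3(signal)
    mask = [s - b for s, b in zip(signal, blurred)]
    return [s + k * m for s, m in zip(signal, mask)]

print(unsharp([10, 10, 10, 40, 40, 40], k=1.0))
```

The step edge gains an undershoot/overshoot pair around the transition, which is the sharpening effect the text describes.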
Example of unsharp masking

1st Derivative for image sharpening – The Gradient
⚫ An image gradient is a directional change in the intensity or color in an image. Image gradients may be used to extract information from images.
⚫ The derivative of an image can be computed using the magnitude of the gradient.
⚫ For a function f(x, y), the gradient of f at coordinates (x, y) is given as the column vector:
∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
⚫ Magnitude |∇f| = (Gx² + Gy²)^(1/2): provides information about edge strength.
⚫ Direction: perpendicular to the direction of the edge.
⚫ For practical reasons the magnitude can be simplified as:
|∇f| ≈ |Gx| + |Gy|

On the left, an intensity image of a cat. In the center, a gradient image in the x direction measuring horizontal change in intensity. On the right, a gradient image in the y direction measuring vertical change in intensity. Gray pixels have a small gradient; black or white pixels have a large gradient.
⚫ Approximate the gradient using finite differences:
Gx ≈ f(x+1, y) - f(x, y) (sensitive to vertical edges)
Gy ≈ f(x, y+1) - f(x, y) (sensitive to horizontal edges)

All derivative masks should have the following properties:
• Opposite signs should be present in the mask.
• The sum of the mask coefficients should equal zero.
• More weight means more edge detection.

⚫ We can implement Gx and Gy using masks.
• Example: approximating the gradient at z5. The simple differences give good approximations at the half-pixel positions (x+1/2, y) and (x, y+1/2), not at (x, y) itself.
Roberts cross-gradient operators
A different approximation of the gradient uses cross differences, giving a good approximation at (x+1/2, y+1/2).
We can implement Gx and Gy using the following 2×2 masks:

-1  0        0 -1
 0  1        1  0

⚫ Example: approximating the gradient at z5

• Other approximations: Sobel

1st Derivative Filtering (cont…)
⚫ There is some debate as to how best to calculate these gradients, but we will use masks based on these coordinates:

z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel Operators
⚫ Based on the previous equations we can derive the Sobel operators:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1

⚫ To filter an image, it is filtered using both operators and the results are added together.
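Applying both masks and combining them with the simplified magnitude |Gx| + |Gy| can be sketched as follows; the tiny vertical-edge image is an assumption for illustration:

```python
# Sobel gradient at a pixel: apply both 3x3 masks and combine the responses
# with the simplified magnitude |Gx| + |Gy|.

GX = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]
GY = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]

def apply_mask(img, w, x, y):
    return sum(w[s + 1][t + 1] * img[x + s][y + t]
               for s in (-1, 0, 1) for t in (-1, 0, 1))

def sobel(img, x, y):
    gx = apply_mask(img, GX, x, y)
    gy = apply_mask(img, GY, x, y)
    return abs(gx) + abs(gy)

edge = [[0, 0, 100],
        [0, 0, 100],
        [0, 0, 100]]            # a vertical edge
print(sobel(edge, 1, 1))        # 400 -- all of it from the GY mask
```

On a constant region both masks sum to zero, consistent with the "sum of mask equals zero" property above.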
Sobel Example
An image of a contact lens, enhanced in order to make defects (at four and five o’clock in the image) more obvious.
⚫ Sobel filters are typically used for edge detection.
(Figures: Roberts and Sobel results)
1st & 2nd Derivatives
⚫ Comparing the 1st and 2nd derivatives we can conclude the following:
⚫ 1st order derivatives generally produce thicker edges
⚫ 2nd order derivatives have a stronger response to fine detail, e.g. thin lines
⚫ 1st order derivatives have a stronger response to a grey-level step
⚫ 2nd order derivatives produce a double response at step changes in grey level
Combining Spatial Enhancement Methods (cont…)
(a) Bone scan
(b) Laplacian filter of bone scan (a)
(c) Sharpened version of bone scan achieved by subtracting (a) and (b)
(d) Sobel filter of bone scan (a)
Combining Spatial Enhancement Methods (cont…)
(e) Image (d) smoothed with a 5×5 averaging filter
(f) The function of (c) minus (e), which will be used as a mask
(g) Sharpened image, which is the sum of (a) and (f)
(h) Result of applying a power-law transformation to (g)
Combining Spatial Enhancement Methods (cont…)
⚫ Compare the original and final images.
Histogram Processing

What is a Histogram?
⚫ In statistics, a histogram is a graphical representation showing a visual impression of the distribution of data.
⚫ An image histogram is a type of histogram that acts as a graphical representation of the lightness/color distribution in a digital image.
⚫ It plots the number of pixels for each intensity value.
Introductory Example of Histograms
The (intensity or brightness) histogram shows how many times a particular grey level (intensity) appears in an image. For example, 0 = black, 7 = white.

Image:
0 1 1 2 4
2 1 0 0 2
5 2 0 0 4
1 1 2 4 1
(Figure: the corresponding histogram)
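Counting the histogram of the 4×5 example image above takes only a few lines:

```python
# Histogram of an image with grey levels 0..L-1: count occurrences per level.

def histogram(image, L=8):
    h = [0] * L
    for row in image:
        for p in row:
            h[p] += 1
    return h

img = [[0, 1, 1, 2, 4],
       [2, 1, 0, 0, 2],
       [5, 2, 0, 0, 4],
       [1, 1, 2, 4, 1]]
print(histogram(img))  # [5, 6, 5, 0, 3, 1, 0, 0]
```

The counts sum to 20, the number of pixels, and dividing by that total gives the normalized histogram used later.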
Why Histogram?

⚫ Histograms are the basis for numerous spatial domain processing techniques.

⚫ Histogram manipulation can be used effectively for image enhancement.

⚫ Histograms can be used to provide useful image statistics.

⚫ Information derived from histograms is quite useful in other image processing applications, such as image compression and segmentation.
Some Typical Histograms
⚫ The shape of a histogram provides useful information for contrast enhancement.
The horizontal axis of each histogram plot corresponds to gray-level values rk. The vertical axis corresponds to values of h(rk) = nk, or p(rk) = nk/n if the values are normalized.
(Figures: a dark image, a bright image, a low-contrast image, and a high-contrast image, with their histograms)
Why Histogram Equalization?
(Figure: a baby in a cradle; the histogram reveals that the image is under-exposed)
Another Example
(Figure: an over-exposed image)
Histogram Processing
Types of processing:
• Histogram equalization
• Histogram matching (specification)
(Figure: a small image and its histogram of number of pixels vs. gray level)
Histogram Equalization Example:
The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image.
We are looking for a transformation that flattens this histogram!
Histogram Equalization
⚫ Basic idea: find a map f(x) such that the histogram of the modified (equalized) image is flat (uniform).
⚫ A preprocessing technique to enhance contrast in ‘natural’ images.
❖ Target: find a gray-level transformation function T to transform image f such that the histogram of T(f) is ‘equalized’.
⚫ Histogram equalization:
∙ The approach is to design a transformation T(·) such that the gray values in the output are uniformly distributed in [0, L-1].
• Let us assume for the moment that the input image to be enhanced has continuous gray values, with r = 0 representing black and r = L-1 representing white.
• We need to design a gray-value transformation s = T(r), based on the histogram of the input image, which will enhance the image, for 0 ≤ r ≤ L-1.
⚫ We assume that:
⚫ (1) T(r) is a monotonically increasing function for 0 ≤ r ≤ L-1 (preserves the order from black to white).
⚫ (2) T(r) maps [0, L-1] into [0, L-1] (preserves the range of allowed gray values).
∙ For the inverse transformation r = T⁻¹(s), condition (1) is strengthened:
(1)’ T(r) is a strictly monotonically increasing function for 0 ≤ r ≤ L-1.
• Let pr(r) and ps(s) be the probability densities of the gray values in the input and output images.
∙ If pr(r) and T(r) are known, and r = T⁻¹(s) satisfies condition (1)’, we can write (a result from probability theory):
ps(s) = pr(r) |dr/ds|
∙ One way to enhance the image is to design a transformation function T(·) such that the gray values in the output are uniformly distributed in [0, L-1], i.e.
ps(s) = 1/(L-1), 0 ≤ s ≤ L-1
∙ Consider the transformation function
s = T(r) = (L-1) ∫₀ʳ pr(w) dw
∙ Note that this is a scaled version of the cumulative distribution function (CDF) of r and satisfies the previous two conditions.
∙ From the previous equation and using the fundamental theorem of calculus,
ds/dr = (L-1) pr(r)
∙ Substituting this into the density relation above, we get
ps(s) = pr(r) |dr/ds| = pr(r) · 1/[(L-1) pr(r)] = 1/(L-1), 0 ≤ s ≤ L-1
∙ Thus, using a transformation function equal to the CDF of the input gray values r, we can obtain an image with uniform gray values.
Histogram Processing

⚫ The histogram of a digital image with gray levels from 0 to L-1 is a


discrete function h(rk)=nk, where:

⚫ rk is the kth gray level


⚫ nk is the # pixels in the image with that gray level
⚫ n is the total number of pixels in the image
⚫ k = 0, 1, 2, …, L-1
⚫ Normalized histogram: p(rk) = nk/n, where n = MN
⚫ Thus, p(rk) gives an estimate of the probability of occurrence of gray level
rk.
⚫ sum of all components = 1
How to implement histogram equalization?

Step 1: For images with discrete gray values, compute the normalized histogram:
p(rk) = nk/n, k = 0, 1, …, L-1
L: total number of gray levels
nk: number of pixels with gray value rk
n: total number of pixels in the image
Step 2: Based on the CDF, compute the discrete version of the previous transformation:
sk = T(rk) = (L-1) Σ_{j=0}^{k} p(rj)
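The two steps can be sketched in pure Python; the small 8-level image is made up for illustration, and sk is rounded to the nearest integer level as is usual in the discrete case:

```python
# Discrete histogram equalization: s_k = round((L-1) * CDF(r_k)).

def equalize(pixels, L=8):
    n = len(pixels)
    hist = [0] * L                   # Step 1: histogram
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / n)      # cumulative distribution
    table = [round((L - 1) * c) for c in cdf]   # Step 2: mapping table
    return [table[p] for p in pixels]

dark = [0, 0, 0, 1, 1, 2, 2, 3]      # gray values bunched at the low end
print(equalize(dark))                # [3, 3, 3, 4, 4, 6, 6, 7]
```

The output occupies the full range [0, 7] even though the input used only levels 0 to 3, which is the contrast stretch the derivation promises; as noted below, the result is only approximately uniform in the discrete case.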
Example:
∙ Consider an 8-level 64 × 64 image with gray values (0, 1, …, 7). The normalized gray values are (0, 1/7, 2/7, …, 1). The normalized histogram is given below:
(Table: gray value, normalized gray value, # pixels, fraction of # pixels)
NB: The gray values in the output are also (0, 1/7, 2/7, …, 1).
(Figure: histogram of the output image, # pixels vs. gray values)
∙ Note that the histogram of the output image is only approximately, and not exactly, uniform. This should not be surprising, since no result claims uniformity in the discrete case.
Example: original image and its histogram; histogram-equalized image and its histogram.
∙ Comments:
Histogram equalization may not always produce desirable results,
particularly if the given histogram is very narrow. It can produce
false edges and regions. It can also increase image “graininess” and
“patchiness.”
Histogram Equalization
Histogram Specification

(Diagram: equalize the input with T, equalize the desired histogram with G, then apply G⁻¹ to map the equalized values onto the desired histogram)
Histogram Specification (Histogram Matching)
⚫ Histogram equalization yields an image whose pixels are (in theory)
uniformly distributed among all gray levels.

⚫ Sometimes, this may not be desirable. Instead, we may want a


transformation that yields an output image with a pre-specified histogram.
This technique is called histogram specification.
Given Information
⚫ (1) Input image from which we can compute its histogram .
⚫ (2) Desired histogram.

Goal
⚫ Derive a point operation, H(r), that maps the input image into an
output image that has the user-specified histogram.

⚫ Again, we will assume, for the moment, continuous-gray values.


Approach of derivation
z = H(r)

Input image → (s = T(r)) → Uniform image → (z = G⁻¹(v)) → Output image
⚫ Suppose the input image has probability density pin(r). We want to find a transformation z = H(r) such that the probability density of the new image obtained by this transformation is pout(z), which is not necessarily uniform.
⚫ First apply the transformation
s = T(r) = (L-1) ∫₀ʳ pin(w) dw
⚫ This gives an image with a uniform probability density.
⚫ If the desired output image were available, then the following transformation would generate an image with uniform density:
v = G(z) = (L-1) ∫₀ᶻ pout(w) dw
⚫ From the gray values v we can obtain the gray values z by using the inverse transformation, z = G⁻¹(v).
⚫ Since v and s are both uniformly distributed, we can set v = s, giving
z = H(r) = G⁻¹(v) = G⁻¹(s) = G⁻¹(T(r))
⚫ For discrete gray levels, we use the discrete versions of T and G.
Algorithm for histogram specification:
⚫ (1) Equalize the input image using the discrete equation:
sk = T(rk) = (L-1) Σ_{j=0}^{k} pin(rj)
⚫ (2) Compute the equalization transformation of the desired histogram using the discrete equation:
vk = G(zk) = (L-1) Σ_{j=0}^{k} pout(zj)
and map each sk through G⁻¹, i.e. to the level zk whose G(zk) best matches sk.
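A sketch of the two-step algorithm; the two histograms are invented for illustration, and G⁻¹ is implemented here as the smallest level whose cumulative mapping reaches s (one common convention):

```python
# Histogram specification: equalize the input with T, equalize the desired
# histogram with G, then map through G^-1.

def cdf_map(hist, L):
    """Discrete equalization table: level k -> round((L-1) * CDF(k))."""
    n = sum(hist)
    out, running = [], 0
    for count in hist:
        running += count
        out.append(round((L - 1) * running / n))
    return out

def specify(input_hist, desired_hist, L=8):
    T = cdf_map(input_hist, L)            # s = T(r)
    G = cdf_map(desired_hist, L)          # v = G(z)
    # z = G^-1(s): smallest z with G(z) >= s
    return [next(z for z in range(L) if G[z] >= s) for s in T]

in_hist  = [4, 2, 1, 1, 0, 0, 0, 0]       # dark input image
out_hist = [0, 0, 0, 0, 1, 1, 2, 4]       # desired: bright histogram
print(specify(in_hist, out_hist))         # level-to-level mapping r -> z
```

Every input level is pushed toward the bright end, as the desired histogram demands.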
Example:
∙ Consider the 8-level 64 × 64 image from the previous example.
(Figure: input histogram, # pixels vs. gray value)
∙ It is desired to transform this image into a new image, using the transformation z = H(r) = G⁻¹[T(r)], with the histogram specified below:
(Figure: desired histogram, # pixels vs. gray values)
∙ The transformation T(r) was obtained earlier.
∙ Now we compute the transformation G as before.
(Figures: original image and its histogram; histogram-specified image and its histogram)