
Digital Image Processing: Image Enhancement
Image Enhancement Definition

• Image enhancement is the process of improving the quality of an image for a specific application.
Some Basic Intensity (Gray-level) Transformation Functions

• Gray-level transformation functions (also called intensity transformation functions) are the simplest of all image enhancement techniques.

• The values of a pixel before and after processing are denoted by r and s, respectively.

• These values are related by an expression of the form:

s = T(r)

where T is a transformation that maps a pixel value r into a pixel value s.


Some Basic Intensity (Gray-level) Transformation Functions

Consider the following figure, which shows three basic types of functions used frequently for image
enhancement:
– Linear Functions:
• Negative Transformation
• Identity Transformation

– Logarithmic Functions:
• Log Transformation
• Inverse-log Transformation

– Power-Law Functions:
• nth power transformation
• nth root transformation
Linear Functions

• Identity Function

– Output intensities are identical to input intensities

– This function has no effect on an image; it is included in the graph only for completeness.

– Its expression:
s=r
Linear Functions

• Image Negatives (Negative Transformation)

– The negative of an image with gray levels in the range [0, L-1], where L is the number of gray levels (so L-1 is the largest value), is obtained with the negative transformation:

s = L – 1 – r

This reverses the intensity levels of the input image, producing the equivalent of a photographic negative.

– The negative transformation is suited to enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
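As a sketch of the expression above (assuming an 8-bit image, so L = 256), the negative can be computed element-wise with NumPy:

```python
import numpy as np

def negative(img, L=256):
    """Negative transformation: s = L - 1 - r, applied element-wise."""
    return ((L - 1) - img.astype(np.int32)).astype(np.uint8)

# Hypothetical 8-bit patch: bright detail embedded in a dark region.
img = np.array([[0, 10, 250],
                [5, 200, 15]], dtype=np.uint8)
neg = negative(img)   # dark pixels become bright, and vice versa
```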
Logarithmic Transformations
• Log Transformation
The general form of the log transformation:

s = c log (1+r)

where c is a constant and r ≥ 0.

– The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
– It is used to expand the values of dark pixels in an image while compressing the higher-level values.
– It compresses the dynamic range of images with large variations in pixel values.
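A minimal NumPy sketch of the log transformation; choosing c = (L−1)/log(1 + r_max), so the full input range maps onto [0, L−1], is an assumption here, not part of the slides:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c * log(1 + r); c scales the maximum input to L - 1."""
    r = img.astype(np.float64)
    c = (L - 1) / np.log1p(r.max())
    return np.round(c * np.log1p(r)).astype(np.uint8)

# Dark values are spread out; bright values are compressed.
img = np.array([[0, 1, 10, 255]], dtype=np.uint8)
out = log_transform(img)
```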
Logarithmic Transformations

• Inverse-Log Transformation

– Does the opposite of the log transformation.

– Used to expand the values of bright pixels in an image while compressing the darker values.
Power-Law Transformations

• Power-law transformations have the basic form:

s = c·r^γ

where c and γ are positive constants.

• Different transformation curves are obtained by varying γ (gamma).
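The power-law family can be sketched as follows. Normalizing intensities to [0, 1] before applying γ is an implementation assumption; γ < 1 brightens mid-tones and γ > 1 darkens them:

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.round(np.clip(s, 0, 1) * (L - 1)).astype(np.uint8)

mid = np.array([64], dtype=np.uint8)
bright = power_law(mid, 0.4)   # gamma < 1: mid-tone pushed up
dark = power_law(mid, 2.5)     # gamma > 1: mid-tone pushed down
```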
Power-Law Transformations
• A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct this power-law response phenomenon is called gamma correction.

For example, Cathode Ray Tube (CRT) devices have an intensity-to-voltage response that is a power function, with exponents varying from approximately 1.8 to 2.5. With reference to the curve for γ = 2.5 in Fig. 3.6, we see that such display systems would tend to produce images that are darker than intended. This effect is illustrated in Fig. 3.7. Figure 3.7(a) shows a simple gray-scale linear wedge input into a CRT monitor. As expected, the output of the monitor appears darker than the input, as shown in Fig. 3.7(b). Gamma correction in this case is straightforward: all we need to do is preprocess the input image before feeding it to the monitor by applying the inverse transformation s = r^(1/2.5) = r^0.4. The result is shown in Fig. 3.7(c). When input into the same monitor, this gamma-corrected input produces an output that is close in appearance to the original image, as shown in Fig. 3.7(d).
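The passage above can be sketched numerically. Assuming a display whose response is output = input^2.5 on intensities normalized to [0, 1], preprocessing with the inverse exponent 1/2.5 cancels the darkening:

```python
DISPLAY_GAMMA = 2.5  # assumed CRT exponent from the example above

def display_response(v):
    """Model of the monitor's power-law response (intensities in [0, 1])."""
    return v ** DISPLAY_GAMMA

def gamma_correct(v):
    """Preprocess with the inverse exponent before sending to the display."""
    return v ** (1.0 / DISPLAY_GAMMA)

raw = display_response(0.5)                        # darker than intended
corrected = display_response(gamma_correct(0.5))   # close to the original 0.5
```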
Piecewise-Linear Transformation Functions

• Principle: rather than using a well-defined mathematical function, we can use arbitrary user-defined transforms.

• Advantage: some important transformations can be formulated only as piecewise functions.

• Disadvantage: their specification requires more user input than the previous transformations.

Types of piecewise transformations:

– Contrast Stretching
– Gray-level Slicing
– Bit-plane slicing
Contrast Stretching
• One of the simplest piecewise-linear functions is the contrast-stretching transformation, which is used to enhance low-contrast images.

• Low-contrast images may result from:

– Poor illumination
– A wrong setting of the lens aperture during image acquisition
Contrast Stretching

• Figure 3.10(a) shows a typical transformation used for contrast stretching.

• The locations of points (r1, s1) and (r2, s2) control the shape of the transformation function.

• If r1 = s1 and r2 = s2, the transformation is a linear function that produces no changes in gray levels.

• If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image.

• Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the
output image, thus affecting its contrast.

• In general, r1 ≤ r2 and s1 ≤ s2 is assumed, so the function is always increasing.
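The piecewise-linear stretch defined by the control points (r1, s1) and (r2, s2) can be sketched with np.interp, which builds the three linear segments (an implementation choice, not from the slides; the control-point values below are hypothetical):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretch through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    s = np.interp(img.astype(np.float64),
                  [0, r1, r2, L - 1],
                  [0, s1, s2, L - 1])
    return np.round(s).astype(np.uint8)

# A narrow input range [80, 120] is spread over a much wider output range.
out = contrast_stretch(np.array([80, 120], dtype=np.uint8), 70, 10, 140, 245)
```

With r1 = s1 and r2 = s2 the mapping reduces to the identity, matching the slide's degenerate case.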


Gray-level Slicing (Intensity-level Slicing)
• This technique is used to highlight a specific range of
gray levels in a given image.

• It can be implemented in several ways, but the two basic themes are:

– One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels. This transformation, shown in Fig. 3.11(a), produces a binary image.
– The second approach, based on the transformation shown in Fig. 3.11(b), brightens the desired range of gray levels but preserves all other gray levels unchanged.
– Fig. 3.11(c) shows a gray-scale image, and Fig. 3.11(d) shows the result of using the transformation in Fig. 3.11(a).
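Both approaches can be sketched with np.where; the range [lo, hi] and the high/low output values are hypothetical parameters:

```python
import numpy as np

def slice_binary(img, lo, hi, high=255, low=0):
    """Approach 1: high value inside [lo, hi], low value elsewhere (binary)."""
    return np.where((img >= lo) & (img <= hi), high, low).astype(np.uint8)

def slice_preserve(img, lo, hi, high=255):
    """Approach 2: brighten [lo, hi], leave all other gray levels unchanged."""
    return np.where((img >= lo) & (img <= hi), high, img).astype(np.uint8)

img = np.array([10, 150, 240], dtype=np.uint8)
binary = slice_binary(img, 100, 200)   # [0, 255, 0]
kept = slice_preserve(img, 100, 200)   # [10, 255, 240]
```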
Bit-plane Slicing

• Pixels are digital numbers, each one composed of bits.

• For example, the intensity of each pixel in a 256-level gray-scale image is composed of 8 bits (i.e., 1 byte).

• Instead of highlighting a gray-level range, we can highlight the contribution made by each bit.

• This method is useful in image compression.


Bit-plane Slicing

• Assuming each pixel is represented by 8 bits, the image is composed of eight 1-bit planes.

• Plane 0 contains the lowest-order bit of all pixels in the image, and plane 7 the highest-order bits.

• The most significant bit planes contain the majority of the visually significant data; the other bit planes contribute the more subtle details.

• Separating a digital image into its bit planes is useful for analyzing the relative importance of each bit of the image.

• It helps in determining the adequacy of the number of bits used to quantize each pixel.
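Extracting plane k is just a shift and a mask; a minimal sketch for 8-bit pixels:

```python
import numpy as np

def bit_plane(img, plane):
    """Extract bit plane `plane` (0 = least significant, 7 = most significant)."""
    return (img >> plane) & 1

# 131 = 10000011 in binary: bits 7, 1 and 0 are set.
px = np.array([131], dtype=np.uint8)
```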
Image Enhancement Methods:

• Spatial Domain Methods (Image Plane)
Techniques based on direct manipulation of pixels in an image.

• Frequency Domain Methods
Techniques based on modifying the Fourier transform of the image.

• Combination Methods
Some enhancement techniques are based on various combinations of methods from the first two categories.
Spatial Filter

A spatial filter is an image operation in which each pixel value I(u, v) is changed by a function of the intensities of the pixels in a neighborhood of (u, v).

The term spatial filtering refers to filtering operations performed directly on the pixels of an image.

Mechanism of Spatial Filtering

The process consists simply of moving the filter mask from point to point in the image. At each point (x, y), the response of the filter at that point is calculated using a predefined relationship.
Smoothing filters are used for blurring and noise reduction.

Blurring is used in preprocessing steps to remove small details from an image prior to object extraction, and to bridge small gaps in lines or curves.

Noise reduction can be accomplished by blurring.


Spatial Filters: Smoothing (Low Pass)

Use: blurring and noise reduction.

How it works: the value of every pixel is replaced by the average of the gray levels in its neighborhood.

Types of smoothing filters:

1. Standard average
2. Weighted average
3. Median filter
Standard averaging filter: (110 + 120 + 90 + 91 + 94 + 98 + 90 + 91 + 99) / 9 = 883 / 9 ≈ 98.1

Weighted averaging filter: (110 + 2×120 + 90 + 2×91 + 4×94 + 2×98 + 90 + 2×91 + 99) / 16 = 1565 / 16 ≈ 97.81

Mask size determines the degree of smoothing and loss of detail.
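The two computations above can be reproduced with the standard 3×3 masks; the neighborhood layout below is an assumption consistent with the listed terms:

```python
import numpy as np

# Hypothetical 3x3 neighborhood matching the numbers above (center = 94).
nbhd = np.array([[110, 120, 90],
                 [ 91,  94, 98],
                 [ 90,  91, 99]], dtype=np.float64)

box = np.ones((3, 3)) / 9.0                           # standard averaging mask
wgt = np.array([[1, 2, 1],
                [2, 4, 2],
                [1, 2, 1]], dtype=np.float64) / 16.0  # weighted averaging mask

standard_avg = np.sum(nbhd * box)   # 883 / 9, about 98.1
weighted_avg = np.sum(nbhd * wgt)   # 1565 / 16 = 97.8125
```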
Median Filter (Non-linear)

Steps:
1. Sort the neighborhood pixels in ascending order: 90, 90, 91, 94, 95, 98, 99, 110, 120
2. Replace the original pixel value by the median: 95

Very effective at removing "salt-and-pepper" noise (i.e., random occurrences of black and white pixels).
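The median step, and its robustness to an extreme "salt" value, can be sketched as:

```python
import numpy as np

def median_pixel(neighborhood):
    """Replace the center pixel by the median of its neighborhood (non-linear)."""
    return float(np.median(neighborhood))

values = [90, 90, 91, 94, 95, 98, 99, 110, 120]  # the slide's sorted values
m = median_pixel(values)  # 95.0

# Replacing the largest value with salt noise (255) leaves the median unchanged,
# which is why the median filter handles salt-and-pepper noise so well.
noisy = [90, 90, 91, 94, 95, 98, 99, 110, 255]
```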
Spatial Filters: Sharpening (High Pass)

Use: highlighting fine detail or enhancing detail that has been blurred.

SHARPENING – 1st DERIVATIVE

Apply the following Laplacian to the highlighted pixel:

• 154×4 − 158 − 156 − 158 − 158 = −14

• So the value after the filter is −14.

• We call the resultant image the sharpened image.

• Filtered image = original + sharpened image

• The value in the filtered image = 154 + (−14) = 140
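The arithmetic above, as a sketch; the mask is 4× the center minus the four 4-neighbors, matching the numbers in the example:

```python
def laplacian_pixel(center, up, down, left, right):
    """Laplacian response used in the example: 4*center minus the 4-neighbors."""
    return 4 * center - up - down - left - right

lap = laplacian_pixel(154, 158, 156, 158, 158)  # -14
filtered = 154 + lap                            # original + sharpened = 140
```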


SHARPENING – 2nd DERIVATIVE

Apply the Laplacian twice to the highlighted pixel:

• First pass: 154×4 − 158 − 156 − 158 − 158 = −14

• Apply the Laplacian to all pixels.

• Then apply it again to our pixel: −14×4 − 10 − 10 − (−6) − 4 = −74

• So the value after the 2nd-derivative filter is −74.

• The value of the pixel in the filtered image = 154 + (−74) = 80
HISTOGRAM PROCESSING

• The histogram of a digital image with gray levels in the range [0, L−1] is the discrete function

h(r_k) = n_k

where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k.

• The histogram is normalized by dividing each of its values by the total number of pixels in the image, denoted by n. Thus the normalized histogram is given by

p_r(r_k) = n_k / n,  for k = 0, 1, 2, …, L−1

• Histograms are the basis for numerous spatial-domain processing techniques.

• Histogram manipulation is used effectively for image enhancement, and is also quite useful in other image processing applications such as image compression and segmentation.
HISTOGRAM EQUALIZATION

Let us consider the transformation s = T(r), 0 ≤ r ≤ 1.

We assume that the transformation function T(r) satisfies the following conditions:

a. T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1.
b. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

• The requirement in (a) guarantees that the inverse transformation will exist, and the monotonicity condition preserves the increasing order from black to white in the output image.

• Condition (b) guarantees that the output gray levels will be in the same range as the input levels.

Fig.: example of a gray-level transformation function that is single-valued and monotonically increasing.
1. The discrete version of the transformation function can be given as:

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, …, L−1

Thus a processed (output) image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k in the output image via the above equation.

• A plot of p_r(r_k) versus r_k is called a histogram. The transformation (mapping) given in the above equation is called histogram equalization or histogram linearization.

• Histogram equalization automatically determines a transformation function that seeks to produce an output image with a uniform histogram.

• The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.
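A compact NumPy sketch of discrete histogram equalization for an 8-bit image; the cumulative sum implements the running sum of p_r(r_j), and scaling by (L−1) maps the result back to [0, L−1] (an assumption for integer output, since the slides work on the normalized range [0, 1]):

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization: s_k = (L-1) * sum_{j<=k} n_j / n."""
    hist = np.bincount(img.ravel(), minlength=L)   # n_k for each gray level
    cdf = np.cumsum(hist) / img.size               # running sum of p_r(r_j)
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[img]                            # map each r_k to its s_k

# A low-contrast patch (levels 50-52) is spread across a much wider range.
img = np.array([[50, 50, 51], [51, 52, 52]], dtype=np.uint8)
out = equalize(img)
```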
