
Digital Image Processing

Preview
§ Digital image processing methods address two major application areas:
i. Improvement of pictorial information for human perception, and processing of image data for storage and transmission
§ Noise filtering
§ Content enhancement
§ Contrast enhancement
§ Deblurring
§ Remote sensing
ii. Representation for autonomous machine perception
§ Objectives
i. To define the scope of the field that we call image
processing
ii. To give a historical perspective of the origins of this field
iii. To give an idea of the state of the art in image processing
by examining some of the principal areas in which it is
applied
iv. To discuss briefly the principal approaches used in digital
image processing
v. To give an overview of the components contained in a
typical, general-purpose image processing system
vi. To provide direction to the books and other literature
where image processing work normally is reported
What is Digital Image Processing?
§ An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
§ When x, y, and the amplitude values of f are all finite, discrete
quantities, we call the image a digital image.
§ The field of digital image processing refers to processing digital
images by means of a digital computer.
§ A digital image is composed of a finite number of elements, each of which has a particular location and value.
§ These elements are referred to as picture elements, image elements, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
The Origins of Digital Image Processing
§ Early 1920s: One of the first applications of digital imaging was in the newspaper industry.
§ In the 1920s, submarine cables were used to transmit digitized newspaper pictures between London and New York using the Bartlane systems.
§ Specialized printing equipment was used to code the images, which were reproduced at the receiver using telegraphic printers.
§ In 1921, a printing procedure based on photographic reproduction improved the resolution and tonal quality of the images.
§ The early Bartlane system was capable of coding 5 distinct brightness levels.
§ This increased to 15 by 1929.
Fig. Telegraphic printer
Fig. A digital picture produced using a telegraphic printer (1921; 1922, transatlantic; 1929, London to New York)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
§ After 35 years of improvement in processing techniques, the 1960s brought advances in computing technology and the onset of the space program.
§ In 1964, computer processing techniques were used at JPL to improve pictures of the moon transmitted by Ranger 7. This was the basis of modern image processing techniques.
Fig. The first picture of the moon by a U.S. spacecraft
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
§ 1970s: Digital image processing begins to be used in medical applications.
§ 1979: Sir Godfrey N. Hounsfield and Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans.
Fig. Typical head slice CAT image
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

§ Today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas:
§ Image enhancement/restoration
§ Artistic effects
§ Medical visualisation
§ Industrial inspection
§ Law enforcement
§ Human computer interfaces
Image Enhancement
Fig. Noisy image / Filtered image
Fig. Low contrast image / Enhanced image
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

Restoration
Fig. Motion blurred image / Deblurred image
Medical Visualisation
Fig. X-ray imaging
GIS
§ Geographic Information Systems
§ Digital image processing techniques are used extensively to manipulate satellite imagery
§ Terrain classification
§ Meteorology
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
GIS (cont…)
§ Night-Time Lights of the World data set
§ Global inventory of human settlement
§ Not hard to imagine the kind of analysis that might be done using this data
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Industrial Inspection
§ Human operators are expensive, slow and unreliable
§ Make machines do the job instead
§ Industrial vision systems are used in all kinds of industries
§ Can we trust them?
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Fundamental Steps in Digital Image Processing
Fig. Block diagram of the fundamental steps. Starting from the problem domain, the stages are: Image Acquisition, Image Enhancement, Image Restoration, Colour Image Processing, Wavelets and Multiresolution Processing, Compression, Morphological Processing, Segmentation, Representation & Description, and Object Recognition, all interacting with a common knowledge base.
Contd. (The slides' mnemonic "AERCWCMS FC" appears to abbreviate these stages: Acquisition, Enhancement, Restoration, Colour, Wavelets, Compression, Morphological processing, Segmentation, Feature extraction, Classification.)

´ Image acquisition: This involves capturing an image using a digital camera or scanner, or importing an existing image into a computer.
´ Image enhancement: This involves improving the visual quality
of an image, such as increasing contrast, reducing noise, and
removing artifacts.
´ Image restoration: This involves removing degradation from an
image, such as blurring, noise, and distortion.
´ Color image processing: It is an area that has been gaining in
importance because of the significant increase in the use of
digital images over the internet. Color is used also as the basis
for extracting features of interest in an image.
´ Wavelets: These are the foundation for representing images in
various degrees of resolution. In particular, this material is used
for image data compression and for pyramidal representation,
in which images are subdivided successively into smaller
regions.
Contd.
´ Compression: As the name implies, it deals with techniques for
reducing the storage required to save an image, or the bandwidth
required to transmit it. Image compression is familiar (perhaps
inadvertently) to most users of computers in the form of image file
extensions, such as the jpg file extension used in the JPEG (Joint
Photographic Experts Group) image compression standard.
´ Morphological processing: It deals with tools for extracting image
components that are useful in the representation and description of
shape. The material in this chapter begins a transition from processes
that output images to processes that output image attributes.
´ Image segmentation: This involves dividing an image into regions or
segments, each of which corresponds to a specific object or feature
in the image.
Contd.
´ Feature extraction: It almost always follows the output of a
segmentation stage, which usually is raw pixel data, constituting either
the boundary of a region (i.e., the set of pixels separating one image
region from another) or all the points in the region itself. Feature
extraction consists of feature detection and feature description.
Feature detection refers to finding the features in an image, region, or
boundary. Feature description assigns quantitative attributes to the
detected features. For example, we might detect corners in a region,
and describe those corners by their orientation and location; both of
these descriptors are quantitative attributes.
´ Image pattern classification: It is the process that assigns a label (e.g.,
“vehicle”) to an object based on its feature descriptors.
Contd.
´ So far, we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules shown in the figure.
´ Knowledge about a problem domain is coded into an image
processing system in the form of a knowledge database. This
knowledge may be as simple as detailing regions of an image where
the information of interest is known to be located, thus limiting the
search that has to be conducted in seeking that information.
´ The knowledge base can also be quite complex, such as an
interrelated list of all major possible defects in a materials inspection
problem, or an image database containing high-resolution satellite
images of a region in connection with change-detection applications.
´ In addition to guiding the operation of each processing module, the
knowledge base also controls the interaction between modules.
Components of DIP
Fig. Components of a general-purpose image processing system
Contd.
´ With reference to sensing, two elements are required to acquire digital images.
´ The first is a physical sensor that responds to the energy radiated by
the object we wish to image. The second, called a digitizer, is a
device for converting the output of the physical sensing device into
digital form. For instance, in a digital video camera, the sensors (CCD
chips) produce an electrical output proportional to light intensity. The
digitizer converts these outputs to digital data.
´ Specialized image processing hardware usually consists of the
digitizer just mentioned, plus hardware that performs other primitive
operations, such as an arithmetic logic unit (ALU), that performs
arithmetic and logical operations in parallel on entire images. One
example of how an ALU is used is in averaging images as quickly as
they are digitized, for the purpose of noise reduction. This type of
hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs
functions that require fast data throughputs (e.g., digitizing and
averaging video images at 30 frames/s) that the typical main
computer cannot handle. One or more GPUs also are common in
image processing systems that perform intensive matrix operations.
Contd.
´ The computer in an image processing system is a general-purpose
computer and can range from a PC to a supercomputer. In dedicated
applications, sometimes custom computers are used to achieve a
required level of performance, but our interest here is on general-purpose
image processing systems. In these systems, almost any well-equipped
PC-type machine is suitable for off-line image processing tasks.
´ Software for image processing consists of specialized modules that
perform specific tasks. A well-designed package also includes the
capability for the user to write code that, as a minimum, utilizes the
specialized modules. More sophisticated software packages allow the
integration of those modules and general-purpose software commands
from at least one computer language.
´ Mass storage is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (1024 × 1024 pixels × 1 byte per pixel = 2^20 bytes = 1 Mbyte). When dealing with image databases that contain thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing; (2) on-line storage for relatively fast recall; and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (10^3 bytes), Mbytes (10^6 bytes), Gbytes (10^9 bytes), and Tbytes (10^12 bytes).
Contd.
´ Image displays in use today are mainly color, flat screen monitors.
Monitors are driven by the outputs of image and graphics display cards
that are an integral part of the computer system. Seldom are there
requirements for image display applications that cannot be met by
display cards and GPUs available commercially as part of the computer
system. In some cases, it is necessary to have stereo displays, and these
are implemented in the form of headgear containing two small displays
embedded in goggles worn by the user.
´ Hardcopy devices for recording images include laser printers, film
cameras, heat-sensitive devices, ink-jet units, and digital units, such as
optical and CD-ROM disks. Film provides the highest possible resolution,
but paper is the obvious medium of choice for written material. For
presentations, images are displayed on film transparencies or in a digital
medium if image projection equipment is used. The latter approach is
gaining acceptance as the standard for image presentations.
´ Networking and cloud communication are almost default functions in any
computer system in use today. Because of the large amount of data
inherent in image processing applications, the key consideration in image
transmission is bandwidth. In dedicated networks, this typically is not a
problem, but communications with remote sites via the internet are not
always as efficient. Fortunately, transmission bandwidth is improving
quickly as a result of optical fiber and other broadband technologies.
Image data compression continues to play a major role in the
transmission of large amounts of image data.
OVERLAPPING FIELDS WITH IMAGE PROCESSING
Fig. Fields that overlap with image processing
Pros and Cons of DIP
Advantages of Digital Image Processing:
´ Improved image quality: Digital image processing algorithms can
improve the visual quality of images, making them clearer, sharper,
and more informative.
´ Automated image-based tasks: Digital image processing can
automate many image-based tasks, such as object recognition,
pattern detection, and measurement.
´ Increased efficiency: Digital image processing algorithms can process
images much faster than humans, making it possible to analyze large
amounts of data in a short amount of time.
´ Increased accuracy: Digital image processing algorithms can provide
more accurate results than humans, especially for tasks that require
precise measurements or quantitative analysis.
Pros and Cons of DIP (contd.)
Disadvantages of Digital Image Processing:
´ High computational cost: Some digital image processing algorithms are
computationally intensive and require significant computational
resources.
´ Limited interpretability: Some digital image processing algorithms may
produce results that are difficult for humans to interpret, especially for
complex or sophisticated algorithms.
´ Dependence on quality of input: The quality of the output of digital image
processing algorithms is highly dependent on the quality of the input
images. Poor quality input images can result in poor quality output.
´ Limitations of algorithms: Digital image processing algorithms have
limitations, such as the difficulty of recognizing objects in cluttered or
poorly lit scenes, or the inability to recognize objects with significant
deformations or occlusions.
´ Dependence on good training data: The performance of many digital
image processing algorithms is dependent on the quality of the training
data used to develop the algorithms. Poor quality training data can result
in poor performance of the algorithm.
What is Pixel?
´ A pixel, short for “picture element,” is the smallest unit of a digital image or display that can be controlled or manipulated. Pixels are the smallest fragments of a digital photo: tiny square or rectangular elements that make up the images we see on screens, from smartphones to televisions.
´ Every pixel in the image is marked by its coordinates and contains information about color and brightness, and sometimes an opacity level.
´ Understanding pixels is crucial in digital imaging and photography, as they determine the resolution and quality of an image. An image consists of many pixels that define its resolution. For example, 1920×1080 (width × height) is the resolution of a Full HD screen. In this instance, the total number of pixels is 1920 × 1080 = 2,073,600, i.e., more than two million dots that together form the image on the screen.
Defining key terminologies
´ Pixel (Picture Element): A pixel is the smallest part of a computer picture. It shows one spot in the whole photo. Every little square has information about color, brightness and position. When these squares are put together with others, they make a complete picture that we can see. Pixels are the parts that make up digital screens; they arrange together to show letters, pictures and videos.
´ Resolution: Resolution is the number of pixels in a digital photo, usually expressed as width × height. More pixels capture more detail and give better results in pictures. Usual measurements for printed resolution are pixels per inch (PPI) and pixels per centimeter (PPCM). For example, a 1920 x 1080 screen has 1920 pixels horizontally and 1080 pixels vertically.
´ Pixel Density: Pixel density is the number of pixels per unit of screen length, often expressed as pixels per inch (PPI). It decides how clear a picture looks: more pixels per inch make it sharper. Mobile phones with good picture quality often have a high pixel density, making images colorful and clear.
Defining key terminologies
´ Color Depth: Bit depth, also called color depth, means how many bits represent the color of each pixel. Usual values are 8-bit, 16-bit and 24-bit color levels. The more bits a pixel has, the more colors it can show, giving a wider and deeper range of colors.
´ Raster and Vector Graphics: In raster graphics, a type of image creation, pixels are fundamental: these pictures are made of lots of tiny squares called pixels. In contrast, vector drawings use math equations to make shapes, which lets them get bigger without losing picture quality. Vector graphics don’t use pixels, so they are good for jobs like making logos and drawing illustrations.
´ Aspect Ratio: Aspect ratio means the proportion between an image’s width and height. Common aspect ratios include 4:3, 16:9, and 1:1. Different devices and mediums can have special size rules, affecting how pictures are shown or taken. (A short sketch relating these terms follows below.)
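As a rough illustration of these terms, the following sketch computes total pixel count, aspect ratio, and pixel density for a hypothetical 6.1-inch Full HD phone screen (the diagonal size is an assumed value, not from the slides):

```python
import math

width_px, height_px = 1920, 1080      # Full HD resolution (width x height)
diagonal_inches = 6.1                 # assumed screen diagonal

total_pixels = width_px * height_px   # 2,073,600 pixels (~2 megapixels)

g = math.gcd(width_px, height_px)
aspect = f"{width_px // g}:{height_px // g}"   # reduces to 16:9

# Pixel density: pixels along the diagonal divided by diagonal length
diagonal_px = math.hypot(width_px, height_px)
ppi = diagonal_px / diagonal_inches

print(total_pixels)   # 2073600
print(aspect)         # 16:9
print(round(ppi))     # ~361 PPI
```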
Spatial Filtering and its Types
´ The spatial filtering technique operates directly on the pixels of an image. The mask is usually taken to be of odd size so that it has a specific center pixel. This mask is moved over the image such that the center of the mask traverses all image pixels.
Classification
´ There are two types:
1. Smoothing Spatial Filter
2. Sharpening Spatial Filter
Contd.
General Classification:
Smoothing Spatial Filter
´ A smoothing filter is used for blurring and noise reduction in an image. Blurring is a pre-processing step for the removal of small details, and noise reduction is accomplished by blurring.
´ Types of Smoothing Spatial Filter:
1. Mean Filter (Linear Filter)
2. Order Statistics (Non-linear) Filter
´ These are explained below.
´ Mean Filter: A linear spatial filter simply takes the average of the pixels contained in the neighborhood of the filter mask. The idea is to replace the value of every pixel in an image by the average of the grey levels in the neighborhood defined by the filter mask. Types of mean filter:
´ Averaging filter: Used to reduce detail in an image. All coefficients are equal. Reduces noise but also blurs edges.
´ Weighted averaging filter: Pixels are multiplied by different coefficients, with the center pixel weighted more heavily than in the averaging filter. Provides more control over the smoothing process.
Contd.
´ Order Statistics Filter: This filter is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. It replaces the value of the center pixel with the value determined by the ranking result. Edges are better preserved with this filtering.
´ Types of order statistics filter:
´ Minimum filter: The 0th percentile filter. The value of the center is replaced by the smallest value in the window.
´ Maximum filter: The 100th percentile filter. The value of the center is replaced by the largest value in the window.
´ Median filter: Each pixel in the image is considered; the neighboring pixels are sorted, and the original value of the pixel is replaced by the median of the list. Effective in removing salt-and-pepper noise while preserving edges. (A sketch of the mean and median filters follows below.)
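To make the mask traversal concrete, here is a minimal pure-NumPy sketch of 3×3 mean and median filters (the function name and the edge-replication border handling are illustrative choices, not from the slides):

```python
import numpy as np

def smooth(image, mode="mean", size=3):
    """Slide a size x size mask over every pixel; replace the center value
    with the window mean (linear) or median (order-statistics filter)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")   # replicate border pixels
    out = np.empty(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.mean(window) if mode == "mean" else np.median(window)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy example: a flat 5x5 patch corrupted by one salt-and-pepper spike.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                       # impulse noise
print(smooth(img, "mean")[2, 2])      # 117: the spike is smeared, not removed
print(smooth(img, "median")[2, 2])    # 100: the spike is removed, edges kept
```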
| Feature | Linear Filters | Non-linear Filters |
| --- | --- | --- |
| Operation | Convolution | Based on pixel ranking |
| Noise reduction | Effective for Gaussian noise | Effective for impulse noise (salt-and-pepper) |
| Edge preservation | Blurs edges | Preserves edges better |
| Computational complexity | Generally lower | Often higher |
Contd.
Sharpening Spatial Filter
´ Also known as a derivative filter, the sharpening spatial filter has a purpose just opposite to that of the smoothing spatial filter: its main focus is the removal of blurring and the highlighting of edges.
´ It is based on first and second order derivatives.
First Order Derivative:
´ Must be zero in flat segments.
´ Must be non-zero at the onset of a grey level step.
´ Must be non-zero along ramps.
´ The first order derivative in 1-D is given by:
f'(x) = f(x+1) - f(x)
Contd.
Second Order Derivative:
´ Must be zero in flat areas.
´ Must be non-zero at the onset and end of a ramp.
´ Must be zero along ramps.
´ The second order derivative in 1-D is given by:
f''(x) = f(x+1) + f(x-1) - 2f(x)
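The following sketch applies these two definitions to a toy 1-D signal containing flat segments, a ramp, and a step, so the stated properties can be checked directly (the signal values are invented for illustration):

```python
import numpy as np

# Toy signal: flat, then a ramp, then flat, then a downward step.
f = np.array([5, 5, 5, 6, 7, 8, 8, 8, 2, 2], dtype=float)

first  = f[1:] - f[:-1]                  # f'(x)  = f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]    # f''(x) = f(x+1) + f(x-1) - 2f(x)

print(first)    # [ 0.  0.  1.  1.  1.  0.  0. -6.  0.]
                # zero in flat parts, non-zero along the ramp and at the step
print(second)   # [ 0.  1.  0.  0. -1.  0. -6.  6.]
                # non-zero only at ramp onset/end and around the step;
                # zero along the ramp itself (double response at the step)
```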
Histogram Processing
´ A digital image is a two-dimensional matrix over two spatial coordinates, with each cell specifying the intensity level of the image at that point. So we have an N x N matrix with integer values ranging from a minimum intensity level of 0 to a maximum level of L-1, where L denotes the number of intensity levels. Hence, the intensity level r of a pixel can take on the values 0, 1, 2, …, (L-1).
´ Generally, L = 2^m, where m is the number of bits required to represent the intensity levels. Zero level intensity denotes complete black or dark, whereas level L-1 indicates complete white or absence of grayscale.
Histogram Equalization:
´ The histogram of a digital image, with intensity levels between 0 and (L-1), is a function h(rk) = nk, where rk is the kth intensity level and nk is the number of pixels in the image having that intensity level. We can also normalize the histogram by dividing it by the total number of pixels in the image. For an N x N image, we have the following definition of a normalized histogram function:
Contd.
p(rk) = nk / N^2
´ This p(rk) function is the probability of the occurrence of a pixel with the intensity level rk. Clearly,
Σk p(rk) = 1
´ The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function.
´ Histogram Function:
H(rk) = nk
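A minimal sketch of these definitions, assuming an 8-bit grayscale image stored as a NumPy array (np.bincount is simply a fast way of counting nk for every level rk):

```python
import numpy as np

def histogram(image, L=256):
    h = np.bincount(image.ravel(), minlength=L)  # h(rk) = nk
    p = h / image.size                           # p(rk) = nk / N^2 for N x N
    return h, p

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # dummy image
h, p = histogram(img)
print(h.sum() == img.size)        # True: the counts cover every pixel
print(np.isclose(p.sum(), 1.0))   # True: the normalized histogram sums to 1
```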
Contd.
´ Points about Histogram:
´ The histogram of an image provides a global description of the appearance of the image.
´ The information obtained from a histogram is very useful for image analysis.
´ The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image.
Let’s assume that an image matrix is given. Only the value counts matter for the histogram, so the following 4 × 4 arrangement is one consistent example:
1 1 2 3
3 3 4 4
5 5 6 6
6 8 8 8
´ This image matrix contains the pixel values at each (i, j) position in the x-y plane, i.e., a 2D image with gray levels.
Contd.
There are two ways to plot a histogram of an image:
´ Method 1: In this method, the x-axis has the grey levels/intensity values and the y-axis has the number of pixels in each grey level. The histogram of the above image is then a bar graph of count versus intensity value.
´ Explanation: The above image has 1, 2, 3, 4, 5, 6, and 8 as its intensity values, and the occurrences of these intensity values in the image matrix are 2, 1, 3, 2, 2, 3 and 3 respectively; mapping each intensity value against its occurrence gives the graph.
Contd.
Method 2: In this method, the x-axis represents the grey level, while the y-axis represents the probability of occurrence of that grey level.
Probability Function: p(rk) = nk / n
´ The table below shows the probability of each intensity level (n = 16 total pixels):

| Intensity rk | 1 | 2 | 3 | 4 | 5 | 6 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| nk | 2 | 1 | 3 | 2 | 2 | 3 | 3 |
| p(rk) | 0.125 | 0.0625 | 0.1875 | 0.125 | 0.125 | 0.1875 | 0.1875 |

´ Now we can create a histogram graph for each intensity value and its corresponding occurrence probability.
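Histogram equalization, named above, maps each level through the cumulative distribution: sk = (L-1) · Σj≤k p(rj). A minimal NumPy sketch of this standard transform (8-bit grayscale assumed; the transform itself is not spelled out in the slides):

```python
import numpy as np

def equalize(image, L=256):
    h = np.bincount(image.ravel(), minlength=L)
    p = h / image.size                  # p(rk) = nk / n
    cdf = np.cumsum(p)                  # running sum of probabilities
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[image]               # look up the new level for every pixel

low_contrast = np.random.randint(90, 110, (64, 64), dtype=np.uint8)
print(low_contrast.min(), low_contrast.max())   # values crowded around 90..109
eq = equalize(low_contrast)
print(eq.min(), eq.max())                       # spread out toward 0..255
```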
Digital Image Fundamentals
Some Basic Relationships between Pixels
´ Neighbors of a pixel
A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose coordinates are given by
(x+1,y), (x-1,y), (x,y+1), (x,y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x,y), and some of the neighbors of p lie outside the digital image if (x,y) is on the border of the image.
The four diagonal neighbors of p have coordinates
(x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1)
and are denoted by ND(p).
These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
Adjacency
´ 4-Adjacency
´ 8-Adjacency
´ M-Adjacency
´ 4-Adjacency
Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
(V = the set of gray-level values used to define adjacency.)
´ 8-Adjacency
Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
´ M-Adjacency (Mixed Adjacency)
Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
M-Connectivity (Mixed Connectivity)
´ Two pixels P and Q with values from V are m-connected if
(i) Q is in N4(P), or
(ii) Q is in ND(P) and the set N4(P) ∩ N4(Q) has no pixels whose values are from V.
(A sketch of these neighborhood and adjacency definitions follows below.)
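A small sketch of the definitions above (the coordinate convention and helper names are illustrative choices, not from the slides):

```python
def N4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(p):
    return N4(p) | ND(p)

def in_V(img, p, V):
    """True if p lies inside the image and its value belongs to V."""
    x, y = p
    return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

def m_adjacent(img, p, q, V):
    """m-adjacency: q in N4(p), or q in ND(p) with no pixel of N4(p) ∩ N4(q)
    having a value in V (this breaks ambiguous multiple 8-adjacency paths)."""
    if not (in_V(img, p, V) and in_V(img, q, V)):
        return False
    if q in N4(p):
        return True
    common_in_V = any(in_V(img, r, V) for r in N4(p) & N4(q))
    return q in ND(p) and not common_in_V

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
V = {1}
print(m_adjacent(img, (0, 1), (1, 1), V))  # True: 4-adjacent
print(m_adjacent(img, (1, 1), (2, 2), V))  # True: diagonal, no shared 4-neighbor in V
```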
Gray Level Transformations
´ Gray level transformations are image processing techniques that modify the intensity levels of pixels in an image. They are often used to enhance image contrast and adjust brightness.
Contd.
Types of Gray Level Transformations
´ Linear Transformations:
´ Identity transformation: Each pixel value is mapped to itself.
´ Negative transformation: Inverts the pixel values.
´ Contrast stretching: Expands or compresses the intensity range.
´ Logarithmic Transformations:
´ Compresses the dynamic range of an image.
´ Useful for images with high dynamic range.
´ Power-Law Transformations (Gamma Correction):
´ Corrects for display device characteristics.
´ Enhances or suppresses contrast in different image regions. (Sketches of these transformations follow below.)
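Minimal sketches of the negative, log, and power-law transformations, assuming an 8-bit grayscale image (the scaling constants are the conventional choices, not specified in the slides):

```python
import numpy as np

L = 256

def negative(img):
    return (L - 1) - img                        # s = (L-1) - r

def log_transform(img):
    c = (L - 1) / np.log(L)                     # scale so level L-1 maps near L-1
    return (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_correct(img, gamma):
    r = img.astype(np.float64) / (L - 1)        # normalize r to [0, 1]
    return ((L - 1) * r ** gamma).astype(np.uint8)   # s = c * r^gamma

ramp = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
print(negative(ramp)[0, 0])            # 255: black becomes white
print(log_transform(ramp)[0, 1])       # 31: dark levels are stretched upward
print(gamma_correct(ramp, 0.5)[0, 1])  # 15: gamma < 1 brightens dark regions
```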
Contd.
Example: Contrast Stretching
Contrast stretching is a common technique to improve image visibility by expanding the intensity range.
Formula:
s = T(r) = (L - 1)/(rmax - rmin) * (r - rmin)
Where:
• s: Output pixel value
• r: Input pixel value
• L: Number of gray levels (e.g., 256 for 8-bit images)
• rmin, rmax: Minimum and maximum pixel values in the image
Contd.
Example:
´ Consider an image with pixel values ranging from 50 to 200. To stretch the contrast to the full range of 0 to 255, we apply the following:
L = 256
rmin = 50
rmax = 200
s = (256 - 1) / (200 - 50) * (r - 50)
This transformation maps the pixel value 50 to 0 and 200 to 255, effectively increasing the contrast of the image.
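A quick sketch of this exact mapping on a tiny assumed 2×2 image:

```python
import numpy as np

def stretch(img, L=256):
    r = img.astype(np.float64)
    rmin, rmax = r.min(), r.max()
    s = (L - 1) / (rmax - rmin) * (r - rmin)   # s = T(r)
    return s.astype(np.uint8)

img = np.array([[ 50, 100],
                [150, 200]], dtype=np.uint8)
print(stretch(img))
# [[  0  85]
#  [170 255]]  -> 50 maps to 0, 200 maps to 255: full contrast range used
```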
Contd.
Applications
´ Image enhancement: Improving image quality for
human perception.
´ Histogram equalization: Distributing pixel intensities
uniformly.
´ Image compression: Reducing image file size.
Image Sampling and Quantization
´ In digital image processing, signals captured from the physical world need to be translated into digital form by a “digitization” process. In order to become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude. This digitization process involves two main steps:
´ Sampling: Digitizing the co-ordinate values is called sampling.
´ Quantization: Digitizing the amplitude values is called quantization.
Continuous vs Digital Image
Fig. A continuous image and its corresponding digital image
Contd.
´ Digital images are basically of three types: monochrome or binary images, grayscale images, and color images.
´ Let’s see how each image looks with the help of 4×4 samples from a binary, a grayscale, and an RGB image:
Fig. 4×4 samples from binary, grayscale, and RGB images
Contd.
´ Image Sampling
´ Image sampling is the process of converting a continuous analog image into a digital format by dividing it into discrete pixels. Each pixel represents a small area of the image.
´ Example:
´ Consider a continuous image of a person's face. To digitize this image, we divide it into a grid of pixels. Each pixel represents a small portion of the face, and its value represents the average color or intensity within that area. (A sketch of this block-averaging view of sampling follows below.)
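A sketch of that block-averaging idea: a finely sampled stand-in for the continuous scene is reduced to a coarser pixel grid, each coarse pixel holding the average of its block (sizes and factor are arbitrary; the fine image's sides are assumed divisible by the factor):

```python
import numpy as np

def sample(fine, factor):
    """Average each factor x factor block of the fine grid into one pixel."""
    h, w = fine.shape
    blocks = fine.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

fine = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in scene
coarse = sample(fine, 8)
print(coarse.shape)   # (64, 64): fewer samples => lower spatial resolution
```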
´ Since an analogue image is continuous not just in its co-ordinates (x axis), but also in its amplitude (y axis), the part that deals with digitizing the co-ordinates is known as sampling. In digitization, sampling is done on the independent variable: in the case of the equation y = sin(x), it is done on the x variable.
Contd.
• When looking at this image, we can see there are some random variations in the signal caused by noise. In sampling we reduce this noise by taking samples. Obviously, the more samples we take, the better the quality of the image and the more the noise is removed. However, if you sample the x axis alone, the signal is not converted to digital format; you must also sample the y-axis, which is known as quantization.
Contd.
´ Here is an example of image sampling and how it can be represented using a graph.
Contd.
´ Image Quantization
´ Image quantization is the process of reducing the number of color levels or gray levels in an image. This is done by mapping a range of continuous values to a discrete set of values.
´ Example:
´ A grayscale image typically has 256 gray levels (8 bits per pixel). Quantizing this image to 16 gray levels (4 bits per pixel) means reducing the number of possible values for each pixel from 256 to 16. This results in a loss of image quality but significantly reduces the image file size. (A sketch of this re-quantization follows below.)
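A sketch of exactly this 256-level to 16-level reduction by uniform re-quantization (snapping each bin to its base value is one common convention):

```python
import numpy as np

def quantize(img, levels):
    step = 256 // levels           # width of each quantization bin
    return (img // step) * step    # snap every pixel to the base of its bin

ramp = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
q = quantize(ramp, 16)
print(np.unique(q).size)   # 16 distinct gray levels remain (4 bits per pixel)
print(q[0, :5])            # [0 0 0 0 0]: levels 0..15 all collapse to 0
```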
Contd.
´ Quantization is the opposite of sampling: it is done on the “y axis”, while sampling is done on the “x axis”.
´ Quantization is the process of transforming a real-valued sampled image into one taking only a finite number of distinct values.
´ Under this process the amplitude values of the image are digitized.
´ In simple words, when you quantize an image, you are actually dividing a signal into quanta (partitions).
Contd.
Now let’s see how quantization is done.
´ Here we assign levels to the values generated by the sampling process.
´ In the image shown under sampling, although the samples had been taken, they still spanned a continuous range of gray level values vertically.
´ In the image shown below, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 to 4 (black to white). The number of levels can vary according to the type of image you want.
Contd.
´ Combined Effect of Sampling and Quantization
´ Both sampling and quantization contribute to the digital representation of an image. Sampling determines the spatial resolution (number of pixels), while quantization determines the color or gray level resolution.
´ Example:
´ A low-resolution image with a small number of pixels and a low number of gray levels will appear blocky and have poor quality. A high-resolution image with a large number of pixels and a high number of gray levels will appear more detailed and have better quality.
Contd.
Impact on Image Quality
´ Sampling: Higher sampling rates (more pixels) lead to better image quality but larger file sizes.
´ Quantization: More quantization levels (more gray levels) lead to better image quality but larger file sizes.
Sampling vs Quantization

| Sampling | Quantization |
| --- | --- |
| Digitization of co-ordinate values. | Digitization of amplitude values. |
| x-axis (time) – discretized. | x-axis (time) – continuous. |
| y-axis (amplitude) – continuous. | y-axis (amplitude) – discretized. |
| Sampling is done prior to the quantization process. | Quantization is done after the sampling process. |
| It determines the spatial resolution of the digitized images. | It determines the number of grey levels in the digitized images. |
| It reduces c.c. to a series of tent poles over time. | It reduces c.c. to a continuous series of stair steps. |
| A single amplitude value is selected from the different values of the time interval to represent it. | Values representing the time intervals are rounded off to create a defined set of possible amplitude values. |