DIP

Preview
§ Digital image processing methods are driven by two major application areas:
i. Improvement of pictorial information for human interpretation, and processing of image data for storage and transmission
§ Noise filtering
§ Content enhancement
§ Contrast enhancement
§ Deblurring
§ Remote sensing
ii. Representation of image data for autonomous machine perception
§ Objectives
i. To define the scope of the field that we call image
processing
ii. To give a historical perspective of the origins of this field
iii. To give an idea of the state of the art in image processing
by examining some of the principal areas in which it is
applied
iv. To discuss briefly the principal approaches used in digital
image processing
v. To give an overview of the components contained in a
typical, general-purpose image processing system
vi. To provide direction to the books and other literature
where image processing work normally is reported
What is Digital Image Processing?
§ An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
§ When x, y, and the amplitude values of f are all finite, discrete
quantities, we call the image a digital image.
§ The field of digital image processing refers to processing digital
images by means of a digital computer.
§ A digital image is composed of a finite number of elements, each of which has a particular location and value.
§ These elements are referred to as picture elements, image elements, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
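To make the definition concrete, here is a minimal sketch of a digital image as a finite 2D array of gray levels (the 3 × 3 values below are invented purely for illustration):

```python
# A digital image as a finite 2D function f(x, y):
# each entry is the gray level at spatial coordinates (x, y).
image = [
    [0,   128, 255],
    [64,  128, 192],
    [255, 0,   64],
]

def f(x, y):
    """Intensity (gray level) of the image at coordinates (x, y)."""
    return image[x][y]

print(f(0, 2))  # gray level at row 0, column 2 -> 255
```

Because x, y, and the amplitude are all finite and discrete here, this array satisfies the definition of a digital image above.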
The Origins of Digital Image Processing
§ Early 1920s: One of the first applications of digital imaging was in the newspaper industry.
§ In the 1920s, submarine cables were used to transmit digitized newspaper pictures between London and New York using the Bartlane system.
§ Specialized printing equipment was used to code the images for transmission, which were then reproduced at the receiving end using telegraphic printers.
§ In 1921, an improved photographic printing technique increased the resolution and tonal quality of the images.
§ The early Bartlane system was capable of coding five distinct brightness levels.
§ This increased to 15 levels by 1929.
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Restoration

§ Example data set: a global inventory of human settlement
§ It is not hard to imagine the kind of analysis that might be done using this data
Industrial Inspection
§ Human operators are expensive, slow and unreliable
§ Make machines do the job instead
§ Industrial vision systems are used in all kinds of industries
§ Can we trust them?
Fundamental Steps in Digital Image Processing
(Block diagram, after Gonzalez & Woods: the stages below all operate on the problem domain)
§ Image Acquisition
§ Image Enhancement
§ Image Restoration
§ Colour Image Processing
§ Wavelets and Multiresolution Processing
§ Compression
§ Morphological Processing
§ Segmentation
§ Representation & Description
§ Object Recognition
Components of DIP

Contd.
´ The first is a physical sensor that responds to the energy radiated by
the object we wish to image. The second, called a digitizer, is a
device for converting the output of the physical sensing device into
digital form. For instance, in a digital video camera, the sensors (CCD
chips) produce an electrical output proportional to light intensity. The
digitizer converts these outputs to digital data.
´ Specialized image processing hardware usually consists of the
digitizer just mentioned, plus hardware that performs other primitive
operations, such as an arithmetic logic unit (ALU), that performs
arithmetic and logical operations in parallel on entire images. One
example of how an ALU is used is in averaging images as quickly as
they are digitized, for the purpose of noise reduction. This type of
hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs
functions that require fast data throughputs (e.g., digitizing and
averaging video images at 30 frames/s) that the typical main
computer cannot handle. One or more GPUs also are common in
image processing systems that perform intensive matrix operations.
Contd.
´ The computer in an image processing system is a general-purpose
computer and can range from a PC to a supercomputer. In dedicated
applications, sometimes custom computers are used to achieve a
required level of performance, but our interest here is on general-purpose
image processing systems. In these systems, almost any well-equipped
PC-type machine is suitable for off-line image processing tasks.
´ Software for image processing consists of specialized modules that
perform specific tasks. A well-designed package also includes the
capability for the user to write code that, as a minimum, utilizes the
specialized modules. More sophisticated software packages allow the
integration of those modules and general-purpose software commands
from at least one computer language.
´ Mass storage is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with image databases that contain thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing; (2) on-line storage for relatively fast recall; and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (10^3 bytes), Mbytes (10^6 bytes), Gbytes (10^9 bytes), and Tbytes (10^12 bytes).
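The one-megabyte figure above is easy to verify with a few lines of arithmetic (a quick check; note that 1024 × 1024 bytes is exactly 2^20 bytes, a "binary" megabyte, and about 1.05 Mbytes under the 10^6-bytes definition used in the text):

```python
width, height, bits_per_pixel = 1024, 1024, 8

total_bits = width * height * bits_per_pixel
total_bytes = total_bits // 8            # 8 bits per byte

print(total_bytes)                       # 1048576 bytes
print(total_bytes / 2**20)               # 1.0: exactly one 2**20-byte megabyte
print(total_bytes / 10**6)               # about 1.05 with the decimal definition
```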
Contd.
´ Image displays in use today are mainly color, flat screen monitors.
Monitors are driven by the outputs of image and graphics display cards
that are an integral part of the computer system. Seldom are there
requirements for image display applications that cannot be met by
display cards and GPUs available commercially as part of the computer
system. In some cases, it is necessary to have stereo displays, and these
are implemented in the form of headgear containing two small displays
embedded in goggles worn by the user.
´ Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, ink-jet units, and digital units, such as optical and CD-ROM disks. Film provides the highest possible resolution,
but paper is the obvious medium of choice for written material. For
presentations, images are displayed on film transparencies or in a digital
medium if image projection equipment is used. The latter approach is
gaining acceptance as the standard for image presentations.
´ Networking and cloud communication are almost default functions in any
computer system in use today. Because of the large amount of data
inherent in image processing applications, the key consideration in image
transmission is bandwidth. In dedicated networks, this typically is not a
problem, but communications with remote sites via the internet are not
always as efficient. Fortunately, transmission bandwidth is improving
quickly as a result of optical fiber and other broadband technologies.
Image data compression continues to play a major role in the
transmission of large amounts of image data.
OVERLAPPING FIELDS WITH IMAGE PROCESSING
Pros and Cons of DIP
Advantages of Digital Image Processing:
´ Improved image quality: Digital image processing algorithms can
improve the visual quality of images, making them clearer, sharper,
and more informative.
´ Automated image-based tasks: Digital image processing can
automate many image-based tasks, such as object recognition,
pattern detection, and measurement.
´ Increased efficiency: Digital image processing algorithms can process
images much faster than humans, making it possible to analyze large
amounts of data in a short amount of time.
´ Increased accuracy: Digital image processing algorithms can provide
more accurate results than humans, especially for tasks that require
precise measurements or quantitative analysis.
Pros and Cons of DIP
Disadvantages of Digital Image Processing:
´ High computational cost: Some digital image processing algorithms are
computationally intensive and require significant computational
resources.
´ Limited interpretability: Some digital image processing algorithms may
produce results that are difficult for humans to interpret, especially for
complex or sophisticated algorithms.
´ Dependence on quality of input: The quality of the output of digital image
processing algorithms is highly dependent on the quality of the input
images. Poor quality input images can result in poor quality output.
´ Limitations of algorithms: Digital image processing algorithms have
limitations, such as the difficulty of recognizing objects in cluttered or
poorly lit scenes, or the inability to recognize objects with significant
deformations or occlusions.
´ Dependence on good training data: The performance of many digital
image processing algorithms is dependent on the quality of the training
data used to develop the algorithms. Poor quality training data can result
in poor performance of the algorithm.
What is a Pixel?
´ A pixel, short for “picture element,” is the smallest unit of a digital
image or display that can be controlled or manipulated. Pixels are the
smallest fragments of a digital photo. Pixels are tiny square or
rectangular elements that make up the images we see on screens,
from smartphones to televisions.
´ Every pixel in an image is identified by its coordinates and carries information about color and brightness, and sometimes an opacity (alpha) level.
´ Understanding pixels is crucial in digital imaging and photography, as they determine the resolution and quality of an image. An image's resolution is defined by its pixel dimensions: 1920 × 1080, for example, is the width-by-height resolution of a Full HD screen. In this case the total number of pixels is 1920 × 1080 = 2,073,600, more than two million dots that together form the image on the screen.
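The Full HD arithmetic above can be confirmed directly (a trivial check):

```python
width, height = 1920, 1080      # Full HD resolution (width x height)
total_pixels = width * height
print(f"{total_pixels:,}")      # 2,073,600 pixels -> "more than two million dots"
```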
Defining key terminologies
´ Pixel (Picture Element): A pixel is the smallest addressable element of a digital image, representing a single point in the whole picture. Each pixel carries information about color, brightness, and position; put together, pixels form the complete image we see. Pixels are the building blocks of digital screens, arranged in a grid to render text, pictures, and video.
´ Resolution: Resolution is the number of pixels in a digital image, usually expressed as width × height. More pixels capture more detail. Common measures of resolution are pixels per inch (PPI) for printed pictures and pixels per centimeter (PPCM). For example, a 1920 × 1080 display has 1920 pixels horizontally and 1080 pixels vertically.
´ Pixel Density: Pixel density is the number of pixels per unit of physical screen area, often expressed as pixels per inch (PPI). It determines how sharp a picture looks: the higher the density, the sharper the image. Mobile phones with good picture quality have very high pixel densities, making images appear colorful and clear.
Defining key terminologies
´ Color Depth: Color depth, also called bit depth, is the number of bits used to represent the color of each pixel. Common values are 8-bit, 16-bit, and 24-bit. The more bits a pixel has, the more colors it can show, giving a wider and deeper range of colors.
´ Raster and Vector Graphics: In raster graphics, images are built from a grid of pixels, so pixels are fundamental. In contrast, vector graphics use mathematical equations to define shapes, which lets them scale up without losing picture quality. Because vector graphics are not tied to a pixel grid, they are well suited for tasks such as logo design and illustration.
´ Aspect Ratio: Aspect ratio is the proportional relationship between an image's width and height. Common aspect ratios include 4:3, 16:9, and 1:1. Different devices and mediums impose their own aspect ratios, affecting how pictures are displayed or captured.
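The color-depth bullet above follows from a single formula: a pixel with b bits can represent 2^b distinct values (a quick check with the depths mentioned in the text):

```python
# Number of distinct values representable at a given bit depth is 2 ** bits.
for bits in (1, 8, 16, 24):
    print(f"{bits}-bit color -> {2 ** bits:,} distinct values")
# 8-bit grayscale gives 256 levels; 24-bit RGB gives 16,777,216 colors.
```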
Spatial Filtering and its Types

´ The spatial filtering technique is applied directly to the pixels of an image. The filter mask is usually chosen to be odd in size so that it has a well-defined center pixel. The mask is moved over the image so that its center visits every image pixel.
Classification
´ There are two types:
1. Smoothing Spatial Filter
2. Sharpening Spatial Filter
Contd.
General Classification:
Smoothing Spatial Filter
´ A smoothing filter is used for blurring and noise reduction in an image. Blurring is a pre-processing step for the removal of small details, and noise reduction is accomplished by blurring.
´ Types of Smoothing Spatial Filter:
1. Mean Filter (Linear Filter)
2. Order Statistics (Non-linear) Filter
´ These are explained below.
´ Mean Filter: This linear spatial filter simply averages the pixels contained in the neighborhood of the filter mask. The idea is to replace the value of every pixel in an image by the average of the gray levels in the neighborhood defined by the filter mask. Below are the types of mean filter:
´ Averaging filter: Used to reduce detail in an image. All coefficients are equal. It reduces noise but also blurs edges.
´ Weighted averaging filter: Here, pixels are multiplied by different coefficients; the center pixel is multiplied by a higher value than in the averaging filter. This provides more control over the smoothing process.
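The averaging and weighted-averaging masks described above can be sketched as follows (an illustrative implementation, not from the text: border pixels are simply left unchanged, and the 3 × 3 test image is invented):

```python
import numpy as np

def spatial_filter(img, mask):
    """Slide an odd-sized mask over the image; each interior pixel becomes
    the weighted sum of its neighborhood. Border pixels are left unchanged
    (a simplification in this sketch; real systems pad the border)."""
    k = mask.shape[0] // 2
    out = img.astype(float).copy()
    for r in range(k, img.shape[0] - k):
        for c in range(k, img.shape[1] - k):
            out[r, c] = np.sum(img[r - k:r + k + 1, c - k:c + k + 1] * mask)
    return out

# Averaging (box) filter: all coefficients equal, summing to 1.
box = np.full((3, 3), 1 / 9)

# Weighted averaging filter: center pixel weighted more heavily.
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0

img = np.array([[10, 10, 10],
                [10, 100, 10],
                [10, 10, 10]], dtype=float)

print(spatial_filter(img, box)[1, 1])       # about 20.0: (8*10 + 100) / 9
print(spatial_filter(img, weighted)[1, 1])  # 32.5: (4*10 + 4*2*10 + 4*100) / 16
```

Note how the weighted mask pulls the smoothed center value toward the (noisy) center pixel less aggressively than a plain average would if the weights were reversed; the choice of coefficients is what gives the extra control mentioned above.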
Contd.
´ Order Statistics Filter: This filter is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. It replaces the value of the center pixel with the value determined by the ranking result. Edges are better preserved by this kind of filtering.
´ Below are the types of order statistics filter:
´ Minimum filter: The 0th percentile filter. The value of the center pixel is replaced by the smallest value in the window.
´ Maximum filter: The 100th percentile filter. The value of the center pixel is replaced by the largest value in the window.
´ Median filter: For each pixel in the image, the neighboring pixel values are sorted, and the original value of the pixel is replaced by the median of the sorted list. Effective in removing salt-and-pepper noise while preserving edges.
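The three order-statistics filters can share one sketch, differing only in which ranked value they pick (an illustrative implementation; borders are left unchanged and the noisy 3 × 3 image is invented):

```python
import numpy as np

def order_statistic_filter(img, rank):
    """3x3 order-statistics filter applied to interior pixels.
    `rank` is any function picking a value from the neighborhood,
    e.g. np.min (0th percentile), np.max (100th), or np.median."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = rank(img[r - 1:r + 2, c - 1:c + 2])
    return out

# Salt-and-pepper-style spike at the center:
img = np.array([[10, 12, 11],
                [13, 255, 12],   # 255 = "salt" noise
                [11, 10, 13]])

print(order_statistic_filter(img, np.median)[1, 1])  # 12: spike removed
print(order_statistic_filter(img, np.min)[1, 1])     # 10
print(order_statistic_filter(img, np.max)[1, 1])     # 255
```

The median result illustrates the claim above: the 255 spike is replaced by a typical neighborhood value, while a mean filter would have smeared it across the window.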
´ 4-Adjacency
´ 8-Adjacency
´ M-Adjacency

´ 4-Adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p), where V is the set of gray-level values used to define adjacency.
´ 8-Adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
´ M-Adjacency (Mixed Adjacency, also called M-Connectivity): Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
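The adjacency definitions can be turned into a small sketch (an assumption-laden illustration: pixels are (row, col) tuples, the neighbor sets N4, ND, and N8 are built as Python sets, and `m_adjacent` is a hypothetical helper implementing conditions (i) and (ii)):

```python
import numpy as np

def n4(p):
    """4-neighbors of pixel p = (row, col): up, down, left, right."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    """Diagonal neighbors of p."""
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def n8(p):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

def m_adjacent(img, p, q, V):
    """True if pixels p and q (with values from V) are m-adjacent."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):                         # condition (i)
        return True
    if q in nd(p):                         # condition (ii)
        shared = n4(p) & n4(q)             # shared 4-neighbors must avoid V
        return all(not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
                        and img[r, c] in V)
                   for r, c in shared)
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
V = {1}
print(m_adjacent(img, (1, 1), (2, 2), V))  # True: diagonal, no shared 4-path in V
print(m_adjacent(img, (0, 2), (1, 1), V))  # False: a 4-path exists via (0, 1)
```

M-adjacency exists precisely to break the ambiguity shown in the second call: since (0,2) and (1,1) are already connected through the 4-neighbor (0,1), the diagonal link is rejected.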
Example (Contrast Stretching):
´ Consider an image with pixel values ranging from 50 to 200. To stretch the contrast to the full range of 0 to 255, we apply the following:
L = 256
rmin = 50
rmax = 200
s = (L - 1) / (rmax - rmin) × (r - rmin) = (256 - 1) / (200 - 50) × (r - 50)
This transformation maps pixel value 50 to 0 and 200 to 255, effectively increasing the contrast of the image.
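The transformation above can be checked numerically (a direct transcription of the formula, using the values from the example):

```python
L, r_min, r_max = 256, 50, 200

def stretch(r):
    """Map intensity r in [r_min, r_max] onto the full range [0, L-1]."""
    return (L - 1) * (r - r_min) / (r_max - r_min)

print(stretch(50))    # 0.0   -> darkest input maps to black
print(stretch(200))   # 255.0 -> brightest input maps to white
print(stretch(125))   # 127.5 -> the midpoint maps to mid-gray
```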
Contd.
Applications
´ Image enhancement: Improving image quality for
human perception.
´ Histogram equalization: Distributing pixel intensities
uniformly.
´ Image compression: Reducing image file size.
Image Sampling and Quantization
´ In digital image processing, signals captured from the physical world need to be translated into digital form by a "digitization" process. In order to become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude. This digitization involves two main operations:
´ Sampling: Digitizing the coordinate values is called sampling.
´ Quantization: Digitizing the amplitude values is called quantization.
Continuous vs Digital Image
Contd.
´ Image Sampling
´ Image sampling is the process of converting a
continuous analog image into a digital format by
dividing it into discrete pixels. Each pixel represents a
small area of the image.
´ Example:
´ Consider a continuous image of a person's face. To
digitize this image, we divide it into a grid of pixels.
Each pixel represents a small portion of the face, and
its value represents the average color or intensity
within that area.
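The face example can be sketched in miniature: a finely resolved grid stands in for the continuous image, and each output pixel is the average intensity over a small block (an illustrative sketch; `sample_image` and the 4 × 4 values are invented for this example):

```python
def sample_image(fine, block):
    """Spatially sample a finely resolved image by averaging block x block
    regions: each output pixel is the mean intensity over its area."""
    rows, cols = len(fine), len(fine[0])
    out = []
    for r in range(0, rows, block):
        out_row = []
        for c in range(0, cols, block):
            vals = [fine[i][j] for i in range(r, r + block)
                               for j in range(c, c + block)]
            out_row.append(sum(vals) / len(vals))
        out.append(out_row)
    return out

fine = [[10, 20, 30, 40],
        [10, 20, 30, 40],
        [50, 60, 70, 80],
        [50, 60, 70, 80]]

print(sample_image(fine, 2))  # [[15.0, 35.0], [55.0, 75.0]]
```

Each 2 × 2 block of the fine grid collapses into one pixel, exactly as the text describes for the face image: the pixel value is the average intensity within its area.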
´ Image Quantization
´ Image quantization is the process of reducing the
number of color levels or gray levels in an image. This is
done by mapping a range of continuous values to a
discrete set of values.
´ Example:
´ A grayscale image typically has 256 gray levels (8 bits
per pixel). Quantizing this image to 16 gray levels (4
bits per pixel) means reducing the number of possible
values for each pixel from 256 to 16. This results in a loss
of image quality but significantly reduces the image
file size.
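The 256-to-16-level reduction described above can be sketched as uniform quantization (a minimal illustration; `quantize` is a hypothetical helper that snaps each value to the lower edge of its quantization bin):

```python
def quantize(value, levels, max_value=255):
    """Uniformly quantize an intensity in [0, max_value] down to `levels`
    levels, keeping the result in the original range for display."""
    step = (max_value + 1) // levels   # 256 // 16 = 16 values per bin
    return (value // step) * step

# 256 gray levels (8 bits) reduced to 16 levels (4 bits):
for v in (0, 7, 100, 255):
    print(v, "->", quantize(v, 16))    # 0->0, 7->0, 100->96, 255->240
```

After this mapping only 16 distinct output values remain, which is exactly the loss of fidelity (and the 2x storage saving per pixel at 4 bits) described in the example.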