
EENG 860 Special Topics: Digital Image Processing

Lecture 1: Introduction

Dr. Ahmadreza Baghaie


Department of Electrical and Computer Engineering
New York Institute of Technology

Spring 2020

Readings: Chapter 1, Chapter 2 (sections 2.1 to 2.4)


1 / 38
How to read: laid-back with pen and paper for occasional notes!
Table of Contents


What is Digital Image Processing (DIP)?

The Origins of DIP

DIP Applications

Fundamental Steps in DIP

Components of a DIP System

Elements of Visual Perception

Light and the Electromagnetic Spectrum

Image Sensing and Acquisition

Image Sampling and Quantization

2 / 38
What is Digital Image Processing (DIP)?


What is an image?
– A two-dimensional function, f(x, y), with x and y as spatial
coordinates, and the amplitude of f at any pair of coordinates (x, y) as
the intensity or gray level of the image at that point.

When x, y, and the intensity values of f are all finite, discrete quantities,
the image is a digital image.

Digital Image Processing (DIP) is the processing of digital images by digital
computers.

Each digital image is composed of a finite number of elements called
picture elements, image elements, pels, or pixels.

How is DIP different from image analysis and computer vision?
– DIP is when both the input and output of an algorithm are images? Maybe!
3 / 38
What is Digital Image Processing (DIP)?


Better categorization:
– Low-level processes: both input and output are images
    Noise reduction
    Contrast enhancement
    Image sharpening
– Mid-level processes: inputs are images, outputs are image attributes
    Segmentation
    Classification of objects
– High-level processes: “making sense” of the ensemble of recognized
objects, performing cognitive functions.


DIP consists of processes whose inputs and outputs are images, and of
processes that extract attributes from images.
4 / 38
The Origins of DIP


The earliest applications of digital images were in the newspaper industry,
when pictures were sent by submarine cable across the Atlantic Ocean.


Specialized printing equipment coded the pictures for transmission; they were
then reconstructed at the receiving end.


Early systems coded images in five distinct levels of gray. By the end of the
1920s, the capability had increased to 15 levels.

5 / 38
The Origins of DIP


The basis for digital computers dates back to the 1940s, with the introduction
of the concepts of memories that could hold programs and data, and of
conditional branching.
– Invention of the transistor at Bell Labs, 1948;
– Common Business-Oriented Language (COBOL) and Formula
Translator (FORTRAN) programming languages, 1950s and 1960s;
– Invention of the Integrated Circuit (IC) by Texas Instruments, 1958;
– Operating systems, 1960s;
– Microprocessors by Intel, 1970s;
– Personal computers by IBM, 1981;
– Large-scale integration (LSI) and very-large-scale integration (VLSI) in
the 1970s and 1980s;
– Ultra-large-scale integration (ULSI), present.

6 / 38
The Origins of DIP


Early examples of DIP date back to the Jet Propulsion Laboratory in 1964,
with the processing of images of the Moon captured by Ranger 7.


The invention of Computerized Axial Tomography (CAT), or Computerized
Tomography (CT), in the 1970s is one of the most important events in the
application of DIP to medical diagnosis.

7 / 38
DIP Applications


The areas of application of DIP are numerous.

To categorize them, let's group images according to their sources:
– Electromagnetic;
– Acoustic;
– Ultrasonic;
– Electronic (e.g., electron beams used in electron microscopy);
– Synthetic (computer-generated).

8 / 38
DIP Applications


Electromagnetic (EM) waves can be visualized as propagating sinusoidal waves
of varying wavelength, or as a stream of massless particles, each traveling in
a wavelike pattern and moving at the speed of light.

Each massless particle contains a certain amount (bundle) of energy, and each
bundle of energy is called a photon.

Energy of a photon:

E = h f = h c / λ

h : Planck constant (6.63 × 10⁻³⁴ J·s)
f : frequency
c : speed of light (3 × 10⁸ m/s)
λ : wavelength of the photon
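
As a quick numerical check of the formula (a sketch; the 550 nm wavelength is
an illustrative choice, not from the slides):

```python
# Energy of a single photon, E = h*f = h*c/lambda, with the constants above.
h = 6.63e-34    # Planck constant, J*s
c = 3e8         # speed of light, m/s
lam = 550e-9    # wavelength in meters (550 nm, green visible light)

E = h * c / lam
print(f"E = {E:.3e} J")   # ~3.62e-19 J per photon
```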

9 / 38
DIP Applications

[Slides 10-15: example application images (figures only). The figure panels on
slide 14 are labeled: Gamma, X-ray, Optical, Infrared, Radio.]

10-15 / 38
Fundamental Steps in DIP

[Slides 16-17: block diagram of the fundamental steps in DIP (figure only).]

* Note 1: this is from the international version of the book. The US version is slightly different.
* Note 2: not all sections of the chapters are discussed.

16-17 / 38
Components of a DIP System

[Slides 18-19: block diagram (figure). Components shown, from the problem
domain upward:]
– Image sensors (capturing data from the problem domain)
– Specialized image processing hardware
– Computer
– Image processing software
– Mass storage
– Image displays
– Hardcopy devices
– Network / cloud

18-19 / 38
Elements of Visual Perception
Human Eye

Nearly a sphere, with a diameter of approximately 20 mm.

Key components:
– Lens & muscles
– Retina (with receptors)
– Cones, in the fovea:
    6-7 million cone cells;
    very sensitive to color;
    photopic or bright-light vision.
– Rods, in the rest of the retina:
    75-150 million rod cells;
    no color sensitivity;
    sensitive to low levels of illumination;
    scotopic or dim-light vision.

20 / 38
Elements of Visual Perception
Human Eye


In regular cameras, the lens has a fixed focal length, and focusing is
achieved by varying the distance between the lens and the imaging plane.

In the human eye, however, the distance between the center of the lens and
the retina is fixed (approximately 17 mm), and focusing is achieved by varying
the shape, and hence the focal length (14-17 mm), of the lens!
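
A quick similar-triangles sketch of the resulting retinal image size (the
object height and viewing distance are illustrative assumptions, not from the
slides):

```python
# With the eye focused on a distant object, the lens-to-retina distance is
# about 17 mm, so by similar triangles: H / D = x / 0.017.
H = 15.0                # object height in meters (illustrative)
D = 100.0               # viewing distance in meters (illustrative)
lens_to_retina = 17e-3  # meters, from the slide

x = H / D * lens_to_retina
print(f"retinal image height = {x * 1e3:.2f} mm")   # 2.55 mm
```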

21 / 38
Elements of Visual Perception
Human Eye

The light intensity
– Measured by the energy of the
incoming light.
– Human eyes can adapt to an
enormous range of intensities.

The subjective brightness
– Perceived by human eyes.
– A logarithmic function of the
light intensity.

Adaptation is based on brightness
levels.
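
A minimal sketch of that logarithmic relationship: equal multiplicative steps
in intensity map to roughly equal additive steps in subjective brightness (the
base-10 logarithm is an illustrative choice):

```python
import math

# Intensities spanning three orders of magnitude land at evenly spaced
# brightness values under a logarithmic response.
for intensity in [1, 10, 100, 1000]:
    brightness = math.log10(intensity)
    print(f"intensity {intensity:5d} -> perceived brightness ~{brightness:.1f}")
```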

22 / 38
Elements of Visual Perception
Human Eye


Perceived brightness depends on two
phenomena:
– The visual system tends to undershoot
or overshoot around the boundary
between regions of different
intensities (Mach bands).

– A region's perceived brightness does
not depend only on its own intensity
(simultaneous contrast).

23 / 38
Light and the Electromagnetic Spectrum


In 1666, Isaac Newton discovered that when a beam of sunlight passes
through a glass prism, it is decomposed into a continuous spectrum of
colors ranging from violet to red.

24 / 38
Light and the Electromagnetic Spectrum


The colors perceived in an object are determined by the nature of the light
reflected by it.

Monochromatic (achromatic) light is light that is void of color, represented
only by its intensity (gray level), ranging from black to white.

Chromatic light spans the electromagnetic energy spectrum from 0.43 to
0.79 micrometers (μm).

Radiance: total amount of energy that flows from the light source,
measured in watts (W).

Luminance: amount of energy an observer perceives from a light source,
measured in lumens (lm).

Brightness: a subjective descriptor of light perception, impossible to
measure, representing the achromatic notion of intensity.

25 / 38
Image Sensing and Acquisition


Images are generated by the combination
of an “illumination” source and the reflection
or absorption of energy from the source by
the elements of the “scene”.

Illumination can come from a source of
electromagnetic energy, from less
traditional sources such as ultrasound or
acoustics, or it can even be
computer-generated.

Scene elements can be familiar objects, or
molecules, rock formations, the human
brain, etc.

26 / 38
Image Sensing and Acquisition

[Slides 27-28: figures only.]

27-28 / 38
Image Formation Model


An image is denoted by a function f(x, y), where the value of f at spatial
coordinates (x, y) is a scalar quantity proportional to the energy radiated by
a physical source.

The values of f are non-negative and finite: 0 ≤ f(x, y) < ∞.

Function f(x, y) is characterized by two components:
– Illumination: the amount of source illumination incident on the scene
being viewed, represented by i(x, y).
– Reflectance: the amount of illumination reflected by the objects in the
scene, r(x,y).

f(x, y) = i(x, y) · r(x, y)
0 ≤ i(x, y) < ∞
0 ≤ r(x, y) ≤ 1

In some cases, for example X-ray imaging, we have transmissivity instead
of reflectance.
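
A minimal NumPy sketch of this model, assuming synthetic illumination and
reflectance fields (the sizes and field shapes are illustrative):

```python
import numpy as np

M, N = 256, 256
y, x = np.mgrid[0:M, 0:N]

# Illumination i(x, y): a smooth spotlight-like field, 0 <= i < inf.
i = 500.0 * np.exp(-((x - N / 2) ** 2 + (y - M / 2) ** 2) / (2 * 80.0 ** 2))

# Reflectance r(x, y): fraction of incident light reflected, 0 <= r <= 1.
r = np.random.default_rng(0).random((M, N))

f = i * r   # f(x, y) = i(x, y) * r(x, y)
print(f.min() >= 0, f.max() <= i.max())   # True True
```

29 / 38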
Image Sampling and Quantization


To create a digital image, we need to convert the continuous sensed data
into a digital format.

Two processes are required (see the sketch below):
– Sampling: digitization of the spatial coordinates
– Quantization: digitization of the amplitude (intensity) values
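
A minimal sketch of both steps on a synthetic stand-in for the continuous
image (the grid size, sampling step, and bit depth are illustrative):

```python
import numpy as np

# A finely gridded stand-in for the continuous image f(s, t), values in [0, 1].
row = (np.sin(np.linspace(0, 4 * np.pi, 1024)) + 1) / 2
scene = np.tile(row, (1024, 1))

sampled = scene[::8, ::8]    # sampling: keep every 8th point in each direction
k = 3                        # quantization: k bits -> L = 2**k levels
L = 2 ** k
quantized = np.round(sampled * (L - 1)) / (L - 1)

print(sampled.shape, np.unique(quantized).size)  # (128, 128), at most 8 levels
```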

30 / 38
Image Sampling and Quantization


Assuming f(s, t) is a continuous image
function, sampling and quantization
produce the digital image f(x, y),
containing M rows and N columns.

The spatial coordinate values are denoted
by integers: x = 0, 1, 2, …, M-1 and y = 0,
1, 2, …, N-1.

For image f(x, y), we have L intensity
levels, where L is a power of 2:

L = 2^k

For example, in an 8-bit image (k = 8),
we have 256 intensity levels.
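
Worked numbers for this relationship (the 1024 × 1024 size is an illustrative
assumption; M × N × k bits is the standard storage count):

```python
M, N, k = 1024, 1024, 8   # rows, columns, bits per pixel (illustrative)
L = 2 ** k                # number of intensity levels
bits = M * N * k          # bits needed to store the image

print(L, bits // 8)       # 256 levels, 1048576 bytes (1 MiB)
```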

31 / 38
Image Sampling and Quantization

[Slide 32: figure only.]

32 / 38
Spatial and Intensity Resolution


Spatial resolution: the size of the
smallest perceptible detail in an
image.

May be measured by the number
of pixels per unit distance.

Spatial resolution is dependent on
the sampling rate.
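
A quick sketch of "pixels per unit distance," using dots per inch (the numbers
are illustrative):

```python
dpi = 300             # pixels per inch (illustrative)
width_inches = 2.0    # physical width of the printed image (illustrative)

pixels_across = int(dpi * width_inches)
print(pixels_across)  # 600 pixels span the 2-inch width at 300 dpi
```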

33 / 38
Spatial and Intensity Resolution


Intensity resolution: the smallest discernible change in the intensity level.

Measured by the number of bits used for quantization.

34 / 38
Spatial and Intensity Resolution


Both are digitization-dependent:
– Spatial resolution depends on the number of samples (N)
– Intensity resolution depends on the number of bits (k)


Different artifacts:
– Too low a spatial resolution results in jagged lines
– Too low an intensity resolution results in false contouring


Sensitivity:
– Spatial resolution is more sensitive to shape variations
– Intensity resolution is more sensitive to lighting variations

35 / 38
Spatial and Intensity Resolution


[Figure: three example images.]

Left: least geometric detail but the most lighting information.

Middle: more detail but less lighting information.

Right: most geometric detail but the least lighting information.
What would be good digitization schemes for these images?

36 / 38
Spatial and Intensity Resolution


The iso-preference curves
– Change the N and k values and compare
the quality of the resulting images.
– Each curve connects images judged by
observers to have the same quality.


We can see:
– Images with more shape detail (e.g., a
crowd) need fewer intensity levels to
achieve the same quality
– Images with less shape detail (e.g., a
face) are more sensitive to the
intensity resolution but less sensitive
to spatial resolution

37 / 38
Questions?
[email protected]

38 / 38
