
Human Visual System & Digital Image
Digital Image Fundamentals
• Human visual system
• A simple image model
• Sampling and quantization
• Color models and Color imaging

Why Study the Human Visual System

In many image processing applications, the objective is to help a human observer perceive the visual information in an image. Therefore, it is important to understand the human visual system.

The human visual system consists mainly of:
• the eye (image sensor or camera)
• the optic nerve (transmission path)
• the brain (image information processing unit or computer)

It is one of the most sophisticated image processing and analysis systems.
Why Study the Human Visual System (cont.)

• There is a large difference between the image we display and the image we actually perceive, i.e. between the luminance and the perceived brightness of a pixel.
• Brightness also depends on other factors, such as the contrast around the pixel and various other cognitive processes.
• The human visual system can perform a number of image processing tasks in a manner vastly superior to any computer vision system.
• If we want to mimic such processing, the way our eyes and brain work needs to be carefully studied and understood.
Human Visual System
• Brightness adaptation
• Brightness discrimination
• Weber ratio
• Mach band pattern
• Simultaneous contrast

Structure of the Human Eye

The lens and the ciliary muscle focus the light reflected from objects onto the retina to form an image of the objects.
Elements of Visual Perception

The amount of light entering the eye is controlled by the pupil, which dilates and contracts accordingly. The cornea and lens, whose shape is adjusted by the ciliary body, focus the light on the retina, where receptors convert it into nerve signals that pass to the brain.
Human Eye Structure

• The eye is nearly a sphere, with an average diameter of approximately 20 mm.
• Three membranes:

   The outer cover
      + Cornea (tough, transparent tissue)
      + Sclera (opaque membrane)

   The choroid (below the sclera)
      + Ciliary body
      + Iris diaphragm (pupil), which contracts or expands to control the amount of light

   The retina (the innermost membrane of the eye)

Lens & Retina

• Lens: Infrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can cause damage to the eye.

• Retina: Innermost membrane of the eye, which lines the inside of the wall's entire posterior portion. When the eye is properly focused, light from an object outside the eye is imaged on the retina.
Receptors

• Pattern vision is afforded by the distribution of discrete light receptors over the surface of the retina.
• Receptors are divided into 2 classes:
   Cones
   Rods
Cones
• 6-7 million, located primarily in the central portion of the retina (the fovea); the muscles controlling the eye rotate the eyeball until the image falls on the fovea.
• Highly sensitive to color.
• Each is connected to its own nerve end; thus humans can resolve fine details.
• Cone vision is called photopic or bright-light vision.
Rods

• 75-150 million, distributed over the retinal surface.
• Several rods are connected to a single nerve end, which reduces the amount of detail discernible.
• Serve to give a general, overall picture of the field of view.
• Sensitive to low levels of illumination.
• Rod vision is called scotopic or dim-light vision.


Components of the Human Eye
When the eye is properly focused, light from an object outside the eye is imaged on the retina.

1. The lens contains 60-70% water and about 6% fat.
2. The iris diaphragm controls the amount of light that enters the eye.
3. Light receptors in the retina:
   - About 6-7 million cones for bright-light vision, called photopic.
   - The density of cones is about 150,000 elements/mm².
   - In each eye the cones are located in the fovea, with one cone per nerve end.
   - Cones are involved in color vision.
   - Cones are concentrated in the fovea, an area of about 1.5 x 1.5 mm².
   - About 75-150 million rods for dim-light vision, called scotopic.
   - Rods are sensitive to low levels of light and are not involved in color vision.
4. The blind spot is the region where the optic nerve emerges from the eye.
Distribution of Visual Receptors in the Human Eye

• Blind spot: receptors are absent in this area.
• Fovea: a circular indentation in the retina of about 1.5 mm in diameter.
• The density of cones in the fovea is approximately 150,000 elements per square mm.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Human Visual System
• Image formation in the eye
• The focal length varies from approximately 17 mm to about 14 mm, as the refractive power of the lens increases from its minimum to its maximum.
   Objects farther away than 3 m: lowest refractive power, focal length 17 mm.
   Nearby objects: maximum refractive power, focal length 14 mm.
• It is easy to calculate the size of the retinal image of any object. For example, for an object 15 m high viewed from 100 m, the retinal image height h satisfies h/17 = 15/100, so
   h = 17 (mm) x (15/100) = 2.55 mm
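As a sanity check, a minimal Python sketch of this similar-triangles calculation (the 17 mm focal length for distant objects is taken from the slide):

```python
def retinal_image_height(object_height_m, distance_m, focal_length_mm=17.0):
    """Retinal image height by similar triangles: h / focal_length = H / d."""
    return focal_length_mm * (object_height_m / distance_m)

# A 15 m object viewed from 100 m -> 2.55 mm on the retina.
print(retinal_image_height(15, 100))  # 2.55
```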

• The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible.
• The radius of curvature of the anterior surface of the lens is greater than that of its posterior surface.
• Its shape is controlled by tension in the fibers of the ciliary body.
• To focus on distant objects, the controlling muscles cause the lens to be relatively flattened.
• These muscles allow the lens to become thicker in order to focus on objects near the eye.
Image Formation in the Human Eye

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Human Visual System

[Figure: human simultaneous luminance vision range, about 5 orders of magnitude on a log (cd/m²) scale.]
Brightness Adaptation & Discrimination

• The human visual system can perceive (the ability to see, hear, or become aware of something through the senses) approximately 10^10 different light intensity levels.
• However, at any one time we can only discriminate between a much smaller number: brightness adaptation.
• The ability of the eye to discriminate between changes in light intensity at any specific brightness adaptation level is governed by Weber's law, which states that the just noticeable difference ΔI is proportional to the background luminance I.
• Similarly, the perceived intensity of a region is related to the light intensities of the regions surrounding it.
Human Visual System
• Brightness adaptation

   The HVS can adapt to a light intensity range on the order of 10^10, from the scotopic threshold to the glare limit.

   Subjective brightness (brightness as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
Brightness Adaptation
• The simultaneous range is smaller than the total adaptation range.

The lambert (symbol L) is a unit of luminance named for Johann Heinrich Lambert (1728-1777), a mathematician, physicist and astronomer. A related unit of luminance, the foot-lambert, is used in the lighting, cinema and flight simulation industries. The SI unit is the candela per square metre (cd/m²).

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Human Visual System
• Brightness adaptation

   The HVS cannot operate over such a range (about 6 orders of magnitude for photopic vision alone) simultaneously.

   It accomplishes this through (brightness) adaptation.

   The total intensity range the HVS can discriminate simultaneously is rather small in comparison (about 4 orders of magnitude).
Brightness Adaptation

For a given observation condition, the sensitivity of the HVS at the current adaptation level is called the brightness adaptation level. Anything below Bb will be perceived as indistinguishable blacks.

• The range of brightness that the eye can adapt to is enormous, roughly around 10^10 to 1.
• Photopic vision alone has a range of around 10^6 to 1.
• Brightness adaptation example: level Ba.
• mL: millilambert.
Brightness Adaptation and Discrimination

• Subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
• Photopic vision: range of about 10^6.
• Transition from scotopic to photopic vision: range from 0.001 to 0.1 millilambert (-3 to -1 mL on the log scale).
• The visual system cannot operate over such a total range simultaneously.

Brightness Adaptation

• The brightness adaptation level (the current sensitivity level of the visual system) at level Ba covers the range Bb to Ba.
Brightness Adaptation and Discrimination

• Brightness discrimination is the ability of the eye to discriminate between changes in light intensity at any specific adaptation level.
• The quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background illumination I, is called the Weber ratio. A small Weber ratio means good brightness discrimination.
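A tiny Python sketch of the Weber ratio (the threshold values below are hypothetical, chosen only to illustrate the roughly constant ratio that Weber's law predicts):

```python
def weber_ratio(delta_ic, background_i):
    """Weber ratio: increment threshold over background illumination."""
    return delta_ic / background_i

# Hypothetical thresholds: at I = 100 units, a difference of 2 units is
# detected 50% of the time; at I = 1000, a difference of 20 is needed.
print(weber_ratio(2, 100))    # 0.02 -> good brightness discrimination
print(weber_ratio(20, 1000))  # 0.02 -> Weber's law: ratio roughly constant
```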

Brightness Adaptation and Discrimination

• Brightness discrimination is poor at low levels of illumination. The two branches in the curve indicate that at low levels of illumination vision is carried out by the rods, whereas at high levels it is carried out by the cones.
Perceived Brightness

• Two phenomena clearly demonstrate that perceived brightness is not a simple function of intensity:
   Mach bands
   Simultaneous contrast
Brightness Adaptation of the Human Eye: Mach Band Effect

[Figure: perceived intensity vs. position across a Mach band pattern.]
Perceived Brightness
• Perceived brightness is not a simple function of intensity: the Mach band pattern.

   The intensities of surrounding points affect the perceived brightness at each point.

   In this image, edges between bars appear brighter on the right side and darker on the left side.
Mach Band Effect (cont.)

[Figure: intensity profile vs. position across an edge, with areas A and B on either side.]

In area A the perceived brightness is darker, while in area B it is brighter. This phenomenon is called the Mach Band Effect.
Brightness Adaptation of the Human Eye: Simultaneous Contrast

Simultaneous contrast: all the small squares have exactly the same intensity, but they appear progressively darker as the background becomes lighter.
Perceived Brightness
• Perceived brightness is not a simple function of intensity: simultaneous contrast.

   In this phenomenon, called simultaneous contrast, a spot may appear to the eye to become darker as the background gets lighter.
Optical Illusions
Examples of Human Perception Phenomena
Light and the Electromagnetic Spectrum

   Light is just the particular part of the electromagnetic spectrum that can be sensed by the human eye.

   The electromagnetic spectrum is split up according to the wavelengths of the different forms of energy.
Light and the Electromagnetic Spectrum

   The colors that humans perceive in an object are determined by the nature of the light reflected from the object.

   Wavelength (λ) = velocity (c) / frequency (ν)
   Energy of the electromagnetic spectrum: E = hν

   Achromatic or monochromatic light is void of color and is described by its intensity (gray level).

   Chromatic light spans the electromagnetic energy spectrum from 0.43 to 0.79 μm, and is described by:
   • Radiance: the total amount of energy that flows from the light source, measured in watts (W).
   • Luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.
   • Brightness: a subjective descriptor of light perception that is practically impossible to measure.
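As a small worked example of these relations (a sketch; the constants are the standard SI values, and the 0.55 μm wavelength is simply a point near the middle of the visible band):

```python
C = 2.998e8    # speed of light, m/s
H = 6.626e-34  # Planck's constant, J*s

def frequency(wavelength_m):
    """nu = c / lambda."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """E = h * nu."""
    return H * frequency(wavelength_m)

# Green light at 0.55 um, inside the chromatic band (0.43-0.79 um).
lam = 0.55e-6
print(frequency(lam))      # ~5.5e14 Hz
print(photon_energy(lam))  # ~3.6e-19 J
```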
A Simple Image Model
• Two-dimensional light-intensity function:

   f(x,y) = l(x,y) r(x,y)

   l(x,y) - illumination component
   r(x,y) - reflectance component
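To make the model concrete, a minimal NumPy sketch (the illumination gradient and reflectance values are assumptions chosen for illustration) that forms an image as the product of the two components:

```python
import numpy as np

# Illumination: a smooth gradient, brighter at the top of the scene.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
l = 100.0 * (1.0 - 0.5 * y)   # illumination component l(x,y) > 0

# Reflectance: a bright square (0.8) on a dark background (0.1).
r = np.full((64, 64), 0.1)    # 0 <= r(x,y) <= 1
r[16:48, 16:48] = 0.8

f = l * r                     # observed image f(x,y) = l(x,y) r(x,y)
print(f.min(), f.max())       # nonzero and finite
```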

A Simple Image Model
• l(x,y) - illumination range (see the log (cd/m²) scale in the adaptation figure)
• r(x,y) - typical reflectance indices:
   black velvet (0.01)
   stainless steel (0.65)
   white paint (0.80)
   silver plate (0.90)
   snow (0.93)
Image Sensing and Acquisition

   An electromagnetic energy source and a sensor that can detect the energy of that source are needed to generate an image.

   The EM source illuminates the objects to be imaged, and a sensor detects the energy reflected from the objects.

   Different objects have different degrees of reflection and absorption of the electromagnetic energy.

   These differences in reflection and absorption are the reason objects appear distinct in images.
Image Sensing and Acquisition
Image Acquisition Using a Single Sensor
Image Acquisition Using a Sensor Strip
Image Sensors: Array Sensor

Charge-Coupled Device (CCD)
• Used to convert a continuous image into a digital image
• Contains an array of light sensors
• Converts photons into electric charges accumulated in each sensor unit

Example: CCD KAF-3200E from Kodak (2184 x 1472 pixels, pixel size 6.8 microns).
Image Sensor: Inside a Charge-Coupled Device

[Figure: photosites feed vertical transport registers through gates; a horizontal transport register shifts each row through the output gate to the amplifier output.]

Image Sensor: How a CCD Works

[Figure: image pixels are shifted down the vertical registers one row at a time into the horizontal transport register, then shifted horizontally to the output.]
Image Acquisition
Nowadays most visible and near-IR electromagnetic imaging is done with 2-dimensional charge-coupled devices (CCDs).
A Simple Image Formation Model

   Mathematical representation of monochromatic images:

   A two-dimensional function f(x,y), where f is the gray level of a pixel at location (x,y).

   The values of the function f at different locations are proportional to the energy radiated from the imaged object.
A Simple Image Formation Model

   0 < f(x,y) < ∞   (nonzero and finite)

   f(x,y) = i(x,y) r(x,y)   (reflectivity)
   f(x,y) = i(x,y) t(x,y)   (transmissivity)

   i(x,y) - illumination component, 0 < i(x,y) < ∞
   r(x,y) - reflectance component
   t(x,y) - transmissivity component

   0 ≤ r(x,y), t(x,y) ≤ 1


A Simple Image Formation Model

   Typical values of i(x,y):
      Sun on a clear day: 90,000 lm/m²
      Sun on a cloudy day: 10,000 lm/m²
      Full moon: 0.1 lm/m²
      Commercial office: 1,000 lm/m²

   Typical values of r(x,y):
      Black velvet: 0.01
      Stainless steel: 0.65
      Flat-white wall paint: 0.80
      Silver-plated metal: 0.90
      Snow: 0.93

(H.R. Pourreza)
Image Sampling and Quantization

   The output of most sensors is continuous in amplitude and spatial coordinates.

   Converting an analog image to a digital image requires sampling and quantization:
   • Sampling: digitizing the coordinate values.
   • Quantization: digitizing the amplitude values.
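A minimal sketch of both steps on a synthetic continuous scene (the test function, grid size and level count are assumptions chosen for illustration):

```python
import numpy as np

def sample_and_quantize(f, n_samples, n_levels):
    """Sample a continuous image f(x, y) on an n_samples x n_samples grid,
    then quantize the amplitudes to n_levels integer gray levels."""
    xs = np.linspace(0.0, 1.0, n_samples)
    samples = f(xs[None, :], xs[:, None])          # sampling: discrete (x, y)
    norm = (samples - samples.min()) / np.ptp(samples)
    return np.round(norm * (n_levels - 1)).astype(np.uint8)  # quantization

# A smooth "analog" scene, e.g. a 2-D sinusoidal pattern.
scene = lambda x, y: np.sin(6 * np.pi * x) * np.cos(4 * np.pi * y)
digital = sample_and_quantize(scene, n_samples=64, n_levels=256)
print(digital.shape, digital.min(), digital.max())  # (64, 64) 0 255
```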
Sampling
• Digitization of the spatial coordinates: sample (x, y) at discrete values (0,0), (0,1), ...

• f(x, y) is a 2-D array:

   f(x,y) = [ f(0,0)      f(0,1)      ...  f(0,M-1)
              f(1,0)      f(1,1)      ...  f(1,M-1)
              ...
              f(N-1,0)    f(N-1,1)    ...  f(N-1,M-1) ]
Quantization
• Digitization of the light intensity function.
• Each f(i,j) is called a pixel.
• The magnitude of f(i,j) is represented digitally with a fixed number of bits: quantization.
Image Acquisition

   Sampling: digitizing the 2-dimensional spatial coordinate values.
   Quantization: digitizing the amplitude values (brightness levels).
Sampling and Quantization

   Image sampling: discretize an image in the spatial domain.
   Spatial resolution / image resolution: pixel size or number of pixels.
Sampling and Quantization
• How many samples to take?

   Number of pixels (samples) in the image.
   The Nyquist rate.

• How many gray levels to store?

   At each pixel position (sample), the number of levels of color/intensity to be represented.
How to Choose the Spatial Resolution
Spatial resolution = sampling locations.

[Figure: original image vs. sampled image.]

With under-sampling, we lose some image details!
Sampling and Quantization
• How many samples to take?

Definition of Aliasing
• Aliasing refers to the effect produced when a signal is imperfectly reconstructed from the original signal.
• Aliasing occurs when a signal is not sampled at a high enough frequency to create an accurate representation. This effect is shown in the following example of a sinusoidal function:

[Figure: a sinusoid sampled too sparsely; the samples trace out a lower-frequency pattern.]

• In this example, the dots represent the sampled data and the curve represents the original signal. Because there are too few sampled data points, the resulting pattern produced by the sampled data is a poor representation of the original. Aliasing is relevant in fields associated with signal processing, such as digital audio, digital photography, and computer graphics.
Aliasing (mixing of two different signals)
• Spatial aliasing can appear in the form of a Moiré pattern.
• In signal processing and related disciplines, aliasing refers to an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled. It also refers to the distortion or artifact that results when the signal reconstructed from samples is different from the original continuous signal.
• Aliasing can occur in signals sampled in time, for instance digital audio; this is referred to as temporal aliasing. Aliasing can also occur in spatially sampled signals, for instance digital images; aliasing in spatially sampled signals is called spatial aliasing.

[Figure: a properly sampled image of a brick wall vs. spatial aliasing in the form of a Moiré pattern.]
Sampling and Quantization
• How many samples to take?

   The Nyquist rate:
   Samples must be taken at a rate that is at least twice the frequency of the highest frequency component to be reconstructed.

   Under-sampling: sampling at a rate that is too coarse, i.e., below the Nyquist rate.

   Aliasing: artifacts that result from under-sampling.
How to Choose the Spatial Resolution: Nyquist Rate

[Figure: original image with minimum period 2 mm, sampled at 1 mm spacing; no detail is lost.]

Spatial resolution = sampling locations. The spatial resolution (sampling rate) must be less than or equal to half of the minimum period of the image; equivalently, the sampling frequency must be greater than or equal to twice the maximum frequency.
Aliased Frequency

   x1(t) = sin(2πt),  f = 1 Hz
   x2(t) = sin(12πt), f = 6 Hz

[Figure: both sinusoids plotted over 0 to 2 s and sampled at 5 samples/sec; the sample points coincide.]

   Two different frequencies, but the same sampled results!
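A short NumPy check of this example, using the values from the slide: sampled at 5 samples/sec, the 6 Hz sinusoid aliases down to 6 - 5 = 1 Hz, so both signals yield the same sample values.

```python
import numpy as np

fs = 5.0                            # sampling rate, samples/sec
t = np.arange(0.0, 2.0, 1.0 / fs)   # sample instants over 2 seconds

x1 = np.sin(2 * np.pi * 1 * t)      # 1 Hz sinusoid
x2 = np.sin(2 * np.pi * 6 * t)      # 6 Hz sinusoid, above fs/2 = 2.5 Hz

# The 6 Hz signal aliases to 6 - 5 = 1 Hz: the samples are identical.
print(np.allclose(x1, x2))          # True
```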


Sampling and Quantization
• How many gray-levels to store?

Effect of Spatial Resolution

[Figure: the same image at 256x256, 128x128, 64x64, and 32x32 pixels.]

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Aliasing and Moiré Pattern
• All signals (functions) can be shown to be made up of a linear combination of sinusoidal signals (sines and cosines) of different frequencies.
• For physical reasons, there is a highest frequency component in all real-world signals.
• Theoretically,
   if a signal is sampled at more than twice its highest frequency component, then it can be reconstructed exactly from its samples.
   But if it is sampled at less than that frequency (called undersampling), then aliasing will result.
   This causes frequencies to appear in the sampled signal that were not in the original signal.
   The Moiré pattern shown in Figure 2.24 is an example: the vertical low-frequency pattern is a new frequency not present in the original patterns.
Moiré Pattern Effect: Special Case of Sampling

   Moiré patterns occur when the frequencies of two superimposed periodic patterns are close to each other.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Aliasing and Moiré Pattern

• Note that subsampling of a digital image will cause undersampling if the subsampling rate is less than twice the maximum frequency in the digital image.
• Aliasing can be prevented if a signal is filtered to eliminate high frequencies, so that its highest frequency component is less than half of the sampling rate.
• Gating function: exists for all space (or time) and has value zero everywhere except for a finite range of space/time. Often used for theoretical analysis of signals. But a gating signal is mathematically defined and contains unbounded frequencies.
• A signal which is periodic, x(t) = x(t+T) for all t where T is the period, has a finite maximum frequency component, so it is a bandlimited signal.
• Sampling at a higher rate (usually twice or more) than necessary to prevent aliasing is called oversampling.
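A hedged sketch of the filtering idea: low-pass filter an image before subsampling so that little energy remains above the new Nyquist limit. The crude box filter here is a deliberately simple stand-in for a proper anti-aliasing filter, and the striped test image is an assumption chosen to provoke aliasing:

```python
import numpy as np

def box_blur(img, k=4):
    """Crude k x k box low-pass filter built from shifted copies."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += np.roll(np.roll(img, dy - k // 2, axis=0),
                           dx - k // 2, axis=1)
    return out / (k * k)

def subsample(img, step=4):
    """Keep every step-th pixel in each direction."""
    return img[::step, ::step]

img = (np.indices((256, 256)).sum(axis=0) % 8 < 4).astype(float)  # fine stripes
naive = subsample(img)               # prone to aliasing (Moiré-like artifacts)
filtered = subsample(box_blur(img))  # low-pass first: aliasing suppressed
print(naive.shape, filtered.shape)   # (64, 64) (64, 64)
```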
Effect of Spatial Resolution

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Can We Increase Spatial Resolution by Interpolation?

   Down-sampling is an irreversible process.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Image Quantization

   Image quantization: discretize continuous pixel values into discrete numbers.

   Color resolution / color depth / levels:
   - the number of colors or gray levels, or
   - the number of bits representing each pixel value.
   - The number of colors or gray levels Nc is given by

      Nc = 2^b

   where b = number of bits.
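A small sketch relating bit depth to the number of levels, and requantizing an 8-bit image down to b bits (the uint8 input convention is an assumption):

```python
import numpy as np

def num_levels(b):
    """Nc = 2^b gray levels for b bits per pixel."""
    return 2 ** b

def requantize(img_u8, b):
    """Map an 8-bit image onto 2^b evenly spaced levels (returns uint8)."""
    step = 256 // num_levels(b)
    return ((img_u8 // step) * step).astype(np.uint8)

ramp = np.tile(np.arange(256, dtype=np.uint8), (32, 1))  # horizontal gray ramp
print(num_levels(4))                        # 16
print(len(np.unique(requantize(ramp, 4))))  # 16 distinct levels: false contours
```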
Quantization Function

[Figure: a staircase mapping from light intensity (darkest to brightest) to quantization levels 0, 1, ..., Nc-2, Nc-1.]
Effect of Quantization Levels

[Figure: the same image at 256, 128, 64, and 32 gray levels.]
Effect of Quantization Levels (cont.)

[Figure: the same image at 16, 8, 4, and 2 gray levels. In the low-level images it is easy to see false contours.]
How to Select a Suitable Size and Pixel Depth for Images

   The word "suitable" is subjective: it depends on the subject.

[Figure: low-detail (Lena), medium-detail (cameraman), and high-detail example images.]

   To satisfy the human viewer:
   1. For images of the same size, a low-detail image may need more pixel depth.
   2. As image size increases, fewer gray levels may be needed.

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Human Vision: Spatial Frequency vs. Contrast

Human Vision: Ability to Distinguish Differences in Brightness

[Figure: regions with a 5% brightness difference.]
Sampling Theorem and Aliasing Effect

• The Shannon sampling theorem states that if a function is sampled at a rate equal to or greater than twice its highest frequency, it is possible to recover the original function completely from its samples.
Basic Relationships of Pixels

[Figure: conventional indexing, with the origin (0,0) at the top left, x increasing to the right and y increasing downward; the 3x3 neighborhood of (x,y) spans (x-1,y-1) to (x+1,y+1).]
Neighbors of a Pixel

   The neighborhood relation is used to identify adjacent pixels. It is useful for analyzing regions.

   4-neighbors of p at (x,y):
      N4(p) = { (x-1,y), (x+1,y), (x,y-1), (x,y+1) }

   The 4-neighborhood relation considers only vertical and horizontal neighbors.

   Note: q ∈ N4(p) implies p ∈ N4(q).
Neighbors of a Pixel (cont.)

   8-neighbors of p at (x,y):
      N8(p) = { (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), (x+1,y+1) }

   The 8-neighborhood relation considers all neighboring pixels.
Neighbors of a Pixel (cont.)

   Diagonal neighbors of p at (x,y):
      ND(p) = { (x-1,y-1), (x+1,y-1), (x-1,y+1), (x+1,y+1) }

   The diagonal-neighborhood relation considers only the diagonal neighbor pixels.
Some Basic Relationships Between Pixels
• Neighbors of a pixel
   There are three kinds of neighbors of a pixel (sketched in code below):
   • N4(p) 4-neighbors: the set of horizontal and vertical neighbors
   • ND(p) diagonal neighbors: the set of 4 diagonal neighbors
   • N8(p) 8-neighbors: the union of the 4-neighbors and the diagonal neighbors

     O         O   O       O O O
   O X O         X         O X O
     O         O   O       O O O
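A minimal Python sketch of the three neighbor sets (in-range coordinates are assumed; no bounds checking is done):

```python
def n4(x, y):
    """4-neighbors: horizontal and vertical."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """Diagonal neighbors."""
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def n8(x, y):
    """8-neighbors: union of 4-neighbors and diagonal neighbors."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))  # the 8 pixels surrounding (1, 1)
```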
Some Basic Relationships Between Pixels
• Path:
   The length of the path.
   Closed path.
• Connectivity in a subset S of an image:
   Two pixels are connected if there is a path between them that lies completely within S.
• Connected component of S:
   The set of all pixels in S that are connected to a given pixel in S (see the sketch after this list).
• Region of an image.
• Boundary, border or contour of a region.
• Edge: a path of one or more pixels that separates two regions of significantly different gray levels.
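As an illustration of connected components, a sketch that collects the 4-connected component containing a seed pixel in a binary mask (breadth-first search; the nested-list mask convention is an assumption for illustration):

```python
from collections import deque

def connected_component(mask, seed):
    """All pixels 4-connected to seed within the True region of a 2-D mask."""
    rows, cols = len(mask), len(mask[0])
    comp, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < rows and 0 <= ny < cols \
                    and mask[nx][ny] and (nx, ny) not in comp:
                comp.add((nx, ny))
                frontier.append((nx, ny))
    return comp

mask = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]                    # (2,2) is not 4-connected to (0,0)
print(sorted(connected_component(mask, (0, 0))))  # [(0,0), (0,1), (1,1)]
```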
Connectivity
Connectivity is adapted from the neighborhood relation. Two pixels are connected if they are in the same class (i.e. the same color or the same range of intensity) and they are neighbors of one another.

   For p and q from the same class (see the sketch below):
   • 4-connectivity: p and q are 4-connected if q ∈ N4(p)
   • 8-connectivity: p and q are 8-connected if q ∈ N8(p)
   • mixed connectivity (m-connectivity): p and q are m-connected if
        q ∈ N4(p), or
        q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixel of the same class
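A sketch of the m-connectivity test, for a sparse image stored as a dict mapping (x, y) to a class value (an assumed convention chosen for illustration):

```python
def n4(p):
    (x, y) = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    (x, y) = p
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def m_connected(img, p, q):
    """m-connected: same class and either 4-connected, or diagonally
    connected with no common same-class 4-neighbor (avoids ambiguity)."""
    if img.get(p) is None or img.get(p) != img.get(q):
        return False
    if q in n4(p):
        return True
    common = {r for r in n4(p) & n4(q) if img.get(r) == img.get(p)}
    return q in nd(p) and not common

# Two foreground pixels (class 1) that touch only diagonally:
img = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(m_connected(img, (0, 0), (1, 1)))  # True: no shared class-1 4-neighbor
```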
Adjacency
A pixel p is adjacent to a pixel q if they are connected. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

[Figure: two adjacent image subsets S1 and S2.]

   We can define the type of adjacency (4-adjacency, 8-adjacency or m-adjacency) depending on the type of connectivity.
Path
A path from pixel p at (x,y) to pixel q at (s,t) is a sequence of distinct pixels:
   (x0,y0), (x1,y1), (x2,y2), ..., (xn,yn)
such that
   (x0,y0) = (x,y) and (xn,yn) = (s,t)
and
   (xi,yi) is adjacent to (xi-1,yi-1), for i = 1, ..., n.

[Figure: a pixel path from p to q.]

   We can define the type of path (4-path, 8-path or m-path) depending on the type of adjacency.
Path (cont.)

[Figure: the same pixels traversed as an 8-path and as an m-path from p to q.]

An 8-path from p to q can be ambiguous; the m-path from p to q resolves this ambiguity.
Some Basic Relationships Between Pixels

• An example of adjacency:
Some Basic Relationships Between Pixels
• Adjacency:
   Two pixels that are neighbors and have the same gray level (or satisfy some other specified similarity criterion) are adjacent.
   Pixels can be 4-adjacent, diagonally adjacent, 8-adjacent, or m-adjacent.
• m-adjacency (mixed adjacency):
   Two pixels p and q of the same value (or specified similarity) are m-adjacent if either
   • (i) q and p are 4-adjacent, or
   • (ii) p and q are diagonally adjacent and do not have any common 4-adjacent neighbors.
   They cannot be both (i) and (ii).
Distance
For pixels p, q, and z with coordinates (x,y), (s,t) and (u,v), D is a distance function or metric if:

   • D(p,q) ≥ 0 (D(p,q) = 0 if and only if p = q)
   • D(p,q) = D(q,p)
   • D(p,z) ≤ D(p,q) + D(q,z)

Example: the Euclidean distance

   De(p,q) = sqrt((x-s)² + (y-t)²)
Distance (cont.)
The D4 distance (city-block or Manhattan distance) is defined as

   D4(p,q) = |x-s| + |y-t|

         2
      2  1  2
   2  1  0  1  2
      2  1  2
         2

Pixels with D4(p) = 1 are the 4-neighbors of p.
Distance (cont.)
The D8 distance (chessboard distance) is defined as

   D8(p,q) = max(|x-s|, |y-t|)

   2  2  2  2  2
   2  1  1  1  2
   2  1  0  1  2
   2  1  1  1  2
   2  2  2  2  2

Pixels with D8(p) = 1 are the 8-neighbors of p.
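A compact sketch of the three distance metrics:

```python
import math

def d_euclidean(p, q):
    """De(p,q) = sqrt((x-s)^2 + (y-t)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block (Manhattan) distance: |x-s| + |y-t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x-s|, |y-t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_euclidean(p, q), d4(p, q), d8(p, q))  # 2.236..., 3, 2
```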


Zooming and Shrinking Digital Images

• Zooming: increasing the number of pixels in an image so that the image appears larger (see the sketch below).
   Nearest-neighbor interpolation
   • For example, pixel replication: repeat rows and columns of an image.
   Bilinear interpolation
   • Smoother.
   Higher-order interpolation

• Image shrinking: subsampling.
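A hedged NumPy sketch of the two simplest zooming methods (an integer zoom factor is assumed; the bilinear version blends the four nearest input pixels per output pixel):

```python
import numpy as np

def zoom_nearest(img, k):
    """Nearest-neighbor zoom by pixel replication: repeat rows and columns."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def zoom_bilinear(img, k):
    """Bilinear zoom: each output pixel blends its 4 nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * k)
    xs = np.linspace(0, w - 1, w * k)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

img = np.array([[0.0, 1.0], [1.0, 0.0]])
print(zoom_nearest(img, 2))   # blocky 4x4 result
print(zoom_bilinear(img, 2))  # smoother 4x4 result
```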


Zooming and Shrinking Digital Images

[Figure: the same image zoomed with nearest-neighbor interpolation (pixel replication) and with bilinear interpolation.]
Face Recognition
Facial Expression

Figure 1: Different facial expressions of the same person [23]


Classification of Remotely Sensed Data
Fingerprint Recognition
Optical Character Recognition

[Figure: a neural network with input, hidden, and output layers classifying the characters A-E.]
MRI Image
Signature Recognition

• Each person's signature is different.
• There are structural similarities which are difficult to quantify.
• One company has manufactured a machine which recognizes signatures to within a high level of accuracy.
   This makes forgery even more difficult.
• Image formation

Results of Image Processing
• Image processing theory and practices

   Why is this possible? How?
   Theory
   Practice

   And much more.
• Edge detection and image segmentation

   How?
   Theory
   Practice
