
IP Theory- Unit 1&2

The document provides an overview of image processing concepts, including definitions of digital images, pixels, and the digitization process. It discusses image enhancement techniques such as contrast adjustment, thresholding, and spatial filtering, along with their applications and importance. Additionally, it covers pixel relationships, connectivity, and various histogram techniques for analyzing and improving image quality.


IP Theory Unit 1:

• What is IP?
Consider a person seeing a traffic signal on the road. The eyes capture the scene and the
brain interprets what the signal signifies. Digital image processing mirrors this: the same
image is fed as input to a system, where a camera plays the role of the eye and the
processing system plays the role of the brain. The system interprets and understands the
content so that further actions can happen, and the algorithm developed plays a major role
in understanding the content with high accuracy.

• What is Digital Image?


An image is made up of a finite number of rows and columns of pixels. A pixel is one of the
small dots or squares that make up an image on a computer screen. Each pixel holds a value
from a fixed range of brightness levels.

• What is an Image?
An image is a binary representation of visual information.
Images are mathematically represented as a two-dimensional function f(x, y),
where
x and y are called the spatial coordinates, and
f(x, y) is the intensity or gray level of the image at that point.

• What is a pixel?
A digital image is an image composed of picture elements, also known as PIXELS. Each pixel
has a finite, discrete numerical value representing its intensity or grey level.

• What is a pixel bit depth?


Pixel depth is the number of bits used to represent each pixel in the image.
If k is the number of bits per pixel (the bit depth), the number of levels is L = 2^k.
The dynamic range, or gray-scale range, of the image is [0, L-1], i.e. [0, 2^k - 1].
The bit depth of an image/camera defines the number of distinct shades that are available
for each pixel.
Many imaging applications don't require a bit depth of more than 8 bits.
However, when color accuracy is needed, a high bit depth is key to detecting the slightest
deviation from specific color tones; a higher bit depth yields a more faithful image.
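The relations L = 2^k and dynamic range [0, 2^k - 1] above can be sketched in a few lines of Python (the function names here are illustrative, not from any library):

```python
def gray_levels(k):
    """Number of distinct gray levels for a k-bit pixel: L = 2**k."""
    return 2 ** k

def dynamic_range(k):
    """Gray-scale range [0, L-1] for a k-bit pixel."""
    return (0, 2 ** k - 1)

# 8-bit images have 256 levels spanning [0, 255]:
print(gray_levels(8))      # 256
print(dynamic_range(8))    # (0, 255)
print(dynamic_range(1))    # (0, 1) -- a binary image
```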

• What is an analog image?


Analog images are the type of images that we, as humans, see.
What we see in an analog image is various levels of brightness (or shades of gray) and
colors.
It is generally continuous, not broken into many small individual pieces.
An analog image is intended for human viewing.
f(x, y) takes a continuous range of values to represent position and intensity.
Example: the image produced on a CRT monitor.
Analog images require a large amount of storage.
• What is a digital image?
In real life, everything is analog.
For storage, processing and transmission, we convert it to digital form.
f(x, y) takes finite, discrete values.
The advantages of a digital image are that it can be processed and stored cost-effectively
and allows versatile image manipulation.

• What is the digitization process?


Each pixel is assigned an integer value after quantization.
Each number represents a different shade of grey.
The collection of these pixels forms the image.
For example, with 8 bits per pixel there are 256 quantization levels.
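A minimal sketch of the quantization step, assuming a uniform quantizer over a continuous intensity in [0, v_max] (the function name and parameters are illustrative):

```python
def quantize(value, levels=256, v_max=1.0):
    """Map a continuous intensity in [0, v_max] to one of `levels` integers."""
    q = int(value / v_max * levels)
    return min(q, levels - 1)   # clamp so v_max itself maps to the top level

print(quantize(0.0))   # 0
print(quantize(0.5))   # 128
print(quantize(1.0))   # 255
```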

• Resolution of an Image

• Spatial Resolution
Spatial resolution of an image is its physical size expressed as the number of pixels; the
smallest discernible detail in an image depends on the number of pixels.
An image can be either down-sampled to reduce resolution or up-sampled to increase
resolution in the spatial domain.

• Down Sampling
In the down-sampling technique, the number of pixels in the given image is reduced
depending on the sampling frequency.
As a result, the resolution and size of the image decrease.

• Up Sampling
The number of pixels in the down-sampled image can be increased by using up-sampling
interpolation techniques. The up-sampling technique increases the resolution as well as the
size of the image.
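The two operations above can be sketched with plain Python lists (naive decimation for down-sampling, nearest-neighbour pixel repetition for up-sampling; helper names are illustrative):

```python
def downsample(img, factor):
    """Keep every `factor`-th pixel in both directions (naive decimation)."""
    return [row[::factor] for row in img[::factor]]

def upsample(img, factor):
    """Nearest-neighbour interpolation: repeat each pixel `factor` times."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120],
       [130, 140, 150, 160]]

small = downsample(img, 2)   # [[10, 30], [90, 110]] -- resolution halved
big = upsample(small, 2)     # back to 4x4, but the discarded detail is gone
```

Note that up-sampling a down-sampled image restores the size but not the lost detail, which is why the two operations are not inverses.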
• Intensity Level
Refers to the number of gray levels / intensity levels. It is given in terms of the number
of bits used to store each intensity value, for example 8-bit or 16-bit.
Also called gray-level resolution.
It determines the value of each pixel: binary images (1-bit), monochrome images (8-bit grey
scale), colour images (24-bit colour scale).

• Colour Image Representation


Each pixel is a combination of red, green and blue components.
Therefore each pixel of a colour image has three values (R, G, B).
Ex: at location (6, 75), the pixel value is (100, 20, 234).
White: (255,255,255); Black: (0,0,0); Red: (255,0,0); Green: (0,255,0); Blue: (0,0,255)
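As a minimal sketch, a tiny RGB image can be modelled as nested lists of (R, G, B) tuples, matching the colour values listed above:

```python
# A tiny 2x2 RGB image: each pixel is an (R, G, B) triple in [0, 255].
img = [
    [(255, 255, 255), (255, 0, 0)],   # white, red
    [(0, 255, 0), (0, 0, 255)],       # green, blue
]

r, g, b = img[0][1]   # pixel at row 0, column 1
print((r, g, b))      # (255, 0, 0) -> a pure red pixel
```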

• Relation between pixels


Neighborhood, Adjacency, Connectivity, Distance measure

• 4 Neighbours of a Pixel
Any pixel p(x, y) has two vertical and two horizontal neighbours, given by
Two horizontal → (x, y+1), (x, y-1)
Two vertical → (x+1, y), (x-1, y)

• Diagonal Neighbours of a Pixel


The four diagonal neighbors of p(x,y) are given by,
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1 ,y-1)
This set is denoted by ND(P).

• 8 Neighbours of a Pixel
The points ND(P) and N4(P) are together known as 8-neighbors of the
point P, denoted by N8(P).
Some of the points in N4(P), ND(P) and N8(P) may fall outside the image when P lies on its
border.
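The three neighbourhoods above, and the border case, can be sketched directly (using the same (x, y) convention as the formulas; function names are illustrative):

```python
def n4(x, y):
    """4-neighbours: two horizontal and two vertical."""
    return [(x, y + 1), (x, y - 1), (x + 1, y), (x - 1, y)]

def nd(x, y):
    """Diagonal neighbours ND(P)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbours N8(P): N4(P) together with ND(P)."""
    return n4(x, y) + nd(x, y)

def in_image(coords, rows, cols):
    """Drop neighbours that fall outside a rows x cols image."""
    return [(x, y) for x, y in coords if 0 <= x < rows and 0 <= y < cols]

# A corner pixel of a 4x4 image keeps only 3 of its 8 neighbours:
print(in_image(n8(0, 0), 4, 4))   # [(0, 1), (1, 0), (1, 1)]
```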

• Connectivity / Adjacency: two pixels are connected if they are neighbours and have the
same value / grey level (more generally, values drawn from a common set V).
For 4-adjacency we consider the horizontal/vertical neighbours.
For 8-adjacency we also consider the diagonal neighbours.
a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent to
each other.
b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent to
each other.
c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent to
each other.

• Note: The role of m-adjacency (mixed adjacency) is to define a single, unambiguous path
between pixels; it is used in many image analysis and processing algorithms. Plain
8-adjacency can yield multiple paths from p to q; m-adjacency resolves this ambiguity by
always selecting the 4-adjacent path over the diagonal path.
IP Theory Unit 2:
• What is image enhancement?
o Highlighting interesting detail in images
o Emphasize, sharpen or smoothen image features
o Removing noise from images
o Making images more visually appealing
o Enhance otherwise hidden information

• Contrast
Contrast refers to the differences in gray-level amplitude within an image:
the difference in brightness between light and dark areas.
Contrast determines the number of distinguishable shades in the image.

• Why is contrast enhancement required?


In some images, the features of interest occupy only a relatively narrow range of the gray
scale. Stretching that range over a wider span of gray levels makes those features easier
to distinguish.

• Point Processing
In point processing we operate on a single pixel at a time. Some examples of point processing are:
• Digital negative
• Contrast Stretching
• Thresholding
• Grey level slicing
• Bit level slicing
• Log Transformation
• Power law transformation

• Digital Negative - e.g. displaying X-ray images


The negative transform s = (L - 1) - r is suited to enhancing white or grey detail embedded
in dark regions, i.e. where black areas predominate.
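A minimal sketch of the negative transform s = (L - 1) - r applied pixel by pixel (plain Python lists; the sample values are illustrative):

```python
def negative(img, L=256):
    """Digital negative: s = (L - 1) - r for every pixel r."""
    return [[(L - 1) - r for r in row] for row in img]

xray = [[0, 10, 245],
        [255, 128, 5]]
print(negative(xray))   # [[255, 245, 10], [0, 127, 250]]
```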
• Thresholding
Thresholding can be used to isolate the region in which a person is interested (e.g. to
make the background white).
Thresholding converts the image to maximum contrast, i.e. a purely black-and-white image.
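A minimal sketch of thresholding, mapping every pixel to pure black or pure white around a threshold t (helper name and sample values are illustrative):

```python
def threshold(img, t, L=256):
    """Map every pixel to black (0) or white (L-1) around threshold t."""
    return [[(L - 1) if r > t else 0 for r in row] for row in img]

img = [[12, 200, 90],
       [180, 40, 250]]
print(threshold(img, 128))   # [[0, 255, 0], [255, 0, 255]]
```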

• Grey Level Slicing


Similar to thresholding, except that it selects a band of grey levels. This technique is
used to highlight a specific range of gray levels in a given image.
Two approaches:
• Display high value for range of interest and discard background
• Display high value for range of interest, and preserve background
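The two approaches can be sketched side by side (plain Python; function names and the band [lo, hi] are illustrative):

```python
def slice_discard(img, lo, hi, L=256):
    """Approach 1: highlight [lo, hi], suppress everything else to black."""
    return [[(L - 1) if lo <= r <= hi else 0 for r in row] for row in img]

def slice_preserve(img, lo, hi, L=256):
    """Approach 2: highlight [lo, hi], keep the background unchanged."""
    return [[(L - 1) if lo <= r <= hi else r for r in row] for row in img]

img = [[30, 100, 150], [90, 200, 120]]
print(slice_discard(img, 90, 150))    # [[0, 255, 255], [255, 0, 255]]
print(slice_preserve(img, 90, 150))   # [[30, 255, 255], [255, 200, 255]]
```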

• Log Transformation
In some images, certain pixels have values so large that other, low-valued pixels are
obscured. For example, in the daytime we cannot see stars: the intensity of the sun is very
large while that of the stars is low, and the eye cannot accommodate such a dynamic range.
Thus, we compress the dynamic range of the image using the log operator.

• Why do we use Log Transformation?


Consider two adjacent pixels with intensities 0 and 15 in the input image: they would be
almost indistinguishable, since human eyes cannot perceive such a subtle change in
grayscale intensity. Applying the log transform gives a better perspective: intensity
values of 0 and 15 in the input are mapped to roughly 0 and 127 in the output.
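The standard form of the transform is s = c log(1 + r), with c chosen so the maximum input maps to the maximum output. A minimal sketch reproducing the 0 → 0 and 15 → ~127 mapping above:

```python
import math

def log_transform(r, L=256):
    """s = c * log(1 + r), with c = (L-1)/log(L) so that r = L-1 maps to s = L-1."""
    c = (L - 1) / math.log(L)
    return c * math.log(1 + r)

print(round(log_transform(0)))     # 0
print(log_transform(15))           # ~127.5 -> about 127 after truncation
print(round(log_transform(255)))   # 255
```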

• Why do we need power law transformation?


The non-linearities introduced by capture, printing and display devices can be corrected by
gamma (power-law) correction.
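A minimal sketch of the power-law transform s = c r^γ, normalising intensities to [0, 1] before raising to gamma (function name and sample values are illustrative):

```python
def gamma_correct(r, gamma, L=256):
    """Power-law transform: s = (L-1) * (r / (L-1)) ** gamma."""
    return (L - 1) * (r / (L - 1)) ** gamma

# gamma < 1 brightens mid-tones; gamma > 1 darkens them:
print(round(gamma_correct(64, 0.5)))   # 128
print(round(gamma_correct(64, 2.0)))   # 16
```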
• Contrast Stretching and its importance
Often we obtain a low-contrast image due to poor illumination or a wrong setting of the
lens aperture. The idea of contrast stretching is to make the dark portion darker and the
bright portion brighter.
Slope < 1 → the dark range is made darker; slope > 1 → the bright range is made brighter.
Different transfer-function shapes can be chosen depending on the application.

• Bit Plane Slicing


Bit-plane slicing shows the contribution made by each bit to the final image.
Isolate each bit of the pixel intensity.
Higher-order bits usually contain most of the significant visual information.
Lower-order bits contain subtle details.
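Extracting a single bit plane is a shift-and-mask on every pixel. A minimal sketch (helper name and sample values are illustrative):

```python
def bit_plane(img, k):
    """Extract bit plane k (0 = least significant) as a binary image."""
    return [[(r >> k) & 1 for r in row] for row in img]

img = [[129, 7], [255, 64]]
print(bit_plane(img, 7))   # [[1, 0], [1, 0]] -- most significant bits
print(bit_plane(img, 0))   # [[1, 1], [1, 0]] -- least significant bits
```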

• Histogram
A histogram is a graphical representation of the intensity distribution of an image.
In simple terms, it gives the number of pixels at each intensity value (grey level) of the
image. Histograms are widely useful in image processing.
X axis → intensity level; Y axis → number of pixels with that intensity level. Based on
intensity characteristics, histograms are classified as histograms of light images, dark
images, low-contrast images and high-contrast images. Notice that a high-contrast image has
the most evenly spread histogram.
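A minimal sketch of computing a histogram with plain Python (no library dependencies; sample image is illustrative):

```python
def histogram(img, L=256):
    """Count the number of pixels at each gray level 0..L-1."""
    h = [0] * L
    for row in img:
        for r in row:
            h[r] += 1
    return h

img = [[0, 0, 255], [128, 255, 255]]
h = histogram(img)
print(h[0], h[128], h[255])   # 2 1 3
```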

• Histogram Usage
Histogram is used for various image processing applications such as :
o Manipulating contrast or brightness
o Improve quality by normalizing the histogram values to a flat profile

• Histogram Processing Techniques
Histogram equalization, histogram matching (specification), local enhancement

• Histogram Equalization
Histogram equalization is also known as histogram flattening. It is a preprocessing
technique to enhance contrast in 'natural' images. It redistributes intensities so as to
generate an approximately equal number of pixels for every gray value, spreading out the
frequencies of the image; hence the name equalization.
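The standard recipe maps each gray level r to s = round((L-1) · CDF(r)), where CDF is the cumulative distribution of the image histogram. A minimal sketch (plain Python, illustrative sample image):

```python
def equalize(img, L=256):
    """Histogram equalization: map each gray level r to round((L-1) * CDF(r))."""
    # Build the histogram and count the pixels.
    h = [0] * L
    n = 0
    for row in img:
        for r in row:
            h[r] += 1
            n += 1
    # Cumulative distribution function of the gray levels.
    cdf, total = [], 0
    for count in h:
        total += count
        cdf.append(total / n)
    # Apply the transfer function to every pixel.
    return [[round((L - 1) * cdf[r]) for r in row] for row in img]

# A dark, low-contrast image gets spread toward the full range:
dark = [[10, 10, 12], [12, 14, 14]]
print(equalize(dark))   # [[85, 85, 170], [170, 255, 255]]
```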
• Spatial Filtering
Say you want to sharpen or blur an image. One way of doing this is spatial filtering.
By spatial filtering we can :
• blur the images (this is equivalent to low pass filtering operation) or
• sharpen the images (this is equivalent to high pass filtering operation)

Spatial filtering is simply moving a filter mask from point to point in an image. The filter
mask is also called a template, window, or kernel. Filtering creates a new pixel value, at
the coordinates of the center of the neighborhood, that is the result of the filtering
operation. A filtered image is obtained as the center of the filter visits each pixel in the
input image. Sharpness and detail begin to disappear as the mask size increases.

The output intensity value at (x,y) depends on the input intensity value at (x,y) and the
neighboring pixel intensity values around (x,y). Spatial masks (also called window, filter,
kernel, template) are convolved over the entire range for local enhancement (spatial
filtering). The size of the masks (3x3, 5x5, 7x7..) determines the number of neighboring
pixels which influence the output value at (x,y). The values (coefficients) of the mask
determine the nature and properties of enhancing technique.

• Note: For filtering or morphological operations, we align the mask with the image so that
the mask center w(0,0) coincides with the image pixel f(x,y) being processed.

• Spatial Domain Low Pass Filter/ Averaging Filter


Replace the value of every pixel in the image with the average of the intensity levels in
the neighborhood defined by a mask. This results in an image with reduced "sharp"
transitions in intensity. Averaging is analogous to integration. Useful for reducing noise
in images and for highlighting gross detail.
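A minimal sketch of spatial filtering with a 3x3 averaging (box) mask, computing outputs only where the mask fits entirely inside the image (no border padding; function name is illustrative):

```python
def filter2d(img, mask):
    """Slide a 3x3 mask over the image; output only where the mask fits."""
    m, n = len(img), len(img[0])
    out = []
    for x in range(1, m - 1):
        row = []
        for y in range(1, n - 1):
            s = 0.0
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    s += mask[i + 1][j + 1] * img[x + i][y + j]
            row.append(round(s))
        out.append(row)
    return out

# 3x3 averaging (box) mask: every coefficient is 1/9.
box = [[1 / 9] * 3 for _ in range(3)]

img = [[10, 10, 10, 10],
       [10, 100, 10, 10],
       [10, 10, 10, 10]]
print(filter2d(img, box))   # [[20, 20]] -- the spike at 100 is smoothed away
```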

• Advantages of Spatial Domain LPF


Noise reduction, at the expense of blurring.
Small, irrelevant details are smoothed out.
Results in an image with reduced sharp transitions.

• Disadvantages of Spatial Domain LPF


Attenuates impulse noise but does not remove it completely.
Edges, which are desirable features of an image, are also blurred.

• Weighted Smoothing Filter


Elements of mask have different weights.
Provides more effective smoothing
Pixels closer to the central pixel are more important
Often referred to as weighted averaging

• Weighted Average Filter over Average Filtering


In a weighted average filter, more weight is given to the center value, so the contribution
of the center exceeds that of the surrounding values. Weighted average filtering lets us
control the degree of blurring.
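The classic weighted smoothing mask weights the center most and normalises by the sum of the weights. A minimal sketch on a single 3x3 neighbourhood (integer division keeps the result in range; names and sample values are illustrative):

```python
def weighted_average(window, mask):
    """Apply a 3x3 weighted mask to one 3x3 neighbourhood, normalised by the weight sum."""
    total = sum(mask[i][j] * window[i][j] for i in range(3) for j in range(3))
    weight = sum(sum(row) for row in mask)
    return total // weight

# Classic weighted smoothing mask: the centre pixel counts most.
mask = [[1, 2, 1],
        [2, 4, 2],
        [1, 2, 1]]    # weights sum to 16

window = [[10, 10, 10],
          [10, 100, 10],
          [10, 10, 10]]
print(weighted_average(window, mask))   # 32 -- vs 20 for the plain 3x3 average
```

The spike at 100 is damped less than under plain averaging (32 vs 20) because the center pixel carries 4/16 of the total weight, which is exactly how the weighted filter controls the amount of blurring.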
