June23_compressed

The document covers various concepts in image processing, including image acquisition, transformation, and enhancement techniques. It explains the differences between optical, analog, and digital image processing, as well as the importance of image digitization and quantization. Additionally, it discusses Bayes' theorem, noise models, and the HSI color model, providing insights into linear discriminant functions and their properties.

1. (a) Explain the term “Image Acquisition”. Compare optical image processing, analog image processing and digital image processing.

(a) Image Acquisition

📘 Definition:

Image acquisition refers to the process of capturing an image using a sensor or


device and converting it into a form suitable for processing. It is the first step in
digital image processing.

📷 Steps Involved:

1. Physical sensing of the scene (using a camera, scanner, etc.)

2. Conversion of the sensed data into digital form (sampling and quantization)

Q Comparison of Image Processing Types:

| Feature                    | Optical                        | Analog                  | Digital                        |
|----------------------------|--------------------------------|-------------------------|--------------------------------|
| Medium                     | Uses lenses, mirrors, prisms   | Uses electrical signals | Uses digital computers         |
| Processing Method          | Physical manipulation of light | Signal-based filtering  | Algorithms on digitized image  |
| Flexibility                | Low                            | Medium                  | High                           |
| Accuracy and Repeatability | Low                            | Moderate                | High                           |
| Examples                   | Holography, interferometry     | TV broadcasting         | Image editing, computer vision |

(b) Explain the term “Image transformation”. Also, give its importance in the context of digital image processing. Write key steps for image transformation from the spatial domain to the frequency domain.

(b) Image Transformation

📘 Definition:

Image transformation is the process of converting an image from one domain to


another, commonly from the spatial domain to the frequency domain.

Q Importance in Digital Image Processing:


• Simplifies tasks like enhancement, filtering, and compression.

• Reveals hidden features (e.g., periodic patterns).

• Enables efficient operations via Fourier, DCT, Wavelet transforms.

□ Key Steps (Spatial to Frequency Domain):

1. Represent image as a 2D matrix of intensity values.

2. Apply a mathematical transformation like DFT, DCT, or Wavelet Transform.

3. Operate in the frequency domain (e.g., filter or compress).


4. Inverse transform the result to reconstruct the image.

(c) Explain Discrete Cosine Transform (DCT). Give advantage of DCT over DFT. Compute DCT matrix of order 2.

(c) Discrete Cosine Transform (DCT)

📘 Definition:

DCT expresses a signal (image) as a sum of cosine functions at different frequencies:

$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right], \quad k = 0, 1, \dots, N-1$$

🆚 DCT vs DFT:

| Feature            | DCT                      | DFT                              |
|--------------------|--------------------------|----------------------------------|
| Output             | Real values              | Complex values                   |
| Symmetry Exploited | Even symmetry            | General (no symmetry constraint) |
| Compression Usage  | Widely used (e.g., JPEG) | Less efficient for compression   |

□ DCT Matrix of Order 2:

Let N = 2. The normalized (orthonormal) DCT matrix C is:

$$C = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}$$
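As a quick check, here is a minimal Python/NumPy sketch (the language and library are my choice, not part of the original answer) that builds the orthonormal DCT-II matrix for any order N and confirms the order-2 case above:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix of order N (a minimal sketch)."""
    C = np.zeros((N, N))
    for k in range(N):
        # First row is scaled by 1/sqrt(N), the rest by sqrt(2/N)
        alpha = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        for n in range(N):
            C[k, n] = alpha * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    return C

C = dct_matrix(2)
print(C)        # [[ 0.7071  0.7071], [ 0.7071 -0.7071]], i.e. (1/sqrt(2))*[[1 1],[1 -1]]
print(C @ C.T)  # identity matrix -> C is orthogonal
```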

(d) What are smoothening filters ? How do they differ from the sharpening filters ?
Briefly discuss the role of low pass filters and high pass filter in image restoration.

(d) Smoothing Filters

📘 Definition:

Smoothing filters are used to reduce noise and blur details in images. They
operate by averaging or modifying pixel values based on neighbors.

📏 Types:

• Mean filter

• Gaussian filter

• Median filter (non-linear)

🔁 Smoothing vs Sharpening Filters:

| Feature         | Smoothing                | Sharpening                |
|-----------------|--------------------------|---------------------------|
| Purpose         | Reduce noise, blur       | Enhance edges and details |
| Frequency Range | Removes high frequencies | Enhances high frequencies |
| Filter Type     | Low-pass filter          | High-pass filter          |

□ Role in Image Restoration:

• Low-pass filters remove high-frequency noise (e.g., Gaussian noise).

• High-pass filters restore details and sharpen blurred regions.


• Combined usage helps in restoring degraded images effectively.
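To make the contrast concrete, here is a minimal sketch assuming SciPy is available; the 3×3 kernels and the random test image are illustrative choices, not prescribed by the question:

```python
import numpy as np
from scipy.ndimage import convolve

img = np.random.rand(64, 64)              # stand-in grayscale image

# Smoothing: 3x3 mean filter, a low-pass kernel that averages neighbors
mean_kernel = np.ones((3, 3)) / 9.0
smoothed = convolve(img, mean_kernel, mode='reflect')

# Sharpening: Laplacian high-pass kernel; adding its response back
# to the image emphasizes edges and fine detail
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)
sharpened = img + convolve(img, laplacian, mode='reflect')
```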

(e) Explain Bayes’ theorem with suitable example. Also, discuss the role of Bayes’
theorem in Bayes’ classifier. Give the properties of Bayes’ classifier.
(e) Bayes’ Theorem & Bayes’ Classifier
📘 Bayes’ Theorem:

$$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$$

• P(A|B): Posterior probability (target class given evidence)

• P(B|A): Likelihood (evidence given class)

• P(A): Prior (initial belief)

• P(B): Evidence

🧠 Example:

Let A: person has a disease
Let B: person tests positive

Bayes' theorem gives the probability of disease given a positive test, considering test accuracy and disease rate.
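A small numeric sketch of this example; the prevalence, sensitivity, and false-positive rate below are assumed for illustration only:

```python
# Hypothetical numbers: 1% prevalence, 95% sensitivity, 10% false positives
p_disease = 0.01               # P(A): prior
p_pos_given_disease = 0.95     # P(B|A): likelihood
p_pos_given_healthy = 0.10     # P(B|not A)

# Evidence P(B) by the law of total probability
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior P(A|B) by Bayes' theorem
print(p_pos_given_disease * p_disease / p_pos)   # ~0.088
```

Even with an accurate test, the low prior keeps the posterior under 9%, which is exactly the intuition Bayes' theorem captures.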

🤖 Bayes’ Classifier:

• A statistical classifier that assigns a data point to the class with the highest
posterior probability.

• Uses:

$$\text{Class} = \arg\max_i P(\omega_i \mid x) = \arg\max_i \frac{P(x \mid \omega_i)\, P(\omega_i)}{P(x)}$$

🎯 Properties of Bayes’ Classifier:

• Minimum error rate under ideal conditions (known distributions)

• Handles uncertainty using probabilities

• Effective with Gaussian and discrete data

• Can be Naive Bayes when features are assumed independent

2. (a) What is image digitization ? Explain the role of quantization in the image
digitization, with the help of an example.
(a) What is Image Digitization?

📘 Definition:

Image digitization is the process of converting an analog image (continuous-tone)


into a digital form (discrete values), so it can be processed by digital systems.
It consists of two key steps:

1. Sampling – Converting the spatial domain (height and width) into discrete
grid locations (pixels).
2. Quantization – Converting the intensity (brightness) values at each sampled
point into a finite set of discrete levels.

🎯 Role of Quantization:

Quantization involves mapping a continuous range of intensity values into a finite


number of levels, determined by the bit depth.

📌 Example:

Suppose an analog pixel has intensity value: 183.8


If using 3-bit quantization (i.e., 8 levels: 0 to 7):

• The range 0–255 is divided into 8 intervals:

o Each interval spans ≈ 32 levels (0–31, 32–63, …, 224–255)

• 183.8 falls in level 5 (the 160–191 interval) → stored as 5
This process introduces quantization error, but makes the image storable in digital
memory.
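A minimal sketch of this uniform quantizer (the function name and the 0–255 input range are assumptions for illustration):

```python
def quantize(value, bits=3, full_range=256.0):
    """Map a continuous intensity in [0, 256) to one of 2**bits levels."""
    levels = 2 ** bits            # 8 levels for 3 bits
    step = full_range / levels    # interval width = 32
    return int(value // step)     # level index in 0..levels-1

print(quantize(183.8))   # -> 5, since 183.8 lies in the interval [160, 192)
```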
(b) Given the dimension of an image as 5 × 8 inches, and a sampling frequency of 500 dots per inch in each direction, determine the number of samples required to preserve the information in the image. Also, discuss the term ‘Pixelization error.’

(b) Sampling Requirement Calculation

🖼 Given:

• Image size = 5 inches × 8 inches

• Sampling resolution = 500 dots per inch (dpi) in both directions

📏 Samples Required:

Samples along width = 5 × 500 = 2500

Samples along height = 8 × 500 = 4000

Total samples (pixels) = 2500 × 4000 = 10,000,000 pixels
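The same arithmetic as a short check (plain Python, nothing library-specific):

```python
width_in, height_in, dpi = 5, 8, 500                     # inches and dots per inch
samples_w, samples_h = width_in * dpi, height_in * dpi   # 2500, 4000
print(f"{samples_w} x {samples_h} = {samples_w * samples_h:,} pixels")
# 2500 x 4000 = 10,000,000 pixels
```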

⚠ Pixelization Error:
• Pixelization occurs when the sampling resolution is too low, making the
image appear blocky or jagged.
• Caused by undersampling, where fine details cannot be captured.

• More evident when the image is zoomed in or enlarged.

(c) What do you understand by the term image enhancement ? Specify the objectives of image enhancement techniques. Explain the types of image enhancement techniques with a suitable example for each.
(c) Image Enhancement

📘 Definition:

Image enhancement refers to improving the visual appearance or highlighting


certain features of an image to make it more suitable for analysis or interpretation.

🎯 Objectives:

• Enhance visual clarity for human viewers


• Improve feature visibility (e.g., edges, textures)

• Make the image suitable for automated processing

🧠 Types of Enhancement Techniques:

| Type             | Description                   | Example                                   |
|------------------|-------------------------------|-------------------------------------------|
| Spatial Domain   | Operates directly on pixels   | Histogram Equalization: improves contrast |
| Frequency Domain | Operates on transformed image | High-Pass Filtering: sharpens image       |
| Point Processing | Adjusts pixels individually   | Contrast Stretching: enhances contrast    |

Q Examples:

1. Histogram Equalization:

Redistributes pixel intensities to spread them more evenly — especially useful for
dark or low-contrast images.
2. Contrast Stretching:

Linearly expands a narrow range of intensities into a wider range.
Example: 100–150 → stretched to 0–255 (see the sketch after this list).
3. Laplacian Sharpening (Frequency Domain):

Enhances edges by emphasizing high-frequency components.
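A minimal NumPy sketch of the contrast-stretching example from item 2 (the clipping behavior and the test values are my assumptions):

```python
import numpy as np

def stretch(img, lo=100, hi=150):
    """Linearly map intensities in [lo, hi] onto the full range [0, 255]."""
    out = (img.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

patch = np.array([[100, 120],
                  [135, 150]], dtype=np.uint8)   # narrow-contrast input
print(stretch(patch))   # [[  0 102], [178 255]] -- full 0-255 range now used
```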

3. (a) How do wavelets differ from waves ? Explain the properties, possessed by
a function to be called as wavelet. Compare wavelet transform with the
Fourier transform.
(a) Wavelets vs Waves

📘 Difference Between Wavelets and Waves:

| Feature                | Waves                            | Wavelets                                   |
|------------------------|----------------------------------|--------------------------------------------|
| Nature                 | Continuous, infinite in duration | Localized in time and frequency            |
| Basis Function         | Sine or cosine                   | Compact and short (e.g., Haar, Daubechies) |
| Use in Analysis        | Good for periodic signals        | Good for non-stationary, transient signals |
| Frequency Localization | Good                             | Moderate                                   |
| Time Localization      | Poor                             | Excellent                                  |

📘 Properties of a Wavelet Function:

For a function $\psi(t)$ to be called a wavelet, it must satisfy:

1. Finite Energy:

$$\int_{-\infty}^{\infty} |\psi(t)|^2 \, dt < \infty$$

2. Zero Mean (admissibility):

$$\int_{-\infty}^{\infty} \psi(t) \, dt = 0$$

3. Orthogonality (for orthogonal wavelets): different shifts/scales must be independent.
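The first two conditions are easy to verify numerically for the Haar mother wavelet (+1 on [0, 0.5), −1 on [0.5, 1)); a minimal sketch with an assumed discretization:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)   # sample [0, 1) densely
psi = np.where(t < 0.5, 1.0, -1.0)            # Haar mother wavelet
dt = t[1] - t[0]

print(np.sum(psi) * dt)       # ~0.0 -> zero mean (admissibility)
print(np.sum(psi**2) * dt)    # ~1.0 -> finite (unit) energy
```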

🔁 Wavelet Transform vs Fourier Transform:

| Feature               | Fourier Transform         | Wavelet Transform                            |
|-----------------------|---------------------------|----------------------------------------------|
| Domain                | Frequency only            | Time and frequency (multi-resolution)        |
| Basis                 | Sine and cosine           | Scaled and shifted wavelets                  |
| Time Localization     | Poor                      | Good (due to short windowing)                |
| Useful For            | Stationary signals        | Non-stationary / transient signals           |
| Output Representation | Global frequency spectrum | Hierarchical coefficients (approx. + detail) |

(b) Explain image degradation model with suitable block diagram. How does noise relate to image degradation? Explain any one noise model.

Image Degradation Model

📘 Definition:

Image degradation refers to the process by which an image is distorted during


acquisition, transmission, or storage.

📊 Mathematical Model:

$$g(x, y) = H(x, y) * f(x, y) + \eta(x, y)$$

Where:

• f(x, y): Original image

• g(x, y): Degraded image

• H(x, y): Degradation function (e.g., blur)

• η(x, y): Noise

• *: Convolution operator

Block Diagram:

f(x,y) ──► [ Degradation Function H ] ──► (+) ──► g(x,y)
                                           ▲
                                           │
                                      Noise η(x,y)

⚠ How Noise Relates to Degradation:


Noise adds random variation to pixel intensities, lowering image quality and making
restoration difficult. Noise often results from sensors, transmission errors, or
quantization.

💥 Example: Gaussian Noise Model:

• Additive noise with a normal distribution

• Probability distribution:

$$P(z) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(z - \mu)^2}{2\sigma^2}}$$

• μ: Mean, σ²: Variance

Used to simulate sensor and environmental noise.
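Putting the whole model together, a minimal sketch that degrades a stand-in image with an assumed blur for H and additive Gaussian noise for η (SciPy's uniform_filter plays the role of the blur here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

f = np.random.rand(128, 128)       # stand-in original image f(x, y)

# Degradation function H: a 5x5 box blur (an assumed, simple choice)
blurred = uniform_filter(f, size=5)

# Additive Gaussian noise eta(x, y) with assumed mean 0 and sigma 0.05
rng = np.random.default_rng(0)
eta = rng.normal(loc=0.0, scale=0.05, size=f.shape)

g = blurred + eta                  # g(x, y) = H(x, y) * f(x, y) + eta(x, y)
```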

(c) Explain HSI colour model with suitable diagram.


(c) HSI Colour Model
📘 Definition:

The HSI model separates color into components that are more natural for human
perception:
• Hue (H): Color type (angle in the color wheel: red, green, etc.)

• Saturation (S): Color purity (0 to 1)

• Intensity (I): Brightness (0 to 1 or 0 to 255)

🖼 Diagram of HSI Model:

              Hue
            (Color)
           /       \
   Saturation     Saturation
 (Gray to Pure) (Gray to Pure)
           \       /
          Intensity
      (Black ↔ White)
Or as a cone/cylinder:

• Angle → Hue

• Radius → Saturation

• Height → Intensity

🎯 Importance:

• More intuitive than RGB for human interpretation

• Common in image segmentation, enhancement, and filtering
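For reference, a minimal scalar sketch of the standard geometric RGB→HSI conversion formulas (the function name and the per-pixel, non-vectorized form are my choices):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert one RGB triple (each in [0, 1]) to (H in degrees, S, I)."""
    i = (r + g + b) / 3.0                          # intensity: average
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i  # saturation: purity
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta         # hue: angle on the wheel
    return h, s, i

print(rgb_to_hsi(1.0, 1.0, 0.0))   # pure yellow -> H ~ 60 degrees, S = 1
```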


4. (a) Explain Linear Discrimination Function. How does it relate or differ from
piecewise linear discriminant functions ? Write properties of Linear
Discriminant Analysis (LDA).
Linear Discriminant Function (LDF)

📘 Definition:

A Linear Discriminant Function (LDF) is a linear function used to classify input


data by projecting it onto a line and separating classes using a decision boundary.

$$g(\mathbf{x}) = \mathbf{w}^T \mathbf{x} + w_0$$

• x: Feature vector

• w: Weight vector

• w₀: Bias (threshold)

If g(x) > 0, one class is chosen; if g(x) < 0, the other class is chosen.
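A minimal sketch of this decision rule with assumed toy weights (the values of w and w₀ are illustrative, not learned from data):

```python
import numpy as np

w = np.array([1.5, -0.8])    # assumed weight vector
w0 = -0.2                    # assumed bias / threshold

def classify(x):
    """Assign a class by the sign of g(x) = w^T x + w0."""
    g = w @ x + w0
    return "class 1" if g > 0 else "class 2"

print(classify(np.array([1.0, 0.5])))   # g = 1.5 - 0.4 - 0.2 = 0.9 -> class 1
```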

🔁 LDF vs. Piecewise Linear Discriminant Function:

| Feature       | Linear Discriminant     | Piecewise Linear Discriminant                  |
|---------------|-------------------------|------------------------------------------------|
| Boundary Type | Single hyperplane       | Combination of multiple linear segments        |
| Suitable For  | Linearly separable data | Complex/non-linear classification problems     |
| Flexibility   | Limited                 | More flexible and handles complex class shapes |
| Example       | LDA                     | Decision trees with linear splits              |

📌 Properties of Linear Discriminant Analysis (LDA):

1. Maximizes Class Separability – Projects data to maximize distance between


means and minimize intra-class variance.
2. Reduces Dimensionality – Projects high-dimensional data to a lower
dimension.
3. Linear Decision Boundary – Assumes classes are linearly separable.
4. Assumes Equal Covariance – Works best when class covariances are
equal.

(b) Explain the following :


Clustering Methods

(i) Partition based clustering

📘 (i) Partition-Based Clustering:

• Divides data into non-overlapping subsets (clusters).

• Each data point belongs to exactly one cluster.

• Goal: Minimize intra-cluster distance and maximize inter-cluster distance.

🔹 Example: K-means, PAM (Partitioning Around Medoids)

(ii) K-means clustering

(ii) K-means Clustering:

• Iterative algorithm that partitions data into k clusters.

• Steps:

1. Select k initial centroids.


2. Assign each point to the nearest centroid.

3. Recompute centroids based on current assignments.

4. Repeat steps 2–3 until convergence.

🔹 Pros: Fast, simple


🔹 Cons: Sensitive to initial choice, assumes spherical clusters
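A minimal NumPy sketch mirroring the four steps above (random initialization and a fixed iteration count are simplifying assumptions; production code would also handle empty clusters and test for convergence):

```python
import numpy as np

def kmeans(X, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # step 1
    for _ in range(iters):                                     # step 4: repeat
        # Step 2: assign each point to the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centroids = kmeans(X, k=2)   # two well-separated blobs -> clean split
```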

(c) Describe the following quantities, used to represent any colour :


(c) Color Representation Quantities

(i) Hue

📘 (i) Hue:

• Defines the color type (e.g., red, blue, green).


• Measured as an angle (0°–360°) on the color wheel:

o Red = 0°, Green = 120°, Blue = 240°


• Example: Pure yellow has a hue around 60°

(ii) Saturation

📘 (ii) Saturation:

• Measures the purity or vividness of a color.

• Ranges from 0 (gray) to 1 (fully saturated).

• Low saturation = pale or washed-out color

• High saturation = rich and intense color

5. Write short notes on any five of the following :


(a) Image representation using vector model

(a) Image Representation Using Vector Model

An image can be represented as a vector by converting its 2D pixel matrix into a 1D


column or row vector.

For an image of size M × N, the total vector length is M × N.

This model is useful in pattern recognition, machine learning, and image


compression, where mathematical operations on vectors (like dot products) are
applied.
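A minimal NumPy sketch of the flattening and its (lossless) inverse:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)     # a toy 3x4 (M x N) image
vec = img.flatten()                   # row-major 1-D vector of length M*N = 12
restored = vec.reshape(3, 4)          # reshape recovers the original 2-D image

print(vec.shape)                      # (12,)
print(np.array_equal(img, restored))  # True
```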

(b) True colour images

(b) True Colour Images

True color images use 24-bit RGB representation, with 8 bits for each Red, Green,
and Blue channel.
This allows up to 16.7 million colors (2²⁴).

Such images are capable of representing real-world photographs and detailed


scenes with high accuracy.

Common formats: JPEG, PNG, BMP.

(c) Spatial resolution

(c) Spatial Resolution

Spatial resolution refers to the amount of detail in an image and is typically


measured in pixels per inch (PPI) or dots per inch (DPI).

Higher resolution = more pixels = finer detail.

It depends on the sampling rate during digitization.

Affects the quality and sharpness of the image.

(d) Ideal low pass filter


(d) Ideal Low Pass Filter

A frequency-domain filter that preserves low-frequency components and completely


removes high-frequency components.

Used for image smoothing or blurring.
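A minimal NumPy sketch of an ideal low-pass filter in the frequency domain (the cutoff radius of 30 and the random test image are assumed values):

```python
import numpy as np

def ideal_lowpass(img, cutoff=30):
    """Keep frequencies within `cutoff` of the DC term; zero out the rest."""
    F = np.fft.fftshift(np.fft.fft2(img))        # centered 2-D spectrum
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    F[dist > cutoff] = 0                         # the hard, "ideal" cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

smoothed = ideal_lowpass(np.random.rand(128, 128))   # blurred output
```

The hard cutoff is what makes the filter "ideal", and it is also what causes the ringing artifacts seen in practice.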


(e) Gaussian noise

(e) Gaussian Noise

A type of random noise that follows a normal (Gaussian) distribution.

Common in digital images due to thermal noise, sensor interference, or transmission


errors.

Probability distribution:

$$P(z) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(z - \mu)^2}{2\sigma^2}}$$

Often removed using Gaussian or median filters.

(f) Band Pass filter

(f) Band Pass Filter

Allows frequencies within a specific range (band) to pass through while attenuating
frequencies outside that range.

Combines the effect of low-pass and high-pass filters.

Used in feature extraction, edge detection, and isolating mid-frequency patterns in


images.
