
Q.1. Discuss about contrast stretching and intensity slicing.

Contrast Stretching:
• This is a technique used to improve the visibility of features in an image.
• Some images may look dull because most of the pixel values are in a narrow range (e.g., between 50–100 on
a scale of 0–255).
• Contrast stretching expands these values to cover a wider range, like 0 to 255.
• This makes dark areas darker and bright areas brighter, making the image clearer.
Intensity Slicing:
• This is used to highlight certain ranges of intensity in an image.
• For example, you can assign a specific color or brightness to pixels between intensity values 100 and 150.
• It’s often used in medical imaging or satellite imaging to identify specific objects or regions based on
intensity.
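
A minimal NumPy sketch of the linear contrast stretch described above (the function name and the 50–100 toy image are illustrative):

```python
import numpy as np

def stretch_contrast(img, out_min=0, out_max=255):
    """Linearly stretch pixel values so they span [out_min, out_max]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                              # flat image: nothing to stretch
        return np.full(img.shape, out_min, dtype=np.uint8)
    out = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return out.astype(np.uint8)

# A "dull" image whose values all lie between 50 and 100
dull = np.random.randint(50, 101, size=(4, 4), dtype=np.uint8)
print(stretch_contrast(dull))                 # values now spread across 0-255
```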

Q.2. How are color images represented digitally?


• Digital color images are usually represented using the RGB color model:
o R = Red
o G = Green
o B = Blue
• Each pixel in the image has three components: Red, Green, and Blue values.
• Each value is typically stored as an 8-bit number, meaning the pixel can have 256 intensity levels per color.
• So, one pixel has 3 × 8 = 24 bits, which gives over 16 million possible colors.
• Other models like HSV (Hue, Saturation, Value) or CMYK may be used in printing or specific applications.
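
For instance, an RGB image is commonly stored as a height × width × 3 array of 8-bit values; a toy NumPy example (the pixel values are arbitrary):

```python
import numpy as np

# A 2x2 RGB image: each pixel holds three 8-bit components (R, G, B).
img = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

print(img.shape)   # (2, 2, 3): height x width x 3 channels
print(img.dtype)   # uint8 -> 256 levels per channel, 3 x 8 = 24 bits per pixel
```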

Q.3. What is unsharp masking?


• Unsharp masking is a method used to sharpen images and make details more visible.
• It works in three steps:
1. Blur the image (using a filter like Gaussian blur).
2. Subtract the blurred image from the original image. This gives us the “mask” (contains
edges/details).
3. Add the mask back to the original image to make edges more visible.
• It is commonly used in photo editing and medical imaging.
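
A minimal sketch of those three steps using SciPy's Gaussian blur (the sigma and amount parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen an image by adding back the detail removed by a Gaussian blur."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)   # step 1: blur
    mask = img - blurred                          # step 2: edge/detail mask
    sharpened = img + amount * mask               # step 3: add the mask back
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```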

Q.4. What are intensity transform functions?


• These functions change the intensity (brightness) of pixels in an image.
• They are used to enhance image quality or change how it looks.
• Types of intensity transforms:
o Linear Transform: Direct scaling of values.
o Logarithmic Transform: Enhances dark regions more.
o Gamma Correction: Corrects brightness for display devices.
o Histogram Equalization: Improves contrast by spreading intensity values evenly.
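
As rough sketches, two of these transforms for 8-bit images might look like this in NumPy (the scaling constant c and the gamma value are illustrative):

```python
import numpy as np

def log_transform(img):
    """Logarithmic transform: expands dark regions, compresses bright ones."""
    img = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + img.max())           # scale output back to 0-255
    return (c * np.log(1.0 + img)).astype(np.uint8)

def gamma_correct(img, gamma=2.2):
    """Power-law (gamma) transform: s = 255 * (r / 255) ** gamma."""
    img = img.astype(np.float64) / 255.0
    return (255.0 * img ** gamma).astype(np.uint8)
```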

Q.5. What are the performance metrics for evaluating image compression?
1. Compression Ratio (CR):
o It shows how much the image size is reduced.
o CR = Original size / Compressed size.
2. Mean Squared Error (MSE):
o Measures the average squared difference between the original and compressed images.
o Lower MSE means better quality.
3. Peak Signal-to-Noise Ratio (PSNR):
o It tells how close the compressed image is to the original.
o Higher PSNR = better image quality.
4. Bitrate:
o Number of bits used per pixel in the compressed image.
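
For example, MSE and PSNR for two same-sized images can be computed as follows (a sketch; max_val assumes 8-bit data):

```python
import numpy as np

def mse(original, compressed):
    """Mean squared error between two images of the same size."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    e = mse(original, compressed)
    return float('inf') if e == 0 else 10.0 * np.log10(max_val ** 2 / e)
```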

Q.6. What is directional derivative? Where is it used?


• A directional derivative measures how fast intensity changes in a specific direction.
• In images, it is used to detect edges, because edges are areas where intensity changes quickly.
• It is part of gradient-based edge detection, like the Sobel operator.
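
A small SciPy/NumPy sketch of gradient-based edge strength using the Sobel kernels (the gradient_magnitude helper is illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels approximate the derivative in the x and y directions.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
sobel_y = sobel_x.T

def gradient_magnitude(img):
    """Edge strength combining the two directional derivatives."""
    gx = convolve(img.astype(np.float64), sobel_x)
    gy = convolve(img.astype(np.float64), sobel_y)
    return np.hypot(gx, gy)
```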

Q.7. Differentiate between lossy and lossless compression techniques.

Feature            | Lossless Compression                    | Lossy Compression
-------------------+-----------------------------------------+---------------------------------
Data Loss          | No data is lost                         | Some data is lost
Quality            | Original image can be restored exactly  | Slight loss in quality
Compression Ratio  | Lower                                   | Higher
Examples           | PNG, TIFF                               | JPEG, MP3
Use Cases          | Medical images, documents               | Photos, videos, streaming media

Q.8. State the conditions for Region Splitting and Merging Processes.
• These methods divide an image into homogeneous regions.
• Splitting: If a region is not similar enough (based on intensity or texture), it is split into smaller parts.
• Merging: If neighboring regions are similar, they are merged into one.
• Conditions:
1. Homogeneity criteria (intensity range, texture, etc.)
2. Threshold values to decide splitting or merging.
• Used in segmentation tasks, like detecting objects in images.
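
A simplified sketch of the splitting step with an intensity-range homogeneity criterion (the threshold of 10, the quadtree recursion, and the assumption of a square image are illustrative; merging of similar neighbours is omitted):

```python
import numpy as np

def is_homogeneous(region, threshold=10):
    """Homogeneity test: intensity spread within the region is small enough."""
    return region.max() - region.min() <= threshold

def split(region, threshold=10, min_size=2):
    """Recursively split a square region into quadrants until homogeneous."""
    if is_homogeneous(region, threshold) or region.shape[0] <= min_size:
        return [region]
    h, w = region.shape[0] // 2, region.shape[1] // 2
    quads = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
    return [r for q in quads for r in split(q, threshold, min_size)]
```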

Q.9. What is spatial filtering?


• A process where each pixel is modified using its neighboring pixel values.
• A kernel or filter mask is moved across the image.
• Types:
o Smoothing filters (e.g., mean, Gaussian) remove noise.
o Sharpening filters (e.g., Laplacian) highlight edges.
• Common in image cleaning and enhancement.
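
For example, both kinds of spatial filters can be applied with a simple convolution (the random test image is illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)

# Smoothing: a 3x3 mean (box) kernel averages each pixel with its neighbours.
mean_kernel = np.ones((3, 3)) / 9.0
smoothed = convolve(img, mean_kernel)

# Sharpening: subtract the Laplacian response to emphasise edges.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)
sharpened = img - convolve(img, laplacian)
```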

Q.10. How do frequency domain filters work?


• In this method, the image is converted from the spatial domain to the frequency domain using the Fourier
Transform.
• Once in the frequency domain:
o Low-pass filters remove high-frequency noise (blurring).
o High-pass filters keep high-frequency edges (sharpening).
• After filtering, the image is converted back to spatial domain.
• Useful in noise removal, edge enhancement, and image analysis.
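
A sketch of an ideal low-pass filter with NumPy's FFT (the cutoff radius is illustrative; smoother filters such as Butterworth or Gaussian are usually preferred to avoid ringing):

```python
import numpy as np

def ideal_lowpass(img, cutoff=30):
    """Blur an image by zeroing frequency components beyond a cutoff radius."""
    F = np.fft.fftshift(np.fft.fft2(img))        # spatial -> frequency domain
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    mask = np.sqrt(U ** 2 + V ** 2) <= cutoff    # keep only low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))  # back to spatial
```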

1. Define pixelization and false contouring effect in images.


Pixelization:
• Pixelization happens when an image is displayed or enlarged beyond its original resolution.
• The individual pixels become visible, and the image looks blocky or rough.
• It typically occurs when zooming in too far or when using a low-resolution image.
False Contouring:
• This is a visual effect where smooth changes in intensity appear as abrupt bands or contours.
• Common in images quantized with too few intensity levels (e.g., 4-bit grayscale or fewer).
• Caused by coarse quantization during image compression or digitization.

2. Explain shifting and convolution properties of 2-D discrete Fourier transform (DFT).
Shifting Property:
• Shifting an image in the spatial domain results in a phase change in the frequency domain.
• Formula:
f(x − x0, y − y0) → F(u, v) · e^(−j2π(u·x0/M + v·y0/N))
Convolution Property:
• Convolution in the spatial domain is equivalent to multiplication in the frequency domain:
f(x, y) * h(x, y) ↔ F(u, v) · H(u, v)
• This is useful for filtering operations like blurring or sharpening.
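
A quick NumPy check of the convolution property; note that the DFT corresponds to circular convolution, since it treats the image as periodic:

```python
import numpy as np

f = np.random.rand(8, 8)
h = np.random.rand(8, 8)

# Multiply the DFTs and invert: this yields the circular convolution of f and h.
conv_freq = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

# The same circular convolution computed directly in the spatial domain.
conv_spatial = np.zeros_like(f)
for x in range(8):
    for y in range(8):
        for m in range(8):
            for n in range(8):
                conv_spatial[x, y] += f[m, n] * h[(x - m) % 8, (y - n) % 8]

print(np.allclose(conv_freq, conv_spatial))   # True
```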

3. What is the difference between a box low pass filter and a Gaussian low pass filter?

Filter Type         | Box (Mean) Filter                      | Gaussian Filter
--------------------+----------------------------------------+-------------------------------------------
Kernel shape        | All weights are equal                  | Weights follow a Gaussian distribution
Sharpness           | Blurs image but introduces artifacts   | Smooth blurring, better edge preservation
Frequency Response  | Poor control over high frequencies     | Better frequency selectivity
Edge handling       | Less effective                         | More natural blurring

4. What is bit plane slicing in image thresholding?


• Bit plane slicing involves separating an image into binary layers based on bit significance.
• An 8-bit image has 8 planes: from bit 0 (least significant) to bit 7 (most significant).
• Example:
o Bit 7 contains the most significant information (basic structure).
o Lower bits add finer details or noise.
• Helps in compression, watermarking, and feature extraction.
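
A short NumPy sketch of extracting the planes with bit shifts (the random test image is illustrative):

```python
import numpy as np

def bit_planes(img):
    """Return the 8 binary planes of an 8-bit image; plane 0 is the LSB."""
    return [(img >> b) & 1 for b in range(8)]

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)
print(planes[7])   # most significant plane: coarse structure
print(planes[0])   # least significant plane: fine detail / noise
```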

5. Explain the model of image degradation and restoration process in brief.


Degradation Model:
g(x, y) = h(x, y) * f(x, y) + η(x, y)
• f(x, y): Original image
• h(x, y): Degradation function (like blur)
• η(x, y): Noise
• g(x, y): Observed degraded image
Restoration:
• Process to recover the original image from the degraded one.
• Methods: Inverse filtering, Wiener filtering, etc.

6. What do you understand by inverse filtering in image processing?


• Inverse filtering tries to recover the original image by undoing the effects of degradation.
• Frequency domain approach:
F(u, v) = G(u, v) / H(u, v)
• Where:
o G(u, v) = Degraded image in frequency domain
o H(u, v) = Degradation function
• Problems:
o Sensitive to noise
o Not effective if H(u, v) is near zero (division instability)
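
A minimal sketch of inverse filtering in NumPy (the eps clipping is an illustrative workaround for the near-zero H(u, v) problem noted above):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Naive inverse filter: F = G / H, with small |H| values clipped."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)              # pad the PSF to the image size
    H = np.where(np.abs(H) < eps, eps, H)      # avoid dividing by ~0
    return np.real(np.fft.ifft2(G / H))
```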

7. What is wraparound error in frequency domain filtering?


• Happens when Fourier Transform is applied on finite images.
• Due to circular convolution, pixels from one edge may "wrap around" to the opposite edge.
• This creates artificial discontinuities in the filtered image.
• Solution: Use padding (zero-padding) before applying the transform.
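
For example, padding both the image and the kernel before FFT-based filtering avoids the wraparound (sizes here are illustrative; the padded size should be at least image size + kernel size − 1):

```python
import numpy as np

img = np.random.rand(64, 64)
kernel = np.ones((9, 9)) / 81.0

# Zero-pad to at least 64 + 9 - 1 = 72 so circular convolution behaves like
# linear convolution and the edges do not wrap around.
P, Q = 72, 72
result = np.real(np.fft.ifft2(np.fft.fft2(img, s=(P, Q)) *
                              np.fft.fft2(kernel, s=(P, Q))))
result = result[:64, :64]   # crop back to the original image size
```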

8. Explain interpixel and psychovisual redundancies with a suitable example of each.


Interpixel Redundancy:
• Adjacent pixels often have similar values.
• Can be compressed by predicting one pixel from neighbors.
• Example: Run Length Encoding (RLE)
Psychovisual Redundancy:
• Human eye is less sensitive to fine details or color changes.
• Data that is not noticeable to humans can be discarded.
• Example: JPEG compression discards high-frequency components.
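
A toy run-length encoder showing how interpixel redundancy is exploited (the function is illustrative, not a real codec):

```python
def rle_encode(row):
    """Run-length encode a 1-D sequence of pixel values as (value, count) pairs."""
    runs = []
    value, count = row[0], 1
    for pixel in row[1:]:
        if pixel == value:
            count += 1
        else:
            runs.append((value, count))
            value, count = pixel, 1
    runs.append((value, count))
    return runs

print(rle_encode([255, 255, 255, 255, 0, 0, 255]))
# [(255, 4), (0, 2), (255, 1)]
```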

9. What is the need of image segmentation? Explain edge, point, and line detection processes in brief.
Need for Segmentation:
• To divide an image into meaningful regions or objects.
• Helps in object recognition, tracking, and measurement.
Edge Detection:
• Detects boundaries between regions.
• Techniques: Sobel, Canny, Prewitt.
Point Detection:
• Identifies isolated pixels with different intensity.
• Uses a mask to highlight such pixels.
Line Detection:
• Identifies straight lines (horizontal, vertical, diagonal).
• Masks are used to detect intensity patterns that match lines.
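
For instance, point detection is often illustrated with a Laplacian-like mask (the threshold value is illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

# Point-detection mask: responds strongly where a pixel differs from
# all eight of its neighbours.
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=np.float64)

def detect_points(img, threshold=200):
    """Mark isolated pixels whose mask response exceeds the threshold."""
    response = np.abs(convolve(img.astype(np.float64), point_mask))
    return response > threshold
```
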
10. What is the difference between image enhancement and image restoration?

Feature     | Image Enhancement                            | Image Restoration
------------+----------------------------------------------+------------------------------------
Purpose     | Improve visual appearance                    | Recover original image
Knowledge   | No need for degradation model                | Uses degradation model
Techniques  | Contrast stretching, histogram equalization  | Inverse filtering, Wiener filtering
Focus       | Subjective (visual quality)                  | Objective (accuracy of original)

Q1: Explain what you would expect the result of Sampling and Quantization operations on the image (Fig.1).

Explanation:

1. Sampling:
• Sampling refers to selecting discrete pixel positions from the continuous image space.
• It controls the spatial resolution of the image.
• Result:
o If the sampling rate is high, the image retains detail and appears smooth.
o If the sampling rate is low, the image may become blocky or lose fine details, especially around
edges.
o The object in Fig.1 would appear with jagged edges or lose shape definition if under-sampled.

2. Quantization:
• Quantization refers to assigning intensity levels to the sampled values.
• It controls the intensity resolution (i.e., how many shades of gray or color levels are available).
• Result:
o If quantization uses many levels (e.g., 256 gray levels), the image appears smooth and natural.
o If it uses few levels (e.g., 2 or 4), the image will show posterization, with sharp jumps between
brightness levels.
o In Fig.1, poor quantization may result in loss of shading and banding artifacts in smooth areas (e.g.,
the highlight on the object).

Overall Expected Result on Fig.1:


• After sampling, the image might appear pixelated if done at a low resolution.
• After quantization, the smooth brightness gradient on the object could become blocky or reduced to fewer
shades, making the object look less realistic.
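
A small NumPy sketch of both effects (the sampling factor and the number of levels are illustrative):

```python
import numpy as np

def downsample(img, factor):
    """Coarser sampling: keep every factor-th pixel (lower spatial resolution)."""
    return img[::factor, ::factor]

def quantize(img, levels):
    """Coarser quantization: map 256 gray levels down to `levels` levels."""
    step = 256 // levels
    return (img // step) * step

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
blocky = downsample(img, 8)   # jagged edges, lost fine detail
banded = quantize(img, 4)     # posterization / false contouring
```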
Q4: Explain the image restoration technique to remove the blur caused by uniform linear motion.

Blur Caused by Uniform Linear Motion:


• This blur occurs when the camera or object moves in a straight line during the exposure.
• The image becomes smeared in the direction of motion.
• This is modeled as linear motion blur and can be mathematically represented using a Point Spread Function
(PSF).

Image Restoration Technique:

1. Motion Blur Modeling:


• The degraded image g(x, y) is given by:
g(x, y) = f(x, y) * h(x, y) + n(x, y)
where:
o f(x, y): original image
o h(x, y): motion blur PSF
o n(x, y): noise

2. Restoration using Inverse Filtering (if no noise):


• In frequency domain:
F(u, v) = G(u, v) / H(u, v)
o G(u, v): Fourier transform of blurred image
o H(u, v): Fourier transform of blur kernel (PSF)
o F(u, v): estimated original image

Issue: Not stable if H(u, v) has values close to zero → amplifies noise.

3. Wiener Filtering (with noise):


• More robust restoration technique.
• Formula:
F(u, v) = [ H*(u, v) / ( |H(u, v)|² + S_n(u, v) / S_f(u, v) ) ] · G(u, v)
where:
o H*(u, v): complex conjugate of the blur transfer function H(u, v)
o S_n / S_f: noise-to-signal power ratio
• Advantage: Balances deblurring with noise suppression.

Summary:
• Uniform motion blur can be modeled using a known PSF.
• Inverse filtering or Wiener filtering is applied in the frequency domain.
• Wiener filtering is preferred for noisy images.
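
A compact sketch of that pipeline, assuming a horizontal motion blur of known length and a constant noise-to-signal ratio k standing in for S_n/S_f (both assumptions are illustrative):

```python
import numpy as np

def motion_psf(length, shape):
    """Horizontal uniform-motion PSF of the given length, padded to `shape`."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener filter with a constant noise-to-signal ratio k."""
    G = np.fft.fft2(blurred)
    H = np.fft.fft2(psf, s=blurred.shape)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + k)) * G
    return np.real(np.fft.ifft2(F_hat))
```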

The image has black vertical bars on a white background.


Let’s see what happens with each filter:

(a) Horizontal Gradient Operation


• This looks for changes from left to right.
• The black bars have sharp edges on their left and right sides.
• So this filter will highlight the sides of the bars.
Result: You'll see lines where the edges of the bars are.

(b) Vertical Gradient Operation


• This checks for changes from top to bottom.
• Since the bars are the same from top to bottom (no change), nothing big is found.
Result: The image will look mostly blank.

(c) Horizontal Sobel Operator


• This is like the horizontal gradient, but a bit smarter and smoother.
• It also finds the left and right edges of the bars.
Result: Clear edges on the sides of the bars will be visible.

(d) Vertical Sobel Operator


• This checks for changes up and down.
• Again, since the bars don’t change vertically, it finds nothing.
Result: The image will be blank or very dark.
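
A quick SciPy check of these predictions on a synthetic bar image (the bar positions are illustrative):

```python
import numpy as np
from scipy.ndimage import sobel

# Synthetic image: black vertical bars on a white background.
img = np.full((64, 64), 255.0)
img[:, 16:20] = 0
img[:, 40:44] = 0

gx = sobel(img, axis=1)    # horizontal derivative: strong at the bar sides
gy = sobel(img, axis=0)    # vertical derivative: essentially zero

print(np.abs(gx).max())    # large response at the left/right edges of the bars
print(np.abs(gy).max())    # ~0 (no change from top to bottom)
```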
