Contrast Stretching:
• This is a technique used to improve the visibility of features in an image.
• Some images may look dull because most of the pixel values are in a narrow range (e.g., between 50–100 on
a scale of 0–255).
• Contrast stretching expands these values to cover a wider range, like 0 to 255.
• This makes dark areas darker and bright areas brighter, making the image clearer.
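The idea above can be sketched in a few lines of NumPy. This is a minimal linear contrast stretch (the function name and the 2×2 sample image are illustrative, not from the source):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linearly map the image's intensity range [min, max] to [out_min, out_max]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full(img.shape, out_min, dtype=np.uint8)
    stretched = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# A dull image whose values sit in the narrow 50-100 band:
dull = np.array([[50, 60], [90, 100]], dtype=np.uint8)
print(contrast_stretch(dull))   # values now span the full 0-255 range
```

After stretching, 50 maps to 0 and 100 maps to 255, so the darkest pixel becomes black and the brightest becomes white.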
Intensity Slicing:
• This is used to highlight certain ranges of intensity in an image.
• For example, you can assign a specific color or brightness to pixels between intensity values 100 and 150.
• It’s often used in medical imaging or satellite imaging to identify specific objects or regions based on
intensity.
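A minimal sketch of intensity slicing for the 100–150 example above (function name and sample values are illustrative):

```python
import numpy as np

def intensity_slice(img, lo=100, hi=150, highlight=255, background=None):
    """Highlight pixels with intensity in [lo, hi].
    If background is None, other pixels keep their value (slicing with background
    preserved); otherwise they are all set to `background`."""
    out = img.copy() if background is None else np.full_like(img, background)
    mask = (img >= lo) & (img <= hi)
    out[mask] = highlight
    return out

img = np.array([[90, 120], [150, 200]], dtype=np.uint8)
print(intensity_slice(img))   # the pixels valued 120 and 150 become 255
```

In a real medical or satellite application the highlighted range would correspond to the intensity signature of the tissue or terrain of interest.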
Q.5. What are the performance metrics for evaluating image compression?
1. Compression Ratio (CR):
o It shows how much the image size is reduced.
o CR = Original size / Compressed size.
2. Mean Squared Error (MSE):
o Measures the average squared difference between the original and compressed images.
o Lower MSE means better quality.
3. Peak Signal-to-Noise Ratio (PSNR):
o It tells how close the compressed image is to the original.
o Higher PSNR = better image quality.
4. Bitrate:
o Number of bits used per pixel in the compressed image.
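The four metrics above can be computed together. A small NumPy sketch (the function name, the toy 2×2 images, and the byte counts are illustrative):

```python
import numpy as np

def compression_metrics(original, reconstructed, original_bytes,
                        compressed_bytes, max_val=255.0):
    """Return (CR, MSE, PSNR in dB, bitrate in bits per pixel)."""
    cr = original_bytes / compressed_bytes
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    psnr = float('inf') if mse == 0 else 10 * np.log10(max_val ** 2 / mse)
    bitrate = compressed_bytes * 8 / original.size   # bits per pixel
    return cr, mse, psnr, bitrate

orig  = np.array([[0, 255], [128, 64]], dtype=np.uint8)
recon = np.array([[0, 250], [128, 64]], dtype=np.uint8)   # one pixel off by 5
cr, mse, psnr, bpp = compression_metrics(orig, recon,
                                         original_bytes=4, compressed_bytes=2)
# cr = 2.0, mse = 6.25, psnr ≈ 40.2 dB, bpp = 4.0
```

Note the inverse relationship: halving the size (CR = 2) came at the cost of a nonzero MSE, and PSNR summarizes that error on a logarithmic scale.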
Q.8. State the conditions for Region Splitting and Merging Processes.
• These methods divide an image into homogeneous regions.
• Splitting: If a region is not similar enough (based on intensity or texture), it is split into smaller parts.
• Merging: If neighboring regions are similar, they are merged into one.
• Conditions:
1. Homogeneity criteria (intensity range, texture, etc.)
2. Threshold values to decide splitting or merging.
• Used in segmentation tasks, like detecting objects in images.
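The splitting condition can be sketched as a tiny quadtree recursion. Here the homogeneity criterion is "intensity range ≤ threshold" (the function name and the 4×4 test image are illustrative; merging of similar neighbors is omitted for brevity):

```python
import numpy as np

def split_regions(img, thresh=10, x=0, y=0, size=None):
    """Recursively split a square region until each part is homogeneous
    (max - min intensity <= thresh). Returns a list of (x, y, size) leaves."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= thresh:
        return [(x, y, size)]            # homogeneous: keep as one region
    half = size // 2
    regions = []
    for dy in (0, half):                 # split into four quadrants
        for dx in (0, half):
            regions += split_regions(img, thresh, x + dx, y + dy, half)
    return regions

img = np.array([[10, 12, 200, 202],
                [11, 10, 201, 203],
                [10, 11, 200, 200],
                [12, 10, 202, 201]], dtype=np.uint8)
print(split_regions(img))   # splits once into four homogeneous quadrants
```

The whole image spans 10–203, so it fails the homogeneity test and is split; each 2×2 quadrant then passes and becomes a leaf region.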
2. Explain shifting and convolution properties of 2-D discrete Fourier transform (DFT).
Shifting Property:
• Shifting an image in the spatial domain multiplies its transform by a linear phase term in the frequency domain; the magnitude spectrum is unchanged.
• Formula:
f(x − x₀, y − y₀) ↔ F(u, v) · e^(−j2π(u·x₀/M + v·y₀/N))
Convolution Property:
• Convolution in spatial domain is equivalent to multiplication in frequency domain:
f(x, y) ∗ h(x, y) ↔ F(u, v) · H(u, v)
• This is useful for filtering operations like blurring or sharpening.
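Both properties can be verified numerically with NumPy's FFT (the DFT product corresponds to *circular* convolution; the 4×4 random arrays and shift (1, 2) are illustrative):

```python
import numpy as np

def circular_conv2d(f, h):
    """Direct 2-D circular convolution, for comparison with the DFT product."""
    M, N = f.shape
    g = np.zeros((M, N))
    for x in range(M):
        for y in range(N):
            for m in range(M):
                for n in range(N):
                    g[x, y] += f[m, n] * h[(x - m) % M, (y - n) % N]
    return g

rng = np.random.default_rng(0)
f = rng.random((4, 4))
h = rng.random((4, 4))

# Convolution property: multiply in the frequency domain, invert, compare.
G = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)).real
print(np.allclose(G, circular_conv2d(f, h)))        # True

# Shifting property: rolling f by (x0, y0) = (1, 2) multiplies F(u, v)
# by e^(-j2*pi*(u*x0/M + v*y0/N)).
M = N = 4
u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
phase = np.exp(-2j * np.pi * (u * 1 / M + v * 2 / N))
print(np.allclose(np.fft.fft2(np.roll(f, (1, 2), axis=(0, 1))),
                  np.fft.fft2(f) * phase))          # True
```

This is exactly why filtering is often done in the frequency domain: one elementwise multiplication replaces the quadruple loop above.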
3. What is the difference between a box low pass filter and a Gaussian low pass filter?
Aspect | Box Low Pass Filter | Gaussian Low Pass Filter
Kernel shape | All weights are equal | Weights follow a Gaussian distribution
Sharpness | Blurs image but introduces artifacts | Smooth blurring, better edge preservation
Frequency response | Poor control over high frequencies | Better frequency selectivity
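The kernel-shape difference is easy to see by building both 3×3 kernels (the helper names and sigma value are illustrative):

```python
import numpy as np

def box_kernel(size):
    """Box filter: every weight is equal, summing to 1."""
    return np.full((size, size), 1.0 / size ** 2)

def gaussian_kernel(size, sigma):
    """Gaussian filter: weights follow a 2-D Gaussian, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

box = box_kernel(3)
gauss = gaussian_kernel(3, sigma=1.0)
# box is perfectly flat; gauss peaks at the center and falls off with distance,
# which is what gives the smoother blur and better frequency behavior.
```

Because the Gaussian decays smoothly, its frequency response is also a Gaussian (no sidelobes), whereas the box kernel's sinc-shaped response lets high frequencies leak through as ringing artifacts.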
9. What is the need of image segmentation? Explain edge, point, and line detection processes in brief.
Need for Segmentation:
• To divide an image into meaningful regions or objects.
• Helps in object recognition, tracking, and measurement.
Edge Detection:
• Detects boundaries between regions.
• Techniques: Sobel, Canny, Prewitt.
Point Detection:
• Identifies isolated pixels with different intensity.
• Uses a mask to highlight such pixels.
Line Detection:
• Identifies straight lines (horizontal, vertical, diagonal).
• Masks are used to detect intensity patterns that match lines.
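The three detectors above all work by correlating a small mask with the image. A minimal sketch using the classic 3×3 masks (the helper function and the 5×5 test image are illustrative):

```python
import numpy as np

def mask_response(img, mask):
    """Correlate a 3x3 mask with the image (valid region only)."""
    M, N = img.shape
    out = np.zeros((M - 2, N - 2))
    for i in range(M - 2):
        for j in range(N - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * mask)
    return out

point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])    # isolated-point detector (Laplacian)
horiz_line = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]])    # horizontal-line detector
sobel_x    = np.array([[-1,  0,  1],
                       [-2,  0,  2],
                       [-1,  0,  1]])    # vertical-edge (Sobel) detector

img = np.zeros((5, 5))
img[2, 2] = 100                          # one isolated bright pixel
r = mask_response(img, point_mask)
# The response is strongest where the mask pattern matches the image:
# here, at the isolated point.
```

Thresholding the absolute response then yields the detected points, lines, or edges.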
10. What is the difference between image enhancement and image restoration?
• Enhancement: subjectively improves the visual appearance of an image (e.g., contrast stretching, sharpening); no model of how the image was degraded is assumed, and quality is judged by the viewer.
• Restoration: objectively tries to recover the original image from a degraded one using a mathematical model of the degradation (e.g., deblurring with a known PSF, Wiener filtering); quality is judged by criteria such as MSE.
Q1: Explain what you would expect the result of Sampling and Quantization operations on the image (Fig.1).
Explanation:
1. Sampling:
• Sampling refers to selecting discrete pixel positions from the continuous image space.
• It controls the spatial resolution of the image.
• Result:
o If the sampling rate is high, the image retains detail and appears smooth.
o If the sampling rate is low, the image may become blocky or lose fine details, especially around
edges.
o The object in Fig.1 would appear with jagged edges or lose shape definition if under-sampled.
2. Quantization:
• Quantization refers to assigning intensity levels to the sampled values.
• It controls the intensity resolution (i.e., how many shades of gray or color levels are available).
• Result:
o If quantization uses many levels (e.g., 256 gray levels), the image appears smooth and natural.
o If it uses few levels (e.g., 2 or 4), the image will show posterization, with sharp jumps between
brightness levels.
o In Fig.1, poor quantization may result in loss of shading and banding artifacts in smooth areas (e.g.,
the highlight on the object).
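Both effects are easy to reproduce. A minimal sketch (the function names, the 8×8 gradient image, and the chosen factors are illustrative):

```python
import numpy as np

def downsample(img, factor):
    """Sampling: keep every factor-th pixel (lowers spatial resolution)."""
    return img[::factor, ::factor]

def quantize(img, levels):
    """Quantization: map 0-255 intensities onto `levels` discrete values."""
    step = 256 // levels
    return (img // step) * step

img = np.arange(0, 256, 4, dtype=np.uint8).reshape(8, 8)   # smooth gradient
coarse = downsample(img, 2)       # 4x4: fine detail is lost, edges look blocky
posterized = quantize(img, 4)     # only 4 gray levels: banding appears
```

With only 4 levels the smooth gradient collapses to the values {0, 64, 128, 192}, which is exactly the posterization described above.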
Issue: Inverse filtering is not stable if H(u, v) has values close to zero → it amplifies noise at those frequencies.
Summary:
• Uniform motion blur can be modeled using a known PSF.
• Inverse filtering or Wiener filtering is applied in the frequency domain.
• Wiener filtering is preferred for noisy images.
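The summary above can be sketched with NumPy's FFT. This is a minimal Wiener restoration, assuming a known uniform horizontal-motion PSF and treating K as a constant noise-to-signal ratio (the function name, the 16×16 random test image, and the 4-pixel blur length are illustrative):

```python
import numpy as np

def wiener_deblur(blurred, psf, K=0.01):
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G.
    K approximates the noise-to-signal power ratio; for K -> 0 this
    approaches the (unstable) inverse filter 1/H."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.fft.ifft2(W * G).real

# Simulate uniform motion blur with a known PSF, then restore:
rng = np.random.default_rng(1)
f = rng.random((16, 16))                     # "original" image
psf = np.zeros((16, 16))
psf[0, :4] = 0.25                            # 4-pixel uniform horizontal blur
g = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf)).real   # blurred image
restored = wiener_deblur(g, psf, K=1e-4)
# restored is much closer to f than the blurred image g is
```

Where |H| ≈ 0, the Wiener weight W goes to zero instead of blowing up, which is precisely why it is preferred over plain inverse filtering for noisy images.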