
Multimedia Compression: Exams

Master’s in Telecommunication
University Paris-Saclay

Exam 1
Total Score: 20/20

Exercise 1: Differential Pulse Code Modulation (DPCM) [6 points]


Open Question:

1. Explain the main concept of Differential Pulse Code Modulation (DPCM). How
does it differ from standard PCM? [3 points]

2. Discuss the role of quantization in DPCM and its impact on coding gain. [3 points]

Exercise 2: Transform Coding [8 points]


Problem Solving:

1. Given a signal modeled as an AR(1) process $x_n = \rho x_{n-1} + z_n$, where $z_n$ is
   zero-mean white noise with variance $\sigma_z^2 = 1$, compute the prediction gain
   $G_{DPCM}$ for $\rho = 0.9$. [4 points]

2. Describe the role of the Karhunen-Loève Transform (KLT) in transform coding.
   How does it achieve optimal decorrelation? [4 points]

Exercise 3: JPEG Compression [6 points]


Open Question:

1. Describe the role of DCT in JPEG compression. Why is it preferred over KLT? [3
points]

2. Explain the use of the Zig-Zag scan and its contribution to entropy coding. [3
points]

Exam 2
Total Score: 20/20

Exercise 1: Scalar Quantization [7 points]


Problem Solving:

1. Consider a scalar quantizer with step size $\Delta = 2$. Compute the quantization
   error variance $\sigma_q^2$ for a uniform source on the interval $[-10, 10]$. [3 points]

2. Discuss the relationship between the quantization step size $\Delta$ and the overall
   distortion in transform coding. [4 points]

Exercise 2: Bit Allocation [7 points]


Open Question:

1. Derive the optimal bit allocation formula for transform coefficients using the
   Lagrange multiplier method. [4 points]

2. Discuss the Pareto condition for optimal bit allocation and its implications in
   practical scenarios. [3 points]

Exercise 3: Coding Gain in Transform Coding [6 points]


Open Question:

1. Define the coding gain $G_{TC}$ and explain its dependence on the energy compaction
   property of the transform. [3 points]

2. Provide an example of a transform with good coding gain and justify its
   performance. [3 points]

Solutions: Exam 1
Exercise 1: Differential Pulse Code Modulation (DPCM)
• Answer: DPCM encodes the prediction residual instead of the original signal: each
  sample is predicted from previously reconstructed samples and only the prediction
  error is transmitted, removing the redundancy that standard PCM encodes anew in
  every sample. Quantization is applied to the residual; because the residual variance
  is smaller than the signal variance, the same distortion is reached at a lower rate,
  which is the source of the coding gain. [6 points total]
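
To make the answer concrete, here is a minimal sketch of a closed-loop first-order
DPCM encoder/decoder in Python. The predictor coefficient, step size, and test signal
are illustrative assumptions, not part of the exam.

```python
import numpy as np

def dpcm(x, a=0.9, delta=0.5):
    """First-order closed-loop DPCM: predict each sample from the previous
    reconstruction, quantize the residual, and rebuild the signal."""
    x_hat = np.zeros_like(x)               # decoder-side reconstructions
    res_q = np.zeros_like(x)               # quantized residuals (what is transmitted)
    prev = 0.0                             # initial predictor state
    for n in range(len(x)):
        pred = a * prev                    # linear prediction from last reconstruction
        d = x[n] - pred                    # prediction residual
        res_q[n] = delta * np.round(d / delta)  # uniform quantization of the residual
        x_hat[n] = pred + res_q[n]         # decoder adds the quantized residual back
        prev = x_hat[n]                    # closed loop: predict from reconstructions
    return res_q, x_hat

# Example on an AR(1) signal (same model as Exercise 2):
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
res_q, x_hat = dpcm(x)
print(np.var(x), np.var(res_q))  # residual variance is far below signal variance
```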

Exercise 2: Transform Coding


• Solution: Using $G_{DPCM} = \sigma_x^2 / \sigma_d^2$: the AR(1) signal variance is
  $\sigma_x^2 = \sigma_z^2 / (1 - \rho^2)$, while the optimal one-step predictor leaves
  a residual of variance $\sigma_d^2 = \sigma_z^2 = 1$. Hence

  $G_{DPCM} = \frac{1}{1 - \rho^2} = \frac{1}{1 - 0.9^2} \approx 5.26.$

  [4 points] (A numerical check follows these solutions.)

• The KLT achieves optimal decorrelation because its basis vectors are the eigenvectors
  of the source autocorrelation matrix, so the transform diagonalizes that matrix and
  yields uncorrelated coefficients. This gives maximal energy compaction and efficient
  bit allocation. [4 points] (A KLT sketch follows below.)
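
As a quick numerical check of the prediction gain (plain Python; no assumptions
beyond the exercise's own numbers):

```python
rho, var_z = 0.9, 1.0
var_x = var_z / (1 - rho**2)   # AR(1) signal variance: 1 / 0.19
var_d = var_z                  # optimal one-step prediction residual variance
print(var_x / var_d)           # G_DPCM ≈ 5.26
```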
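
And a minimal sketch of the KLT mechanism: build the autocorrelation matrix of an
AR(1) source, take its eigenvectors as the transform, and verify decorrelation. The
block size N = 8 and ρ = 0.9 are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz

N, rho = 8, 0.9
R = toeplitz(rho ** np.arange(N))   # AR(1) autocorrelation: R[i, j] = rho**abs(i - j)
eigvals, KLT = np.linalg.eigh(R)    # the KLT basis vectors are the eigenvectors of R
R_y = KLT.T @ R @ KLT               # covariance of the transform coefficients
print(np.allclose(R_y, np.diag(eigvals)))   # True: the KLT diagonalizes R
print(np.sort(eigvals)[::-1])               # energy is compacted into few coefficients
```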

Exercise 3: JPEG Compression


• DCT transforms spatial pixel values into frequency coefficients, compacting energy
  into a few low-frequency terms. It is preferred over the KLT because it is
  signal-independent, has fast algorithms, and closely approximates the KLT for highly
  correlated sources. [3 points] (A DCT sketch follows below.)

• The Zig-Zag scan orders coefficients from low to high frequency, so the non-zero
  values cluster at the start and the many high-frequency zeros form long runs that
  run-length and entropy coding exploit. [3 points] (A scan sketch follows below.)
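
To illustrate the DCT answer, here is a minimal sketch that builds the orthonormal
8×8 DCT-II matrix directly from its textbook definition and applies it to a smooth
test block; the block values are made up for illustration.

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix: row k, column n is
    sqrt(2/N) * c_k * cos(pi * (2n + 1) * k / (2N)), with c_0 = 1/sqrt(2)."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2 / N)

C = dct_matrix()
block = np.outer(np.arange(8), np.ones(8)) * 16.0   # smooth vertical gradient
coeffs = C @ block @ C.T                            # separable 2-D DCT
print(np.round(coeffs, 1))  # energy concentrates in the low-frequency corner
```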
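
And a minimal zig-zag scan sketch: walk the anti-diagonals of the block, alternating
direction, so low-frequency positions come first. The sample coefficient values are
made up.

```python
import numpy as np

def zigzag_indices(N=8):
    """(row, col) pairs in JPEG zig-zag order."""
    order = []
    for s in range(2 * N - 1):                       # s = row + col selects a diagonal
        diag = [(i, s - i) for i in range(N) if 0 <= s - i < N]
        order.extend(diag if s % 2 else diag[::-1])  # reverse every other diagonal
    return order

# A typical quantized DCT block: a few non-zero low-frequency coefficients
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0], block[2, 0] = 52, -6, 4, 1
print([block[i, j] for i, j in zigzag_indices()])
# Non-zeros cluster at the front, followed by long runs of zeros for entropy coding.
```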

Solutions: Exam 2
Exercise 1: Scalar Quantization
• Solution: For a uniform quantizer the quantization error variance is
  $\sigma_q^2 = \Delta^2 / 12$. With $\Delta = 2$: $\sigma_q^2 = 4/12 = 1/3$. [3 points]

• Increasing $\Delta$ increases distortion but reduces the rate; the optimal $\Delta$
  balances this rate-distortion trade-off. [4 points] (Both points are checked
  numerically below.)
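
A quick Monte Carlo sketch verifying $\sigma_q^2 = \Delta^2 / 12$ and the trade-off
between $\Delta$ and distortion (sample size and step sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, 1_000_000)           # uniform source on [-10, 10]
for delta in (0.5, 1.0, 2.0, 4.0):
    err = x - delta * np.round(x / delta)     # uniform mid-tread quantizer error
    print(delta, np.var(err), delta**2 / 12)  # measured variance matches delta^2 / 12
# Each doubling of delta multiplies the distortion by 4 while saving one bit/sample.
```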

Exercise 2: Bit Allocation


• Using Lagrange multipliers, the optimal allocation for coefficient $k$ is

  $R_k = R_{avg} + \frac{1}{2} \log_2 \frac{\sigma_{y_k}^2}{\left( \prod_{i=1}^{N} \sigma_{y_i}^2 \right)^{1/N}},$

  i.e. the average rate plus half the log-ratio of the coefficient's variance to the
  geometric mean of all coefficient variances. [4 points]

• The Pareto condition requires all coefficients to operate at points of equal slope
  on their rate-distortion curves; otherwise bits could be moved from one coefficient
  to another to lower the total distortion. In practice the resulting rates must also
  be clipped to non-negative values and rounded to integers. [3 points] (A short
  allocation sketch follows.)
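
A minimal sketch of the allocation formula, using illustrative coefficient variances:

```python
import numpy as np

var_y = np.array([16.0, 4.0, 1.0, 0.25])     # illustrative coefficient variances
R_avg = 2.0                                  # rate budget in bits per coefficient
geo_mean = np.exp(np.mean(np.log(var_y)))    # geometric mean of the variances
R_k = R_avg + 0.5 * np.log2(var_y / geo_mean)
print(R_k, R_k.mean())                       # [3.5 2.5 1.5 0.5], mean == R_avg
# Practical coders clip R_k to non-negative values and round to integers.
```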

Exercise 3: Coding Gain in Transform Coding


• Coding gain $G_{TC}$ measures variance compaction:

  $G_{TC} = \frac{\frac{1}{N} \sum_{k=1}^{N} \sigma_{y_k}^2}{\left( \prod_{k=1}^{N} \sigma_{y_k}^2 \right)^{1/N}},$

  the ratio of the arithmetic to the geometric mean of the coefficient variances; for
  an orthonormal transform the numerator equals $\sigma_x^2$, so stronger energy
  compaction gives a larger gain. [3 points] (A numerical sketch follows these
  solutions.)

• Example: DCT achieves high gain for correlated sources, as it compacts energy into
fewer coefficients. [3 points]
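
Finally, a sketch computing $G_{TC}$ for the KLT of an AR(1) source; N = 8 and
ρ = 0.9 are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz

N, rho = 8, 0.9
R = toeplitz(rho ** np.arange(N))    # AR(1) autocorrelation matrix
var_y = np.linalg.eigvalsh(R)        # KLT coefficient variances (eigenvalues of R)
am = var_y.mean()                    # arithmetic mean = sigma_x^2 = 1 here
gm = np.exp(np.mean(np.log(var_y)))  # geometric mean
print(am / gm)                       # G_TC > 1: transform coding beats plain PCM
```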
