
Exam: Information Theory, Fall 2022 (Instructor: I-Hsiang Wang)

Time: 12:50 – 15:10

November 24, 2022

Name: ____________________  Student ID: ____________________

Policy: (Read before You Start to Work)

• The exam is closed book. However, you are allowed to bring four A4-size cheat
sheets (each a single sheet, two-sided).
• If you access any other materials, such as books, computing devices, or internet-
connected devices, it will be regarded as cheating, and the exam will not be graded.
Moreover, we will report the case to the University Office.
• No discussion is allowed during the exam. Everyone has to work on his/her own.
• Please turn in this copy (exam sheets) when you submit your solution sheets.
• Please sit according to the seat assignment.
• Only answers written on the solution sheets will be graded; anything written on the
exam sheets will not be graded.
• You can use Mandarin or English to write your solutions.

Note: (Read before You Start to Work)

• Partial credit will be given even if you cannot solve a problem completely.
Write down your derivations and partial solutions in a clear and systematic way.
• You can make any additional reasonable assumptions that you think are necessary in
answering the questions. Write down your assumptions clearly.
• You should express your answers as explicitly and analytically as possible.
• You can reuse any known results from our lectures (restricted to materials from
the lecture slides L1–L6) and homework problems (HW1–HW4) without re-proving
them. Other than those, you need to provide rigorous arguments, unless the
problem specifically states otherwise.

Total Points: 100. Good luck!



1. (True or False) [36]

A puzzled student makes the following claims. For each claim, either prove it or disprove it.

a) For any continuous random variable X and a function g(·) : R → R such that both
differential entropies h(X) and h(g(X)) exist, it is always true that

h(X) ≥ h(g(X)). [8]

b) For any jointly distributed random variables (X, Y ) ∈ X × Y and a function g(·) on X ,
it is always true that
I(X; Y | g(X)) ≤ I(X; Y). [8]
c) Consider two data processing systems W_{Y|X}^{(1)} and W_{Y|X}^{(2)}. For the first one, the input
X ∼ P_X and the output Y ∼ P_Y. For the second one, the input X ∼ Q_X and the output
Y ∼ Q_Y. While the two data processing systems may not be identical, data processing
cannot increase information, and hence it is always true that

D(P_X ‖ Q_X) ≥ D(P_Y ‖ Q_Y). [8]

d) For a discrete memoryless source S ∼ P_S, for any length n ∈ N and any δ > 0, the
highest-probability length-n sequence generated by the DMS is δ-weakly typical. [6]
e) For a lossy source coding problem of a discrete memoryless source S ∼ P_S, suppose that
zero distortion can be attained, that is, D_min := min_{ŝ(·)} E[d(S, ŝ(S))] = 0. Then, it is
always true that R(0) = H(S). [6]
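
Note (numerical illustration, not part of the required solution): claim (a) can be probed with the closed-form Gaussian differential entropy h(N(0, σ^2)) = ½ log2(2πeσ^2) bits; the choice g(x) = 2x below is just one convenient function assumed for this sketch.

```python
import math

def gaussian_h_bits(sigma2):
    """Differential entropy of N(0, sigma2) in bits: 0.5 * log2(2*pi*e*sigma2)."""
    return 0.5 * math.log2(2 * math.pi * math.e * sigma2)

# X ~ N(0, 1) and g(X) = 2X ~ N(0, 4): scaling by a shifts differential
# entropy by log2|a|, so h(g(X)) = h(X) + 1 bit for this particular g.
print(f"h(X)    = {gaussian_h_bits(1.0):.4f} bits")
print(f"h(g(X)) = {gaussian_h_bits(4.0):.4f} bits")
```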


2. (Hypothesis Testing with Rejection) [8]

In hypothesis testing, sometimes none of the hypotheses is convincing enough. In such
situations, it may be better to reject all of them, that is, to say "I don't know." We consider
the simple case with two hypotheses: for θ = 0, 1,

H_θ : X_i ∼ P_θ i.i.d., i = 1, 2, ..., n.

A deterministic decision making algorithm is given by

φ : X^n → {r, 0, 1},

where “r” denotes the rejection option.


The performance metrics include:

• Total probability of rejection:

π_r^{(n)}(φ) := P_0^{⊗n}{φ(X^n) = r} + P_1^{⊗n}{φ(X^n) = r}.

• Total probability of error:

π_e^{(n)}(φ) := P_0^{⊗n}{φ(X^n) = 1} + P_1^{⊗n}{φ(X^n) = 0}.

Find

min_{φ : X^n → {r, 0, 1}} [ π_r^{(n)}(φ) + π_e^{(n)}(φ) ].

Express your answer in terms of the total variation distance TV(P_0, P_1).
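
Note (numerical helper, an editorial sketch): since the answer is to be expressed via the total variation distance, the standard discrete formula TV(P, Q) = ½ Σ_x |P(x) − Q(x)| is easy to compute; the two example distributions below are arbitrary placeholders.

```python
import numpy as np

def total_variation(p, q):
    """TV(P, Q) = 0.5 * sum_x |P(x) - Q(x)| for pmfs on a common finite alphabet."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p - q).sum())

# Placeholder pmfs P0 and P1 on a 3-letter alphabet.
print(total_variation([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))  # -> 0.3
```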


3. (Extremal information measures) [16]

a) Let X be a continuous random variable with a probability density function such that h(X)
exists. Under the constraints that E[X] = µ and E[X^2] = r^2, where r^2 > µ^2, find the
maximum value of h(X) and a maximizing distribution. [6]
b) Let P(N) denote the collection of all probability distributions over N and G(p) ∈ P(N)
be the geometric distribution with parameter p ∈ (0, 1):

X ∼ G(p) ⟺ Pr{X = n} = (1 − p) p^{n−1}, n ∈ N = {1, 2, . . .}.

Under the constraints that P ∈ P(N) and E_{X∼P}[X] = Σ_{x=1}^{∞} x P(x) = µ > 1, find the
minimum value of D(P ‖ G(p)) and a minimizing distribution. [10]
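
Note (numerical probe, not a derivation): part (b) can be explored by minimizing D(P ‖ G(p)) numerically over a truncated support {1, . . . , N} under the mean constraint, and comparing against a candidate distribution. The parameter values, the truncation N, and the use of scipy's SLSQP solver are all assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

p, mu, N = 0.5, 3.0, 60            # assumed example parameters; N truncates the support
n = np.arange(1, N + 1)
g = (1 - p) * p ** (n - 1)         # G(p) pmf as defined above: Pr{X = n} = (1 - p) p^(n-1)

def kl(P):
    """D(P || G(p)) in nats over the truncated support, guarding log(0)."""
    P = np.maximum(P, 1e-300)
    return float(np.sum(P * np.log(P / g)))

constraints = [{"type": "eq", "fun": lambda P: P.sum() - 1.0},
               {"type": "eq", "fun": lambda P: (n * P).sum() - mu}]
res = minimize(kl, np.full(N, 1.0 / N), method="SLSQP",
               bounds=[(0.0, 1.0)] * N, constraints=constraints)

# One natural candidate to compare against: a geometric distribution with the
# same mean (under this parameterization E[X] = 1/(1 - q), so q = 1 - 1/mu).
q = 1 - 1 / mu
candidate = (1 - q) * q ** (n - 1)
print("solver minimum      :", res.fun)
print("geometric candidate :", kl(candidate))
```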


4. (Sum Channel) [16]

Consider l DMCs

{ (X^{(i)}, P_{Y|X}^{(i)}, Y^{(i)}) | i = 1, 2, . . . , l },

where DMC (X^{(i)}, P_{Y|X}^{(i)}, Y^{(i)}) has channel capacity C^{(i)}, for 1 ≤ i ≤ l. The channel input
alphabets are disjoint, and so are the channel output alphabets, that is,

X^{(i)} ∩ X^{(j)} = Y^{(i)} ∩ Y^{(j)} = ∅, ∀ i ≠ j.

The sum channel (X^⊕, P_{Y|X}^⊕, Y^⊕) associated with these channels is defined as follows:

• The input alphabet is the union X^⊕ := ∪_{i=1}^{l} X^{(i)} of the individual input alphabets.
• The output alphabet is the union Y^⊕ := ∪_{i=1}^{l} Y^{(i)} of the respective output alphabets.
• At each time slot, the transmitter chooses to use one and only one of the l channels to
transmit a symbol, that is,

P_{Y|X}^⊕(y|x) := P_{Y|X}^{(i)}(y|x), if x ∈ X^{(i)} and y ∈ Y^{(i)}; and 0 otherwise.

a) Introduce a random variable I indicating which DMC is used in the sum channel, that is,

I = i if X ∈ X^{(i)}, for i = 1, 2, . . . , l.

Show that for the sum channel P_{Y|X}^⊕,

I(X; Y) = I(X; Y | I) + H(I). [4]

b) Find the capacity of the sum channel in terms of {C^{(i)} | i = 1, 2, . . . , l}. [6]
c) Find the optimal input probability distribution for the sum channel in terms of the optimal
input probability distributions for the individual channels. [6]
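
Note (numerical cross-check, an editorial sketch): candidate answers to parts (b) and (c) can be tested with a small Blahut–Arimoto routine, which estimates the capacity and an optimal input distribution of any DMC from its transition matrix. The component channels below (a BSC(0.1) and a noiseless binary channel on disjoint alphabets) are arbitrary choices for illustration.

```python
import numpy as np

def blahut_arimoto(W, iters=2000):
    """Capacity (bits) and optimal input pmf of a DMC with row-stochastic W[x, y]."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W                                    # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
        p = p * np.exp2(d)                           # multiplicative capacity update
        p /= p.sum()
    q = p @ W
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
    return float(p @ d), p

eps = 0.1
W1 = np.array([[1 - eps, eps], [eps, 1 - eps]])      # BSC(0.1) on {0, 1} -> {0, 1}
W2 = np.eye(2)                                       # noiseless channel on {2, 3} -> {2, 3}

# The sum channel's transition matrix is block-diagonal over the disjoint alphabets.
W_sum = np.block([[W1, np.zeros((2, 2))], [np.zeros((2, 2)), W2]])
C_sum, p_opt = blahut_arimoto(W_sum)
print("estimated sum-channel capacity:", C_sum, "bits")
print("estimated optimal input pmf   :", p_opt)
```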


5. (Compressing a Uniform Source) [24]

A DMS S is uniformly distributed over a finite set S = {1, 2, . . . , 2m}. Consider the following
source coding problems and derive the optimal compression rates (R* for the lossless case and
R(D) for the lossy cases).

a) A lossless source coding problem. [6]


b) A lossy source coding problem with a reconstruction alphabet Ŝ = S and the Hamming
distortion measure. [10]
c) A lossy source coding problem with a reconstruction alphabet Ŝ = S and the following
distortion measure:

d(s, ŝ) = 1{s and ŝ have the same parity}.

Recall that the parity of an integer is whether it is even or odd. [8]
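
Note (numerical cross-check, not a derivation): one candidate to check part (b) against is the standard rate–distortion function of a uniform source on M symbols under Hamming distortion, R(D) = log2 M − H_b(D) − D log2(M − 1) for 0 ≤ D ≤ (M − 1)/M. The sketch below merely evaluates this formula for an assumed example M = 2m = 6; an actual solution should still derive it.

```python
import math

def binary_entropy(x):
    """Binary entropy in bits, with the convention Hb(0) = Hb(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rd_uniform_hamming(D, M):
    """Candidate R(D) for a uniform source on M symbols with Hamming distortion."""
    if D >= 1 - 1 / M:
        return 0.0
    return math.log2(M) - binary_entropy(D) - D * math.log2(M - 1)

M = 6  # assumed example: m = 3, so |S| = 2m = 6
for D in (0.0, 0.1, 0.25, 0.5):
    print(f"D = {D:.2f}:  R(D) = {rd_uniform_hamming(D, M):.4f} bits/symbol")
# At D = 0 this returns log2(6) = H(S), consistent with the lossless case in (a).
```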
