Tutorial Questions
probability of false alarm is to be 10⁻², calculate the detection threshold. If the expected return from a target at extreme range is 4 V, what is the probability that this target will be detected? [5.21, 0.2945]
9.2. (a) Calculate the entropy of the source in Problem 9.1(a). [1½ bit/symbol]
(b) Calculate the entropy of the sources in Problem 9.1(c). [1¾ bit/symbol, 2 bit/symbol]
(c) What is the maximum entropy of an 8-symbol source, and under what conditions is it achieved? What are the entropy and redundancy if P(x1) = 1/2, P(xi) = 1/8 for i = 2, 3, 4 and P(xi) = 1/32 for i = 5, 6, 7, 8? [3 bit/symbol, 2.25 bit/symbol]
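The figures in part (c) can be checked numerically; a minimal sketch in Python, using the probabilities stated in the question:

    import math

    p = [1/2] + [1/8]*3 + [1/32]*4               # P(x1)...P(x8) as given
    H = -sum(pi * math.log2(pi) for pi in p)     # source entropy, as in (9.3)
    H_max = math.log2(len(p))                    # maximum entropy: equiprobable symbols
    print(H, H_max, H_max - H)                   # 2.25, 3.0, redundancy 0.75 bit/symbol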
9.6. Calculate the loss in information due to noise, per transmitted digit, if a random binary signal is transmitted through a channel which adds zero-mean Gaussian noise, with an average signal-to-noise ratio of: (a) 0 dB; (b) 5 dB; (c) 10 dB. [0.6311; 0.2307; 0.0094 bit/binit]
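The bracketed answers follow if the loss per binit is taken as the equivocation of the equivalent binary symmetric channel, with error probability Pe = ½ erfc(√(S/2N)); a sketch of that calculation:

    import math

    def loss_per_binit(snr_db):
        snr = 10 ** (snr_db / 10)
        pe = 0.5 * math.erfc(math.sqrt(snr / 2))    # binary error probability
        # equivocation of a BSC with crossover probability pe (bit/binit)
        return -pe * math.log2(pe) - (1 - pe) * math.log2(1 - pe)

    for db in (0, 5, 10):
        print(db, loss_per_binit(db))               # ~0.6311, 0.2307, 0.0094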
S.1. A 625-line black-and-white television picture may be considered to be composed of 550 picture elements (pixels) per line. Assume that each pixel is equiprobable among 64 distinguishable brightness levels. If this picture is to be transmitted by raster scanning at a 25 Hz frame rate, use the Hartley-Shannon theorem to calculate the minimum bandwidth required to transmit the video signal, assuming a 35 dB signal-to-noise ratio on reception. [4.43 MHz]
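A numerical sketch of this calculation (all figures are those stated in the question):

    import math

    bit_rate = 625 * 550 * 25 * math.log2(64)   # pixels/frame x frames/s x bits/pixel
    snr = 10 ** (35 / 10)                       # 35 dB as a power ratio
    B = bit_rate / math.log2(1 + snr)           # Hartley-Shannon: C = B log2(1 + S/N)
    print(B / 1e6)                              # ~4.43 MHz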
A systematic (6,3) block code generates its parity bits P1, P2, P3 from the information bits I1, I2, I3 according to:

P1 = 1 × I1 ⊕ 1 × I2 ⊕ 1 × I3
P2 = 1 × I1 ⊕ 1 × I2 ⊕ 0 × I3
P3 = 0 × I1 ⊕ 1 × I2 ⊕ 1 × I3

(a) Construct the generator matrix G for this code. (b) Construct all the possible codewords generated by this matrix. (c) Determine the error-correcting capability of this code. [single] (d) Prepare a suitable decoding table. (e) Decode the received words 101100, 000110 and 101010.
[111100, 100110, 101011]
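All five parts can be checked mechanically; a minimal sketch in Python (brute-force minimum-distance decoding, which for a single-error-correcting code gives the same result as a syndrome decoding table; the function names are mine):

    from itertools import product

    def encode(i1, i2, i3):
        # parity equations P1, P2, P3 as given above
        return (i1, i2, i3, i1 ^ i2 ^ i3, i1 ^ i2, i2 ^ i3)

    codewords = [encode(*d) for d in product((0, 1), repeat=3)]

    def decode(r):
        # pick the codeword at minimum Hamming distance from r
        return min(codewords, key=lambda c: sum(a != b for a, b in zip(c, r)))

    for r in ((1,0,1,1,0,0), (0,0,0,1,1,0), (1,0,1,0,1,0)):
        print(''.join(map(str, decode(r))))   # 111100, 100110, 101011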
10.5. Given a code with the parity check matrix:

        1 1 1 0 1 0 0
    H = 1 1 0 1 0 1 0
        1 0 1 1 0 0 1
(a) Write down the generator matrix, showing clearly how you derive it from H. (b) Derive the complete weight structure for the above code and find its minimum Hamming distance. How many errors can this code correct? How many errors can it detect? Can it be used in correction and detection modes simultaneously? [3, 1, 2, No] (c) Write down the syndrome table for this code, showing how the table may be derived by consideration of the all-zeros codeword. Also comment on the absence of an all-zeros column from the H matrix. (d) Decode the received sequence 1001110, indicate the most likely error pattern associated with this sequence and give the correct codeword. Explain the phrase 'most likely error pattern'. [0000010, 1001100]
10.6. When generating a (7,4) cyclic block code using the polynomial x³ + x² + 1: (a) What would the generated codewords be for the data sequences 1000 and 1010? [1000110, 1010001] (b) Check that these codewords would produce a zero syndrome if received without error. (c) Draw a circuit to generate this code and show how it generates the parity bits 110 and 001 respectively for the two data sequences in part (a). (d) If the codeword 1000110 is corrupted to 1001110, i.e. an error occurs in the fourth bit, what is the syndrome at the receiver? Check that this is the same syndrome as for the codeword 1010001 being corrupted to 1011001. [101]
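The parity and syndrome computations in 10.6 are divisions by g(x) = x³ + x² + 1 over GF(2); a sketch of the long division (this checks the answers, it is not the shift-register circuit asked for in part (c)):

    def remainder(bits, g=(1, 1, 0, 1)):        # g(x) = x^3 + x^2 + 1, MSB first
        r = list(bits)
        for i in range(len(r) - len(g) + 1):    # polynomial long division over GF(2)
            if r[i]:
                for j, gj in enumerate(g):
                    r[i + j] ^= gj
        return r[-(len(g) - 1):]                # the n - k = 3 remainder bits

    print(remainder([1,0,0,0, 0,0,0]))   # parity for data 1000 -> [1, 1, 0]
    print(remainder([1,0,1,0, 0,0,0]))   # parity for data 1010 -> [0, 0, 1]
    print(remainder([1,0,0,1,1,1,0]))    # syndrome of corrupted 1001110 -> [1, 0, 1]
    print(remainder([1,0,1,1,0,0,1]))    # syndrome of corrupted 1011001 -> [1, 0, 1]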
10.7. A (7,4) block code has the parity check matrix:

        1 1 0 1 1 0 0
    H = 1 1 1 0 0 1 0
        1 0 1 1 0 0 1
This code can correct a single error. (a) Derive the generator matrix for this code and encode the data 1110. (b) Derive a syndrome decoding table for the code and decode the received data 1101110. (c) Calculate the maximum number of errors a (15,11) block code can correct. [1110010, 1]
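For part (c) the Hamming bound (10.4) settles the arithmetic: a (15,11) code has n − k = 4 parity bits, and 2⁴ = 16 ≥ 1 + 15 = 16 holds (with equality) for t = 1, whereas t = 2 would require 16 ≥ 1 + 15 + ¹⁵C₂ = 121, which fails. The code can therefore correct at most one error.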
10.8. Given the convolutional encoder defined by P1(x) = 1 + x + x² and P2(x) = 1 + x², and assuming data is fed into the shift register one bit at a time, draw the encoder: (a) tree diagram; (b) trellis diagram; (c) state transition diagram. (d) State the rate of the encoder. (e) Use the Viterbi decoding algorithm to decode the received block of data, 10001000. Note: there may be errors in this received vector. Assume that the encoder starts in state a of the decoding trellis in Figure 10.20 and, after the unknown data digits have been input, the encoder is driven back to state a with two 'flushing' zeros. [0000]
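Part (e) can be verified with a minimal hard-decision Viterbi sketch for this K = 3, rate-1/2 encoder (state = the two most recent input bits, with state a = 00; written from the generator polynomials, not from Figure 10.20 itself):

    def step(state, u):                      # outputs: u+s1+s2 from P1, u+s2 from P2
        s1, s2 = state
        return (u, s1), (u ^ s1 ^ s2, u ^ s2)

    def viterbi(rx_pairs):
        paths = {(0, 0): (0, [])}            # survivor per state: (metric, decoded bits)
        for r in rx_pairs:
            nxt = {}
            for state, (m, bits) in paths.items():
                for u in (0, 1):
                    ns, out = step(state, u)
                    nm = m + sum(a != b for a, b in zip(out, r))
                    if ns not in nxt or nm < nxt[ns][0]:
                        nxt[ns] = (nm, bits + [u])
            paths = nxt
        return paths[(0, 0)]                 # flushing drives the encoder back to state a

    print(viterbi([(1,0), (0,0), (1,0), (0,0)]))   # (2, [0, 0, 0, 0]) -> decode 0000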
10.9. For a convolutional encoder defined by P1(x) = 1 + x + x², P2(x) = x + x² and P3(x) = 1 + x: (a) State the constraint length of the encoder and the coding rate. (b) The coder is used to encode two data bits followed by two flushing zeros. Encode the data sequences: (i) 10; (ii) 11. Assume that the encoder initially contains all zeros in the shift register and that the left-hand bit is the first bit to enter the encoder. (c) Take the two encoded bit sequences from parts (b)(i) and (b)(ii) above and invert the second and fifth bits to create received codewords with two errors in each. Decode the altered sequences using a trellis diagram and show whether or not the code can correct the errors you have introduced. [3, 1/3, b(i) 101111110000, b(ii) 101010001110, both decode correctly]
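The encoded sequences in part (b) follow directly from the three generator polynomials; a sketch (shift register held as the two previous input bits):

    def encode(data, flush=2):
        s1 = s2 = 0                       # previous two input bits
        out = []
        for u in data + [0] * flush:      # data followed by flushing zeros
            out += [u ^ s1 ^ s2,          # P1(x) = 1 + x + x^2
                    s1 ^ s2,              # P2(x) = x + x^2
                    u ^ s1]               # P3(x) = 1 + x
            s1, s2 = u, s1
        return out

    print(encode([1, 0]))                 # 101 111 110 000
    print(encode([1, 1]))                 # 101 010 001 110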
Formula Sheet

Mean:
\bar{X} = \int_{-\infty}^{\infty} x \, p(x) \, dx   (3.16)
Uniform distribution:
p_X(x) = \begin{cases} \frac{1}{x_2 - x_1}, & x_1 < x \le x_2 \\ 0, & x \le x_1 \text{ or } x > x_2 \end{cases}
Gaussian distribution:
p_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-x^2/2\sigma^2}   (4.59)
Exponential distribution:
p_X(x) = \begin{cases} \frac{1}{b} \, e^{-(x-a)/b}, & x \ge a \\ 0, & x < a \end{cases}
Rayleigh distribution:
p_R(r) = \frac{r}{\sigma^2} \, e^{-r^2/2\sigma^2}   (4.64)
Speech signal SNR:
\text{SNR}_Q = 4.8 + 6n - \alpha \ \text{dB} \approx 6(n - 1)   (5.23)
P_D = \int_{v_{th}}^{\infty} p(v|s+n) \, dv   (7.21)

P_{FA} = \int_{v_{th}}^{\infty} p(v|n) \, dv   (7.22)
Information:
I = -\log_2 P(m)   (9.1)

Entropy:
H = -\sum_m P(m) \log_2 P(m)   (9.3)
H = \sum_i \sum_j P(j, i) \log_2 \frac{1}{P(j|i)} \quad \text{bit/symbol}   (9.6)

H = \sum_i P(i) \sum_j P(j|i) \log_2 \frac{1}{P(j|i)} \quad \text{bit/symbol}   (9.7)
I_{RX}(i_{RX}) = \log_2 \frac{P(i_{TX}|i_{RX})}{P(i_{TX})} \quad \text{bits}   (9.12)
Equivocation:
E = \sum_j P(j_{RX}) \sum_i P(i_{TX}|j_{RX}) \log_2 \frac{1}{P(i_{TX}|j_{RX})} \quad \text{bit/symbol}   (9.16)
\text{Efficiency} = \frac{H}{L}   (9.21)

L = \sum_m P(m) \, l_m   (9.20)
Error correction:
P(J \text{ errors}) = P_e^J (1 - P_e)^{N-J} \, {}^N C_J   (10.1/3.8)

P(> R \text{ errors}) = 1 - \sum_{J=0}^{R} P(J \text{ errors})   (10.2)
Hamming bound:
2^k \le \frac{2^n}{1 + n + {}^n C_2 + {}^n C_3 + \cdots + {}^n C_t}   (10.4)
P_{e(PRK)} = \frac{1}{2}\left\{1 - \operatorname{erf}\left[\left(T_o B \, \frac{C}{N}\right)^{1/2}\right]\right\}   (11.12)
R_{max} = B \log_2\left(1 + \frac{S}{N}\right)   (11.38(a))
For M-symbol alphabets:
R_s = R_b / \log_2 M
P_{e(MPSK)} = 1 - \operatorname{erf}\left[\left(\frac{E}{N_0}\right)^{1/2} \sin\frac{\pi}{M}\right]   (11.39(a))

P_{e(MPSK)} = 1 - \operatorname{erf}\left[\left(T_o B \, \frac{C}{N}\right)^{1/2} \sin\frac{\pi}{M}\right]   (11.39(b))

P_b = \frac{P_e}{\log_2 M}   (11.40(a))
P_{b(MPSK)} = \frac{1}{\log_2 M}\left\{1 - \operatorname{erf}\left[\left(\log_2 M \, \frac{E_b}{N_0}\right)^{1/2} \sin\frac{\pi}{M}\right]\right\}   (11.41(a))

P_{b(MPSK)} = \frac{1}{\log_2 M}\left\{1 - \operatorname{erf}\left[\left(T_o B \, \frac{C}{N}\right)^{1/2} \sin\frac{\pi}{M}\right]\right\}   (11.41(b))
P_{e(MQAM)} = 2\left(\frac{M^{1/2} - 1}{M^{1/2}}\right)\left\{1 - \operatorname{erf}\left[\left(\frac{3}{2(M-1)} \, \frac{\langle E \rangle}{N_0}\right)^{1/2}\right]\right\}   (11.43)

P_{e(MQAM)} = 2\left(\frac{M^{1/2} - 1}{M^{1/2}}\right)\left\{1 - \operatorname{erf}\left[\left(\frac{3}{2(M-1)} \, T_o B \, \frac{C}{N}\right)^{1/2}\right]\right\}

P_{b(MQAM)} = \frac{2}{\log_2 M}\left(\frac{M^{1/2} - 1}{M^{1/2}}\right)\left\{1 - \operatorname{erf}\left[\left(\frac{3 \log_2 M}{2(M-1)} \, \frac{E_b}{N_0}\right)^{1/2}\right]\right\}   (11.46(a))
P_{b(MQAM)} = \frac{2}{\log_2 M}\left(\frac{M^{1/2} - 1}{M^{1/2}}\right)\left\{1 - \operatorname{erf}\left[\left(\frac{3}{2(M-1)} \, T_o B \, \frac{C}{N}\right)^{1/2}\right]\right\}   (11.46(b))
Noise and link budgets:
1 W = 0 dBW = 30 dBm

N = kTB   (12.8)
T_e = \frac{T_{ph}(1 - G_l)}{G_l}   (12.37)

f = 1 + \frac{T_e}{T_o}   (12.45)

f_{cascade} = f_1 + \frac{f_2 - 1}{G_1} + \frac{f_3 - 1}{G_1 G_2} + \cdots   (12.49)

G_R = \frac{4\pi a_e}{\lambda^2}   (12.62)

L = \frac{1}{G_T G_R}\left(\frac{4\pi R}{\lambda}\right)^2   (12.70)
Networks:
P_{cfs} = (1 - P_b)^{n_s}   (18.1)
P_{cf} = (1 - P_b)^n   (18.2)

Stop and wait:
t_{out} \ge 2t_p + t_{proc} + t_s   (18.5)
t_T = t_I + t_{out}   (18.6)
a = \frac{t_T}{t_I} = 1 + \frac{t_{out}}{t_I}   (18.7)
P_{retrans} = P_f + P_{ACK} \approx P_f   (18.8)
t_V = \frac{t_T}{1 - P_{retrans}}   (18.9)
D = n_i \, \frac{1 - P_{retrans}}{t_T}   (18.11)
\frac{D}{C} = \frac{n_i}{n} \, \frac{1 - P_{retrans}}{a}   (18.12)
Go back N:
t_{out} = 2t_p + 2t_I   (18.14)
a = \frac{t_T}{t_I} = 1 + \frac{t_{out}}{t_I} = 3 + \frac{2t_p}{t_I}   (18.15)
P_{retrans} = 2P_f \approx 2nP_b   (18.16)
t_V = t_I \, \frac{1 + (N-1)P_{retrans}}{1 - P_{retrans}}   (18.17)
\frac{D}{C} = \frac{n_i}{n} \, \frac{1 - P_{retrans}}{1 + (N-1)P_{retrans}}   (18.18)
Selective repeat:
t_V = \frac{t_I}{1 - P_{retrans}}   (18.19)
\frac{D}{C} = \frac{n_i}{n} (1 - P_{retrans})   (18.20)
Queue (infinite):
P(k \text{ arrivals in } T) = P(k) = \frac{(\lambda T)^k}{k!} e^{-\lambda T}, \quad k = 0, 1, 2, \ldots   (19.1/3)
\rho = \frac{\lambda}{\mu}
P(k) = \rho^k P(0), \quad k \ge 0   (19.4)
\sum_{k=0}^{\infty} P(k) = 1   (19.5)
P(0) \sum_{k=0}^{\infty} \rho^k = 1   (19.6)
P(0) = 1 - \rho   (19.7)
P(k) = \rho^k (1 - \rho), \quad k \ge 0   (19.8)
S = \lambda   (19.9)
L = \lambda W   (19.10)
L = \sum_{k=0}^{\infty} k P(k) = \frac{\rho}{1 - \rho}   (19.11)
W = \frac{L}{\lambda} = \frac{1}{\lambda} \, \frac{\rho}{1 - \rho} = \frac{1}{\mu - \lambda}   (19.12)
Q = W - \frac{1}{\mu} = \rho W   (19.13)
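As a quick illustration of (19.11)-(19.13) (the figures are an invented example, not from the notes): with λ = 5 s⁻¹ and μ = 10 s⁻¹, ρ = 0.5, so L = 0.5/0.5 = 1 customer, W = 1/(10 − 5) = 0.2 s, and Q = 0.2 − 0.1 = 0.1 s = ρW.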
Finite length queues:
P_L = P(N) = \frac{\rho^N (1 - \rho)}{1 - \rho^{N+1}}, \quad \rho \ne 1   (19.18)
L = SW   (19.19)
W = \frac{L}{S} = \frac{1}{S} \sum_{k=0}^{N} k P(k)   (19.20)
S = \lambda (1 - P_L)   (19.X)
Constants:
k = 1.38 × 10⁻²³ J/K
c = 3 × 10⁸ m/s
For x equal to, or greater than, 4 the following approximation may normally be used:
\operatorname{erf}(x) \approx 1 - \frac{e^{-x^2}}{x\sqrt{\pi}}
Some complementary error function values for large x are:

    x      erfc(x)
    4.0    1.59 × 10⁻⁸
    4.1    6.89 × 10⁻⁹
    4.2    2.93 × 10⁻⁹
    4.3    1.22 × 10⁻⁹
    4.4    5.01 × 10⁻¹⁰
    4.5    2.01 × 10⁻¹⁰
    4.6    7.92 × 10⁻¹¹
    4.7    3.06 × 10⁻¹¹
    4.8    1.16 × 10⁻¹¹
    4.9    4.30 × 10⁻¹²
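The tabulated values follow the approximation above; a sketch comparing it with the exact function:

    import math

    for x in (4.0, 4.5, 4.9):
        approx = math.exp(-x * x) / (x * math.sqrt(math.pi))
        print(x, approx, math.erfc(x))   # approx matches the table; exact erfc is ~2-3% lower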
Please recall that before calculating erf(z) you may need the following conversion formula, relating the voltage x and mean value m to the standard deviation of the noise σ:
z = \frac{x - m}{\sqrt{2}\,\sigma}
The cumulative distribution (CD), or error probability, is then given by:
P(x) = P_e = \frac{1}{2}\left[1 \pm \operatorname{erf}(z)\right]
The + or − sign in front of the erf function is chosen according to whether the CD is required for values less than or greater than x, respectively.
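As a usage example (the numbers are illustrative only): for Gaussian noise of mean m = 0 and σ = 1 V, the probability that the voltage exceeds x = 2 V follows from z = (2 − 0)/(√2 × 1) ≈ 1.414, giving P = ½[1 − erf(1.414)] ≈ 0.0228.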