Solutions to Chapter 6 Problems
Introduction to Communication Systems, by Upamanyu Madhow
Problem 6.1
(a) Let Z = (y ∗ h)(t0), where h(t) = s(−t). Then Z ∼ N(m, v²) if 1 is sent, and Z ∼ N(0, v²) if 0 is sent, where
v² = σ²||h||² = σ² ∫_0^1 4t² dt = (4/3)σ²
m = (s ∗ h)(t0) = ∫_0^1 s(t)s(t − t0) dt = ∫_0^1 t(1 − t) dt = 1/6
Thus, for the ML decision rule,
Pe = Q(|m|/(2v)) = Q(√(Eb/(8N0))),
the usual formula for the performance of optimal reception of on-off keying in AWGN.
(c) For h(t) = I_[0,2], we again have the same model for the decision statistic Z = (y ∗ h)(t0), but with v² = σ²||h||² = 2σ², and m = (s ∗ h)(t0). The performance improves with |m|, which is maximized at t0 = 2 (m = 1) or t0 = 4 (m = −1). We therefore get that, for the ML decision rule,
Pe = Q(|m|/(2v)) = Q(√(3Eb/(8N0)))
(d) Note that we can approximate the matched filter s(−t) using linear combinations of two shifted versions of h(t) = I_[0,2], by approximating triangles by rectangles. That is, the matched filter shape is approximated as h̃(t) = h(t + 2) − h(t + 4). Thus, we can use the decision statistic Z = (y ∗ h̃)(0), for which
v² = σ²||h̃||² = 4σ²
m = (s ∗ h̃)(0) = 2
As before, we can enforce the scaling with Eb/N0 to get
Pe = Q(|m|/(2v)) = Q(√(3Eb/(4N0)))
This is 3 dB better than the performance in (c), and 10 log10(4/3) ≈ 1.25 dB worse than the optimal receiver in (b).
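As a quick numerical cross-check of the comparisons above, the sketch below (in Python, with the Q function taken from scipy and an arbitrary illustrative Eb/N0) evaluates the three error-probability expressions and the quoted dB gaps.

```python
# Problem 6.1: compare Q(sqrt(Eb/(8 N0))), Q(sqrt(3 Eb/(8 N0))), Q(sqrt(3 Eb/(4 N0))).
import numpy as np
from scipy.stats import norm

def Q(x):
    return norm.sf(x)            # Gaussian tail probability

ebn0 = 10 ** (10.0 / 10)         # illustrative Eb/N0 of 10 dB (assumed value)
pe_a = Q(np.sqrt(ebn0 / 8))      # part (a)
pe_c = Q(np.sqrt(3 * ebn0 / 8))  # part (c)
pe_d = Q(np.sqrt(3 * ebn0 / 4))  # part (d)
print(pe_a, pe_c, pe_d)
print(10 * np.log10(2))          # gap between (c) and (d): about 3 dB
print(10 * np.log10(4 / 3))      # loss of (d) relative to (b): about 1.25 dB
```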
Problem 6.2 (a) We have
p(y|1) = e^{−(y−2)²/18}/√(18π)
p(y|0) = e^{−(y+2)²/8}/√(8π)
The optimal rule consists of comparing the log likelihood ratio to a threshold. The log likelihood ratio can be written as
log L(y) = log p(y|1) − log p(y|0) = [−(y − 2)²/18 − (1/2) log(18π)] − [−(y + 2)²/8 − (1/2) log(8π)]
which simplifies to
log L(y) = (5y² + 52y + 20)/72 − log(3/2)
(b) For π0 = 1/4, we compare log L(y) to the threshold log(π0/π1) = −log 3. Simplifying, we obtain the MPE rule
5y² + 52y ≷ −72 log 2 − 20 = −69.9   (decide H1 if >, H0 if <)
(c) The roots of the quadratic 5y² + 52y + 69.9 are α1 = −1.6 and α2 = −8.8. Since the quadratic opens upward, the MPE rule decides H1 for y > α1 or y < α2, and H0 for α2 < y < α1; the conditional error probabilities are therefore obtained by integrating the conditional densities of (a) over these regions, and can be written in terms of Q functions evaluated at the thresholds α1 and α2.
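The threshold and conditional error probabilities can also be checked numerically. The sketch below assumes the conditional distributions N(2, 9) and N(−2, 4) implied by the densities in (a), recomputes the roots of the quadratic, and evaluates the error probabilities of the MPE rule.

```python
# Problem 6.2: MPE rule 5y^2 + 52y > -72 log 2 - 20 (decide H1), and its error probabilities.
import numpy as np
from scipy.stats import norm

c = 20 + 72 * np.log(2)                   # constant term: decide H1 when 5y^2 + 52y + c > 0
a2, a1 = np.sort(np.roots([5, 52, c]))    # roots, approximately -8.8 and -1.6

# Y|H0 ~ N(-2, 4), Y|H1 ~ N(2, 9); H1 is decided outside the interval [a2, a1].
pe0 = norm.sf(a1, loc=-2, scale=2) + norm.cdf(a2, loc=-2, scale=2)
pe1 = norm.cdf(a1, loc=2, scale=3) - norm.cdf(a2, loc=2, scale=3)
pi0 = 0.25
print(a2, a1, pe0, pe1, pi0 * pe0 + (1 - pi0) * pe1)
```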
Problem 6.3: The conditional densities and decision regions are sketched in Figure 1 (not to scale). The threshold γ satisfies p(γ|1) = p(γ|0), or
(1/√(2π)) e^{−γ²/2} = 1/4
which yields γ = √(3 log 2 − log π) ≈ 0.97.
Problem 6.4 (a) We have p(z|0) = (1/2)e^{−|z|} and p(z|1) = (1/2)e^{−|z−4|}, so that the log likelihood ratio, sketched in Figure 2, is given by
K(z) = log[p(z|1)/p(z|0)] = |z| − |z − 4| = 4 for z > 4, 2z − 4 for 0 ≤ z ≤ 4, and −4 for z < 0
Figure 1: Conditional densities and decision regions for Problem 6.3: p(z|1) is the N(0, 1) density, p(z|0) is uniform with height 1/4 on [−2, 2], and the decision regions alternate Γ1, Γ0, Γ1, Γ0, Γ1 with boundaries at −2, −γ, γ, 2.
Figure 2: Log likelihood ratio K(z) for Problem 6.4: K(z) = −4 for z < 0, increases linearly to 4 over 0 ≤ z ≤ 4, and equals 4 for z > 4.
(b) The conditional error probability given 1 is
Pe|1 = P[Z < 1 | H1] = ∫_{−∞}^{1} p(z|1) dz = ∫_{−∞}^{1} (1/2)e^{−|z−4|} dz = (1/2) ∫_{−∞}^{1} e^{z−4} dz = e^{−3}/2 = 0.025
(c) The region z < 1 can be written as K(z) < −2. The MPE rule compares K(z) to log(π0/π1). Thus, the rule in (b) is the MPE rule if log(π0/π1) = −2, which yields π0 = 1/(e² + 1) ≈ 0.12.
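A quick Monte Carlo sanity check of (b) and (c); the Laplacian sampler and the sample size are implementation choices, not part of the problem statement.

```python
# Problem 6.4: check Pe|1 = exp(-3)/2 by simulation, and the prior for which z < 1 is the MPE region.
import numpy as np

rng = np.random.default_rng(0)
z = rng.laplace(loc=4.0, scale=1.0, size=1_000_000)  # samples from p(z|1) = 0.5*exp(-|z-4|)
print(np.mean(z < 1), 0.5 * np.exp(-3))              # simulated vs. exact, both ~ 0.025
print(1 / (np.exp(2) + 1))                           # pi0 for which this is the MPE rule, ~ 0.12
```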
Figure 3: Conditional error probabilities for the MAP rule (as a function of π0 ) for Problem 6.5.
Problem 6.5 (a) The conditional densities are exponential, p(y|i) = µi e^{−µi y} for y ≥ 0, where µ1 = 1/4 and µ0 = 1, so that log L(y) = log(µ1/µ0) + (µ0 − µ1)y. Plugging in, we obtain log L(y) = (3/4)y − log 4. The MAP/MPE rule
log L(y) ≷ log(π0/π1) = log(π0/(1 − π0))   (decide H1 if >, H0 if <)
therefore simplifies to
y ≷ (4/3) log(4π0/(1 − π0)) = γ_MAP   (MAP rule)
Since y ≥ 0, the MAP rule always says H1 if γ_MAP ≤ 0, which happens if the argument of the log is less than (or equal to) one: 4π0/(1 − π0) ≤ 1, or π0 ≤ 1/5.
(b) From (a), we see that for π0 ≤ 1/5, the conditional error probabilities are given by Pe|0 = 1 and Pe|1 = 0, since the MAP rule always says H1.
For π0 > 1/5, we have γ_MAP > 0, and the conditional error probabilities are given by
Pe|0 = P[Y > γ_MAP | H0] = e^{−µ0 γ_MAP} = e^{−(4/3) log(4π0/(1−π0))} = ((1 − π0)/(4π0))^{4/3}
Pe|1 = P[Y ≤ γ_MAP | H1] = 1 − e^{−µ1 γ_MAP} = 1 − e^{−(1/3) log(4π0/(1−π0))} = 1 − ((1 − π0)/(4π0))^{1/3}
These are plotted as a function of π0 in Figure 3.
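The curves of Figure 3 can be regenerated directly from the expressions above; the sketch below evaluates them on an (arbitrarily chosen) grid of priors.

```python
# Problem 6.5: conditional error probabilities of the MAP rule as a function of pi0 (cf. Figure 3).
import numpy as np

mu1, mu0 = 0.25, 1.0
pi0 = np.linspace(0.01, 0.99, 99)                   # grid of priors (arbitrary spacing)
gamma = (4.0 / 3.0) * np.log(4 * pi0 / (1 - pi0))   # MAP threshold from part (a)
gamma = np.maximum(gamma, 0.0)                      # gamma <= 0 means "always decide H1"

pe0 = np.exp(-mu0 * gamma)          # P[Y > gamma | H0]; equals 1 when gamma = 0
pe1 = 1.0 - np.exp(-mu1 * gamma)    # P[Y <= gamma | H1]; equals 0 when gamma = 0
for p, a, b in zip(pi0[::10], pe0[::10], pe1[::10]):
    print(f"pi0={p:.2f}  Pe|0={a:.3f}  Pe|1={b:.3f}")
```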
Figure 4: Conditional densities and ML decision regions for Problem 6.7: p(y|1) is triangular with height 1/3, p(y|0) is uniform with height 1/4, and the inner region boundaries are at ±3/4.
Problem 6.7 (a) Densities must integrate to one. For p(y|1), the area under the triangle equals (1/2) × c × 6 = 1, so that c = 1/3.
(b) The ML rule p(y|1) ≷ p(y|0) (decide H1 if >, H0 if <) is illustrated in Figure 4, with the threshold given by equating the conditional densities: (1/3)(1 − |y|/3) = 1/4, which gives |y| = 3/4.
The high conditional error probabilities indicate that the conditional densities are not too different from each other, which we can also see visually.
Figure 5: Conditional densities and ML decision regions for Problem 6.8: p(y|1) = (1/8)e^{−|y|/4}, p(y|0) is uniform with height 1/10 on [−5, 5], and Γ0 consists of the two intervals between ±γ and ±5.
Problem 6.8 (a) The ML rule p(y|1) ≷ p(y|0) (decide H1 if >, H0 if <) is illustrated in Figure 5, with thresholds ±γ given by equating the conditional densities: (1/8)e^{−|y|/4} = 1/10, which gives |y| = γ = 4 log(5/4) = 0.8926.
(b) The conditional error probabilities are given by
Pe|0 = P[|Y| < γ | H0] = 2 × γ × (1/10) = (4/5) log(5/4) = 0.1785
Pe|1 = P[γ < |Y| < 5 | H1] = 2 ∫_γ^5 (1/8) e^{−y/4} dy = e^{−γ/4} − e^{−5/4} = 4/5 − e^{−5/4} = 0.5135
Thus, even the ML rule can have highly asymmetric conditional error probabilities.
Problem 6.9 (a) The thresholds for the MPE rule π1 p(y|1) ≷ π0 p(y|0) (decide H1 if >, H0 if <) with π0 = 1/3 are given by equating (2/3)(1/8)e^{−|y|/4} = (1/3)(1/10), which gives |y| = γ_MPE = 4 log(5/2) = 3.665.
(c) The conditional error probabilities as a function of the threshold are given by the same expressions as in Problem 6.8, but with a different value plugged in for the threshold:
Pe|0 = P[|Y| < γ_MPE | H0] = 2 × γ_MPE × (1/10) = (4/5) log(5/2) = 0.733
Pe|1 = P[γ_MPE < |Y| < 5 | H1] = 2 ∫_{γ_MPE}^{5} (1/8) e^{−y/4} dy = e^{−γ_MPE/4} − e^{−5/4} = 2/5 − e^{−5/4} = 0.1135
The average error probability is given by
Pe(MPE) = π0 Pe|0 + π1 Pe|1 = (1/3) × 0.733 + (2/3) × 0.1135 = 0.32 for π0 = 1/3
For comparison, the average error probability for the ML rule is given by averaging the conditional error probabilities in Problem 6.8(b):
Pe(ML) = π0 Pe|0 + π1 Pe|1 = (1/3) × 0.1785 + (2/3) × 0.5135 = 0.40 for π0 = 1/3
Since H0 is less likely, the MPE rule lets Pe|0 get large in order to drive down Pe|1, and achieves, as expected, a smaller average error probability. Of course, since the hypotheses are not so well separated to start with, the error probabilities of both the MPE and ML rules are rather poor.
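The ML/MPE comparison can be reproduced in a few lines; the sketch below plugs the two thresholds into the common error-probability expressions used in Problems 6.8 and 6.9.

```python
# Problems 6.8/6.9: average error probabilities of the ML and MPE rules for pi0 = 1/3.
import numpy as np

def cond_errors(g):
    # Threshold rule: decide 0 for g < |y| < 5, decide 1 otherwise.
    pe0 = 2 * g / 10                        # P[|Y| < g | H0], uniform density of height 1/10
    pe1 = np.exp(-g / 4) - np.exp(-5 / 4)   # P[g < |Y| < 5 | H1]
    return pe0, pe1

pi0 = 1 / 3
for name, g in [("ML ", 4 * np.log(5 / 4)), ("MPE", 4 * np.log(5 / 2))]:
    pe0, pe1 = cond_errors(g)
    print(name, round(pe0, 4), round(pe1, 4), round(pi0 * pe0 + (1 - pi0) * pe1, 4))
```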
Problem 6.10 (a) The likelihood ratio is given by
L(y) = p(y|1)/p(y|0) = [e^{−m1} m1^y/y!] / [e^{−m0} m0^y/y!] = (m1/m0)^y e^{−(m1−m0)}
The ML rule compares the log likelihood ratio to zero. Taking the log above, we have log L(y) = y log(m1/m0) − (m1 − m0), so that the ML rule reduces to comparing y to the threshold γ = (m1 − m0)/log(m1/m0).
For m1 = 100, m0 = 10, we have γ ≈ 39.1, so that
δ_ML(y) = 1 for y > 39, and δ_ML(y) = 0 for y ≤ 39 = t
where t = ⌊γ⌋ in general.
(b) The conditional error probabilities are given by
Pe|1 = P[Y ≤ t | H1] = Σ_{y=0}^{t} (m1^y/y!) e^{−m1}
Pe|0 = 1 − P[Y ≤ t | H0] = 1 − Σ_{y=0}^{t} (m0^y/y!) e^{−m0}
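For concrete numbers, the threshold and the conditional error probabilities can be evaluated with scipy's Poisson distribution; the sketch below uses m1 = 100 and m0 = 10 from part (a).

```python
# Problem 6.10: ML threshold and error probabilities for Poisson means m1 = 100, m0 = 10.
import numpy as np
from scipy.stats import poisson

m1, m0 = 100, 10
gamma = (m1 - m0) / np.log(m1 / m0)   # about 39.1
t = int(np.floor(gamma))              # decide 1 for y > t

pe1 = poisson.cdf(t, m1)              # P[Y <= t | H1]
pe0 = poisson.sf(t, m0)               # P[Y > t | H0]
print(gamma, t, pe1, pe0)
```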
Problem 6.11 (a) The conditional densities are translations of the noise density by the signal amplitude: p(y|0) = pN(y − A) = (λ/2)e^{−λ|y−A|} and p(y|1) = pN(y + A) = (λ/2)e^{−λ|y+A|}. Let us compute and simplify the log likelihood ratio:
log L(y) = λ|y − A| − λ|y + A| = 2Aλ for y < −A, −2yλ for −A ≤ y ≤ A, and −2Aλ for y > A
The ML decision rule is given by log L(y) ≷ 0 (decide H1 if >, H0 if <), and simplifies to deciding H0 for y > 0 and H1 for y < 0.
(b) The conditional error probabilities are given by
Pe|0 = P[Y < 0 | H0] = ∫_{−∞}^{0} (λ/2)e^{−λ|y−A|} dy = ∫_{−∞}^{0} (λ/2)e^{−λ(A−y)} dy = (1/2)e^{−λA}
By symmetry, Pe|1 is given by the same expression. Note that, instead of computing the integral, we could just note that the conditional error probabilities are given by the tail of an exponential density (scaled down by 1/2 because it is two-sided).
(c) The MPE rule is expressed in terms of the log likelihood ratio as log L(y) ≷ log(π0/π1) = −log 2 (decide H1 if >, H0 if <) for π0 = 1/3. This corresponds to a threshold rule in y: setting −2λy = −log 2 gives
y ≷ (1/(2λ)) log 2, deciding H0 for larger y and H1 for smaller y (MPE rule for π0 = 1/3)
(d) The LLR is the sum of the information from the observation and from the prior:
LLR = log(P[0 | Y = y]/P[1 | Y = y]) = log(π0 p(y|0)/(π1 p(y|1))) = −log L(y) + log(π0/π1)
which, for the observation y = A/2, evaluates to 2λ(A/2) − log 2 = λA − log 2.
(Note that log(p(y|0)/p(y|1)) = −log L(y) for our definition of the likelihood ratio.)
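A Monte Carlo sketch of part (b): it simulates Laplacian noise and checks the ML error probability (1/2)e^{−λA}. The particular values of A and λ are arbitrary illustrations, not values given in the problem.

```python
# Problem 6.11(b): Monte Carlo check that the ML rule (decide 1 for y < 0) has Pe|0 = 0.5*exp(-lam*A).
import numpy as np

rng = np.random.default_rng(1)
A, lam = 1.0, 2.0                                            # illustrative values (assumed)
n = rng.laplace(loc=0.0, scale=1.0 / lam, size=1_000_000)    # noise density (lam/2)*exp(-lam*|n|)
y = A + n                                                    # received value when 0 is sent (amplitude +A)
print(np.mean(y < 0), 0.5 * np.exp(-lam * A))                # simulated vs. exact, both ~ 0.068
```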
Problem 6.12 (a) The conditional densities and ML regions are sketched in Figure 6. We have p(y|0) = (1/5)e^{−y/5} I_[0,∞)(y) and p(y|1) = (1/10) I_[0,10](y). The ML decision regions are given by Γ1 = {y : p(y|1) > p(y|0)} and Γ0 = {y : p(y|1) < p(y|0)}, with crossover point given by (1/5)e^{−y/5} = 1/10, which gives y = 5 log 2 ≈ 3.47. We can also write the ML rule as
δ_ML(y) = 1 for 5 log 2 < y < 10, and 0 otherwise
(b) The conditional error probability is given by
Pe|0 = P[Y ∈ Γ1 | H0] = P[5 log 2 < Y < 10 | H0] = e^{−5 log 2/5} − e^{−10/5} = 1/2 − e^{−2} = 0.3647
Figure 6: Conditional densities and ML decision regions for Problem 6.12: p(y|0) = (1/5)e^{−y/5} and p(y|1) = 1/10 on [0, 10], with crossover at 5 log 2 ≈ 3.47; Γ1 = (5 log 2, 10) and Γ0 is the rest of the positive axis.
Figure 7: Graphical computation of conditional error probability Pe|1 for Problem 6.13(a).
Problem 6.13 (a) We have p(y|0) = pN(y) = (1/10)(1 − |y|/10) I_[−10,10](y) and p(y|1) = pN(y − 6) = (1/10)(1 − |y − 6|/10) I_[−4,16](y). The conditional error probability Pe|1 = P[Y < 4 | H1] is given by the area of the triangle shown in Figure 7. The height is p(4|1) = (1/10)(1 − 2/10) = 2/25, so
Pe|1 = (1/2) × base × height = (1/2) × 8 × (2/25) = 8/25
(b) It is easy to check that the likelihood ratio is monotone nondecreasing in y. Thus, an MPE rule will be a threshold rule in y, and the threshold is y = 4 if π0 p(y|0) = π1 p(y|1) at y = 4 for valid prior probabilities (i.e., 0 ≤ π0, π1 ≤ 1 with π0 + π1 = 1). Plugging in, we have
π0 (1/10)(1 − 4/10) = π1 (1/10)(1 − 2/10)
which simplifies to
6π0 = 8π1 = 8(1 − π0)
yielding the valid prior π0 = 4/7.
Problem 6.14 (a) The ML rule
⟨y, s1⟩ − ||s1||²/2 ≷ ⟨y, s0⟩ − ||s0||²/2   (decide H1 if >, H0 if <)
specializes to
⟨y, s1⟩ ≷ 0
for s0 = −s1. Thus, it can be implemented by correlating the received signal against s1(t) and then deciding based on the sign of the output. The conditional error probability is given by Pe|0 = Q(||s1 − s0||/(2σ)). Now, ||s1 − s0|| = ||2s1|| = 2||s1||, ||s1||² = 2∫_0^1 (1 − t)² dt = 2/3, and σ² = 0.1, so that
Pe|0 = Q(2√(2/3)/(2√0.1)) = Q(√(20/3)) = 0.0049
(b) Conditioned on s0 sent, y(t) = s0(t) + n(t). Let us compute the signal and noise contributions separately. The noise output n0 = ∫_{−1}^{−0.5} n(t) dt has variance σ² ∫_{−1}^{−0.5} 1² dt = σ²/2. The other noise components are independent and have identical variance, since they are computed over disjoint intervals of equal length. Thus, the ni are i.i.d. N(0, σ²/2). The signal contributions are given by integrating s0(t) over these intervals. It is easy to see, therefore, that y ∼ N(s0, (σ²/2)I), where s0 = −(1/8, 3/8, 3/8, 1/8)ᵀ.
(c) Since the signals under the two hypotheses are negatives of each other, we infer from (b) that the signal model after discretization is y ∼ N(s1, (σ²/2)I) conditioned on s1 sent, and y ∼ N(s0 = −s1, (σ²/2)I) conditioned on s0 sent, where s1 = (1/8, 3/8, 3/8, 1/8)ᵀ. Thus, we are hypothesis testing in discrete-time WGN with variance σ̃² = σ²/2, and the ML rule is given by
⟨y, s1⟩ − ||s1||²/2 ≷ ⟨y, s0⟩ − ||s0||²/2
which specializes to
⟨y, s1⟩ = (1/8)(y0 + 3y1 + 3y2 + y3) ≷ 0
Simplifying, our ML rule is
y0 + 3y1 + 3y2 + y3 ≷ 0   (decide H1 if >, H0 if <)
The conditional error probability is given by the standard expression for WGN: Pe|0 = Q(||s1 − s0||/(2σ̃)), where we must use the variance of the discrete-time WGN obtained after discretization. As before, ||s1 − s0|| = ||2s1|| = 2||s1||. We have ||s1||² = (1/64)(1 + 9 + 9 + 1) = 5/16, so that
Pe|0 = Q(2√(5/16)/(2√(0.1/2))) = Q(2.5) = 0.0062
(only a small degradation relative to ML performance for the original continuous-time system).
(d) For the observation Z = y0 + y1 + y2 + y3, adding the signal components and the noise variances, we obtain that Z ∼ N(1, 4σ̃² = 2σ²) if s1 is sent, and Z ∼ N(−1, 2σ²) if s0 is sent. This falls within the basic Gaussian format, and the conditional error probability is given by
Pe|0 = Q(|m1 − m0|/(2v)) = Q((1 − (−1))/(2√(2σ²))) = Q(√5) = 0.0127
(e) The smallest error probability is in (a), and the largest is in (d). We could have rank-ordered them without explicit computation. The probability in (a) is the best achievable, since we are implementing the ML rule for the original continuous-time observation. We lose some performance in discretizing as in (b), because we are not doing optimal processing of the continuous-time signal (which we know is to correlate with s1). We lose further performance in (d), because we are not implementing the ML rule for the discretized signal y.
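The rank ordering in (e) can be confirmed by evaluating the three Q-function expressions; the sketch below uses σ² = 0.1 as in the solution.

```python
# Problem 6.14: error probabilities of the receivers in (a), (c), (d) for sigma^2 = 0.1.
import numpy as np
from scipy.stats import norm

Q = norm.sf
sigma2 = 0.1
pe_a = Q(np.sqrt(20 / 3))                       # (a) continuous-time ML
s1 = np.array([1, 3, 3, 1]) / 8                 # discretized signal; noise variance sigma^2/2 per sample
pe_c = Q(np.linalg.norm(2 * s1) / (2 * np.sqrt(sigma2 / 2)))  # (c) ML on the samples, Q(2.5)
pe_d = Q(2 / (2 * np.sqrt(2 * sigma2)))         # (d) unweighted sum, Q(sqrt(5))
print(pe_a, pe_c, pe_d)                         # about 0.0049, 0.0062, 0.0127
```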
Problem 6.15 Drawing figures is highly recommended, but we leave that as an exercise.
(a) The filter output at time t is given by
z(t) = ∫ y(τ)h(t − τ) dτ = ∫_{t−1}^{t} y(τ) dτ
Thus, the filter is basically doing integrate-and-dump, and adding filter samples is equivalent to a correlation of y against an appropriately chosen waveform. Thus,
Z = z(0) + z(1) = ∫_{−1}^{0} y(τ) dτ + ∫_{0}^{1} y(τ) dτ = ∫_{−1}^{1} y(τ) dτ = ⟨y, u⟩
where the "effective correlator" is u(t) = I_[−1,1](t). Conditioned on 0 sent, y(t) = n(t) and Z = ⟨n, u⟩ ∼ N(0, σ²||u||² = 2σ²). Conditioned on 1 sent, y(t) = s(t) + n(t) and Z = ⟨s, u⟩ + ⟨n, u⟩ = A + ⟨n, u⟩ ∼ N(A, 2σ²).
(b) The model in (a) falls within the standard Gaussian format, and the ML rule is given by
Z ≷ A/2   (decide H1 if >, H0 if <)
The resulting error probability is a factor of 3/4 (or about 1.25 dB) worse than that of optimal demodulation of OOK.
(c) The decision statistic is
Z2 = z(0) + z(0.5) + z(1) = ∫_{−1}^{0} y(τ) dτ + ∫_{−0.5}^{0.5} y(τ) dτ + ∫_{0}^{1} y(τ) dτ = ⟨y, v⟩
where the effective correlator is v(t) = I_[−1,−0.5](t) + 2 I_[−0.5,0.5](t) + I_[0.5,1](t) (draw this to see how its shape compares with the original signal s(t)). We leave it as an exercise to show that ||v||² = 5 and ⟨s, v⟩ = (7/4)A. Following the same steps as in (b), we have Z2 ∼ N(0, σ²||v||² = 5σ²) if 0 is sent, and Z2 ∼ N(⟨s, v⟩ = (7/4)A, σ²||v||² = 5σ²) if 1 is sent. This again falls within the standard Gaussian format, and the error probability is given by
Pe = Pe|0 = Pe|1 = Q(((7/4)A − 0)/(2√(5σ²))) = Q((7/4)√(3Eb)/(2√(5N0/2)))
which simplifies to
Pe = Q(√(147Eb/(160N0)))
This is a factor of 147/160 (or 0.37 dB) worse than OOK with optimal demodulation. Thus, the degradation is smaller than in (b), which is to be expected since the shape of the effective correlator v(t) here is a better approximation of the signal s(t) than the effective correlator u(t) in (b).
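The dB degradations quoted in (b) and (c) follow directly from the SNR factors 3/4 and 147/160; a two-line check:

```python
# Problem 6.15: SNR degradations of the two suboptimal statistics, in dB.
import numpy as np

print(10 * np.log10(4 / 3))      # part (b): factor 3/4, about 1.25 dB
print(10 * np.log10(160 / 147))  # part (c): factor 147/160, about 0.37 dB
```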
Figure 8: Correlator for implementing ML rule in Problem 6.16(a).
Problem 6.16 (a) The correlator a(t) = s1(t) − s0(t) is sketched in Figure 8. We have ||s1||² = 2 and ||s0||² = 2∫_0^1 t² dt = 2/3, so that the threshold is γ_ML = (||s1||² − ||s0||²)/2 = (2 − 2/3)/2 = 2/3.
(b) Note that Eb = (1/2)(||s1||² + ||s0||²) = (1/2)(2 + 2/3) = 4/3.
The signal contributions to the samples are easily computed and form the vectors s1 = (0, 1, 1)ᵀ and s0 = (−1/2, −1/2, 0)ᵀ. The conditional distribution of Z given 0 sent is therefore N(s0, σ²I).
(e) Actually, since the noise is uncorrelated, this is not really as challenging as promised. This is simply signaling in discrete WGN, and the ML rule is given by
⟨Z, s1⟩ − ||s1||²/2 ≷ ⟨Z, s0⟩ − ||s0||²/2   (decide H1 if >, H0 if <)
which simplifies to
⟨Z, s1 − s0⟩ ≷ (||s1||² − ||s0||²)/2
Plugging in the numbers, we obtain
(1/2)Z1 + (3/2)Z2 + Z3 ≷ 3/4
The error probability for ML reception in discrete WGN is given by
Pe = Q(||s1 − s0||/(2σ)) = Q(√((d²/Eb)(Eb/(2N0))))
where d² = ||s1 − s0||² = 7/2 is for the discretized system, but Eb = 4/3 is for the original continuous-time system, computed above. (Note that the scaling is such that the noise variance per dimension is the same in both systems; otherwise we would have to account for that.) We have d²/Eb = 21/8, so that
Pe = Q(√(21Eb/(16N0)))
Problem 6.17
(a) Signal space representations with respect to the given orthonormal basis are:
Signal Set A: s1 = (1, 0, 0, 0)ᵀ, s2 = (0, 1, 0, 0)ᵀ, s3 = (0, 0, 1, 0)ᵀ and s4 = (0, 0, 0, 1)ᵀ
Signal Set B: s1 = (1, 0, 0, 1)ᵀ, s2 = (0, 1, 1, 0)ᵀ, s3 = (1, 0, 1, 0)ᵀ and s4 = (0, 1, 0, 1)ᵀ
(b) For Signal Set A, the pairwise distance between any two points satisfies d² = d²_min = 2, while the energy per symbol is Es = 1. Thus, Eb = Es/log₂ 4 = 1/2, and d²_min/Eb = 4. The union bound on symbol error probability is therefore given by
Pe(signal set A) ≤ 3Q(√(d²_min/Eb) √(Eb/(2N0))) = 3Q(√(2Eb/N0))
For signal set B, each signal has one neighbor at distance given by d1² = 4 and two at distance given by d2² = d²_min = 2. The energy per symbol is Es = 2, so that Eb = 1. The union bound is given by
Pe(signal set B) ≤ 2Q(√(d2²/Eb) √(Eb/(2N0))) + Q(√(d1²/Eb) √(Eb/(2N0))) = 2Q(√(Eb/N0)) + Q(√(2Eb/N0))
(c) For exact analysis of the error probability for Signal Set B, suppose that the received signal in signal space is given by Y = (Y1, Y2, Y3, Y4). Condition on the first signal s1 = (1, 0, 0, 1)ᵀ being sent. Then
Y1 = 1 + N1, Y2 = N2, Y3 = N3, Y4 = 1 + N4
where N1, ..., N4 are i.i.d. N(0, σ²) random variables. A correct decision is made if ⟨Y, s1⟩ > ⟨Y, sk⟩, k = 2, 3, 4. These inequalities can be written out as
Y1 + Y4 > Y2 + Y3,   Y1 + Y4 > Y1 + Y3,   Y1 + Y4 > Y2 + Y4
The second and third inequalities give Y4 > Y3 and Y1 > Y2, respectively, and imply the first inequality. Thus, the conditional probability of correct reception is given by
Pc|1 = P[Y1 > Y2 and Y4 > Y3 | 1] = P[Y1 > Y2 | 1] P[Y4 > Y3 | 1]
since the Yk are conditionally independent given the transmitted signal. Noting that Y1 − Y2 and Y4 − Y3 are independent N(1, 2σ²), we have
Pe|1 = 1 − Pc|1 = 1 − (1 − Q(1/√(2σ²)))² = 2Q(1/√(2σ²)) − Q²(1/√(2σ²))
Setting 1/√(2σ²) = a√(Eb/N0), with Eb = 1 and N0 = 2σ², we have a = 1. Further, by symmetry, we have Pe = Pe|1. We therefore obtain
Pe = 2Q(√(Eb/N0)) − Q²(√(Eb/N0)),   the exact error probability for signal set B
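To see how loose the union bound of (b) is relative to the exact expression of (c), the sketch below evaluates both over an (arbitrarily chosen) range of Eb/N0 values.

```python
# Problem 6.17: union bound vs. exact error probability for signal set B.
import numpy as np
from scipy.stats import norm

Q = norm.sf
ebn0_db = np.arange(0, 11, 2)                    # illustrative Eb/N0 range, in dB
x = 10 ** (ebn0_db / 10)

union = 2 * Q(np.sqrt(x)) + Q(np.sqrt(2 * x))    # bound from part (b)
exact = 2 * Q(np.sqrt(x)) - Q(np.sqrt(x)) ** 2   # exact expression from part (c)
for db, u, e in zip(ebn0_db, union, exact):
    print(f"Eb/N0 = {db:2d} dB: union {u:.3e}, exact {e:.3e}")
```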
Figure 9: Natural basis for signal set in Problem 6.18, labeled in the order of the coordinates
they correspond to in signal space.
Problem 6.18 Using the natural basis shown in Figure 9, the 4 signals map (not worrying about scale factors) to the vectors a = (1, 1, 1, 1)ᵀ, b = (1, 1, −1, −1)ᵀ, c = (1, −1, 1, −1)ᵀ, d = (1, −1, −1, 1)ᵀ. It is easy to check that these vectors are all orthogonal, so this is 4-ary equal-energy, orthogonal signaling. Thus, Es ≡ 4 (hence Eb = Es/log₂ 4 = 2) and all pairwise distances are given by d² = 2Es = 8. The power efficiency is d²/Eb = 4.
(a) The union bound is given by
Pe|c ≤ Q(d_cb/(2σ)) + Q(d_ca/(2σ)) + Q(d_cd/(2σ)) = 3Q(d/(2σ)) = 3Q(√((d²/Eb)(Eb/(2N0)))) = 3Q(√(2Eb/N0))
(b) False: not more power-efficient than QPSK. The power efficiency equals 4, which is the same
as that of QPSK.
Problem 6.19
(a) For 8-PSK, the symbol energy is Es = R². For the QAM constellations, the symbol energies follow from the constellation geometries, so that, in the high SNR regime, we expect the constellations to be ranked according to their power efficiencies d²_min/Eb.
(c) Each symbol is assigned 3 bits. Since 8-PSK and QAM1 are regular constellations with at
most 3 nearest neighbors per point, we expect to be able to Gray code. However, QAM2 has
some points with 4 nearest neighbors, so we definitely cannot Gray code it. We can, however,
try to minimize the number of bit changes between neighbors. Figure 10 shows Gray codes for
8-PSK and QAM1. The labeling for QAM2 is arbitrarily chosen to be such that points with 3
or fewer nearest neighbors are Gray coded.
Figure 10: Bit labelings for 8-PSK, QAM1, and QAM2. The 8-PSK and QAM1 labelings are Gray codes; the QAM2 labeling is chosen so that points with 3 or fewer nearest neighbors are Gray coded.
(d) For Gray-coded 8-PSK and QAM1, a symbol error due to decoding to a nearest neighbor causes only 1 out of the 3 bits to be in error. Hence, using the nearest neighbors approximation, P[bit error] ≈ (1/3) P[symbol error]. On the other hand, P[symbol error] ≈ N̄_dmin Q(d_min/(2σ)), where N̄_dmin is the average number of nearest neighbors. While the latter is actually an upper bound on the symbol error probability (the nearest neighbors approximation coincides with the intelligent union bound in these cases), the corresponding expression for the bit error probability need not be an upper bound (why?).
For 8-PSK, d_min = 2R sin(π/8) and Es = 3Eb = R². Plugging in σ² = N0/2 and N̄_dmin = 2, we obtain
P[bit error]_8-PSK ≈ (2/3) Q(√(3(1 − 1/√2) Eb/N0))
For QAM2, we need to make the nearest neighbors approximation specifically for the bit error probability. Let N(b) denote the total number of bit changes due to decoding into nearest neighbors when symbol b is sent. For the labeling given, these are specified by Table 1. Let N̄_bit = (1/8) Σ_b N(b) = 11/4 denote the average number of bits wrong due to decoding into nearest neighbors. Since each signal point is labeled by 3 bits, the nearest neighbors approximation for the bit error probability is now given by
P[bit error]_QAM2 ≈ (1/3) N̄_bit Q(d_min/(2σ)) = (11/12) Q(√(6Eb/(5N0)))
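The two nearest-neighbor bit error approximations can be tabulated against Eb/N0; the sketch below evaluates the expressions just derived on an (arbitrarily chosen) grid.

```python
# Problem 6.19(d): nearest-neighbor bit error approximations for Gray-coded 8-PSK and for QAM2.
import numpy as np
from scipy.stats import norm

Q = norm.sf
ebn0_db = np.arange(4, 13, 2)          # illustrative Eb/N0 grid, in dB
x = 10 ** (ebn0_db / 10)

ber_8psk = (2 / 3) * Q(np.sqrt(3 * (1 - 1 / np.sqrt(2)) * x))
ber_qam2 = (11 / 12) * Q(np.sqrt(6 * x / 5))
for db, a, b in zip(ebn0_db, ber_8psk, ber_qam2):
    print(f"Eb/N0 = {db:2d} dB: 8-PSK {a:.3e}, QAM2 {b:.3e}")
```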