0% found this document useful (0 votes)
47 views

Digital Communication Chapter 7

book

Uploaded by

Saloni Tripathi
Copyright
© © All Rights Reserved
Available Formats
Download as PDF or read online on Scribd
0% found this document useful (0 votes)
47 views

Digital Communication Chapter 7

book

Uploaded by

Saloni Tripathi
Copyright
© © All Rights Reserved
Available Formats
Download as PDF or read online on Scribd
You are on page 1/ 51
cele aias Analog-to-Digital rs Conversion Communication systems are designed to transmit information. In any communication sys tem, thete exists an information source that produces the information; the purpose of th ‘communication system isto transmit the output of the source to the destination. In radi I ‘broadcasting, for instance, the information source is either a speech source or a musi source. In TV broadcasting, the information source isa video source that outputs a movin image. In FAX transmission, the information source produces a still image. In communica tion between computers, either binary data or ASCII characters are transmitted; therefort the source can be modeled as a binary or ASCII source. In storage of binary data on computer disk, the source is again a binary source. Tn Chapters 3, 4, and 6, we studied the transmission of analog information usin different types of analog modulation. The rest of this book deals with transmission ¢ digital data. Digital data transmission provides a higher level of noise immunity, mot flexibility in the bandwidth—power trade-off, the possibility of applying cryptographic an antijamming techniques, and the ease of implementation using a large-scale integratio of circuits. In order to employ the benefits of digital data transmission, we have to fir convert analog information into digital form. Conversion of analog signals into digital dat should be carried out with the goal of minimizing the signal distortion introduced in th conversion process. In order to convert an analog signal to a digital signal, ie, a stream of bits, thre operations must be completed, Firs, the analog signal has to be sampled, so that we ca obtain a discrete-time continuous-valued signal from the analog signal. This operation called sampling. Then the sampled values, which can take an infinite number of value ate quantized, ie, rounded to a finite number of values. This is called the quantizatio. process. After quantization, we have a discrete-time, dscrete-amplitude signal. The thir stage in analog-to-dgital conversion is encoding. In encoding, a sequence of bts (on and zeros) are asigned to different outputs ofthe quantizer. Since the possible outputs ‘the quantizer are finite, each sample ofthe signal can be represented by afnite number « bits. For instance, if the quantizer has 256 = 2* possible levels, they can be represented b Bits | ~ Section 7.1. Sampling of Signals and Signal Reconstruction 207 7.1 SAMPLING OF SIGNALS AND SIGNAL RECONSTRUCTION FROM SAMPLES ‘The sampling theorem is one of the most important results in the analysis of signals; it has widespread applications in communications and signal processing, This theorem and its numerous applications clearly show how much can be gained by employing the frequency-domain methods and the insight provided by frequency-domain signal analysis, Many modem signa-processing techniques andthe whole family of digital communication methods are based on the validity ofthis theorem and the insight provided by it. In fact, this theorem, together with results from signal quantization techniques, provide a bridge that connects the analog world to digital communication techniques. 7.4.1. The Sampling Theorem ‘The idea leading to the sampling theorem is very simple and quite intuitive, Let us assume that we have two signals, x; (0) and x2(1), as shown in Figure 7.1. The first signal xi() is a smooth signal, which varies very slowly; therefore, its main frequency content is at low frequencies. In contrast, x2(t) is a signal with rapid changes due to the presence of high-frequency components. We will approximate these signals with samples taken at reg- ular intervals 7; and 7;, respectively. To obtain an approximation of the original signal, we ccan use linear interpolation of the sampled values. It is obvious that the sampling interval for the signal x (¢) can be much larger than the sampling interval necessary to reconstruct, signal x;(C) with comparable distortion. This isa direct consequence of the slow time vari- ations ofthe signal x(t) compared to x2(t). Therefore, the sampling interval forthe signals of smaller bandwidth can be made larger, or the sampling frequency can be made smaller. ‘The sampling theorem is a precise statement of this intuitive reasoning. It basically states, two facts 1. If the signal x(¢) is bandlimited to W, sufficient to sample at intervals 7, vif X(f) O for |fl > W, then itis x0 Figure 7 Sampling of signals 298 AAnalog-to-Digital Conversion Chapter 2. If weare allowed to employ more sophisticated interpolating signals than linear inter polation, we are able to recover the exact original signal from the samples, as lon as condition 1 is satisfied. Obviously, the importance of the sampling theorem lies inthe fact that it not only provide ‘a method to reconstruct the original signal from the sampled values, but also gives a pre cise upper bound on the sampling interval (or equivalently, a lower bound on the samplin, frequency) required for distortionless reconstruction. Sampling Theorem. Let the signal x(t) have a bandwidth W, ie, let X(f) =1 for |f| > W. Let x(t) be sampled at multiples of some basic sampling interval T;, wher 4, 10 yield the sequence (x(nT, )},2° zg. Thenitis possible to reconstruct the origina signal x(t) from the sampled values by the reconstruction formula x) = So AW’ xen, snel2W'G nT) ous where W’ is any arbitrary number that satisfies the condition 1 wews ry then the replicated spectrum of x() overlaps and reconsteution of the orginal signal isnot possible. This typeof distortion, which results from undersimpling,is known a8 aliasing error or aliasing distortion. However, if, < zly,no overlap occurs, and by employing an appropriate filter we can reconstruct the original signal, To get the original signal back, itis ‘sufficient to filter the sampled signal through a lowpass filter with the frequency-response characteristics 1. H(f)=1, for|fl < W. 2. H(f) = Ofor|f| > w. ForW xo + kT sine OM Uo +47) an) Example 7.1.2 ‘A bandlimited signal has a bandwidth equal to 3400 H. What sampling rate should be used to gdarantee a guard band of 1200 Hz? Solution We have =2W + We: therefore, x 3400 + 1200 = 8000. . ‘After sampling, the continuous-time signal is transformed to a discrete-time signal In other words, the time-axis has been quantized. After this step, we have samples taken at discrete times, but the amplitude of these samples is still continuous. The next step in analog-to-digital conversion is the quantization of the signal amplitudes. This step results n'a signal that is quantized in both time and amplitude. 7.2 QUANTIZATION After sampling, wehavea discrete-time signal, ie, a signal with values at integer multiples of T,. The amplitudes of these signals are still continuous, however. Transmission of real numbers requires an infinite numberof bits, since generally the base 2 representation of real numbers has infinite length. After sampling, we will use quantization, in which the amplitude becomes discrete as well. Asa result after the quantization step, we will deal 302, Analog:te-Digital Conversion Chapter with a discrete-time, finite-amplitude signal, in which each sample is represented by finite number of bits. In this section, we will study different quantization methods. W ‘begin with scalar quantization, in which samples are quantized individually, then, we wi explore vector quantization, in which blocks of samples are quantized at a time. 7.2.1 Scalar Quantization In salar quantization, each sample is quantized into one of a fnite number of levels, whic is then encoded into a binary representation. The quantization process is a rounding prc cess; each sampled signal point is rounded to the “nearest” value from a finite set of por sible quantization levels. In scalar quantization, the set of real numbers R is partitione into NV disjoint subsets denoted by Ri, 1 < k <= N (each called a quantization region Corresponding to each subset Re, a representation point (or quantization level) iy is chu sen, which usually belongs to Mt. If the sempled signal at time i, x; belongs to @t, the it is represented by is, which is the quantized version of x. Then, fy is represented t a binary sequence and transmitted. This later step is called encoding. Since there are possibilities for the quantized levels, logy N bits are enough to encode these levels in binary sequences.' Therefore the number of bits required to transmit each source outp is R = log, N bits, The price that we have paid for representing (rounding) every samp that falls in the region by a single point Sy is the inroduction of distortion. Figure 7.3 shows an example of an eight-level quantization scheme. In this schem the eight regions are defined as Ry = (—90, ai), Ra = (@i, a2), ..., Rs = (ay, $00). Figure 73. Example ofan cightevel quantization scheme "a genealychoten tobe a power of 2. fits nt then the amber of required bis woud be logs where [x] denote the smallest integer greater than ot equal o x To maketh development easier, we ala sasume that isa power of Section 72 Quantization 303, representation point (or quantized value) in each region is denoted by Z; and is shown in the figure, The quantization function Q is defined by Q(x) = & forall eR. 721) This function is also shown in the figure. Depending on the measure of distortion employed, we can define the average distor- tion resulting from quantization. A popular measure of distortion, used widely in practice, is the squared error distortion defined as (x ~ 2)?. In this expression, x is the sampled signal value and is the quantized value, ic., £ = Q (x). Ifwe are using the squared error distortion measure, then dQ, 8) = @- 9 (0) where. = x ~ Q (2). Since X is a random variable, so are X and X; therefore, the average (mean squared error) distortion is given by D = E(d(X, 9) = E(X ~ 0. (OY. Example 7.21 ‘The source X (1) i a stationary Gaussian source with mean zero and power spectral density suo=(3. fas ‘The source is sampled atthe Nyquist rate and each sample is quantized using the cight-level ‘quantizer which is shown in Figure 7.3 This figure has ay = ~60, a2 = ~40, a 60, and %, = ~70, % = ~50, % = ~30, i = 0. What isthe resulting distortion and rate? Solution The sampling frequency is f, = 200 Hz, Fach sample is a zero-mean Gaussian random variable with variance oa B0) = Rae [Siar fa 00, where we have used Equations (5.2.18) and (5.2.21). Since each sample is quantized into eight im igtt Sa ese sen cee eee al R=3f, = 600 bitslsee. Tofindthe distortion, we have to evaluate E(X — ? foreach sample. Thus, we have D=EK-kY [le oor ‘where f(«) denotes the probability density function of the random variable X. From here, wehave p= ha (0) fad, 304 ‘Analog:to-Digital Conversion Chapter cor equivalently, D [lena penis + fe A? fated [es pcode, aa where fr(2) is ase”, Substituting (aif, and (3) inthis integral and evaluating th result with the QAapction table, we obtain D ~ 33:38, Note tht f we wer to se eo bi per source tpt, then the bes srategy would be ost the reonsrcted signa! equal to zen In this case, we would have a distortion of D = E(X — 0)? = a? = 400. This quantizatic scheme and transmission of 3 is pe source ouput as enabled ust reduce the distortion 53838, which sa factor of 1198 eduction or 10.78 dB In the preceding example, we have chosen E(X — Q (X))?, which is the mea squared distortion, or quantization noise asthe measure of performance. A more meat ingful measure of performance isa normalized version of the quantization noise, and it normalized with respect tothe power of the orginal signal Definition 7.2.1. If the random variable X is quantized to Q (X), the signab-t quantization noise ratio (SQNR) is defined by E(x) sqnr = — FO) __ 72. ‘ONE = FER OO c ‘When dealing with signals, the quantization noise power is le = jim} ~ oxy 1 Par [Foo oxy? ar 2. andthe signal poweris 1st Pe= fim + [BoP Ode a2. AT Ly Hence, the signl-to-quantizaton nose ratio is ee SONR = FE. (2 If X(t) is stationary, then this relation simplifies to Equation (7.2.3), where X isthe rand variable representing X(t) at any point. Example 7.22 ‘Determine the SQNR forthe quantization scheme given in Example 7.2.1 Solution FromExample7.2.1, wehave Py = 400 and Py = D = 33.38, Therefore, Pr _ 400 _* nr = 2x = 400° SONR= 5, = 3538 1.98 = 10.78 4B. Section 72 Quantization 305 Uniform Quantization. Uniform quantizers are the simplest examples of scalar quantizers. In a uniform quantizer, the entire real line is partitioned into NV regions. All regions except Jt, andy arc of equal length, which is denoted by A. This means that for all] 0? feoode 728) There exists a total of 2N — 1 variables in this expression (@j.a2,....ay-1. and 41, 42, ...,) and the minimization of D is to be done with respect to these variables. Differentiating with respect to a; yields . 8 (a, ~ Ha.) Foy 7 Pxl alla, 2)" ~ (a, — Rg) 0, (72.9) which results in i + fia. (72.10) 306 Analog-to-Digital Conversion Chapter ‘TABLE 7.1 OPTIMAL UNIFORM QUANTIZER FOR A GAUSSIAN SOURCE ‘Number Output ‘Outpaievel| ‘Mean Squared Enrop Levels Spacing Error HG) ” a D 00 1.596 1.000 1536 09957 o8as0 oma 2409 06508 asses 2398 0.860 oosrse 2781 05338 0.03069 2908 04908 02s6s 3032 * oasis nies 09238 o.rsss 3253 03972 0.645 33st 03139 01850 asc 03s 0.89 332 03382 ooiisa 360% ost 33% ay 0.01040 36m 03082 0.009830 374¢ 0.2909 008se4 san 208 007869 aan 2678 007235 asx 02516 0006678 399% 0282 00siks 404: 02396 0.005787 40 oats 005355, a1 oxo ‘200so04 419 oar 0.004687 424 ones 004401 428: 02086 00441 4321 01987 003905 4am 01932 noses 4a oss 0.00490 444 01833 0.003308 aa 0.781 003141 _ 42 ona 0.002986 4561 0.1703 002843 = 439, From Max (960% © IEEE. ‘This result simply means that, in an optimal quantizer, the boundaries of the quantiz, tion regions are the midpoints of the quantized values. Because quantization is done on ‘minimum distance basis, each x value is quantized to the nearest). ‘To determine the quantized values £,, we differentiate D withrespect to £, and defn ay = —00 and ay = +00. Thus, we obtain Section 72. Quantization 307 , B= [0 x - softwar 2m) whieh results in Sil fuQoidx Se, fx @dx Equation (7.2.12) shows that in an optimal quantizer, the quantized value (or representation point) for a region should be chosen to be the centroid ofthat region. Equations (7.2.10) and (7.2.12) give the necessary conditions for a scalar quantizer to be optimal; they are known as the Lloyd-Max conditions. The criteria for optimal quantization (the Lloyd~Max conditions) can then be summarized as follows: (72.12) 1. ‘The boundaries of the quantization regions are the midpoints of the corresponding quantized values (nearest neighbor law). 2. The quantized values are the centroids of the quantization regions. Although these rules are very simple, they do not result in analytical solutions to the optimal quantizer design. The usual rmethod of designing the optimal quantizer is to start with a set of quantization regions and then, using the second criterion, to find the quantized values. Then, we design new quantization regions for the new quantized values, and altemate between the two steps until the distortion does not change much from one step to the next, Based on this method, we can design the optimal quantizer for various source, statistics. Table 7.2 shows the optimal nonuniform quantizers for various values of JV for a zero-mean unit-variance Gaussian source. If, instead of this source, general Gaussian source with mean m and variance 0? is used, then the values of a, and Zi read from Table 72 are replaced with m + oa; and m + of, respectively, and the value of the distortion D will be replaced by 0? D, Example 7.23 How would the results of Example 7.2.1 change if, instead ofthe uniform quantizer shown Figure 7, we used an optimal nonuniform quantizer withthe same numberof levels? Solution We an ind the quantization regions and the quantized values from Table 7.2 with 1N = 8, and then use the fact that our source is an NM (0,400) source, ie, m =O and o = 20. ‘Therefore, alla; and &; values read from the table should be multiplied by ¢ = 20 and the Gistortion has to be multiplied by 400. This gives us the values a; = —a; = ~34.96, a) = ag = ~21, as = ~as = —10.012;a4 = Oand fy = ~¥, = ~43.04, & = —8, = ~26.88, ig = 15. =4:902anda distortion of D = 13.816. The SQNR is SQNR 2895 = 14.62.48, ~ T3a16 which 3.8448 beuerYhan the SQNR ofthe uniform quer . 308 AAnalog-to-Digital Conversion Chapter TaMle72_oPTHAL NONUNFORY QUANTIZER FORA GAUSSIAN SOURCE ¥ z a > Ha . i ° 7 ° i 2 9 2990 ome \ 3 0.6120 0.1224 0.1902, 1536 4 o.a9et6 0.528, 1510 ois 191 | 5 a3eas. i244 o.o-e46 178 ones 2208 ! & 06s 14s7 0.3177, 100,884 oss 2483 H 7 0.2803, 08744, 1.611 0, 0.5606, 1.188, 2.033 004400, 2647 } $ —0,05006, 1051748 02451, 07560, 1344, cous: = 25 21s a 9 02218, 0.6812, 1.198, 0,0.4436, 0.9188, 1.476, onoms 2.983 if 166 as } 0008047, 08999, 32, 01356 06089, 1058, oo as 2368 1391.25 | 110897 05593, 09656, 0.03615, o7s08 1.179, cow 3283 ; 136 2039, 53.2836 1 1538.24 1246 1.789, 2899 13013, 04760, a6, .03i38, 0603. 09870, 0140s 481 i ‘ies 16282215 aah 18 2565 i 14 002535 03959, 0918 0.145.044.0750, oom + 3582 ato 2382 Tone 146,199, 2425 i 5 01369, 0410, 07050, 027.055.0852 oom a6TT Tors 136i 176.256 T1751 346 2007.2.68 ! 16 0,02582,05224,07996, 01284, 0381, 06568, ooses 3.65 Tove 437 1664,2401 09424, 1.256168 200.279 1701218, 03670, 06201, 0.02830,0490,0749, oes a9 ars, 11781308, (906, an6 1 31,1685, 2527, 18 0,02306,04683,07051, 01148, 03464,058, aoorsiy a8 | (0.9680, 1.251, 1.573, 1.964, (0.8339, 1.102, 1.400, 1.746, 2308 isi 2826 19 030%2,02294,05551, 0.02i84.0440s,06608, asses 4002 7006 1.082 1318, 1634, ——OMtIT. 17318681 803, . 2oi8 255 aaa 20 9,02083,0.197,06315, __01038,03128,05265, avosan 407 bee tti1381 60, O2486.09esr 1238, \ oes, 2554 13041457, 2279, 2908 210038, 02999, 05027, 0.01984. 02994,06059. 0us6u8 481 nis 0936i, 1195 Saris 1951, 13001599, 1.440, 1.743, 2.116, 2635 1.908, 2324, 2.946 . - 22001500 038%2,05798, _09409,02882, 04792, avosiss 4206 Oiese, 0,235 1495, a, 08893 113, 1793.2160,367 1357 1.692, 1955 236, aoe ‘Gotad oer I, Section 72 Quantization 309 TARLE72. (CONTINUED) y ta +8 > Gb) 23009085, 02736, 04594, 0,0.1817, 03684, 05534, Doone 4268 (06507, 08504, 1.062, (07481, 09527, 1.172, 1.281, 1586, 1861, 2208, 1.411, 1.682, 2000,2.06, ant 3016 24 0,0.1746,0.3810, 05312, (0.08708, 0:2621,0.4399, 004367 4327 0.7173, 09122, 1.19, 1344, 96224, 08122, 1.01, 1.595, 1885, 2263, 2746 1.227, 1.462, 1728, 2042, 2484, 3048 25 .oa3es, 02522, 04231, 00.1676, 0.3368, 05093, oodass 3a 0.5982,0.7797,09702, (015870, 08723, 1.068, 1.173, 1.394, 1641, 192, 1279, 1.510, 1772, 2083, 2.480,3.079 26 —0,0.1616,03245, 0.4905, (0.06060, 0.2425, 04066, ooosrs 4439 0.60, 08383, 025, (05743, 0.7477, 09288, 1.224, 1.482, 1685, 1968, 1.121, 1328, 1356, 1.814 2318,2811 221,214, 3.108 2797779, 02340, 03921, 0,0.1386, 03124, 04719, ooo 4.451 0.87, 0.7202, 08936, (06354, 08049, 09824, 1.077, 1273, L487, 1.727, L171, 1374, 1599, 1854 22006, 2382, 2.842 2188,2587, 3.137 28 00.1503, 0.3018, 004556, (0.07502, 02256, 0.3780, 00004542 0.6132, 0.7760, 03460, 05333, 06930, 0.8589, 1.126, 1319, 1529, 1.765, 1.033, 1.18, 149, 1.640, 2082, 2385,2871 1892, 2193,2.578 3164 From Max (1960); © IEEE. 7.22 Vector Quantiz In scalar quantization, each output of the discrete-time source (which is usually the result of sampling.of @ continuous-time source) is quantized separately and then encoded. For example, if we are using a four-level scalar quantizer and encoding each level into two bits, we are using two bits per each source output. This quantization scheme is shown in Figure 7.4 Now if we consider two samples of the source at each time, and we interpret these two samples as a point in a plane, the quantizer partitions the entire plane into 16 quanti- zation regions, as show in Figure 7.5. We can see that the regions in the two-dimensional space are all of rectangular shape. If we allow 16 regions of any shape in the two- dimensional space, we are capable of obtaining better results. This means that we are {quantizing two source outputs at atime by using 16 regions, which is equivalent to four bits per two source outputs or two bits per each source output. Therefore, the number of bits per source output for quantizing two samples at a time is equal to the number of bits per source output obtained in the scalar case. Because we are relaxing the requirement of having rectangular regions, the performance may improve. Now, if we take three samples at a time and quantize the entire three-dimensional space into 64 regions, we will have 310 AAnalog-to-Digita Conversion Chapter Figure 7A four-level scalar spans 4 Figure75_A scalar fur-evel quantization applied to two samples even less distortion with the same number of bits per source output. The idea of vect« {quantization is to take blocks of source outputs of length n, and design the quantizer in tk n-dimensional Euclidean space, rather than doing the quantization based on single sampl: in a one-dimensional space. . Let us assume that the quantization regions in the n-dimensional space are denot« by S;, 1 << K. These K regions partition the n-dimensional space. Each block « source output of length 1 is denoted by the n-dimensional vector x € RY; if x € Wj, it quantized to © (x) = &;. Figure 7.6 shows this quantization scheme for n = 2. Now, sin there are a total of K quantized values, log K bits are enough to represent these value This means that we require log K bits per n source outputs, or the rate of the source code aw kek bits/source output. 2a The optimal vector quantizer of dimension n and number of levels K choodest regions 9's and the quantized values y's such thatthe resulting distortion is miimize Applying the same procedure that we used for the case of salar quantization, we obte the following criteria for an optimal vector quantizer desig: 1. Region isthe st of all points inthe n-dimensional space thatare closer to % th any other, forall j # sie, Hy = (ERY: Mix — Kill < Ik Kyll, WI ADs Section 73 Encoding an Figure 76 Vector quantization in two dimensions. the centroid ofthe region yi ie, 2 um SES J | xfecoax AA practical approach to designing optimal vector quantizers is on the basis of the same approach employed in designing optimal scalar quantizers. Starting from a given set of quantization regions, we derive the optimal quantized vectors for these regions by using. criterion 2. Then we repartition the space using the first criterion, and go back and forth ‘ntl the changes in distortion are negligible. ‘Vector quantization has found widespread applications in speech and image coding; ‘numerous algorithms for reducing its computational complexity have been proposed, 73 ENCODING Inthe encolng proces sequence of i ar amiga tn diferent asian valve Since there are a total of NV = 2" quantization levels, v bits are sufficient for the encoding process. In this way, we have v bits corresponding to each sample; since the sampling rate is f, samples/sec, we will have abit rate of R = vf, bits/sec. ‘The assignment of bits to quantization levels can be done in a variety of ways. In scalar quantization, a natural way of encoding is to assign the values of 0 to NV — 1 to different quantization levels starting from the lowest level to the highest level in order of increasing level value. Then we can assign the binary expansion of the numbers 0 to N ~ 1 to these levels, Thus, v zeros are assigned to the lowest quantization level, 0.011 to the second lowest quantization level, 0. .0 10 to the next level, ...and J... 10 the highest -a . a Analog-to-Digital Conversion Chapter 7 ‘TABLET.3 NBC AND GRAY COOES FOR A 16:EVEL QUANTIZATION Quatintion Level | LevlOréer | NECCod | GnyCade | a 0 S| x 2 coo | eon | 1 3 vo! ao] & 3 100 mn 3 voor m0 7 va v0. v100 | 3 a wie voor] a B wt wooo | ‘quantization level, This type of encoding is called natural binary coding or NBC for short ‘Another approach to coding is to encode the quantized levels in a way that adjacent levels differ only in one bit. This type of coding is called Gray coding. ‘Table 7.3 gives an example of NBC and Gray coding for a quantizer with N levels. 1 7.4 WAVEFORM CODING ‘Waveform coding schemes are designed to reproduce the waveform output of the source a the destination with as little distortion as possible. In these techniques, no attention is paic to the mechanism that produces the waveform; all attempts are directed at reproducing the source output at the destination with high fidelity. The structure of the source plays no rol. in the design of waveform coders and only properties of the waveform affect the design Thus, waveform coders are robust and can be used with a variety of sources as long a the waveforms produced by the sources have certain similarities. In this section, we stud; some basic waveform coding methods that are widely applied to a variety of sources. a ee Section 7.4 Waveform Coding 33 7.4.1 Pulse Code Modulation Pulse code modulation (PCM) isthe simplest and oldest waveform coding scheme. A pulse code modulator consists of three basic sections: a sampler, a quantizer and an encoder. A functional block diagram of a PCM system is shown in Figure 7.7. In PCM, we make the following assumptions: 1. The waveform (signal) is bandlimited with a maximum frequency of W. Therefore, it can be fully reconstructed from samples taken ata rate of f, = 2W or higher. 2. The signal is of finite amplitude. In other words, there exists a maximum amplitude mm such that forall ¢, we have [x(¢)1 < Xm 3. The quantization is done with a large number of quantization levels NV, which is a powerof 2(N = 2", ‘The waveform entering the sampler is a bandlimited waveform with the bandwidth W. Usually, there exists a filter with bandwidth W prior to the sampler to prevent any com- ponents beyond W from entering the sampler. This filter is called the presampling filter. ‘The sampling is done at a rate higher than the Nyquist rate; this allows for some guard band. The sampled values then enter a scalar quantizer. The quantizer is either a uniform quantizer, which results in a uniform PCM system, or a nonuniform quantizer. The choice of the quantizers based on the characteristics ofthe source output. The output ofthe quan- tizer is then encoded into a binary sequence of length v, where V = 2° is the number of quantization levels. Uniform PCM, In uniform PCM, we assume that the quantizer is a uniform ‘quantizer. Since the range of the input samples is {—xmax, +%max] and the number of quan- tization levels is WV, the length of cach quantization region is given by WT pel ‘The quantized values in uniform PCM are chosen to be the midpoints of the quantization regions; therefore, the error 7 = x — Q (x) is a random variable taking values in the interval (—, +4]. In ordinary PCM applications, the number of levels () is usually high and the range of variations of the input signal (amplitude variations ay) is small, This ‘means that the length of each quantization region (A) is small. Under these assumptions, ineach quantization region, the error X = X — Q (X) can be approximated by a uniformly distributed random variable on (4, +4). In other words, ad (741) “(3 odie x0) [sen bl Goma Figure 7.7 Block diagram of PCM system e110... 34 [Analog-to-Digita Conversion Chapter 7 ‘The distortion introduced by quantization (quantization noise) is therefore ee = | 3 i where v is the number of bits/source sample and we have employed Equation (7.4.1), The signal-to-quantization noise ratio then becomes (742) 3x4" Py alk 743) my ‘where Py is the power in each sample. In the case where X (1) is a wide-sense stationary process, Py can be found using any ofthe following relations 7 Px = Ruse J = [7 sunar = [ 2 fx(oae Notethat since sas is the maximum posible value for X, we alway have Py = EC) a This meansthat 2 < 1 (usually 2 « 1);henee, 3N? = 3 4” is an upperbount to ie ‘SNR in uniform PCM. This also means that SQNR in uniform PCM deteriorate: as the dynamic range of the source increases because an increase in the dynamic range o} the source results in a decrease in 2°. Expressing SQNR in decibels, we obtain Px. +6 +48. (14a SQNR, ~ 1008 ‘Weecan see that each extra bit (increase in v by one) increases the SQNR by 6 dB. This i a very useful strategy for estimating how many extra bits are required to achieve a desirec SQNR. Example 7.4.1 ‘Whats the resulting SQNR fora signal uniformly distributed on [—1, 1}, when uniform PCY with 256 levels is employed? Sain wetave F=f See = 5 tei i og = Tandy == 8 coh : ; Teta of tani euiemens of ple esis sens, of which PC results concerning the bandwidth requirements of a PCM system. Ifa signal has a band width of W, then the minimum number of samples for perfect reconstruction of the signa Section 7.4 Waveform Coding 315 is given by the sampling theorem, and itis equal to 20 sarples/se. If some guard band is required then the number of samples per secon is, whichis more than 2. For each sample,» bite are sed therefor, aoa of vf, bt/see are required fr transmission ofthe PCM signal. In the ease of sampling atthe Nygis ate this i equal to 20W bits. The rinimum bandwidth requirement for binary transmission of bitslse (or, more presise!y, R pulsesfsec)is . (See Chapter 10) Therefore, the minimum bandwidth requitement of aPOM system is BW (7.46) we 2 Which, inthe case of sampling at the Nyquist rate, gives the absolute minimum bandwidth requirement for pulse transmission as BWrq = YW This means that a PCM system expands the bandwidth ofthe original signal by a at least». Nonuniform PCM. As long as the statistics of the input signal are close tothe uniform dixtibution, uniform PCM works fine. However, in coding of certain signals such as speech, the input distribution is far from uniform. For a speech waveform, in particular, there exists a higher probability for smaller amplitudes and a lower probability for larger amplitudes. Therefore, it makes sense to design a quantizer with more quantization regions at lower amplitudes and fewer quantization regions at larger amplitudes, The resulting quantizer will be a non-uniform quantizer that has quantization regions of various sizes. ‘The usual method for performing nonuniform quantization’ is to first pass the samples through a nonlinear element that compresses the large amplitudes (reduces the dynamic range ofthe signal) and then performs a uniform quantization on the output. At the receiving end, the inverse (expansion) ofthis nonlinear operation is applied to obain the sampled value. This technique is called companding (compressing-expanding), A block diagram of this system is shown in Figure 78 There are two types of companders that are widely used for speech coding. The ela compander, used in the United States and Canada, employs the logarithmic function at the transmitting side, where jz] <1: log + lz) Tog +1) ‘The parameter 2 controls the amount of compression and expansion. The standard PCM system in the United States and Canada employs a compressor with 4. = 255 followed by a uniform quantizer with 8 bits/sample. Use of a compander in this system improves, the performance of the system by about 24 dB. Figure 7.9 illustrates the j-law compander characteristics for a = 0, 5, and 255. a(x) sgn(s). (748) *A more practical bandwidth requirementis , where 1 | Quantizer |} Digital ‘pera B= M00He oe ‘tock (6000 Fz) Figure 719A genera speech encoder signal is first passed through an antialiasing lowpass filer and then sampled. To ensure ‘that aliasing is negligible, a sampling rate of 8000 Hz or higher is typically selected. The analog samples are then quantized and represented in digital form for transmission over telephone channels, PCM and DPCM are widely used waveform encoding methods for digital speech transmission. Logarithmic 1 = 255 compression, given by Equation (7.48), is generally used for achieving nonuniform quantization. The typical bit rate for PCM is 64,000 bps, ‘while the typical bitrate for DPCM is 32,000 bps. PCM and DPCM encoding and decoding are generally performed in a telephone central office, where (elephone lines from subscribers in a common geographical area are connected to the telephone transmission system. The PCM or DPCM encoded speech signals are transmitted from one telephone central office to another in digital form over so-called trunk lines, which are capable of carrying the digitized specch signals of many subscribers. The method for simultaneous transmission of several signals over a common ‘communication channel is called multiplexing. In the case of PCM and DPCM transmis: sion, the signals from different subscribers are multiplexed in time; hence, we have the term time-division multiplexing (TDM). In TDM, a given time interval Ty is selected as a frame, Each frame is subdivided into N subintervals of duration T;/N, whore N corresponds to the number of users who will use the common communication channel. Then, each subscriber who wishes to use the channel for transmission is assigned a subin- terval within each frame. In PCM, each user transmits one &-bit sample in each subinterval In digital speech transmission over telephone lines via PCM, there isa standard TDM hierarchy that has been established for accommodating multiple subscribers. In the frst level of the TDM hierarchy, 24 digital subscriber signals are time-division multiplexed into a single high-speed data stream of 1.544 Mbps (24 x 64 kbps plus a few additional bits for conteol purposes). The resulting combined TDM signal is usually called a DS-1 channel. In the second level of TDM, four DS-1 channels are multiplexed into a DS-2 channel, ceach having the bit rate of 6.312 Mbps. In a third level of hierarchy, seven DS-2 channels are combined via TDM to produce a DS-3 channel, which has a bit rate of 44.736 Mbps. Beyond DS-3, there are two more levels of TDM hierarchy. Figure 7.20 illustrates the TDM hierarchy for the North American telephone system, Section 766 Digital Audio Transmission and Dial Audio Recording 327 G4kbps 1544 Mops «6312 Mbps_——«44.736 Mops——_274.176 Mbps ach cach ach each ech Signals from Signalsfrom Signalsfrom Signals from otherDS-1 other DS-2 other DS-3-——other DS-4 [igure7.20. TheTDM hierarchy forthe North America telephone sjstem. In mobile cellular radio systems for transmission of speech signals, the available bit rate per users small and cannot support the high bit rates required by waveform encoding. methods, such as PCM and DPCM. For this application, the analysis-synthesis method ‘based on the LPC, as described in Section 7.5, is used to estimate the set of model param- ters from short segments of the speech signal. The speech model parameters are then transmitted over the channel using vector quantization. Thus, a bit rate in the range of 4800-9600 bps is achieved with LPC. In mobile cellular communication systems, the base station in cach cell serves as the interface to the terrestrial telephone system. LPC speech compression is only required for the radio transmission between the mobile subscriber and the base station in any cel “At the base station interface, the LPC-encoded speech is converted to PCM or DPCM for transmission over the terrestrial telephone system ata bit rate of 64,000 bps or 32,000 bps, respectively. Hence, we nate that a speech signal transmitted from a mobile subscriber to a fixed subscriber will undergo two different types of encoding; however, a speech signal ‘communication between two mobiles serviced by different base stations, connected via the terrestrial telephone system, will undergo four encoding and decoding operations. 7.62 Digital Audio Recording Audio recording became a reality with the invention of the phonograph during the second half of the nineteenth century. The phonograph had a lifetime of approximately 100 years, before it was supplanted by the CD that was introduced in 1982. During the 100-year period, we witnessed the introduction of a wide variety of records, the most popular of which proved to be the long-playing (LP) record that was introduced in 1948. LP records provide relatively high-quality analog audio recording. In spite of their wide acceptance and popularity, analog audio recordings have a number of limitations, including a limited dynamic range (typically about 70 dB) and a relatively low signal-to-noise ratio (typically about 60 4B). By comparison, the dynamic range of orchestral music is in the range of 100-120 dB. This means that if we record the ‘music in analog form, at low music levels, noise will be audible and, if we wish to prevent this noise, saturation will occur at high music levels. 328 Analog-to-Digital Conversion Chapter 7 TABLET. COMPARISON OF ANLP RECORD AND ACO SYSTEM SSS aaa) Frepmcyrapene | 308-22 7k se 18 Pmakme | 008 08 = ‘Signal-to-noise ratio 6048 29048 | Tne dorm | iam oan | Durability, igh frequency response | Permanent | degaes wit laying | Soha 30-0 a0 Digital audio recording and playback allows us to improve the fidelity of recorded 1music by increasing the dynamic range and the signal-to-noise ratio. Furthermore, digi- tal recordings are generally more durable and do not deteriorate with playing time, as db analog recordings. Next, we will describe a CD system as an example of a commercially successful digital audio system that was introduced in 1982. Table 7.4 provides a compar son of some important specifications of an LP record and a CD system, The advantages of the latter are clearly evident. From a systems point of view, the CD system embodies most of the elements of a ‘modern digital communication system. These include analog-to-digital (A/D) and digital- to-analog (D/A) conversion, interpolation, modulation/demodulation, and channel cod- ing/decoding. A general block diagram of the elements of a CD digital audio system is illustrated in Figure 7.21. Now, we will describe the main features of the source encoder and decoder. The two audio signals from the left (L) and right (R) microphones in a recording studio or a concert hall are sampled and digitized by passing them through an A/D con verter. Recall that the frequency band of audible sound is imited to approximately 20 KH. Therefore, the corresponding Nyquist sampling rate is 40 kHz. To allow for some fre- quency guard band and to prevent aliasing, the sampling rate in a CD system has been selected to be 44.1 kHz, This frequency is compatible with video recording equipment that is commonly used forthe digital recording of audio signals on magnetic tape ‘The samples of both the L and R signals are quantized using uniform PCM with 16 bits/sample. According tothe formula for SQNR given by Equation (7.4.4, 16-bit ui form quantization results in an SQNR of over 90 dB. In addition, the total harmonic distor tion achieved is 0.005%. The PCM bytes from the digital recorder are encoded to provide protection against channel erors in the readback process and passed to the modulator. ‘Atthe modulator, digital control and display information is added, including a table of contents of the disc. Ths information allows for programmability ofthe CD player. ; j : F : Section 7.6 Digital Audio Transmission and Digtal Audio Recording 329 Laser i recording Control Modulator |, and Demodulator splay Channel ‘Channel encoder ‘ecoder = Upsamplisg ot Interpolation AD DIA L oR L oR igure7.21 CD digital audio ‘Studio system, Using later, the digital signal from the modlator is optically reared on the su- face ofa glass dite that scouted with photoresist. This results in a maser disc, which i ‘sed fo produce CDs by ateis of proceies that imately convert the information int) Gay pits onthe plastic dae, The disci conted wth a reflective aluminum ctting and thea ih a potecive nage In the CD player, alaser is wed topically scan a tack on the dis ata constant velocity of 1.25 m/s and, thus, to ead the digitally recorded signal. After the L and R Signal are demoted and passed through the channel decoder. the digital audio signals converted back oan analog audio signal by means ofa D/A converter. ‘The conversion of Land Rdigtal audi signal into the DA converter has precision of 16 bits In principle, the dgital-o analog conversion ofthe two T6-it signals at the 330 AAnalog:to-Digital Conversion Chapter 7 Digital interpolator wal] ih = hy [Pps Pam [Bie | Signe [Trae | se L, [Analog] ouput | fa eon J" Peas ase ph 1a moar] By” |DYA [Mite | E ite |e samale | revit | : bee " dpa 80H pigatinsspttor | war] Mh 1-1 —— pen -y — A | Pewee | Pig | Sie [ENS | [Base ono PF [finest phase thigh 1~i]ncaetag| Ti” | DIA [~ |Past r se na ee output igure7.22 Oversamping and digital filtering 44.1 kHz sampling rate is relatively simple. However, the practical implementation of a 16-bit D/A converter is very expensive, On the other hand, inexpensive D/A converters with 12-bit (or less) precision are readily available. The problem is to devise a method for D/A conversion that employs low precision and, hence, results in a low-cost D/A converter, while maintaining the 16-bit precision of the digital audio signal ‘The practical solution to this problemis to expand the bandwidth of the digital audio signal by oversampling through interpolation and digital filtering prior to analog conver- sion. The basic approach is shown in the block diagram given in Figure 7.22. The 16-bit L and R digital audio signals are up-sampled by some multiple U by inserting U — 1 zeros between successive 16-bit signal samples. This process effectively increases the sampling rate to U x 44.1 kHz. The high-rate L and R signals are then filtered by a finite-duration impulse response (FIR) digital filter, which produces a high-rate, high-precision output ‘The combination of up-sampling and filtering is a practical method for realizing a digital interpolator. The FIR filter is designed to have a linear phase and a bandwidth of approx. imately 20 kHz, It serves the purpose of eliminating the spectral images created by the up-sampling process and is sometimes called an antiimaging filter If we observe the high sample rate and high precision of the L and R digital audio signals ofthe output ofthe FIR digital filter, we will find that successive samples are nearly the same; they differ only in the low-order bits. Consequently, it is possible to represent successive samples of the digital audio signals by their differences and, thus, to reduce the dynamic range of the signals. If the oversampling factor U is sufficiently large, delta ‘modulation may be employed to reduce the quantized output to a precision of I biUsample. Thus, the D/A converter is considerably simplified. An oversampling factor U = 256 is normally chosen in practice. This raises the sampling rate to 11.2896 MH. Recall that the general configuration for the conventional delta modulation system is as shown in Figure 7.23. Suppose we move the integrator from the decoder to the input of the delta modulator. This has two effects. First, it preemphasizes the low frequencies in the {input signal; thus, it increases the correlation of the signal into the delta modulator, Second it simplifies the delta modulator decoder because the differentiator (the inverse system, : : [ | Section 766 Digital Audio Transmission and Digital Audio Recording 331 oe say i a pre Encoder ‘ Decoder vat = a signal igure7.24 A sigma-deta modulate required at the decoder is canceled by the integrator. Hence, the decoder is reduced to a simple lowpass filter. Furthermore, the two integrators at the encoder can be replaced by a single integrator placed before the quantizer. The resulting system, shown in Figure 7.24, is called a sigma-delta modulator (SDM). Figure 7.25 illustrates an SDM that employs a single digital integrator (first-order SDM) with a system function H@) Thus, the SDM simplifies the D/A conversion process by requiring only a L-bit D/A fol- owed by a gonventional analog filter (a Butterworth filter, for example) for providing antialiasing protection and signal smoothing. The output analog filters have a passband High-rate ® eee See Sana ss me [se ‘ter igure 7.25 A fst-order sigma-delta modulator. 332 Analog:to-Digital Conversion Chapter? of approximately 20 kHz; thus, they eliminate any noise above the desired signal band. In modem CD players, the interpolator, the SDM, the 1-bit D/A converter, and the lowpass smoothing filter are generally implemented on a single integrated chip. 7.7 THE JPEG IMAGE-CODING STANDARD “The JPEG standard, adopted by the Joint Photographic Experts Group, is a widely used standard for lossy compression of stil images. Although several standards for image com- pression exis, JPEG is by fr the most widely accepted. The JPEG standard achieves very g0od-to-excellent image quality and is applicable to both color and gray-scale images ‘The standard is also rather easy to implement and can be implemented in software with acceptable computational complexity JPEG belongs tothe clas of wansform-coding techniques, ie. coding techniques that do not compress the signal (in this case, an image signal directly, but compres the tcansform of it. The most widely used transform technique in image coding is DCT (4is- crete cosine transform). The major benefits of DCT are its high degree of energy com- paction properties and the availability ofa fas algorithm for computation of the transform. ‘The energy compaction property of the DCT results in transform coefficients, in which } only afew of them have significant values, so that nearly all ofthe energy is contained in | those particular components ‘The DCT of an N x WV picture with luminance function x(m,n),0.< m,n < =I can be obtained with the use of the following equations: x0,0= 2° Sx, amy is 2 Che Dun) [ere Don Xo = FL Voxtk Dove SEP cos[ AEP] av 40. 272) ‘The X(0,0) coefficient is usually called the DC component, and the other coefficients are called the AC components. i ‘The JPEG encoder consists of three blocks: the DCT component, the quentizer, an¢ the encoder, as shown in Figure 7.26. ‘The DCT Component. A picture consists of many pixels arranged in an m x 7 array. The first step in DCT transformation ofthe image isto divide the picture array int \ 8 x8 subarrays, Ths size ofthe subarrays has been chosen as a compromise of complexity and quality In some other standards, 4-4 or 16% 16 subarrays are chosen. Ifthe number rows or columns (m or n) is nota multiple of 8, then the last row (orcolumn) is repicatet tomake ita multiple of 8. The replications are removed at the decoder. After generating the subarrays, the DCT of each subarry is computed, This proces ‘ generates 64 DCT coefficients for each subarray starting from the DC component X (0,0 and going up to X (7,7). The process is shown in Figure 7.27 Section 7.7 The JPEG lmage-Cocing Standard 333 8x ablocks DCT-based encoder | [er pe] ee | pL __| Compre Saas dte Source image data Table i specications equamtizer—| ict ‘Compressed image data Table bie Reconstructed specications] [specifications image data © Figure7.26_ ‘Theblock diagram of «JPEG encoder and decode. 8 a i . ——- | Figure7.27 ‘The DCT transformation in PEG. 334 AAnalog:to-Digital Conversion Chapter 7 TABLETS QUANTIZATION TABLEFOR PEG 16 uw 10 16 4 40 st a | e[e[u}>ols)s]e|s | «[s[e[»[«|[s* |e) «| wfto|l2flo»fs |» | o |e | a en eae eo a ee ee | »fs fs |e | [om [im [2 | o[« [sl] | | m | | m | mf [ss |» [im | m | m |» ‘The Quantizer. Due to the energy compaction property of the DCT, only low- frequency components of the DCT coefficients have significant values. Since the DC component carries most of the energy and since there exists a strong correlation between the DC component ofa subarray and the DC component of the preced: ing subarray, a uniform differential quantization scheme is employed for quantization of DC components. The AC components are quantized using a uniform quantization scheme. Although all components are quantized using a uniform scheme, different uniform quanti zation schemes use different step sizes. All quantizers, however, have the same number of quantization regions, namely, 256. ‘A 64-clement quantization table determines the step size for uniform quantization of cach DCT component. These step sizes are obtained using psychovisual experiments. The result of the quantization step is an 8 x 8 array with nonzero elements only in the top left ‘comer and many zero elements in other locations. A sample quantization table illustrating the quantization steps for different cocfficients is shown in Table 7.5 ‘After the quantization process, the quantized DCT coefficients are arranged in avec tor by zigzag sampling, as shown in Figure 7.28. Using this type of sampling, we obtain vector X of length 64 with nonzero values only in the first few components. The Encoding. The quantization step provides lossy compression of the imagi using the method described previously. After this step, entropy coding (as will be discusses in Chapter 12) is employed to provide lossless compression of the quantized values. On of the entropy-coding methods specified in the JPEG standard is Huffman coding; this wil bbe discussed in Section 12.3.1. In this case, Huffman codes are based on tables specifying code words for different amplitudes. Since the quantized subarrays contain a large numbe of zeros, some form of runlength coding is used to compress these zeros. Refer to th references atthe end ofthis chapter for further details. Compression and Picture Quality in JPEG. Depending on the rate, JPEC can achieve high-compression ratios with moderate-to-excellent image quality for bot Section 78 Summary and Further Reading : 335 AG, ACs Te \ Figure 7.28 sampling ofthe ne igure 7.28 Zigzag sampling. ACa per eoefciente gray-scale and color images. AC rates of 0.2-05 bits/pixel, moderate-to-good quality ppictures can be obtained that are sufficient for some applications. Increasing the rate to 05-075 bits/pixel results in good-to-very-good quality images that are sufficient for many applications. At 0.75-1.5 bits/pixel, excellent quality images are obtained that are sufficient for most applications. Finally, at rates of 1-5-2 bits/pixel, the resulting image is practi- cally indistinguishable from the original. These rates are suficient for the most demanding applications. 7.8 SUMMARY AND FURTHER READING ‘The focus of this chapter was on the conversion of analog signals to digital form, We began by describing the sampling theorem for bandlimited signals. We demonstrated that bby sampling an analog signal with bandwidth W at the minimum rate of 2W samples per second, itis possible to reconstruct the analog signal from its samples with no loss in fidelity or information. This minimum sampling rate of 2W samples per second is called the Nyquist rate. ‘The second step in the conversion of an analog signal to digital form is quantization of the samples to a set of discrete amplitude levels. The simplest form of quantization is scalar quantization, where each sample is quantized separately. A scalar quantizer can per- form either uniform quantization or nonuniform quantization. We described both methods and characterized the performance of a uniform quantizer and a nonuniform quantizer in terms of the signal-to-quantization-noise ratio (SQNR). We also described vector quanti- zation, in which a block of k samples is jointly quantized. In general, vector quantization 336 ‘Analog-to-Digital Conversion Chapter? results in superior performance compared to scalar quantization, and is widely. used in speech and image digital signal processing. ‘The third and final step in the conversion of an analog signal to digital form is encod- | ing, Inthe encoding process, a sequence of bis is assigned to different quantization values ‘We also described several waveform coding schemes which are designed to repro. duce the output waveform from a source atthe destination with as litte distortion as possi ble. These methods include both uniform and nonuniform pulse code modulation (PCM) differential pulse code modulation (DPCM), delta modulation (DM), and adaptive delt: modulation (ADM). Another waveform encoding method is based on construction of « ‘model for the analog source and using linear prediction to estimate the mode! parameters which are transmitted to the receiver, In turn, the receiver uses the model parameters tc reconstruct the source and generate the source output. This method is called an analysis: synthesis technique, where the analysis is performed at the transmitter to estimate tht model parameters and the synthesis is performed at the receiver to construct the mode for the source and generate the source output. This technique is widely used in speect coding Inthe last two sections of this chapter, we presented applications of analog-to-digita conversion in digital audio transmission in telephone systeras, digital audio recording fo the compact disc, and image coding based on the JPEG standard, 5 Jayant and Noll (1984) and Gersho and Gray (1982) examine various quantizatio, and waveform coding techniques in detail. Gersho and Gray (1992) include a detailed treat rent of vector quantization. Analysis-synthesis techniques and linear predictive codin are treated in books on speech coding, specifically Markel and Gray (1976), Rabiner an Schafer (1979), and Deller, Proakis, and Hansen (2000). The JPEG standard is describe {in detail inthe book by Gibson, etal. (1998) PROBLEMS | 7.1 Assume x( asa bandwidth of 0 KE, 1, Whatis the minimum sampling rate for this signal? 2. What is the minimum sampling rate if guard band of 10 kHz is required? 3. What is the maximum sampling interval for the signal x (1) + (0) €08(80,000x1)? 7.2 For a lowpass signal with a bandwidth of 6000 Hz, what is the minimum samplir frequency for perfect reconstruction of the signal? What is the minimum require sampling frequency if a guard band of 2000 Hz is required? If the reconstructic filter has the frequency response K fl < 7000 K — KZ 7900 < |f| < 10,000 , 0 otherwise HA Problems 337 what is the minimum required sampling frequency and the value of K for perfect, reconstruction? 73 Letthe signal x(¢) = Asine(10001) be sampled with a sampling frequency of 2000 samples/sec. Determine the most general class of reconstruction filters for perfect reconstruction of this signal 7.4 The lowpass signal x(t) with a bandwidth of W is sampled with a sampling interval of T,, and the signal YX rete nt) 208 is reconstructed from the samples, where p(t) is an arbirary-shaped pulse (not nec- essaily time limited tothe interval [0, 7.) 1. Find the Fourier wansform of x9(1). 2, Find the conditions for perfect reconstruction of (1) frorixp(t). 3. Determine the required reconstruction fier. 7S The lowpass signal x(t) with a bandwidth of W is sampled at the Nyquist rate, and the signal DY Cyn se =n) is generated 1. Find the Fourier transform of x(t) 2. Can x(t) be reconstructed from x(t) by using a linear time-invariant system? why? 3. Can x(t) be reconstructed from x(t) by using a linear time-varying system? How? 7.6 A lowpass signal x(*) with bandwidth W is sampled witha sampling interval T;, and the sampled values are denoted by x(n7,). A new signal x;() is generated by lincar interpolation of the sampled values, ic., tnt x0 = 2007) + FMF NT) ~ xT) nT, st < (n+ Te 1. Find the power spectrum of x;(t). 2. Under what conditions can the original signal be reconstructed from the sampled signal and what isthe required reconstruction filter? 38 Analog-to-Digitl Conversion Chapter? 727 A lowpass signal x(t) with bandwidth of 50 Hz is sampled at the Nyquist rate and the resulting sampled values are -1 -4en<0 x0) ={ 1 O 20, you can use the approximation arctan x © 7.24 Signal X(t) has a bandwidth of 12,000 Hz, and its amplitude at any time is a random variable whose PDF is shown in Figure P-7.24. We want to transmit this signal using auniformPCM system. Figure F724 1. Show thata = 3. 2, Determine the power in X(t). 3. What isthe SQNR in decibels if a PCM system with 32 levels is employed? 4. What isthe minimum required transmission bandwidth in Part 3? 344 Analog-to-Digital Conversion Chapter 7 5. If we need to increase the SQNR by at least 20 dB, by how much should the ‘cansmission bandwidth increase? 725 The power spectral density of a zero-mean WSS random process X (0) is given by F450 “00, 5000 = f <0 SN = | -F +9000 9 5 5009 + 5000 0

You might also like