…means that it can take on an infinite number of values. An analog signal can be converted into a digital signal by sampling and quantizing, that is, rounding off each sample value to one of a finite set of permissible levels, as shown in Fig. 5.14. The amplitudes of the signal m(t) lie in the range (−m_p, m_p), which is partitioned into L subintervals, each of magnitude Δv = 2m_p/L. Next, each sample amplitude is approximated by the midpoint value of the subinterval in which the sample falls (see Fig. 5.14 for L = 16). Each sample is now approximated to one of the L numbers. Thus, the signal is digitized, with the quantized samples taking on any one of the L values. Such a signal is known as an L-ary digital signal.

From a practical viewpoint, a binary digital signal (a signal that can take on only two values) is very desirable because of its simplicity, economy, and ease of engineering. We can convert an L-ary signal into a binary signal by using pulse coding. Such a coding for the case of L = 16 was shown in Fig. 1.5; the code is formed by the binary representation of the 16 decimal digits. Each binary digit is called a bit; this contraction of binary digit is standard and is used throughout the book. Thus, each sample in this example is encoded by four bits. To transmit this binary data, we need to assign a distinct pulse shape to each of the two bits. One possible way is to assign a negative pulse to a binary 0 and a positive pulse to a binary 1, so that each sample is now transmitted by a group of four binary pulses (a pulse code). The resulting signal is a binary signal.

The audio signal bandwidth is about 15 kHz. However, for speech, subjective tests show that signal articulation (intelligibility) is not affected if all the components above 3400 Hz are suppressed.* Since the objective in telephone communication is intelligibility rather than high fidelity, the components above 3400 Hz are eliminated by a lowpass filter. The resulting signal is then sampled at a rate of 8000 Hz (8 kHz). This rate is intentionally kept higher than the Nyquist sampling rate of 6.8 kHz so that realizable filters can be applied for signal reconstruction. Each sample is finally quantized into 256 levels (L = 256), which requires a group of eight binary pulses to encode each sample (2^8 = 256). Thus, a telephone signal requires 8 × 8000 = 64,000 binary pulses per second.

The compact disc (CD) is a more recent application of PCM. This is a high-fidelity situation requiring the audio signal bandwidth to be 20 kHz. Although the Nyquist sampling rate is only 40 kHz, an actual sampling rate of 44.1 kHz is used for the reason mentioned earlier. The signal is quantized into a rather large number (L = 65,536) of quantization levels, each of which is represented by 16 bits to reduce the quantization error. The binary-coded samples (1.4 million bit/s) are then recorded on the CD.

5.2.1 Advantages of Digital Communication

Here are some of the advantages of digital communication over analog communication.

1. Digital communication is more rugged than analog communication because it can withstand channel noise and distortion much better, as long as the noise and the distortion are within limits. In an analog message, on the other hand, any distortion or noise, no matter how small, will distort the received signal.

2. The greatest advantage of digital communication over analog communication, however, is the viability of regenerative repeaters in the former. In an analog communication system, a message signal becomes progressively weaker as it travels along the channel, whereas the cumulative channel noise and the signal distortion grow progressively stronger. Ultimately the signal is overwhelmed by noise and distortion. Amplification offers little help because it enhances the signal and the noise by the same factor. Consequently, the distance over which an analog message can be transmitted is limited by the initial transmission power. For digital communication, a long transmission path may also lead to excessive noise and interferences. The trick, however, is to set up repeater stations along the transmission path at distances short enough to be able to detect signal pulses before the noise and distortion have a chance to accumulate sufficiently.
At each repeater station the pulses are detected, and new, clean pulses are transmitted to the next repeater station, which, in turn, duplicates the same process. If the noise and distortion are within limits (which is possible because of the closely spaced repeaters), pulses can be detected correctly.† In this way digital messages can be transmitted over longer distances with greater reliability. The main error comes from quantizing. This error can be reduced as much as desired by increasing the number of quantizing levels, the price of which is paid in an increased bandwidth of the transmission medium (channel).

3. Digital hardware implementation is flexible and permits the use of microprocessors, digital switching, and large-scale integrated circuits.

4. Digital signals can be coded to yield extremely low error rates and high fidelity, as well as for privacy.

5. It is easier and more efficient to multiplex several digital signals.

6. Digital signal storage is relatively easy and inexpensive. It also has the ability to search and select information from a distant electronic database.

7. Digital communication is inherently more efficient than analog in exchanging SNR for bandwidth.

*Components below 300 Hz may also be suppressed without affecting the articulation.
†The error in pulse detection can be made negligible.

…Others, like R. Hall in Mathematics of Poetry, place him later, circa 200 BCE. Leibniz (1646–1716) was the first mathematician in the West to work systematically on the binary representation (using 1s and 0s) for any number. He felt a spiritual significance in this discovery, reasoning that 1, representing unity, was clearly a symbol for God, while 0 represented nothingness. Since all numbers can be represented merely by the use of 1 and 0, this surely proves that God created the universe out of nothing!

5.2.2 Quantizing

As mentioned earlier, digital signals come from a variety of sources. Some sources, such as computers, are inherently digital. Some sources are analog, but are converted into digital form by a variety of techniques such as PCM and delta modulation (DM), which will now be analyzed. The rest of this section provides a quantitative discussion of PCM and its various aspects, such as quantizing, encoding, synchronizing, the transmission bandwidth, and SNR.

For quantization, we limit the amplitude of the message signal m(t) to the range (−m_p, m_p), as shown in Fig. 5.14. Note that m_p is not necessarily the peak amplitude of m(t). The amplitudes of m(t) beyond ±m_p are simply chopped off. Thus, m_p is not a parameter of the signal m(t); rather, it is the limit of the quantizer. The amplitude range (−m_p, m_p) is divided into L uniformly spaced intervals, each of width Δv = 2m_p/L. A sample value is approximated by the midpoint of the interval in which it lies (Fig. 5.14). The quantized samples are coded and transmitted as binary pulses. At the receiver some pulses may be detected incorrectly. Hence, there are two sources of error in this scheme: quantization error and pulse detection error. In almost all practical schemes, the pulse detection error is quite small compared to the quantization error and can be ignored. In the present analysis, therefore, we shall assume that the error in the received signal is caused exclusively by quantization.

If m(kT_s) is the kth sample of the signal m(t), and if m̂(kT_s) is the corresponding quantized sample, then from the interpolation formula in Eq. (5.10),

m(t) = \sum_k m(kT_s)\,\mathrm{sinc}(2\pi Bt - k\pi)

and

\hat{m}(t) = \sum_k \hat{m}(kT_s)\,\mathrm{sinc}(2\pi Bt - k\pi)

where m̂(t) is the signal reconstructed from the quantized samples. The distortion component q(t) in the reconstructed signal is q(t) = m̂(t) − m(t). Thus,

q(t) = \sum_k [\hat{m}(kT_s) - m(kT_s)]\,\mathrm{sinc}(2\pi Bt - k\pi) = \sum_k q(kT_s)\,\mathrm{sinc}(2\pi Bt - k\pi)

where q(kT_s) is the quantization error in the kth sample. The signal q(t) is the undesired signal and acts as noise, known as quantization noise.
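Before computing the power of q(t), it helps to see the midpoint quantizer itself in code. The following is a minimal sketch added here for illustration (it is not part of the original text): the function name quantize_uniform and the test parameters are arbitrary choices, and NumPy is assumed to be available.

    import numpy as np

    def quantize_uniform(m, mp, L):
        """Midpoint quantizer: clip to (-mp, mp), split the range into L intervals
        of width dv = 2*mp/L, and map each sample to the midpoint of the interval
        in which it falls."""
        dv = 2 * mp / L
        m_clipped = np.clip(m, -mp, mp)
        # index of the interval each sample falls in (0 .. L-1)
        idx = np.minimum((m_clipped + mp) // dv, L - 1).astype(int)
        midpoints = -mp + (idx + 0.5) * dv
        return midpoints, idx            # quantized value and its level index

    # Example: a 1 kHz tone sampled at 8 kHz and quantized to L = 256 levels (8 bits)
    fs, L, mp = 8000, 256, 1.0
    t = np.arange(0, 0.01, 1 / fs)
    m = 0.9 * np.sin(2 * np.pi * 1000 * t)
    mq, levels = quantize_uniform(m, mp, L)
    bit_rate = fs * np.log2(L)           # 8000 samples/s x 8 bits = 64,000 bit/s

With f_s = 8000 Hz and L = 256 (8 bits per sample), this reproduces the 64,000 bit/s figure quoted earlier for telephone-quality PCM.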
To calculate the power, or the mean square value, of q(t), we have

\overline{q^2(t)} = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} q^2(t)\,dt = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\Big[\sum_k q(kT_s)\,\mathrm{sinc}(2\pi Bt - k\pi)\Big]^2 dt

We can show (see Prob. 2.14-4) that the signals sinc(2πBt − mπ) and sinc(2πBt − nπ) are orthogonal, that is,

\int_{-\infty}^{\infty}\mathrm{sinc}(2\pi Bt - m\pi)\,\mathrm{sinc}(2\pi Bt - n\pi)\,dt = \begin{cases} 0 & m \ne n \\ \dfrac{1}{2B} & m = n \end{cases}    (5.29b)

Because of this result, the integrals of the cross-product terms on the right-hand side vanish, and we obtain

\overline{q^2(t)} = \lim_{T\to\infty}\frac{1}{T}\sum_k q^2(kT_s)\int_{-T/2}^{T/2}\mathrm{sinc}^2(2\pi Bt - k\pi)\,dt

From the orthogonality relationship (5.29b), it follows that

\overline{q^2(t)} = \lim_{T\to\infty}\frac{1}{2BT}\sum_k q^2(kT_s)    (5.30)

Because the sampling rate is 2B, the total number of samples over the averaging interval T is 2BT. Hence, the right-hand side of Eq. (5.30) represents the average, or the mean, of the square of the quantization error. The quantum levels are separated by Δv = 2m_p/L. Since a sample value is approximated by the midpoint of the subinterval (of height Δv) in which the sample falls, the maximum quantization error is ±Δv/2. Thus, the quantization error lies in the range (−Δv/2, Δv/2), where

\Delta v = \frac{2m_p}{L}    (5.31)

Assuming that the error is equally likely to lie anywhere in the range (−Δv/2, Δv/2), the mean square value, or power, of the quantization noise is

N_q = \overline{q^2(t)} = \frac{1}{\Delta v}\int_{-\Delta v/2}^{\Delta v/2} q^2\,dq = \frac{(\Delta v)^2}{12} = \frac{m_p^2}{3L^2}    (5.32)

Assuming that the pulse detection error at the receiver is negligible, the reconstructed signal m̃(t) at the receiver output is

\tilde{m}(t) = m(t) + q(t)

The desired signal at the output is m(t), and the (quantization) noise is q(t). Since the power of the message signal is \overline{m^2(t)},

S_o = \overline{m^2(t)}, \qquad N_o = N_q = \frac{m_p^2}{3L^2}

and

\frac{S_o}{N_o} = 3L^2\,\frac{\overline{m^2(t)}}{m_p^2}    (5.34)

In this equation, m_p is the peak amplitude value that the quantizer can accept, and is therefore a parameter of the quantizer. This means that S_o/N_o, the SNR, is a linear function of the message signal power \overline{m^2(t)} (see Fig. 5.18 with μ = 0).

5.2.3 Principle of Progressive Taxation: Nonuniform Quantization

Recall that S_o/N_o, the SNR, is an indication of the quality of the received signal. Ideally we would like to have a constant SNR (the same quality) for all values of the message signal power \overline{m^2(t)}. Unfortunately, the SNR is directly proportional to the signal power \overline{m^2(t)}, which can vary from speaker to speaker by as much as 40 dB (a power ratio of 10^4). The signal power can also vary because of the different lengths of the connecting circuits. This means that the SNR in Eq. (5.34) can vary widely, depending on the talker and the length of the circuit. Even for the same speaker, the quality of the received signal will deteriorate markedly when the person speaks softly. Statistically, it is found that smaller amplitudes predominate in speech and larger amplitudes are much less frequent. This means the SNR will be low most of the time.

The root of this difficulty lies in the fact that the quantizing steps are of uniform value Δv = 2m_p/L. The quantization noise N_q = (Δv)^2/12 [Eq. (5.32)] is directly proportional to the square of the step size. The problem can be solved by using smaller steps for smaller amplitudes (nonuniform quantizing), as shown in Fig. 5.15a. The same result is obtained by first compressing the signal samples and then using a uniform quantization. The input-output characteristics of a compressor are shown in Fig. 5.15b. The horizontal axis is the normalized input signal (i.e., the input signal amplitude m divided by the signal peak value m_p), and the vertical axis is the output signal y. The compressor maps input signal increments Δm into larger increments Δy for small input signals, and vice versa for large input signals. Hence, a given interval Δm contains a larger number of steps (or a smaller step size) when m is small. The quantization noise is smaller for smaller input signal powers, and it becomes approximately proportional to the signal power \overline{m^2(t)}, thus making the SNR practically independent of the input signal power over a large dynamic range (see later Fig. 5.18). This approach of equalizing the SNR appears similar to the use of progressive income tax to equalize incomes. The loud talkers and stronger signals are penalized with larger noise steps Δv in order to compensate the soft talkers and weaker signals.

Figure 5.15 Nonuniform quantization: quantization levels (uniform Δv versus nonuniform Δv) and the compressor characteristic.

Among several choices, two compression laws have been accepted as desirable standards by the ITU-T: the μ-law used in North America and Japan, and the A-law used in Europe and the rest of the world and on international routes. Both the μ-law and the A-law curves have odd symmetry about the vertical axis.
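Before turning to the specific compression laws, the uniform-quantization results above are easy to check numerically. The sketch below is an added illustration (not part of the text), assuming NumPy; it measures the quantization noise power of a midpoint quantizer and compares it with (Δv)^2/12 and with the SNR predicted by Eq. (5.34). The test signal and parameters are arbitrary.

    import numpy as np

    mp, L = 1.0, 256
    dv = 2 * mp / L
    rng = np.random.default_rng(0)

    # A test signal confined to (-mp, mp); its exact shape is immaterial here.
    m = 0.9 * mp * (2 * rng.random(100_000) - 1)

    # Midpoint quantization, as in Fig. 5.14
    idx = np.minimum((m + mp) // dv, L - 1)
    mq = -mp + (idx + 0.5) * dv

    q = mq - m                                  # quantization error samples
    Nq_measured = np.mean(q**2)
    Nq_theory = dv**2 / 12                      # Eq. (5.32)

    snr_measured_db = 10 * np.log10(np.mean(m**2) / Nq_measured)
    snr_theory_db = 10 * np.log10(3 * L**2 * np.mean(m**2) / mp**2)   # Eq. (5.34)
    print(Nq_measured, Nq_theory, snr_measured_db, snr_theory_db)

For any test signal that exercises many levels of the quantizer, the measured noise power lands very close to (Δv)^2/12, and the measured SNR tracks Eq. (5.34) to within a small fraction of a decibel.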
The μ-law (for positive amplitudes) is given by

y = \frac{1}{\ln(1+\mu)}\,\ln\!\left(1 + \frac{\mu m}{m_p}\right), \qquad 0 \le \frac{m}{m_p} \le 1    (5.35a)

The A-law (for positive amplitudes) is

y = \begin{cases} \dfrac{A}{1+\ln A}\left(\dfrac{m}{m_p}\right), & 0 \le \dfrac{m}{m_p} \le \dfrac{1}{A} \\[2mm] \dfrac{1}{1+\ln A}\left(1 + \ln\dfrac{Am}{m_p}\right), & \dfrac{1}{A} \le \dfrac{m}{m_p} \le 1 \end{cases}    (5.35b)

These characteristics are shown in Fig. 5.16.

The compression parameter μ (or A) determines the degree of compression. To obtain a nearly constant SNR over an input signal power dynamic range of 40 dB, μ should be greater than 100. Early North American channel banks and other digital terminals used a value of μ = 100, which yielded the best results for 7-bit encoding. An optimum value of μ = 255 has been used for all North American 8-bit (256-level) digital terminals, and the earlier value of μ is now almost extinct. For the A-law, a value of A = 87.6 gives comparable results and has been standardized by the ITU-T.

The compressed samples must be restored to their original values at the receiver by using an expander with a characteristic complementary to that of the compressor. The compressor and the expander together are called the compandor. Figure 5.17 describes the use of a compressor and an expander along with a uniform quantizer to achieve nonuniform quantization.

Figure 5.17 Utilization of compressor and expander for nonuniform quantization.

Generally speaking, time compression of a signal increases its bandwidth. But in PCM we are compressing not the signal m(t) in time but its sample values. Because neither the time scale nor the number of samples changes, the problem of bandwidth increase does not arise here. It turns out that when a μ-law compandor is used, the output SNR is

\frac{S_o}{N_o} \approx \frac{3L^2}{[\ln(1+\mu)]^2}, \qquad \mu^2 \gg \frac{m_p^2}{\overline{m^2(t)}}    (5.36)

The output SNR for the cases of μ = 255 and μ = 0 (uniform quantization), as a function of \overline{m^2(t)} (the message signal power), is shown in Fig. 5.18.

Figure 5.18 Ratio of signal to quantization noise in PCM with and without compression (output SNR versus relative signal power \overline{m^2(t)} in dB).

…with a voltage reference obtained by a combination of reference voltages proportional to successive powers of 2. The reference voltages are conveniently generated by a bank of resistors R, 2R, 2^2 R, …. Encoding involves answering successive questions, beginning with whether the sample is in the upper or the lower half of the allowed range. The first code digit is generated according to the answer. In the second step, another digit is generated according to whether the sample is in the upper or the lower half of the subinterval in which it has already been located, and this process continues until the last binary digit in the code has been generated.

Decoding is the inverse of encoding. In this case, each of the n digits is applied to a resistor of a different value. The kth digit is applied to a resistor 2^k R. The currents in all the resistors are added, and the sum is proportional to the quantized sample value. For example, the binary code word 10010110 gives a current proportional to 2^7 + 0 + 0 + 2^4 + 0 + 2^2 + 2^1 + 0 = 150. This completes the D/A conversion.

5.2.4 Transmission Bandwidth and the Output SNR

For binary PCM, we assign a distinct group of n binary digits (bits) to each of the L quantization levels. Because a sequence of n binary digits can be arranged in 2^n distinct patterns,

L = 2^n \qquad \text{or} \qquad n = \log_2 L    (5.37)

Each quantized sample is thus encoded into n bits. Because a signal m(t) band-limited to B Hz requires a minimum of 2B samples per second, we require a total of 2nB bit/s, that is, 2nB pieces of information per second. Because a unit bandwidth (1 Hz) can transmit a maximum of two pieces of information per second (Sec. 5.1.3), we require a minimum channel bandwidth B_T Hz, given by

B_T = nB \ \text{Hz}    (5.38)

This is the theoretical minimum transmission bandwidth required to transmit the PCM signal. In Sec. 8.3 we shall see that for practical reasons we may use a transmission bandwidth higher than this.
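As an aside, the μ-law pair of Eq. (5.35a) and its inverse (the expander used at the receiver) are compact enough to state in code. The sketch below is an added illustration, not part of the text; the function names are arbitrary, NumPy is assumed, and the law is extended to negative amplitudes through its odd symmetry.

    import numpy as np

    def mu_compress(m, mp, mu=255.0):
        """mu-law compressor of Eq. (5.35a), extended to negative amplitudes by
        odd symmetry: the output y has the same sign as m and |y| <= 1."""
        x = np.clip(m / mp, -1.0, 1.0)
        return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

    def mu_expand(y, mp, mu=255.0):
        """Inverse of mu_compress (the expander used at the receiver)."""
        return np.sign(y) * mp * np.expm1(np.abs(y) * np.log1p(mu)) / mu

In the arrangement of Fig. 5.17, mu_compress would be applied before the uniform quantizer at the transmitter and mu_expand after the decoder at the receiver; mu_expand(mu_compress(m, mp), mp) returns m, up to floating-point error, for any |m| ≤ m_p.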
Example 5.5

A signal m(t) band-limited to 3 kHz is sampled at a rate 33⅓% higher than the Nyquist rate. The maximum acceptable error in the sample amplitude (the maximum quantization error) is 0.5% of the peak amplitude m_p. The quantized samples are binary coded. Find the minimum bandwidth of a channel required to transmit the encoded binary signal. If 24 such signals are time-division-multiplexed, determine the minimum transmission bandwidth required to transmit the multiplexed signal.

Solution

The Nyquist sampling rate is R_N = 2 × 3000 = 6000 Hz (samples per second). The actual sampling rate is R_A = 6000 × (1⅓) = 8000 Hz.

The quantization step is Δv, and the maximum quantization error is ±Δv/2. Therefore, from Eq. (5.31),

\frac{\Delta v}{2} = \frac{m_p}{L} = \frac{0.5}{100}\,m_p \quad\Longrightarrow\quad L = 200

For binary coding, L must be a power of 2. Hence, the next higher value of L that is a power of 2 is L = 256. From Eq. (5.37), we need n = log_2 256 = 8 bits per sample. We require to transmit a total of C = 8 × 8000 = 64,000 bit/s. Because we can transmit up to 2 bit/s per hertz of bandwidth, we require a minimum transmission bandwidth B_T = C/2 = 32 kHz.

The multiplexed signal has a total of C_M = 24 × 64,000 = 1.536 Mbit/s, which requires a minimum of 1.536/2 = 0.768 MHz of transmission bandwidth.

Exponential Increase of the Output SNR

From Eq. (5.37), L^2 = 2^{2n}, and the output SNR in Eq. (5.34) or Eq. (5.36) can be expressed as

\frac{S_o}{N_o} = c\,(2)^{2n}    (5.39)

where

c = \begin{cases} 3\,\overline{m^2(t)}/m_p^2 & \text{[uncompressed case, Eq. (5.34)]} \\ 3/[\ln(1+\mu)]^2 & \text{[compressed case, Eq. (5.36)]} \end{cases}

Substitution of Eq. (5.38) into Eq. (5.39) yields

\frac{S_o}{N_o} = c\,(2)^{2B_T/B}    (5.40)

From Eq. (5.40) we observe that the SNR increases exponentially with the transmission bandwidth B_T. This trade of SNR for bandwidth is attractive and comes close to the upper theoretical limit. A small increase in bandwidth yields a large benefit in terms of SNR. This relationship is clearly seen by using the decibel scale to rewrite Eq. (5.40) as

\left(\frac{S_o}{N_o}\right)_{\mathrm{dB}} = 10\log_{10}\left(\frac{S_o}{N_o}\right) = 10\log_{10}\big[c\,(2)^{2n}\big] = 10\log_{10}c + 20n\log_{10}2 = (\alpha + 6n)\ \mathrm{dB}    (5.41)

where α = 10 log_10 c. This shows that increasing n by 1 (adding one bit to the codeword) quadruples the output SNR (a 6 dB increase). Thus, if we increase n from 8 to 9, the SNR quadruples, but the transmission bandwidth increases only from 32 kHz to 36 kHz (an increase of only 12.5%). This shows that in PCM, SNR can be controlled by the transmission bandwidth. We shall see later that frequency and phase modulation also do this, but they require a doubling of the bandwidth to quadruple the SNR. In this respect, PCM is strikingly superior to FM or PM.

Example 5.6

A signal m(t) of bandwidth B = 4 kHz is transmitted using binary companded PCM with μ = 100. Compare the case of L = 64 with the case of L = 256 from the point of view of transmission bandwidth and the output SNR.

Solution

For L = 64, n = 6, and the transmission bandwidth is nB = 24 kHz. The output SNR is

\frac{S_o}{N_o} = (\alpha + 36)\ \mathrm{dB}

where

\alpha = 10\log_{10}\frac{3}{[\ln(101)]^2} = -8.51

Hence,

\frac{S_o}{N_o} = 27.49\ \mathrm{dB}

For L = 256, n = 8, and the transmission bandwidth is 32 kHz. The output SNR is

\frac{S_o}{N_o} = \alpha + 6n = 39.49\ \mathrm{dB}

The difference between the two SNRs is 12 dB, which is a ratio of 16. Thus, the SNR for L = 256 is 16 times the SNR for L = 64. The former requires just about 33% more bandwidth compared to the latter.
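The α + 6n bookkeeping of Eq. (5.41) is easy to automate. The short sketch below is an added illustration (not from the text) in plain Python; the function name is arbitrary, and it reproduces the numbers of Example 5.6.

    import math

    def pcm_output_snr_db(n_bits, mu=None, signal_power_ratio=None):
        """Output SNR of binary PCM in dB via Eq. (5.41): alpha + 6n.
        Companded case [Eq. (5.36)]:    pass mu;   alpha = 10*log10(3 / ln(1+mu)^2).
        Uncompressed case [Eq. (5.34)]: pass signal_power_ratio = mean(m^2)/mp^2;
                                        alpha = 10*log10(3 * signal_power_ratio)."""
        if mu is not None:
            c = 3.0 / math.log(1.0 + mu) ** 2
        else:
            c = 3.0 * signal_power_ratio
        alpha = 10.0 * math.log10(c)
        return alpha + 6.0 * n_bits      # exact coefficient is 20*log10(2) = 6.02

    print(pcm_output_snr_db(6, mu=100))  # about 27.5 dB  (L = 64,  B_T = 24 kHz)
    print(pcm_output_snr_db(8, mu=100))  # about 39.5 dB  (L = 256, B_T = 32 kHz)

Each extra bit per sample buys about 6 dB of output SNR at the cost of only B Hz of extra transmission bandwidth, which is the exponential trade-off noted above.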
Comments on Logarithmic Units

Logarithmic units and logarithmic scales are very convenient when a variable has a large dynamic range. Such is the case with frequency variables or SNRs. A logarithmic unit for the power ratio is the decibel (dB), defined as 10 log_10 (power ratio). Thus, an SNR of x dB means

x = 10\log_{10}\frac{S}{N}

We use the same unit to express power gain or loss over a certain transmission medium. For instance, if over a certain cable the signal power is attenuated by a factor of 15, the cable gain is

G = 10\log_{10}\frac{1}{15} = -11.76\ \mathrm{dB}

or the cable attenuation (loss) is 11.76 dB.

Although the decibel is a measure of power ratios, it is often used as a measure of power itself. For instance, "100 watt" may be considered to be a power ratio of 100 with respect to 1-watt power, and is expressed in units of dBW as

P_{\mathrm{dBW}} = 10\log_{10}100 = 20\ \mathrm{dBW}

Thus, 100-watt power is 20 dBW. Similarly, power measured with respect to 1 mW is expressed in dBm. For instance, 100-watt power is

P_{\mathrm{dBm}} = 10\log_{10}\frac{100\ \mathrm{W}}{1\ \mathrm{mW}} = 50\ \mathrm{dBm}

…system (DDS), which provides standards for multiplexing digital signals with the DS0 signal for transmission through the network.

The inputs to a T1 multiplexer need not be restricted to digitized voice channels alone; any digital signal of 64 kbit/s of the appropriate format can be transmitted. The case of the higher levels is similar. For example, all the incoming channels of the DM1/2 multiplexer need not be DS1 signals obtained by multiplexing 24 channels of 64 kbit/s each. Some of them may be 1.544 Mbit/s digital signals of the appropriate format, and so on.

In Europe and many other parts of the world, another hierarchy, recommended by the ITU-T, has been adopted. This hierarchy, based on multiplexing 30 telephone channels of 64 kbit/s into an E-1 carrier at 2.048 Mbit/s (30 channels), is shown in Fig. 5.26. Starting from E-1, four lower level lines form one higher level line progressively, generating an E-2 line with data throughput of 8.448 Mbit/s, an E-3 line with data throughput of 34.368 Mbit/s, an E-4 line with data throughput of 139.264 Mbit/s, and an E-5 line with data throughput of 565.148 Mbit/s. Because digital networks must be able to interface with one another across the three different systems (North American, Japanese, and the rest of the world), Fig. 5.26 demonstrates their relative relationship and the points at which they can be interconnected.

Figure 5.26 Plesiochronous digital hierarchy (PDH) according to ITU-T Recommendations (the T-carrier hierarchies used in the USA/Canada and Japan, with levels such as 1.544 Mbit/s and 6.312 Mbit/s, alongside the worldwide E-carrier hierarchy and the 64 kbit/s single-user line).

5.5 DIFFERENTIAL PULSE CODE MODULATION

PCM is not a very efficient system because it generates so many bits and requires so much bandwidth to transmit. Many different ideas have been proposed to improve the encoding efficiency of A/D conversion. In general, these ideas exploit the characteristics of the source signals. Differential pulse code modulation (DPCM) is one such scheme.

In analog messages we can make a good guess about a sample value from knowledge of past sample values. In other words, the sample values are not independent, and generally there is a great deal of redundancy in the Nyquist samples. Proper exploitation of this redundancy leads to encoding a signal with fewer bits. Consider a simple scheme: instead of transmitting the sample values themselves, we transmit the differences between successive sample values. Thus, if m[k] is the kth sample, instead of transmitting m[k] we transmit the difference d[k] = m[k] − m[k−1]. At the receiver, knowing d[k] and the previous sample value m[k−1], we can reconstruct m[k] iteratively. The differences between successive samples are generally much smaller than the sample values themselves, so their peak amplitude, and hence the quantization step Δv required for a given L, is reduced considerably. As a result, for a given n (or transmission bandwidth) we can increase the SNR, or for a given SNR we can reduce n (or the transmission bandwidth).

We can improve upon this scheme by estimating (predicting) the value of the kth sample m[k] from a knowledge of several previous sample values. If this estimate is m̂[k], then we transmit the difference (prediction error) d[k] = m[k] − m̂[k]. At the receiver also, we determine the estimate m̂[k] from the previous sample values, and then generate m[k] by adding the received d[k] to the estimate m̂[k]. Thus, we reconstruct the samples at the receiver iteratively. If our prediction is worth its salt, the predicted (estimated) value m̂[k] will be close to m[k], and their difference (the prediction error) d[k] will be even smaller than the difference between successive samples. Consequently, this scheme, known as differential PCM (DPCM), is superior to the naive prediction described in the preceding paragraph, which is a special case of DPCM in which the estimate of a sample value is taken to be the previous sample value, that is, m̂[k] = m[k−1].
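The claim that successive differences occupy a much smaller range than the samples themselves is easy to check numerically. The following is a small added illustration (not from the text), assuming NumPy and using an arbitrary band-limited test signal sampled well above its Nyquist rate.

    import numpy as np

    fs = 8000                               # sampling rate, Hz
    t = np.arange(0, 0.1, 1 / fs)
    m = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)

    d_naive = np.diff(m)                    # d[k] = m[k] - m[k-1]
    print(np.max(np.abs(m)), np.max(np.abs(d_naive)))

    # The peak of the differences is several times smaller than the peak of m,
    # so for the same number of levels L the step dv = 2*dp/L, and hence the
    # quantization noise (dv)^2 / 12, shrinks accordingly.

The same reduction in peak value is what first-order DPCM exploits: the quantizer only has to cover the range of d[k], not the range of m[k].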
Spirits of Taylor, Maclaurin, and Wiener

Before we describe this approach to signal prediction (estimation), a few words about prediction are in order. To the uninitiated, future prediction seems like mysterious stuff, fit only for psychics, wizards, and mediums, who can summon help from the spirit world. Electrical engineers appear to be hopelessly outclassed in this pursuit. Not quite so! We can also summon the spirits of Taylor, Maclaurin, Wiener, and the like to help us. What is more, unlike Shakespeare's spirits, our spirits come when called.* Consider, for example, a signal m(t) that has derivatives of all orders at t. Using the Taylor series for this signal, we can express

m(t + T_s) = m(t) + T_s\,\dot m(t) + \frac{T_s^2}{2!}\,\ddot m(t) + \frac{T_s^3}{3!}\,\dddot m(t) + \cdots    (5.42a)
\approx m(t) + T_s\,\dot m(t) \qquad \text{for small } T_s    (5.42b)

Equation (5.42a) shows that from a knowledge of the signal and its derivatives at instant t, we can predict a future signal value at t + T_s. In fact, even if we know just the first derivative, we can still predict this value approximately, as shown in Eq. (5.42b). Let us denote the kth sample of m(t) by m[k], that is, m(kT_s) = m[k]; also, m(kT_s ± T_s) = m[k ± 1], and so on. Setting t = kT_s in Eq. (5.42b), and recognizing that ṁ(kT_s) ≈ [m(kT_s) − m(kT_s − T_s)]/T_s, we obtain

m[k+1] \approx m[k] + T_s\left[\frac{m[k] - m[k-1]}{T_s}\right] = 2m[k] - m[k-1]

This shows that we can find a crude prediction of the (k+1)th sample from the two previous samples. The approximation in Eq. (5.42b) improves as we add more terms in the series on the right-hand side. To determine the higher order derivatives in the series, we require more samples in the past. The larger the number of past samples we use, the better will be the prediction. Thus, in general, we can express the prediction formula as

m[k] \approx a_1 m[k-1] + a_2 m[k-2] + \cdots + a_N m[k-N]    (5.44)

*Shakespeare, Henry IV, Part 1, Act III, Scene 1. Glendower: "I can call spirits from the vasty deep." Hotspur: "Why, so can I, or so can any man; But will they come when you do call for them?"

The right-hand side of Eq. (5.44) is m̂[k], the predicted value of m[k]. Thus,

\hat m[k] = a_1 m[k-1] + a_2 m[k-2] + \cdots + a_N m[k-N]

This is the equation of an Nth-order predictor. A larger N generally results in better prediction. The output of this filter (the predictor) is m̂[k], the predicted value of m[k]. The input consists of the previous samples m[k−1], m[k−2], …, m[k−N], although it is customary to say that the input is m[k]. Observe that this equation reduces to m̂[k] = m[k−1] in the case of the first-order predictor, which follows from Eq. (5.42b) when we retain only the first term on the right-hand side; this means that the first-order predictor is a simple time delay.

We have outlined here a very simple procedure for predictor design. In a more sophisticated approach, discussed in Sec. 6.5, where we use the minimum mean squared error criterion for best prediction, the prediction coefficients a_j in Eq. (5.44) are determined from the statistical correlation between various samples. The predictor described in Eq. (5.44) is called a linear predictor. It is basically a transversal filter (a tapped delay line), where the tap gains are set equal to the prediction coefficients, as shown in Fig. 5.27.

Figure 5.27 Transversal filter (tapped delay line) used as a linear predictor.
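The Nth-order linear predictor of Eq. (5.44) is just an FIR (tapped-delay-line) filter operating on past samples, so it can be sketched in a few lines of code. The example below is an added illustration (not from the text), assuming NumPy; the coefficients [2, −1] are the crude second-order Taylor-based predictor derived above, and the test signal is arbitrary.

    import numpy as np

    def linear_predict(m, a):
        """Nth-order linear predictor (transversal filter / tapped delay line):
        m_hat[k] = a[0]*m[k-1] + a[1]*m[k-2] + ... + a[N-1]*m[k-N].
        Samples before k = 0 are taken as zero."""
        m_hat = np.zeros_like(m, dtype=float)
        for j, aj in enumerate(a, start=1):     # tap j uses m[k-j]
            m_hat[j:] += aj * m[:-j]
        return m_hat

    # Second-order predictor from the Taylor argument: m_hat[k] = 2 m[k-1] - m[k-2]
    fs = 8000
    t = np.arange(0, 0.02, 1 / fs)
    m = np.sin(2 * np.pi * 400 * t)
    m_hat = linear_predict(m, a=[2.0, -1.0])
    d = m - m_hat                               # prediction error, much smaller than m

For a slowly varying, oversampled signal, the prediction error d is an order of magnitude smaller than m itself, which is exactly what DPCM exploits.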
Analysis of DPCM

As mentioned earlier, in DPCM we transmit not the present sample m[k] but d[k] (the difference between m[k] and its predicted value m̂[k]). At the receiver, we generate m̂[k] from the past sample values, to which the received d[k] is added to generate m[k]. There is, however, one difficulty associated with this scheme. At the receiver, instead of the past samples m[k−1], m[k−2], …, as well as d[k], we have their quantized versions m_q[k−1], m_q[k−2], …. Hence, we cannot determine m̂[k]. We can determine only m̂_q[k], the estimate of the quantized sample m_q[k], in terms of the quantized samples m_q[k−1], m_q[k−2], …. This will increase the error in reconstruction. In such a case, a better strategy is to determine m̂_q[k], the estimate of m_q[k] (instead of m[k]), at the transmitter also, from the quantized samples m_q[k−1], m_q[k−2], …. The difference d[k] = m[k] − m̂_q[k] is now transmitted via PCM. At the receiver, we can generate m̂_q[k], and from the received d[k] we can reconstruct m_q[k].

Figure 5.28a shows a DPCM transmitter. We shall soon show that the predictor input is m_q[k]. Naturally, its output is m̂_q[k], the predicted value of m_q[k]. The difference

d[k] = m[k] - \hat m_q[k]

is quantized to yield

d_q[k] = d[k] + q[k]

where q[k] is the quantization error. The predictor output m̂_q[k] is fed back to its input, so that the predictor input m_q[k] is

m_q[k] = \hat m_q[k] + d_q[k] = m[k] - d[k] + d_q[k] = m[k] + q[k]    (5.47)

This shows that m_q[k] is a quantized version of m[k]. The predictor input is indeed m_q[k], as assumed. The quantized signal d_q[k] is now transmitted over the channel. The receiver shown in Fig. 5.28b is identical to the shaded portion of the transmitter. The inputs in both cases are also the same, namely, d_q[k]. Therefore, the predictor output must be m̂_q[k] (the same as the predictor output at the transmitter). Hence, the receiver output (which is the predictor input) is also the same, viz., m_q[k] = m[k] + q[k], as found in Eq. (5.47). This shows that we are able to receive the desired signal m[k] plus the quantization noise q[k]. This is the quantization noise associated with the difference signal d[k], which is generally much smaller than m[k]. The received samples m_q[k] are decoded and passed through a lowpass filter for D/A conversion.

SNR Improvement

To determine the improvement in DPCM over PCM, let m_p and d_p be the peak amplitudes of m(t) and d(t), respectively. If we use the same value of L in both cases, the quantization step Δv in DPCM is reduced by the factor d_p/m_p. Because the quantization noise power is (Δv)^2/12, the quantization noise in DPCM is reduced by the factor (m_p/d_p)^2, and the SNR is increased by the same factor. Moreover, the signal power is proportional to its peak value squared (assuming other statistical properties invariant). Therefore, G_p (the SNR improvement due to prediction) is at least

G_p = \frac{P_m}{P_d}

where P_m and P_d are the powers of m(t) and d(t), respectively. In terms of decibel units, this means that the SNR increases by 10 log_10(P_m/P_d) dB. Therefore, Eq. (5.41) applies to DPCM also, with a value of α that is higher by 10 log_10(P_m/P_d) dB. In Example 6.28, a second-order predictor processor for speech signals is analyzed; in that case, the SNR improvement is found to be 5.6 dB. In practice, the SNR improvement may be as high as 25 dB in such cases as short-term voiced speech spectra and in the spectra of low-activity images. Alternately, for the same SNR, the bit rate for DPCM can be reduced by several bits per sample. Thus, telephone systems using DPCM can often operate at 32 kbit/s or lower.

Example

The amplitude of a signal m(t) lies in the range −1 V to 1 V. The maximum frequency of m(t) is 4 kHz, and it is transmitted using 8-bit/sample PCM. The same signal is then transmitted using DPCM, in which the prediction error signal d(t) ranges from −0.1 V to 0.1 V and the DPCM step size Δv′ is allowed to be within ±25% of the PCM step size Δv. (a) Calculate the transmission bit rate in PCM and in DPCM, and hence the bit rate compression ratio.

Solution

For PCM, 8-bit coding implies that the number of quantization levels is L = 2^8 = 256.

Sampling rate: f_s = 2 × 4 kHz = 8000 samples/s
Transmission bit rate: B_T = 8 × 8 × 10^3 = 64 kbit/s
Step size: Δv = 2/256 = 7.8125 mV

For DPCM, let the step size be Δv′, and suppose first that Δv′ = Δv. Then the number of quantization levels would be

L' = \frac{0.2}{\Delta v} = \frac{0.2}{7.8125\times10^{-3}} = 25.6

but L′ should be 2^x, where x is the minimum positive integer such that 2^x ≥ 25.6; this gives x = 5. Hence 5-bit coding is needed, and L′ = 2^5 = 32. The transmission bit rate on the channel is

B_T' = 8\times10^3 \times 5 = 40\ \mathrm{kbit/s}

The actual DPCM step size is then

\Delta v' = \frac{0.2}{32} = 6.25\ \mathrm{mV}

and its deviation from the PCM step size is

\frac{\Delta v - \Delta v'}{\Delta v}\times 100 = 20\%

Hence Δv′ satisfies the Δv ± 0.25Δv condition. The bit rate compression ratio is

\frac{64\times10^3}{40\times10^3} = 1.6
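The feedback loop of Fig. 5.28 (quantize d[k] = m[k] − m̂_q[k], then feed m_q[k] = m̂_q[k] + d_q[k] back to the predictor) translates directly into code. The sketch below is an added illustration (not from the text): it uses the simplest first-order predictor (a pure delay), a midpoint quantizer, and arbitrary test parameters; NumPy is assumed.

    import numpy as np

    def quantize_midpoint(x, peak, L):
        """Uniform midpoint quantizer on (-peak, peak) with L levels."""
        dv = 2 * peak / L
        idx = np.minimum((np.clip(x, -peak, peak) + peak) // dv, L - 1)
        return -peak + (idx + 0.5) * dv

    def dpcm_encode(m, d_peak, L):
        """DPCM transmitter loop with a first-order predictor (a pure delay)."""
        dq = np.zeros_like(m)
        mq_prev = 0.0                        # predictor state = previous m_q[k]
        for k, mk in enumerate(m):
            d = mk - mq_prev                 # d[k] = m[k] - m_hat_q[k]
            dq[k] = quantize_midpoint(d, d_peak, L)
            mq_prev = mq_prev + dq[k]        # m_q[k] = m_hat_q[k] + d_q[k]
        return dq

    def dpcm_decode(dq):
        """DPCM receiver: the same predictor loop, driven by the received d_q[k]."""
        return np.cumsum(dq)                 # m_q[k] = m_q[k-1] + d_q[k]

    fs = 8000
    t = np.arange(0, 0.02, 1 / fs)
    m = np.sin(2 * np.pi * 300 * t)
    dq = dpcm_encode(m, d_peak=0.25, L=32)   # 5-bit quantization of the differences
    mq = dpcm_decode(dq)                     # equals m[k] + q[k], per Eq. (5.47)

The decoder output equals m[k] + q[k], as in Eq. (5.47), where q[k] is the (small) quantization error of the difference signal rather than of m[k] itself.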
5.6 ADAPTIVE DIFFERENTIAL PCM

Adaptive DPCM (ADPCM) can further improve the efficiency of DPCM encoding by incorporating an adaptive quantizer at the encoder. Figure 5.29 illustrates the basic configuration of ADPCM. For practical reasons, the number of quantization levels L is fixed. When a fixed quantization step Δv is applied, either the quantization error is too large because Δv is too big, or the quantizer cannot cover the necessary range because Δv is too small. Therefore, it would be better for the quantization step Δv to be adaptive, growing larger or smaller depending on whether the prediction error to be quantized is large or small.

It is important to note that the quantized prediction error d_q[k] can be a good indicator of the size of the prediction error. For example, when the quantized prediction error samples vary close to the largest positive value (or the largest negative value), it indicates that the prediction error is large and Δv needs to grow; conversely, if the quantized samples oscillate near zero, then the prediction error is small and Δv needs to shrink.

Figure 5.29 ADPCM encoder, using an adaptive quantizer controlled only by the encoder output d_q[k].

Because the adaptation is controlled only by the encoder output d_q[k], both the modulator and the receiver have access to the same quantized samples. Hence, the receiver reconstruction can apply the same algorithm to adjust Δv and remain in step with the encoder.

Like DPCM, ADPCM can further compress the number of bits needed for a signal waveform. In practice, it is very common for an 8-bit PCM sequence to be encoded into a 4-bit ADPCM sequence at the same sampling rate. This easily represents a 2:1 bandwidth or storage reduction with virtually no loss of quality.

ADPCM encoders have many practical applications. The ITU-T standard G.726 specifies an ADPCM speech encoder and decoder (called a codec) for speech signal samples at 8 kHz. The G.726 ADPCM codec uses an eighth-order predictor. For different quality levels, G.726 specifies four different ADPCM rates: 16, 24, 32, and 40 kbit/s. They correspond to four different bit sizes for each speech sample (2 bits, 3 bits, 4 bits, and 5 bits, respectively), or equivalently to quantization levels of 4, 8, 16, and 32.

The most common ADPCM speech encoders use 32 kbit/s. In practice, there are multiple variations of the ADPCM speech codec. In addition to the ITU-T G.726 specification, these include the OKI ADPCM codec, the Microsoft ADPCM codec supported by WAVE players, and the Interactive Multimedia Association (IMA) ADPCM, also known as the DVI ADPCM. The 32 kbit/s ITU-T G.726 ADPCM speech codec is widely used in the DECT (digital enhanced cordless telecommunications) system, which itself is widely used for residential and business cordless phone communications. Designed for short-range use as an access mechanism to the main networks, DECT offers cordless voice, fax, data, and multimedia communications. It is now in use in over 100 countries worldwide. Another major user of the 32 kbit/s ADPCM codec is the Personal Handy-phone System (PHS), also marketed as the Personal Access System (PAS) and known as Xiaolingtong in China.

PHS is a mobile network system similar to a cellular network, operating in the 1880 to 1930 MHz frequency band, used mainly in Japan, China, Taiwan, and elsewhere in Asia. Originally developed by NTT Laboratory in Japan in 1989, PHS is much simpler to implement and deploy: unlike cellular networks, PHS phones and base stations are low-power, short-range facilities. The service is often pejoratively called the "poor man's cellular" because of its limited range and poor roaming ability. PHS first saw commercial deployment (NTT-Personal, DDI-Pocket, and ASTEL) in Japan in 1995 but has since nearly disappeared there. Surprisingly, PHS has seen a resurgence in markets like China, Taiwan, Vietnam, Bangladesh, Tanzania, and Honduras, where its low deployment and hardware costs offset its disadvantages. In China alone, there was an explosive expansion of subscribers, reaching nearly 100 million in 2006.
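To make the adaptation idea at the start of this section concrete, here is a toy step-size rule driven only by the transmitted quantizer output, in the spirit of Fig. 5.29. This is an added illustration only: it is not the G.726 algorithm, and the function name, thresholds, and scale factors are all arbitrary choices.

    def adapt_step(dv, level_index, L,
                   grow=1.5, shrink=0.9, dv_min=1e-4, dv_max=0.5):
        """Toy ADPCM-style step-size adaptation.

        level_index is the quantizer output level (0 .. L-1) for the current
        prediction error; it is known to both encoder and decoder, so both
        sides can run this same rule and stay in step."""
        outer = (level_index == 0) or (level_index == L - 1)   # error near the rails
        inner = abs(level_index - (L - 1) / 2) <= 1            # error near zero
        if outer:
            dv *= grow       # quantizer is saturating: enlarge the step
        elif inner:
            dv *= shrink     # output hugs zero: refine the step
        return min(max(dv, dv_min), dv_max)

Because the rule depends only on the transmitted level index, the decoder can run the identical function after each sample and keep its step size synchronized with the encoder (barring channel errors).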
5.7 DELTA MODULATION

The sample correlation used in DPCM is further exploited in delta modulation (DM) by oversampling the baseband signal (typically at four times the Nyquist rate). This increases the correlation between adjacent samples, which results in a small prediction error that can be encoded using only one bit (L = 2). Thus, DM is basically a 1-bit DPCM, that is, a DPCM that uses only two levels (L = 2) for quantization. In comparison to PCM (and DPCM), it is a very simple and inexpensive method of A/D conversion. A 1-bit codeword in DM makes word framing unnecessary at the transmitter and the receiver. This strategy allows us to use fewer bits per sample for encoding a baseband signal.

In DM, we use a first-order predictor, which, as seen earlier, is just a time delay of T_s (the sampling interval). Thus, the DM transmitter (modulator) and receiver (demodulator) are identical to those of the DPCM in Fig. 5.28, with a time delay for the predictor, as shown in Fig. 5.30, from which we can write

m_q[k] = m_q[k-1] + d_q[k]    (5.48)

Hence,

m_q[k-1] = m_q[k-2] + d_q[k-1]

Substituting this equation into Eq. (5.48) yields

m_q[k] = m_q[k-2] + d_q[k] + d_q[k-1]

Proceeding iteratively in this manner, and assuming the zero initial condition m_q[0] = 0, we obtain

m_q[k] = \sum_{m=0}^{k} d_q[m]

This shows that the receiver (demodulator) is just an accumulator (adder). If the output d_q[k] is represented by impulses, then the accumulator (receiver) may be realized by an integrator, because its output is the sum of the strengths of the input impulses (the sum of the areas under the impulses). We may also replace the feedback portion of the modulator (which is identical to the demodulator) with an integrator. The demodulator output is m_q[k], which when passed through a lowpass filter yields the desired signal reconstructed from the quantized samples.

Figure 5.31 shows a practical implementation of the delta modulator and demodulator. As discussed earlier, the first-order predictor is replaced by a low-cost integrator circuit (such as an RC integrator).

Figure 5.31 Delta modulation: (a) delta modulator and (b) delta demodulator; (c) message signal versus integrator output signal; (d) delta-modulated pulse train; (e) modulation errors (start-up and slope overload).

The modulator (Fig. 5.31a) consists of a comparator and a sampler in the direct path and an integrator-amplifier in the feedback path. Let us see how this delta modulator works.

The analog signal m(t) is compared with the feedback signal (which serves as a predicted signal) m̂_q(t). The error signal d(t) = m(t) − m̂_q(t) is applied to a comparator. If d(t) is positive, the comparator output is a constant signal of amplitude E, and if d(t) is negative, the comparator output is −E. Thus, the difference is a binary signal (L = 2), which is all that is needed to generate a 1-bit DPCM. The comparator output is sampled by a sampler at a rate of f_s samples per second, where f_s is typically much higher than the Nyquist rate. The sampler thus produces a train of narrow pulses d_q[k] (to simulate impulses), with a positive pulse when m(t) > m̂_q(t) and a negative pulse when m(t) < m̂_q(t).
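As a final illustration (added here, not part of the text), the 1-bit DPCM loop just described reduces to a few lines of code: the modulator integrates ±E steps in its feedback path, and the demodulator is simply an accumulator followed by a lowpass filter. The step size E, the test signal, and the sampling rate below are arbitrary choices for the sketch; NumPy is assumed.

    import numpy as np

    def delta_modulate(m, E):
        """Delta modulator: 1-bit DPCM with a delay (accumulator) as predictor.
        Returns the +/-E pulse train d_q[k] and the staircase approximation m_q[k]."""
        dq = np.zeros_like(m)
        mq = np.zeros_like(m)
        mq_prev = 0.0
        for k, mk in enumerate(m):
            dq[k] = E if mk >= mq_prev else -E   # comparator + sampler
            mq[k] = mq_prev + dq[k]              # feedback accumulator (integrator)
            mq_prev = mq[k]
        return dq, mq

    def delta_demodulate(dq):
        """Delta demodulator: accumulate the received pulses (then lowpass filter)."""
        return np.cumsum(dq)

    # Oversample well above the Nyquist rate so the staircase can follow m(t).
    fs = 64000
    t = np.arange(0, 0.005, 1 / fs)
    m = np.sin(2 * np.pi * 500 * t)
    dq, mq = delta_modulate(m, E=0.06)
    m_rec = delta_demodulate(dq)                 # equals mq; smooth with a lowpass filter

Choosing E too small relative to the maximum slope of m(t) produces the slope-overload error indicated in Fig. 5.31e; choosing it too large produces excessive granular noise.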