
Dcom Easy Solution PDF

Uploaded by

Hamza Khan
Copyright
© All Rights Reserved
Mumbai University Paper Solutions
Strictly as per the New Revised Syllabus (Rev-2016), w.e.f. academic year 2018-2019 (as per the Choice Based Credit and Grading System)

Digital Communication
Semester V, Electronics and Telecommunication Engineering
TechKnowledge Publications

INDEX

Chapter 1 : Probability and Random Variables              DC-1 to DC-2
Chapter 2 : Introduction to Digital Communication         DC-2 to DC-3
Chapter 3 : Information Theory and Source Coding          DC-3 to DC-11
Chapter 4 : Linear Block Codes                            DC-11 to DC-20
Chapter 5 : Cyclic Codes                                  DC-20 to DC-32
Chapter 6 : Convolution Codes                             DC-32 to DC-38
Chapter 7 : Digital Modulation Techniques                 DC-39 to DC-53
Chapter 8 : Baseband Modulation & Transmission            DC-53 to DC-61
Chapter 9 : Optimum Reception of Digital Signals          DC-61 to DC-69
Question Paper : Dec. 2018                                D(18)-1 to D(18)-13
Question Paper : May 2019                                 M(19)-1 to M(19)-10

Syllabus

Module 1 : Probability Theory, Random Variables and Random Processes : Information, probability, conditional probability of independent events, relation between probability and probability density, Rayleigh probability density, CDF, PDF. Random variables, variance of a random variable, correlation between random variables, statistical averages (means), mean and variance of the sum of random variables, linear mean-square estimation, central limit theorem, error function and complementary error function, discrete and continuous variables, Gaussian PDF, threshold detection, statistical average, Chebyshev inequality, autocorrelation, random processes.
Module 2 : Information Theory and Source Coding : Block diagram and sub-system description of a digital communication system, measure of information and its properties, entropy and its properties, source coding, Shannon's source coding theorem, Shannon-Fano source coding, Huffman source coding. Differential entropy, joint and conditional entropy, mutual information and channel capacity, channel coding theorem, channel capacity theorem.

Module 3 : Error Control Systems : Types of error control, error control codes, linear block codes, systematic linear block codes, generator matrix, parity check matrix, syndrome testing, error correction and decoder implementation. Systematic and non-systematic cyclic codes : encoding with shift registers, error detection and correction. Convolution codes : time domain and transform domain approach, graphical representation, code tree, trellis, state diagram, decoding methods.

Module 4 : Bandpass Modulation and Demodulation : Band-pass digital transmitter and receiver models, digital modulation schemes, generation, detection, signal space representation.

3. Now, if the channel and the modulation are memoryless, then the input-output characteristic of the composite channel of Fig. 1.1(a) is given by
   P(Y = y_j | X = x_i) = P(y_j | x_i)   ...(1)
   where x_i is a transmitted symbol and y_j a received symbol. Equation (1) represents the conditional probability that y_j is received when x_i was transmitted. The graphical representation of such probabilities is as shown in Fig. 1.1(b).
4. Such a channel is called a "discrete memoryless channel (DMC)". If the input to a discrete memoryless channel is a sequence of n symbols u_1, u_2, ..., u_n from X, and the corresponding output is the sequence v_1, v_2, ..., v_n from Y, then the channel is characterized by the joint conditional probability
   P(Y_1 = v_1, ..., Y_n = v_n | X_1 = u_1, ..., X_n = u_n) = Product over j = 1 to n of P(Y_j = v_j | X_j = u_j)   ...(2)
   This expression represents the discrete memoryless channel, since each output depends only on the corresponding input.
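A minimal numeric sketch of this factorization for a memoryless channel. The binary channel and the transition probabilities 0.9/0.1 and 0.8/0.2 below are hypothetical illustration values, not taken from the text.

```python
# For a DMC, the probability of an output sequence given an input sequence
# factors into per-symbol transition probabilities (Equation (2) above).
from math import prod

# P[(v, u)] = P(Y = v | X = u) for a hypothetical binary channel
P = {
    ("0", "0"): 0.9, ("1", "0"): 0.1,
    ("0", "1"): 0.2, ("1", "1"): 0.8,
}

def sequence_probability(inputs, outputs):
    """P(v1..vn | u1..un) = product of P(v_j | u_j), symbol by symbol."""
    return prod(P[(v, u)] for u, v in zip(inputs, outputs))

p = sequence_probability("010", "010")   # 0.9 * 0.8 * 0.9
print(p)
```

The product form is exactly what makes the channel "memoryless": each factor involves one input-output pair only.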
Chapter 2 : Introduction to Digital Communication

Q.1 Write a note on the block diagram of a digital communication system.
Ans. :
Fig. 2.1 shows the functional block diagram of a digital communication system (DCS). It basically consists of a transmitter (upper blocks), a receiver (lower blocks) and a communication channel. The signal processing steps carried out at the receiver are exactly opposite to those taking place at the transmitter. The modulate and demodulate/detect blocks together are known as a MODEM.

(E-138) Fig. 2.1 : Block diagram of a typical digital communication system (DCS)

In Fig. 2.1, the only essential blocks are : formatting, modulation, demodulation/detection and synchronization.

Encryption : Encryption is the process of converting the digital signal at the source encoder output into another secret coded signal. This is an optional block; it is required to ensure communication privacy. The encrypted signal is applied to the channel coding block.

Channel encoding : Channel encoding is done to minimize the effect of channel noise. This reduces the number of errors in the received data and makes the system more reliable. A channel coding technique introduces some redundancy. The channel encoder maps the incoming digital signal into a channel input; for a given data rate, channel coding can reduce the probability of error. The output of the channel coder is applied to a multiplexer, which combines it with other signals originating from other sources.

Up to this point the signal is in the form of a bit stream. At the modulator this bit stream is converted into waveforms that are compatible with the transmission channel (this is also called line coding). The pulse modulation process converts the bit stream at its input into suitable line codes.
Bandpass modulation : Bandpass modulation is used to provide an efficient transmission of the signal over the channel. The modulator can use any of the CW digital modulation techniques such as ASK (amplitude shift keying), FSK (frequency shift keying) or PSK (phase shift keying). The demodulator is used for demodulation.

Frequency spreading (the spread spectrum technique) produces a signal that is immune to interference of any kind. The modulated waveform is passed through the optional multiple access block (FDMA / TDMA or CDMA) and applied to the transmission channel.

Receiver :
At the receiver, the transmitted signal is received along with the noise added to it while travelling over the channel. The blocks used at the receiver perform exactly the opposite operations to those performed at the transmitter. The received signal is first passed through the multiple access decoder to separate out the signals.

Then the signal is passed through the frequency despreading block, which recovers the original signal from the spread spectrum signal. Frequency despreading is the opposite of the spreading process. The signal is then passed through the demodulator/detector, which reverses the modulation process. The demultiplexer separates out the multiplexed signals and passes the desired (selected) signal on to the channel decoder.

Channel decoder :
The channel decoder is present at the receiver, and it maps the channel output into a digital signal in such a way that the effect of channel noise is reduced to a minimum. Thus the channel encoder and decoder together provide reliable communication over a noisy channel. This is achieved by introducing redundancy (parity bits) in a predecided form at the transmitter.

The output of the channel encoder is a series of codewords which include the message and some parity bits. These additional parity bits introduce redundancy. The channel decoder converts these codewords back into digital messages.
The channel decoder output is applied to the decryption block. Decryption is a process which is exactly opposite to the encryption process. The decrypted signal is applied to the source decoder.

Source decoder :
The source decoder is at the receiver and behaves exactly inversely to the source encoder. It delivers to the destination (user) the original digital source output. The main advantage of using source coding is that it reduces the bandwidth requirement. The source decoder output is finally passed through the format block to recover the original information signal.

Thus in source coding the redundancy is removed, whereas in channel coding redundancy is introduced in a controlled manner. It is possible to opt for only source encoding or only channel encoding; it is not essential to perform both, but in many systems both are performed together. It is also possible to change the sequence in which channel encoding and source encoding are performed. Channel and source encoding improve the system performance, but they increase the circuit complexity as well.

Synchronization :
Synchronization is essential in a DCS. Its key element is the clock signal, which is involved in the control of all signal processing functions within the DCS. In Fig. 2.1 the synchronization block is drawn without any connecting lines, to show that it has a role to play in each and every block of the DCS.

Signal Processing Functions or Transformations :
Following are some of the important signal processing functions or transformations related to digital communication :
1. Formatting and source coding
2. Baseband signaling
3. Bandpass signaling
4. Equalization
5. Channel coding
6. Multiplexing and multiple access
7. Spreading
8. Encryption
9. Synchronization

Chapter 3 : Information Theory and Source Coding

Q.1 What is the entropy of an information source ? Derive the expression for entropy. When is entropy maximum ?
Ans. :
The "entropy" is defined as the average information per message. It is denoted by H and its units are bits/message. The entropy must be as high as possible in order to ensure maximum transfer of information. We will prove that the entropy depends only on the probabilities of the symbols being produced by the source.

Expression for Entropy :
Follow the steps given below to obtain the expression for entropy.

Steps to be followed :
Step 1 : Let there be M different messages m_1, m_2, ..., m_M, with probabilities of occurrence p_1, p_2, ..., p_M.
Step 2 : Let a total of L messages be transmitted. Then there are p_1 L messages of type m_1, p_2 L messages of type m_2, and so on.
Step 3 : Calculate the information conveyed by one message m_1 as I_1 = log2 (1/p_1).
Step 4 : Calculate the total information conveyed by all the m_1 messages as I_1(total) = p_1 L log2 (1/p_1).
Step 5 : Similarly calculate the total information for all the other messages, i.e. I_2(total), I_3(total), ...
Step 6 : Add all these to obtain the total information, I(total) = I_1(total) + I_2(total) + ...
Step 7 : Divide the total information obtained in Step 6 by L to obtain the expression for entropy.

1. Suppose that a transmitter is transmitting M different and independent messages m_1, m_2, m_3, ... with probabilities of occurrence p_1, p_2, p_3, ... respectively. Suppose that during a long transmission a sequence of L messages is generated. If L is very large, we can expect that in the L-message sequence,
   p_1 L messages of m_1 are transmitted,
   p_2 L messages of m_2 are transmitted,
   p_3 L messages of m_3 are transmitted, ...,
   p_M L messages of m_M are transmitted.
2. The information conveyed by the message m_1 is given as I_1 = log2 (1/p_1). However, there are p_1 L such messages, so the information conveyed by the p_1 L messages of m_1 is
   I_1(total) = p_1 L log2 [1/p_1]   ...(1)
   Similarly, the total information conveyed by the p_2 L messages of m_2 is
   I_2(total) = p_2 L log2 [1/p_2]   ...(2)
   Similar expressions can be written for the remaining messages.
3. As we already know, the total information of more than one mutually independent message signal is equal to the sum of the information contents of the individual messages, i.e.
   I(total) = I_1(total) + I_2(total) + I_3(total) + ...   ...(3)
   Substituting the values of I_1(total), I_2(total), ... from Equations (1) and (2),
   I(total) = p_1 L log2 [1/p_1] + p_2 L log2 [1/p_2] + p_3 L log2 [1/p_3] + ...   ...(4)
   I(total) = L [ p_1 log2 (1/p_1) + p_2 log2 (1/p_2) + p_3 log2 (1/p_3) + ... ]   ...(5)
4. The "entropy" is defined as the average information per message. It is represented by the symbol H. Therefore from Equation (5) we can write
   H = I(total) / L = p_1 log2 (1/p_1) + p_2 log2 (1/p_2) + ...   ...(6)
   Entropy : H = Sum over k = 1 to M of p_k log2 (1/p_k)   ...(7)

Maximum Entropy :
Consider a discrete memoryless source (DMS). It is mathematically defined by the alphabet
   S = { s_0, s_1, ..., s_(K-1) }   ...(8)
Let the probabilities of these symbols be p_0, p_1, ..., p_(K-1) respectively, with
   Sum over k = 0 to K-1 of p_k = 1   ...(9)
Then the entropy of the source is bounded as follows :
   0 <= H <= log2 K   ...(10)
where K is the radix, or number of symbols.
1. Entropy H = 0 if p_k = 1 for some k. This is the lowest value of entropy, which corresponds to no uncertainty.
2. Entropy H = log2 K if p_k = 1/K for all k, i.e. if all the symbols in the alphabet are equiprobable. This is the upper bound on entropy, or its maximum possible value, which corresponds to maximum uncertainty.

Q.2 Prove that the entropy of an extremely likely and of an extremely unlikely message is zero.
Ans. :
1. In the case of the "extremely likely" message, there is only one possible message m_1 to be transmitted, so its probability is p_1 = 1. The entropy of this most likely message m_1 is given as
   H = p_1 log2 (1/p_1) = 1 x log2 (1) = 0   ...(1)
2. For an extremely unlikely message m_1, its probability p_1 tends to 0. Since p log2 (1/p) tends to 0 as p tends to 0,
   H = p_1 log2 (1/p_1) = 0   ...(2)
Thus the average information, or entropy, of the most likely and of the most unlikely messages is zero.

Q.3 A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.
Construct the Shannon-Fano code and calculate the code efficiency.
Ans. :
1. The Shannon-Fano code is constructed as shown in Table 3.1.

Table 3.1 : Shannon-Fano codes
   Symbol | Probability | Code word | Length (bits)
   x1     | 0.4         | 0 0       | 2
   x2     | 0.19        | 0 1       | 2
   x3     | 0.16        | 1 0       | 2
   x4     | 0.15        | 1 1 0     | 3
   x5     | 0.1         | 1 1 1     | 3

2. Average information per message (H) :
   H = Sum over k = 1 to 5 of p(x_k) log2 [1/p(x_k)]
     = 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)
   H = 2.15 bits/message

3. Average code word length (L) :
   L = Sum over k = 1 to 5 of p_k x (length of x_k in bits)
     = (0.4 x 2) + (0.19 x 2) + (0.16 x 2) + (0.15 x 3) + (0.1 x 3)
     = 0.8 + 0.38 + 0.32 + 0.45 + 0.3 = 2.25 bits/message

4. Efficiency of the code (eta) :
   eta = (H/L) x 100 % = (2.15/2.25) x 100 % = 95.6 %   ...Ans.

Q.4 Consider five messages given by the probabilities 0.5, 0.25, 0.125, 0.0625, 0.0625. Calculate H. Use the Shannon-Fano algorithm to develop an efficient code and, for that code, calculate the average number of bits/message. Compare with H. Calculate the efficiency and redundancy.
Ans. :
Given : P(x1) = 0.5, P(x2) = 0.25, P(x3) = 0.125, P(x4) = 0.0625, P(x5) = 0.0625

Step 1 : Shannon-Fano code :
   Symbol | Probability | Code word | Length (bits)
   x1     | 0.5         | 0         | 1
   x2     | 0.25        | 1 0       | 2
   x3     | 0.125       | 1 1 0     | 3
   x4     | 0.0625      | 1 1 1 0   | 4
   x5     | 0.0625      | 1 1 1 1   | 4

Step 2 : Calculate the average number of bits/message (L) :
   L = Sum over k = 1 to 5 of p_k x (length of x_k in bits)
     = (0.5 x 1) + (0.25 x 2) + (0.125 x 3) + (0.0625 x 4) + (0.0625 x 4)
   L = 1.875 bits/message

Step 3 : Calculate the average information per message (H) :
   H = Sum over k = 1 to 5 of P(x_k) log2 [1/P(x_k)]
     = 0.5 log2 (1/0.5) + 0.25 log2 (1/0.25) + 0.125 log2 (1/0.125) + 0.0625 log2 (1/0.0625) + 0.0625 log2 (1/0.0625)
     = 0.5 + 0.5 + 0.375 + 0.25 + 0.25
   H = 1.875 bits/message

Step 4 : Calculate the efficiency of the code :
   eta = (H/L) x 100 = (1.875/1.875) x 100 = 100 %   ...Ans.
Since the efficiency is 100 %, the redundancy is R = 1 - eta = 0.

Q.5 Consider the five source symbols (messages) of a discrete memoryless source and their probabilities as shown in Table 3.2(a). Follow Huffman's algorithm to find the code words for each message. Also find the average code word length and the average information per message.

Table 3.2(a)
   Message     | m1  | m2  | m3  | m4  | m5
   Probability | 0.4 | 0.2 | 0.2 | 0.1 | 0.1

Ans. :
(a) To find the code word for each message :
Step 1 : Arrange the given messages in the order of decreasing probabilities, as shown in Fig. 3.1(a).
Step 2 : The two messages having the lowest probabilities are assigned 0 and 1. Here the two messages with the lowest probabilities are m4 and m5, as shown in Fig. 3.1(a).
Step 3 : Now consider these two messages m4 and m5 as combined into a new message (Fig. 3.1(b)), and place the probability of the new combined message (0.1 + 0.1 = 0.2) in the list according to its value in stage II. Place the combined message as high as possible when its probability is equal to that of other messages.
Step 4 : Consider the two messages of lowest probability in stage II of Fig. 3.1(b) and assign 0 and 1 to them. These two messages are combined to form a new message with a probability of (0.2 + 0.2) = 0.4. Place the probability of the combined message according to its value in stage III, as high as possible if other messages have the same probability. This is shown in Fig. 3.1(c).
Step 5 : Follow the same procedure until only two messages remain, and assign 0 and 1 to them. All this is shown in Fig. 3.1(d).

(E-109) Fig. 3.1 : Huffman coding, stages I to IV

Step 6 : How to write the code word for a message :
Consider the dotted path shown in Fig. 3.1(d). To write the code for message m4 this path is to be used. Start from stage IV and track back up to stage I along the dotted path, writing down the code word in terms of 0s and 1s starting from stage IV. Similarly, write the code words for the other messages, as shown in Table 3.3(a).

Table 3.3(a)
   Message     | m1  | m2  | m3  | m4  | m5
   Probability | 0.4 | 0.2 | 0.2 | 0.1 | 0.1
   Code word   | 00  | 10  | 11  | 010 | 011

(b) To find the average code word length :
The average code word length is given as
   L = Sum over k = 1 to 5 of p_k x (length of m_k in bits)
     = (0.4 x 2) + (0.2 x 2) + (0.2 x 2) + (0.1 x 3) + (0.1 x 3) = 2.2 bits/message

(c) To find the entropy of the source :
The entropy of the source is given as
   H = Sum over k = 1 to 5 of p_k log2 (1/p_k)
     = 0.4 log2 (1/0.4) + 0.2 log2 (1/0.2) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1) + 0.1 log2 (1/0.1)
     = 0.52877 + 0.46439 + 0.46439 + 0.33219 + 0.33219
   H = 2.12193 bits/message

Q.6 Write a note on : Lempel-Ziv coding.
Ans. :
The major disadvantage of the Huffman code is that the symbol probabilities must be known, or estimated if they are unknown. In addition, the encoder and the decoder must both know the coding tree. Moreover, when text messages are modelled using Huffman coding, the storage requirements do not allow the code to capture the higher-order relationships between words and phrases, so the efficiency of the code has to be compromised. These practical limitations of the Huffman code can be overcome by using the Lempel-Ziv algorithm, whose advantages are its adaptability and simplicity of implementation.

Principle of the Lempel-Ziv algorithm :

Encoding : The data stream is parsed into segments that are the shortest subsequences not encountered previously. To illustrate the principle, consider an input binary sequence. It is assumed that the binary symbols 0 and 1 have already been stored, in this order, in the code book. Hence we write :
   Subsequences stored : 0, 1
   Data to be parsed : 00010100101   ...(1)

Now start examining the data in (1) from the left and find the shortest subsequence which has not been encountered previously. This subsequence cannot be 0 alone, because 0 has already been encountered; it is 00. So we include 00 as the next entry in the code book and move it from the data to the stored subsequences :
   Subsequences stored : 0, 1, 00
   Data to be parsed : 010100101   ...(2)

The next shortest subsequence not previously stored (again examining from the left) is 01. Hence we write :
   Subsequences stored : 0, 1, 00, 01
   Data to be parsed : 0100101   ...(3)

The next shortest subsequence not previously encountered is 010. Note that it cannot be 01, because 01 has already been stored. Hence we write :
   Subsequences stored : 0, 1, 00, 01, 010
   Data to be parsed : 0101   ...(4)

Similarly we continue until the data stream has been completely parsed. The code book of binary subsequences is then ready, as shown in Fig. 3.2.

   Numerical position : 1 | 2 | 3  | 4  | 5   | 6
   Subsequence        : 0 | 1 | 00 | 01 | 010 | 0101
Fig. 3.2 : Code book of subsequences 0, 1, 00, 01, 010, 0101

The first row of the code book shows the numerical positions of the various subsequences in the code book.

Numerical representation : Now add a third row to Fig. 3.2. This row is called the numerical representation, as shown in Fig. 3.2(a).

   Numerical position       : 1 | 2 | 3  | 4  | 5  | 6
   Subsequence              : 0 | 1 | 00 | 01 | 010 | 0101
   Numerical representation :   |   | 11 | 12 | 41 | 52
Fig. 3.2(a)

The subsequences 0 and 1 were stored originally. So consider the third subsequence, 00 : it is the first subsequence in the data stream, and it is the concatenation of the first stored subsequence (0) with itself, so it is represented by 11 in the numerical-representation row. Similarly, the subsequence 01 is the concatenation of the first and second subsequences, so we enter 12 below it. The remaining subsequences are treated accordingly.

Binary encoded representation : The last (fourth) row, added as shown in Fig. 3.2(b), is the binary encoded representation of each subsequence.

   Numerical position       : 1 | 2 | 3    | 4    | 5    | 6
   Subsequence              : 0 | 1 | 00   | 01   | 010  | 0101
   Numerical representation :   |   | 11   | 12   | 41   | 52
   Binary encoded block     :   |   | 0010 | 0011 | 1000 | 1011
Fig. 3.2(b)

How are the binary encoded blocks obtained ? The last symbol of each subsequence in the second row of the code book is called the innovation symbol. The last bit in each binary encoded block (fourth row) is the innovation symbol of the corresponding subsequence. The remaining bits provide the equivalent binary representation of the "pointer" to the "root subsequence" that matches the subsequence in question except for the innovation symbol.

This can be explained as follows :
1. Consider numerical position 3 in Fig. 3.2(b). The subsequence is 00; its root subsequence is 0 (numerical position 1) and its innovation symbol is the final 0. The pointer is the binary equivalent of 1, i.e. 001, so the binary encoded block is 001 followed by the innovation symbol 0, i.e. 0010.
2. Consider numerical position 5 in Fig. 3.2(b). The subsequence is 010, whose root subsequence is 01 (numerical position 4) and whose innovation symbol is the final 0. The pointer is therefore the binary equivalent of 4, i.e. 100, and the binary encoded block is 1000.
Similarly, the other entries in the fourth row are made. In this way we get the binary encoded version of the original sequence using the LZ algorithm.

Decoder :
The decoding is as simple as the encoding. The steps followed at the time of decoding are :
Step 1 : Take the binary encoded block. For example, consider the binary encoded block in position 5, i.e. 1000.
Step 2 : Use the pointer to identify the root subsequence. Here the pointer is 100, i.e. 4, which corresponds to the 4th subsequence, 01.
Step 3 : Append the innovation symbol to the root subsequence obtained in Step 2. Appending the innovation symbol 0 to the root subsequence 01 gives the subsequence 010, corresponding to position 5.
This is how the process of decoding recovers the original bit sequence from the LZ-encoded signal.

Q.7 Determine the Lempel-Ziv code for the following bit stream :
   010011111001100000101010101100110000
Ans. :
Encoding is accomplished by parsing the source data stream into segments that are the shortest subsequences not encountered previously. The given stream of bits can be parsed into subsequences as shown below :
   0, 1, 00, 11, 111, 001, 10, 000, 01, 010, 101, 011, 0011, 0000
The first two segments are the single symbols 0 and 1; they occupy the first two positions of the code book. These subsequences are encoded as shown in Fig. 3.3 and tabulated in the encoding table that follows.

(E-169) Fig. 3.3 : Principle of encoding
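The parsing and pointer/innovation encoding described in Q.6 can be sketched in a few lines of code. This is a sketch only: the helper names are mine, the dictionary is primed with 0 and 1 as in the Q.6 code book, and the pointer width is a parameter of the example.

```python
# Lempel-Ziv parsing: split the stream into the shortest subsequences
# not seen before. An incomplete trailing phrase is dropped in this sketch.
def lz_parse(stream, primed=("0", "1")):
    seen = list(primed)
    phrase = ""
    for bit in stream:
        phrase += bit
        if phrase not in seen:           # shortest new subsequence found
            seen.append(phrase)
            phrase = ""
    return seen                          # code book in order of discovery

def lz_encode(book, pointer_bits=3):
    """Binary block per entry: pointer to the root subsequence, then the
    innovation symbol (the phrase's last bit). Primed entries are skipped."""
    blocks = []
    for phrase in book[2:]:
        root, innovation = phrase[:-1], phrase[-1]
        ptr = book.index(root) + 1       # 1-based numerical position
        blocks.append(format(ptr, f"0{pointer_bits}b") + innovation)
    return blocks

book = lz_parse("00010100101")
print(book)                  # ['0', '1', '00', '01', '010', '0101']
print(lz_encode(book))       # ['0010', '0011', '1000', '1011']
```

The printed blocks match the fourth row of Fig. 3.2(b).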
Table 3.4 : Encoding table (with 14 positions in the code book a 4-bit pointer is used; the last bit of each block is the innovation symbol)

   Position | Subsequence | Numerical representation | Binary encoded block
      1     | 0           |   -                      |   -
      2     | 1           |   -                      |   -
      3     | 00          |   11                     |   00010
      4     | 11          |   22                     |   00101
      5     | 111         |   42                     |   01001
      6     | 001         |   32                     |   00111
      7     | 10          |   21                     |   00100
      8     | 000         |   31                     |   00110
      9     | 01          |   12                     |   00011
     10     | 010         |   91                     |   10010
     11     | 101         |   72                     |   01111
     12     | 011         |   92                     |   10011
     13     | 0011        |   62                     |   01101
     14     | 0000        |   81                     |   10000

Q.8 Consider a DMS S = (s1, s2, ..., s7) with the following message probabilities :
   Symbol | s1   | s2   | s3   | s4   | s5   | s6   | s7
   P(s)   | 0.40 | 0.25 | 0.15 | 0.10 | 0.05 | 0.03 | 0.02
Encode the source using the Huffman algorithm. Find the average code length and the efficiency.
Ans. :
Huffman coding :
The Huffman code is constructed as shown in Fig. 3.4 (at each stage the two lowest probabilities are combined).

Fig. 3.4 : Huffman code

Table 3.5 shows the codewords for the symbols :

Table 3.5
   Symbol          | s1  | s2  | s3  | s4   | s5    | s6     | s7
   Probability     | 0.40| 0.25| 0.15| 0.10 | 0.05  | 0.03   | 0.02
   Codeword        | 1   | 01  | 001 | 0001 | 00001 | 000000 | 000001
   Codeword length | 1   | 2   | 3   | 4    | 5     | 6      | 6

Average codeword length (L) :
   L = Sum over k = 1 to 7 of P_k x (length of s_k in bits)
     = (0.4 x 1) + (0.25 x 2) + (0.15 x 3) + (0.1 x 4) + (0.05 x 5) + (0.03 x 6) + (0.02 x 6)
     = 0.4 + 0.5 + 0.45 + 0.4 + 0.25 + 0.18 + 0.12 = 2.3 bits/message

Source entropy (H) :
   H = Sum over k = 1 to 7 of P_k log2 (1/P_k)
     = 0.4 log2 (1/0.4) + 0.25 log2 (1/0.25) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1) + 0.05 log2 (1/0.05) + 0.03 log2 (1/0.03) + 0.02 log2 (1/0.02)
     = 0.528 + 0.5 + 0.410 + 0.332 + 0.216 + 0.152 + 0.112
   H = 2.25 bits/symbol

Code efficiency :
   eta = (H/L) x 100 % = (2.25/2.3) x 100 % = 97.8 %   ...Ans.
Redundancy :
   R = 1 - eta = 0.022   ...Ans.

Q.9 A discrete memoryless source produces five symbols with the probabilities shown :
   Symbol      | m1  | m2   | m3   | m4   | m5
   Probability | 0.4 | 0.19 | 0.16 | 0.15 | 0.1
Construct a Shannon-Fano code for the source and calculate the code efficiency and redundancy of the code. Repeat the same for the Huffman source coding technique.
Ans. :
Part I : For the Shannon-Fano code, refer Q.3 (eta = 95.6 %, so the redundancy is R = 1 - eta = 0.044).
Part II : Huffman coding :

(E-164) Fig. 3.5 : Huffman coding of m1 ... m5

The codeword for symbol m5 is 100 (follow the dotted line in Fig. 3.5). Similarly the codewords for the other symbols can be obtained; they are listed in Table 3.6.

Table 3.6
   Symbol      | m1  | m2   | m3   | m4   | m5
   Probability | 0.4 | 0.19 | 0.16 | 0.15 | 0.1
   Codeword    | 0   | 111  | 110  | 101  | 100
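The Huffman constructions used in Q.5, Q.8 and Q.9 can be sketched with a binary heap. This is a sketch of the procedure, not the book's tabular method: tie-breaking (which equal-probability groups merge first) can change the individual code words, but not the code word lengths or the average length L.

```python
# Huffman coding via repeated merging of the two lowest-probability groups.
import heapq
from itertools import count
from math import log2

def huffman_lengths(probs):
    """Code word length of each symbol, in the order probs is given."""
    tick = count()                        # tie-breaker: heapq never compares lists
    heap = [(p, next(tick), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two lowest-probability groups
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # every merge adds one bit to members
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tick), s1 + s2))
    return lengths

def entropy(probs):
    return sum(p * log2(1.0 / p) for p in probs)

# Q.9 : m1..m5 with probabilities 0.4, 0.19, 0.16, 0.15, 0.1
p = [0.4, 0.19, 0.16, 0.15, 0.1]
lens = huffman_lengths(p)                 # [1, 3, 3, 3, 3], as in Table 3.6
L = sum(pi * n for pi, n in zip(p, lens))
print(lens, round(L, 2), round(entropy(p), 2))   # lengths, L = 2.2, H = 2.15
```

The same call with the Q.5 probabilities [0.4, 0.2, 0.2, 0.1, 0.1] gives the lengths [2, 2, 2, 3, 3] and L = 2.2, matching Table 3.3(a).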
1. Average codeword length (L) :
   L = Sum over k = 1 to 5 of p_k x (length of the k-th code word)
     = (0.4 x 1) + (0.19 x 3) + (0.16 x 3) + (0.15 x 3) + (0.1 x 3)
   L = 2.2 bits/symbol

2. Source entropy (H) :
   H = Sum over k = 1 to 5 of p_k log2 (1/p_k)
     = 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)
     = 0.5288 + 0.4552 + 0.4230 + 0.4105 + 0.3322
   H = 2.15 bits/symbol

3. Code efficiency :
   eta = (H/L) x 100 % = (2.15/2.2) x 100 % = 97.7 %   ...Ans.

4. Redundancy :
   R = 1 - eta = 1 - 0.977 = 0.023   ...Ans.

Q.10 State and explain Shannon's theorem on channel capacity.
Ans. :
Statement :
The statement of Shannon's theorem is : given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel of capacity C, then if
   R <= C
there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.

Meaning :
1. The theorem talks about the rate of transmission of information (R) over a communication channel.
2. The channel capacity C is a rate of transmission in bits/sec. According to the theorem, if R <= C then it is possible to use a coding technique and make an error-free transmission even in the presence of noise.
3. There is also a negative statement associated with Shannon's theorem : if R > C, then transmission with arbitrarily small probability of error is not possible.

Critical rate :
Let the source entropy be H, with the source producing one symbol per T_s seconds. Let the channel capacity be C, with the channel used once every T_c seconds. The condition H/T_s <= C/T_c then expresses the theorem in terms of these rates; when it holds with equality, the system is said to be signalling at the critical rate.

Channel coding theorem :
Due to the presence of noise in a communication channel, errors are introduced into the digital signals. If the channel is too noisy, the error probability is very high, while a very low probability of error is required for many applications. It is possible to raise the level of performance by using channel coding. The goal of designing a channel coding scheme is to increase the resistance of the digital communication system to the channel noise.
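The theorem above refers to a channel capacity C without computing one. As a standard illustration (not derived in this text), the capacity of a binary symmetric channel with crossover probability p is C = 1 - Hb(p) bits per channel use, where Hb is the binary entropy function:

```python
# Capacity of a binary symmetric channel: C = 1 - Hb(p).
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return p * log2(1.0 / p) + (1.0 - p) * log2(1.0 / (1.0 - p))

def bsc_capacity(p):
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0 : noiseless channel, one full bit per use
print(bsc_capacity(0.5))   # 0.0 : pure noise, no information gets through
```

The two extreme cases show why R <= C is the binding constraint: as the channel gets noisier, the achievable rate shrinks toward zero.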
Channel coding is a process of mapping the incoming data sequence into another sequence applied to the channel, and inverse-mapping the sequence at the output of the channel into an output data sequence, in such a way that the effect of noise is minimized. This process is illustrated in Fig. 3.6.

(E-14) Fig. 3.6 : Digital communication system showing channel coding

The process of mapping takes place at the transmitter and is performed by the channel encoder. The inverse mapping takes place at the receiver and is performed by the channel decoder, as shown in Fig. 3.6. According to the channel coding theorem, reliable transmission is then possible provided R <= C.

Chapter 4 : Linear Block Codes

Q.9 The parity check matrix for a (7, 3) code is given below :

        [ 0 1 1 1 0 0 0 ]
   H =  [ 1 0 1 0 1 0 0 ]
        [ 1 1 0 0 0 1 0 ]
        [ 1 1 1 0 0 0 1 ]

Construct the syndrome table for single-bit error patterns. Using the syndrome, find the error pattern and the code word for each of the following received vectors :
   r1 = 0011101,  r2 = 1101110,  r3 = 0111011
If the information vector is (011), determine its corresponding code word.

Ans. :
Given : the parity check matrix H above.
521) Table 43(b) : Syndromes for various error vectors or Row how ot now oth ovat n rie To0e [a oe bor x © ‘Table 4.3() shows that the syndrome vectors are same asthe rows ofthe transpose matrix". ‘Step 4: Compute the code words 1. Received vectors r, [0011 101] ‘The syndrome for this code word is given by s= YH oid 1o1l 1101 = [oo io1j] 1000 0100 0010 0001 S = [0,0,0,0) ‘A syndrome with ll zeros represents zero errors. 1 i corect code word. 2. For'the received code word r, = 1101 110 8 = [1101110] S = [0,0,1,0) So the corresponding eor pattem is obtained from Table 4.3(b) as B = [0000010] "The correct code word is obtained as, X, = m@Es=[1101 110] @ (0000010) = XM = [1101100] Ans. 3. For the received codeword r,= 0111011 ont 1o1l 1101 Syndromes = forti0iyj} 1000 0100 0010 0001 8 = [1101] Hence the corresponding error pattem obtained from Table 43 is B = [0010000] “ence the correct codeword is obtained as follows : X, = %,@B=[0111011] @[ 0010000] Grn ‘Communication (E&Te - MU) 2X = [0101011] Ans. Step 5: Code word for information vector (011): O11 Given parity check matric =] 1° 1 oy 110 iii 1 0 0 ° = sem wn t8 1] [0,1,1,0] scans a Be Complete code word e198) eee 10 A generator matrix of (6, 3) linear block code is given by : 3. 4 Ans. Given: @® = 63 n= 6ke3 4. Allcode words : 1 GO = Uh/Plen 100 erin] o 1 0 . oot ated 110 O11 [As the size of message block is k= 3, There are 8 possible message blocks = pez (0, 0, 0), (0,01), (041.0), (0. 1, 1, (1s 0) (1, 0, 1 (Is Ts OD, 1D ‘The prt bits are obtsined as follows Tan [bby by] = twam[! : °| (1) “The solution of Equation (1 is given by by = mg@m, b = mOm, Om, by = m@m 0 1{1fofololi{r|i{ojo ela je le la le |p r{rfrfolrfofr{rfrjolrjol 4 ‘Minimum distance dy: an = 3 Error detection and correction capability: ‘Number of errors that can be detected : ay 2 841 3eetl 152 lence at the most two erors can be detected. 
[Number of erors that can be corrected Sr dg BHD 232 MH ntsl “Therefore atthe most two erors ean be corrected “Message bit sequence : ‘The received sequence Y = 101101 ‘Therefore the syndrome is given by s = vit -offom the above table s = (1ontot] S = [1600016000,18081 9080, 16081608081) asrnnimn | Communication (E&T¢ - MU) 8 = 10.01) This is same as 6* row of the transpose matrix H", which indicates that there isan error in the 6 bt of received signal ie. ¥ = 101100 «, The correct code word X = 101100. ‘The correct code word is obtained by replacing the 6* bit by a0, Q.11 Consider a (7, 4) code whose generator matrix is 111000 e=| 1010100 1100140 100001 1. Find all the codewords of the code. Find H, the parity-check matrix of the code. 3. Compute the syndrome for the received vector 1101101. Is this a valid code vector? 4. Whatiis the error-correcting capability of the code? What is the error-detecting capability of the code? Ans. 4. Find all the codewords : Given: n=7,k=4 e All the code words can be obtained by equation 1111000 g-|lo1or00 o110010 1100001 G = [PA hee ‘Comparing this withthe given generator matrix to get, Relation between party vectors B, message veetor and the DMD) | QDERD) op) 4 RD D'@R\(D)= Q\(D) (D +D+1) @ GD) GD) _~ 28 GD) orn ‘Communication (E&Tc - MU) ‘Obtain the value of Q (D ). The quotient Qj (D ) can be ‘obtained by dividing D°~ by G ( D ). Therefore to obtain Q\(D), divide Dby (D+D+ 1. ‘The division takes place as follows 01+ D4 1 Queen! palma O{0) oteDer ofepts Mod-2+ @ “@ “@ stone oe of oF ot +00" 0% 0 Moi-2+6 "@ @ © sestlons os oF > ate een ee 6 tae Deone1=— Remainder retynomil R40) Here the quotient polynomial -«Q,(D) = D?+D+1 andthe remainder polynomial -R,(D) = D?+0D+1 Substutng these values into Equation (2) we get, D'@R,(D) = (D'+D+1) (DP+D+1) *+D'+D'+D'+D'+D+D'+D+1 =D'+0n°+(11)D'+(161)D +D°+(181)D+1 =*+0p'+0D'+0D'+D'+0D+1 1" Row polynomial => D‘+ 0D‘+ 0D'+0D"+D"+0D+1 =. 
I" Row elements => 1000101 Using the same procedure, we can obtain the polynomials for the other rows ofthe generator matrix as follows 2" Row polynomial => D’+D'+D+1 3" Row polynomial = D'+D'+D 4" Row polynomial = D’+D+1 ‘These polynomials can be transformed into the generator matrix as follows: 9, 9, 9, 2, Fow2-> Row 9+ Row 4 | 0 ae Row [1 ° as 5 peal be Neng Paced 4x7 ens) ‘This is the required generator matrix. To obtain the parity check matrix [H] : ‘The party check matrix is given H = [PY x3) ern pe-22 The transpose matrix P* is given by interchanging the rows ‘and columns ofthe P matrix. 1 110 =foiid 110 td3e4 Hence the party check matrix is given by, 3x7 1. |code word] sg [arty] [No clear division] potynomiat Ibeween the message and| parity bis. 2, |Codeword XD) = [DY *M OX O)=M(D)- GD) [polynomial SBD) 3 [Complexity [Less More of builing kine encoder & [Decoding oes IMore eomplexity . [Computation|Easy as only the|Complex as mesage bis) lot codeword [party bts need to belae scrambled with pri fomputed ts. © [Finding telEasy [piticat laity check nate Q.6 Write short note on : CRC codes. Ans. This is a type of polynomial code in which a bit string is ‘epresented inthe form of polynomials with coefficients of 0 and 1 ‘only. Polynomial arithmetic uses a modulo-2 arithmetic i.e. ‘Addition and subtraction are identical to EXOR. For CRC code the sender and receiver should agree upon a generator polynomial G(x). A codeword can be generated for a given dataword (message) polynomial M(x) with the help of fong division. ‘This technique is more powerful than the parity check and checksum error detection CRC works on the principle of binary division. A sequence of redundant bits called CRC or CRC remainder is appended at the end of the message. We will eal this word as appended message word. The appended word thus obtained becomes exactly divisible by the generator word corresponding to G (x). The sender appends the CRC tothe message word to form a codeword. 
At the receiver, this codeword is divided by the same generator word, which corresponds to G(x). There is no error if the remainder of this division is zero, but a non-zero remainder indicates the presence of errors in the received codeword. Such an erroneous codeword is then rejected.

CRC Encoder and Decoder :

Fig. 5.2 shows the block diagrams of the CRC encoder and decoder.

Encoder :

The encoder shown in Fig. 5.2 has a 4-bit data word (k = 4) and a 7-bit codeword (n = 7) with three parity check bits. The data bits are augmented by appending (n - k) = 3 zeros to the R.H.S. of the word. This 7-bit resultant word is applied to the generator and acts as the dividend. The generator uses a 4-bit (n - k + 1) divisor, which is predefined and used by the encoder as well as the decoder, to divide the augmented data word. This is a modulo-2 division. The quotient of this division is discarded and the 3-bit remainder is appended to the data word to create the 7-bit codeword, as shown in Fig. 5.2.

Decoder :

A possibly corrupted codeword is received by the decoder. All 7 bits of the received codeword are applied to the checker, along with the 4-bit divisor which is common to the encoder and decoder. The checker is a replica of the generator. A 3-bit syndrome, which is actually the 3-bit remainder of the division, is produced at the output of the checker and applied to the decision logic block. If all three syndrome bits are zero (S2 S1 S0 = 000 represents the no-error situation), the four leftmost bits of the received codeword are accepted. But if any of the syndrome bits is non-zero, the 4 data bits are discarded, because a non-zero syndrome indicates the presence of an error in the received codeword.

Fig. 5.2 : CRC generator and checker

Q.7 Draw the encoder for a (7, 4) cyclic Hamming code generated by the polynomial G(D) = D^3 + 0D^2 + D + 1.

Ans. :

The generator polynomial is given by :

G(D) = D^3 + 0D^2 + D + 1    ...(1)

The generator polynomial of an (n, k) cyclic code is
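The generator/checker behaviour described for the CRC encoder and decoder can be sketched in a few lines. Polynomials are stored as integers (bit k = coefficient of x^k); the 4-bit divisor 1011, i.e. g(x) = x^3 + x + 1, is an assumption chosen for illustration:

```python
# Hedged sketch of a CRC generator and checker (divisor 1011 assumed).
def gf2_mod(dividend, divisor):
    """Remainder of mod-2 polynomial division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def crc_encode(data, gen):
    r = gen.bit_length() - 1        # number of CRC bits (n - k)
    rem = gf2_mod(data << r, gen)   # augment with r zeros, then divide
    return (data << r) | rem        # append remainder -> codeword

cw = crc_encode(0b1010, 0b1011)     # dataword 1010 -> codeword 1010011
assert gf2_mod(cw, 0b1011) == 0     # checker: zero syndrome, accept
assert gf2_mod(cw ^ 0b0000100, 0b1011) != 0   # corrupted bit: reject
```

The checker at the receiver runs the same division; a zero remainder accepts the data bits and a non-zero remainder rejects them, exactly as the decision logic block does.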
: ~@) Fora (7,4) cyclic Hamming eode, 7-4-1 GO=1+ E gpiep*=1+g,D+—D'+D? G@ = D’+g:0'+4,D+1 0) Compasing Equations (1) and 3) we get <1 md gn0 of) ‘Therefore the encoder for a (7,4) Hamming code is a8 shown Q.8 For the same encoder of the previous Q. 7, obtain the code word for a message input of m= 1010. ‘Ans. : Initially the output ofall the lip-lops is assumed to be equal to zero. Refer Table 5.1 to understand the generation of code words, Digital Communication (E&Ts - MU ‘aren Table 5.1 Froza, [eri =rrosa] Fr2= 7 FO. ° 1 1 0: i ‘When the output switch is in postion 1, the codeword is ‘equal to the message bit for first 4 clock cycles. For the next 3 clock eyeles, the output switch is in position 2 andthe eade bits equal to the FF contents. wie) = Codeword = (CST OTST] Am Mowlage | Pefty Verification : G@) = D+D+1 nok = 7-453 M(@) = D'+op+D+0 p'*M(p) = D'@’+D)=D'+D" ote oto +1) Oe ofeote ot ene) o O'eoet Orr FO) - Codeword polynomial X (D) is X(@) = [D**M@)]@RO) p'+op'+D'+oD'+0D' +00 +9)9(D+1] = p’+0p'+D'+0p'+0D"+D+1 1010 PONTE Hence verifies TE cree ee Gs Construct a systematic (7, 4) cyclic code using the generator polynomial G (x) = x° + x + 4. What are the error correcting capabilities of this code ? Construct the decoding table and for the received code word 1 1 01 1.00, determine the transmitted 5 Codeword data word. Ans. : ‘Steps to be followed : Step 11+ Obtain the generator matrix. ‘Step2: Then obtain the code vectors, X = MG Step 3: Calculated, and obtain the error correcting capability ‘Step4: Get the transpose matrix P™ from P matrix, ‘Step 5: Obtain the parity check matrix H™= [P ‘Step 6: Obtain the transpose matrix HI. ‘Step 7: From H prepare the decoding table, ‘Step: Decode the received code word Y= 1101100 From the given data itis clear that m = 7 and k.™ 4 for this code. 
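A hedged software model of the shift-register encoder above for g(D) = D^3 + D + 1 (so g1 = 1 and g2 = 0): the register update per clock mirrors Table 5.1, the message is fed in MSB first, and after k clocks the register contents are the parity bits. The exact register labelling is my reconstruction of the figure, not taken verbatim from it.

```python
# Systematic shift-register encoder for g(x) = x^3 + x + 1 (assumed
# register wiring: s2' = s1, s1' = s0 XOR fb, s0' = fb, since g2 = 0).
def cyclic_parity(msg_bits):
    s0 = s1 = s2 = 0
    for b in msg_bits:                # message fed MSB first
        fb = b ^ s2                   # feedback = input XOR last stage
        s2, s1, s0 = s1, s0 ^ fb, fb  # g2 = 0: s2 gets no feedback term
    return [s2, s1, s0]               # remainder coefficients, x^2 first

assert cyclic_parity([1, 0, 1, 0]) == [0, 1, 1]   # codeword 1010 011
assert cyclic_parity([0, 1, 0, 1]) == [1, 0, 0]   # codeword 0101 100
```

The first assertion reproduces the codeword 1010011 verified by long division above; the register ends up holding exactly the remainder of D^3 M(D) divided by G(D).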
Step 1: The generator matrix : “The generator matrix forthe given generator polynomial is: 1oo0:101 o1oo:1il G=looro:it0 a o001:011 Step 2: Obtain code vectors : “The generator matrix of step 1 can be used to calculate the code vectors, because x = MG a Where M = Message vector. For example if the message vector is M= 1 0 1 0 then the corresponding codeword can be obtained as follows. 1000:101 o1oo:iit X= LOI goror110 oo01:011 X= 1010044 ed am ™ 8 ‘Similarly we can obtain the code words for other message vectors. ‘Step 3 : Calculate dy, ‘According to code words the minimum distance 15 dyn = 3. ‘Therefore this code will be able to detect upto 2 errors and. correct upto only 1 errors. Step 4: Transpose matrix PY WeknowthatG = (k:Pava-n] 1.0000 or O. 1e0 <0. et But = 0 10 10 ° at tel blag es80) 101 111 PA lire Cheer sri ¥ Digital Communication (E&Te- MU) Decode the input word ‘The transpose of this matrix can be obtained by interchanging | SteP 8 Said aso ‘he input word Y= 1101100 rio The received word can be expresied in the polynoal P = [ei] eas | 1101 dse0 Vuuren Step: Obtain the party check matrix H: “The syndrome vectors given by Yo) ‘The parity check matrix is given by Sq = Reminder 2] : perform the division as follow 5 el = Ft uno ok Rigen YQ) = vextor'ee + +00+0 u-[oriio10 ee 1101:001 and G(x) = x 40x t+ : ‘The division aks place a follows: Step 6: Obtain th transpose mats: pure ‘We can obtain the transpose matrix H” by interchanging the 40x x41 ee trot ee x + OK40 pealaeeteet! vate wea? 33 * 5 2adions “ ho1 a Rote bin rote tat z Noduo +6 @ @ @ bee Ogata 2 adadtions a 100 ae 840+ O10 pao xeon tax Poa Moduo-+@ @ @ © 2 edations Step: Construct the decoding table: we 4x40 OC eK HT ‘We can prepare the decoding table from the transpose ofthe eas eee party check matrix H' because each row of H represents Se ORS syndrome and unigue rr pater. Remainder em ‘Table 5.2 shows the error patterns and syndrome vectors. 
A ree ai s = oi] "The nonzero syndrome indicates that there exists an error in ‘the received code word. The decoding Table 5.3 indicates thatthe ‘error vector comesponding 10 the syndrome S = 1 0 1 ‘Table 5.2 : Decoding table i of o| 0 Jolololojojolol} is giventy: 2 |Fintrowor” [1] 0 | 1 [1 Jolololololo Beer riauh 3 |Secondrow oft” |1| 1 | 1 |0{1]0]0]0]0[ 0]! therefore the corrected code wor is given by 4 |mirdoworn™ [1] 1 | 0 [ofolifolololo xX = YOE 5 |Fouthrowot” |o| 1 | 1 |o/o}o{1/o/o/o = [1101100}@[1000000) Fahroworh’ [1] 0 | 0 [ofolololijolo eee, as 0 lolololololi{o|| & 10 Fora (7, 4) cyclic code, the generating polynomial g(x) = 1 +x + x. Find the code word if data is seventhrow ort | 0| 0 | 1 |ofofofolojojs 4. 0011 2 0100. Show how cyclic code is decoded to get data word for previous case. 6 7 |sixthrowofH™ | 0 | 1 8 ‘This is the required decoding table. Grn EI ¥ Digital Communication (E&Te = MU) pe-26 Tk= 4, g@)ax txt] 011, M,= 0100 ‘Systematic codeword generation : 1. Mj=0011 2M @)=x+1 M,@) = x (x+1)= Divide x*"* M, (x) by (9) a8 follows Sexes Remainder Rx) = x+ 1 The codeword polynomial is given by, X@) = TEM, (18 RO = xtatexel = Ox'+ 0x" txt tx? 40x +x +1 eimg Codowordx = [FT TOTT ae) ® ie 2 M,=0100 MQ) = 2 XM 0) = xxtax’ Divide x°“* M, (2) by g(x) as follows : ein, Saxet Remainder x)= x+1 ‘The codeword potynomial is given by, X@) = pM, @I]OR@ = Sextt = ox tx' t0x¢+ 0x 40x +241 ‘Codeword x = [0 100 FO cease) = me 8, Ans, ‘14 Design encoder for an (8, §) cyclic code with generator g (x) = 1 +x +x" + x’, Use this encoder to find the codeword for the message (10101) in systematic form. ‘Ans. : 8G) = Lextx tert ixticeie = Ltgxtextx’ 28 "beet “ence the encoder is as shown in Fig. 
$4 100 Fig, 5.4 4, = FRR@d, FRO = 4, FF2=¢, FFI = Fred, FR = FFI@d, ‘Message = 10101 2 Initially the output of all the flipflops is assumed to be equal to ero, Refer Table 5.3 to understand generation of codewords, (4939) Table $.3 : Encoder operation 44-40 E ‘= FFO+ dj] PRR FFT eas © t[iroeGr ft ofost=t 1 ifieoet | 4 Oferta | 4 ijtizo | o 1 Parity bts = 014 Codewordx = [7010 OT] a M 8 Digital Communication (E&Te Verification : Give M = 10101 M@) = xt+¢+1 x™'M@ = dated patdee Divide x'-* M (x by g (9) as follows en totes Pavaxet Teter Dartexet mainder Ris) = +1 (Codeword polynomial X (x) = [x""*M (x)] ®R&) a XQan tt txt] Caloneigmaly, e190) + Codowordx = [COTO TOT]. verted Eo Mw 8 ‘@.12 Explain syndrome decoding for cyclic codes. [a Ans. ‘Syndrome decoding : ‘When a code word X is transmitted over # noisy channel, cerors are likely to get introduced into it. Thus the received code word Y is going to be different from X. For a linear block code, the fitst step in decoding i to calculate the syndrome for the received code word, Ifthe syndrome is zero, then it indicates that thee are ‘no transmission errors in the received code word. But if the syndrome is non-zero, then received code word contains transmission erors which requires correction. Incase ofa cyclic code, inthe systematic form, the syndrome can be calculated easily. Let the reosived code word be a polynomial of degree (a1) o less. Let it be given by, YD)= Yor, D)4 4,1 D () D027 Now divide y (D) by the generator polynomial @ (D). Let Q @) represent the quotient polynomial and R.(D) be the remainder polynomial. yO) RO) am 7 2>*cm @), y(D) = Q(D)-G(D)+R(D) -) “The remainder R (D) is a polynomial with degree (n —k—1) ‘or less. Its called as the “syndrome polynomial”. The coefficients, cof the syndrome polynomial will mske up the (n ~ k) by 1 syndrome S. 
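The syndrome computation just defined (the remainder of y(D) divided by G(D)) can be sketched directly. For the (7, 4) code with g(x) = x^3 + x + 1 every single-bit error pattern has a distinct syndrome, so a 7-entry table locates and corrects the error:

```python
# Hedged sketch: syndrome decoding of a (7,4) cyclic code,
# g(x) = x^3 + x + 1; words are integers, MSB = first code bit.
def gf2_mod(dividend, divisor):
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011
# decoding table: syndrome of each single-bit error -> error pattern
table = {gf2_mod(1 << i, g): 1 << i for i in range(7)}

def decode(y):
    s = gf2_mod(y, g)
    return y if s == 0 else y ^ table[s]   # correct the single error

x = 0b1010011                      # a valid codeword (message 1010)
assert gf2_mod(x, g) == 0          # zero syndrome: no transmission error
assert decode(x ^ (1 << 5)) == x   # non-zero syndrome: error corrected
```

A zero syndrome confirms an error-free word; a non-zero syndrome indexes the table and flips the erroneous bit, which is the software analogue of the syndrome-calculator-plus-decision-logic decoder.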
y@) ‘Thus syndrome polynomial $ (D)= Remainder of -G (py ‘When the syndrome polynomial S (D) is non-zero, the presence oferors in the received code word is confirmed. @13 A (7, 4) cyclic code Is generated using the polynomial x’ +x +4. 4. What would be the generated codeword for the data sequence 1000 and 11007 2. Draw the circuit diagram to generate this code and show how parity bits are generated for the data sequence 1000. 3, Draw the clroult for syndrome calculator and obtain the syndrome for the received ‘code word 1000100, 4. Draw the block diagram of cyclic code decoder. Ans. : Given: (7,4) cyclic code, polynomial =x? +x+1 Data sequence, M, = 1000 and M, = 1100 1. Generate the code words: “The code vectors in systematic form can be obtained as follows : Mo where M = Message or data matrix = Generator matrix For generator matrix, please refer Q.9. For M, = [m,m,m,m]=[1000] x= MG 1000:101) a o100:111 0001:011 .X, = (1000:101) Digital Communication (E&Tc - MU) po-28 end word = [m,m,m,my]=[11 00}, = Ma ° 1 ° 0 X, = [1100:010) ‘This is a required code word. 2, Draw the encoder : ‘em Fig. 54(a) : Encoder for the given eyclic code 3. Obtain the code word : Refer Table 5.4 to understand the process of codeword generation, Let M= 1100 (E19 Table 8.4 : Codeword generation a= 419) =a [P= [Fr2= FFA] = o 1e0=1 7e0=4 O@i=1 Paty bits Parity bits=0 1 0 CodeworsX = 1100 010 aS ein Pay Ans, 4. Syndrome calculator: M0? 1, The given generator polynomial is G(X) = X*+0x+xX+1 Al) ‘The general frm of generator polynomial sas follows G(X) = Wty Xt +H X41 @ 2. Comparing Equations (1) and (2) to get, B= wy -@) ‘Therefore the required syndrome calculator is shown in Fig. 54(, int =O © aa) a 4 baad (eau Fg $.4(0: Syndrome caesar forthe (7,4 yc ‘code generated by the polynomial G(D)=1+D+D" 5. Cyelie code decoder : ‘conscted Orato Fiecaved’ Soa year vector put (©2m) Fig. £.4(¢) : General form of a decoder for eyelic code {Q. 
14 The generator polynomial for a (7, 4) cyclic code Isgix)=t4x4x° (a) Draw the block diagrams of an encoder and ‘syndromes calculator for this code. (b) Find the code polynomial for message vector (0101). (c) Assume that the first bit of the code vector for the message vector in (b) suffers transmission error. Find syndrome at the receiver. Ans. (A) Encoder : ‘The generator polynomial is given by G@) = D'+0D'+D+1 w The generator polynomial of an (n, Kk) cyclic code is expressed as, a-k-1 G@) = 1+ E gpi+p"* @) @GrmInn ¥ Digital Communication (E8Te - MU) Fora (7, 4) cyclic Hamming code, n=7 and k= 4 7-4-1 Go) = 1+ FE gpien’ 1+g,D+2,D?+D" 2 OD) = Di+gDi+g,D+1 (Comparing Equstion (1) and (3) we et = Land g=0 AA) Therefore the encoder for a (7 4) Hamming code i as shown in Fig. 5.5(a). (20 Fig. 5.5(2): Encoder fora eyelic hamming code ‘The piven generator polynomial is G@) = D'+op'+D+1 The general form of generator polynomial is as follows : G@) = D'+g,D+g,D+1 © Comparing Equations (5) and (6) we get, = %8 zu) ‘Therefore the required syndrome calculator is as shown in Fig. 550). ) (a8) Fig. £5(b) : Syndrome ealeulator for the (7,4) eyelic code generated by the polynomial G (4)=1+D+D" (8) Find the codeword : Given: M=0101 2 M@) = Ox 4x +Ox+1=x41 8G) = txt MQ) = x@@+p=x tx arm Divide x*°* M (x) by @(8) as follows 1927) ‘Codeword polynomial is givea by, emaunder X@) = PY FM(@B Wat +? = Ox x5 + Ox ex) +x? 0x +0 eivay : CodonordX = 0104 100 aye, Mee. Ans, (C) Calculate the syndrome (S) : 1. The output switch of Fig, 5.5(b) will be initially in position 1. It will remain in this postion until. all the 7 bits of the received signal Y are shifted into register. ‘After that the output switch is switched to position 2. The clock pulles ae then applic to the shift eps o ouput the syndrome vector S, ‘Table 5.5 explain the proces of syndrome generation forthe received code word &-1929 Table 55 : Calculation of syndrome came oan S9=¥@52 8; = 5905 8=5) 2. 
0 2 2 + 0 1 x 2 Syndrome $= (011) Ans Confirmation : inder of 22 Remainder of Y) Syndrome $ (x) = ‘The division takes place as follows Received codeword Y = 011000 1 Y¥@) = xftlex 8@) = etx+1 WH digta! Communication (E8Te- MU) Feedback sos ‘The degree of remainder polynomial R(x) will be (n-k-1)= 74-192 . RO = txt S@) = O¢+x+1 “, Syndrome = 011 Confirmed (0) Calculate syndrome : Fcoived signal ¥ et @ 2 YQ) = tex extl 110001 em, sone waxed) Cetexet davaxexet Saved axtt +x <— Remainder R(x) . S@=K+x -.S=110 Ans. 18 Sheich the encoder and syndrome calculator fr" ‘the generator polynomial g (x) = 1 4x8 4x and hua the eyndrome forthe recalved codeword 0, ATT a Part: To draw the encoder 1. Thereeived nde won has 7 bis hence n= 7 andthe depee of pea jlyomil 3 ens =K=3, 00K 4. Thus Breccia 0 edt aca 2. The grmnfor pobomil of an (0 #) oc code Is ae a-k-1 z eG) it gait ey 5) 14D gate 8G) = 1tgxtex te ZO) ‘The given generator polynomial i, B09) = 1+0ntx te on) 3. Comparing Equations (1) and (2) we get, a = Qe Hence the encoder is as shown in Fig. 5.6(0. Coto tbe rane en Fig. 56(a): Encoder Part: Syndrome calculator 1... The given generator polynomial i BQ) = itty g(x) = +e +Oxt1 2,The genera form of generator polynomial is gp) ~ e+ xt8xt1 ‘Comparing Equations (3) and (4) we get, a = 0mA1 “The syndrome calulator is shown in Fg, 5.60) -@) @) (e2n6 Fig. £.6(b) : Syndrome calculator Parti: Calculation of syndrome 1. The output switch of Fig. 5.6(0) will be intially in position 1 ‘until all the 7 bits ofthe received signal Y are shifted into the register. 2, After that, the output switch is shifted to position 2, Clock pulses are then applied to the shift register to output the syndrome vector S. ‘Table 5.6 explains the process of syndrome generation, (es) Table 56 T 2 3 4 3 6 7_ [11s Cade word LSB to MSE Liar ‘Thus the required syndrome. 8 = 100 ARS. Garni tal Communication (E&Tc - MU) Q. 16 The generator polynomial for a (7, 4) cycle code is Oi) 4xt 4x? 1. 
Draw the block diagram of encoder and ‘syndrome calculator 2. Find the code polynomial of 0110, Ans. Part: To draw the encoder : |, The received code word has 7 bits hence n= 7 and the degree of generator polynomial is 3 hence nk = 3, so k= 4, Thus the given code is (7,4) cyelic code. 2. The generator polynomial of an (a, k) cyclic code is expressed as, n-k-1 e@ = axe gO) 1+ E grits int 8) = ltgxter ty ly ‘The given generator polynomial is, BO) = 1t0xtx te 0) 3. Comparing Equations (1) and (2) we get, B= Oat “ence the encoder is as shown in Fig. 5.7(a) 4 ©1219 Fig. 5.7(@) : Encoder Part Il: To obtain the codeword for message (0 4 1.0): Initially the output of all the Mip-lops is assumed to be equal to zero. 4. ForM=0410 Given: M=0110 Dest (196 Table 5.72) : Encoder operation SOF FPO = 5 ° fre treo Ofosi=+ Thus tho pasty bite = 044 cosmo = B77 ET ee Moomoe Pay 2. ForM=0100 0100 (©1944) Table 5.1(b): Encoder operation Fea roa] Fri arr a] Fra= Fria] ° ° ° ° ° ° 1 ° Message Party ‘ome art il: Syndrome calculator : 1. The given generator polynomial is, BO) = 1+xt4x 8G) = Ptx2+0x+1 .@) 2. The general form of generator polynomial i, 8@) = tere xed ‘Comparing Equation (3) and (4) we get, B= 0, ‘The syndrome calculator is shown in Fig 5.7(). a) (226 Fig. 5.7(0): Syndrome caleulator Q.47 Design a syndrome calculator for a (7, 4) hamming code, generated by the generator Polynomial g(x) = 1 + X" + X°, if the transmitted code word C = (0111001) and received word F= (0110001). W Digital Communication (E8Te- MU) Ans. : “The standard form of generator polynomials a follows: Given: (7,4) Hamming code. aX) = tex tg xtT =) Generator polynomial g(X) = 1 +x" +x ‘Comparing Equations (1) and (2) we get, ‘Transmitted codeword C = 0111001 aera TS 2 ‘Therefore the required syndrome calculator is as shown in Revel emia 1= 01 10 01 Fig 58 ei) ror ‘Tofind : Design a syndrome calculator. “The given generator polynomial i, oO) = etx tOx+1 ay 8 a & Sie (1679 Fig. 
58 : Syndrome calculator for the (7, 4) Hamming, code generated by the polynomial g(X)=1 +x" +x" Chapter 6 : Convolution Codes 1 Describe in convolution code, tie domain Se Seproach and anaterm dom approach to ee eae determine encoder output vier x, = XQ) xp aPand mex 2) we ‘Transform - Domain Approach : ‘Time domain approach : ‘The time-domain behaviour of a binary convolutional ‘encoder can be defined in terms of m-impulse responses. Let the impulse response of the mod-2 adder generating x, in Fig. 61 be given by, the sequence {409,49}. Siniarly Jet the sequence fe), ,....e[)} represent the impulse response of the mod-2 adder generating, in Fig. 6.1, These impulse responses are also called as “generator sequences” of the code. Let (Im my yy =) denote the input message sequence applied tothe encoder of Fig. 6.1 one bit ata time (starting from ‘m,). The encoder generates two output sequences x, and x, by “We know that the convolution in time domain is transformed performing convolutions between the message sequence and the | into the multiplication of Fourier transforms in the frequency ‘domain. We cen use this principle in the transform domain (230) Fig, 6.1(a) : Convolutional encoder impulse responses approech, Tbe Sigh calpst nawwater Spates v— Os In this process, the first step is to replace each path in the m encoder by a polynomial in such a way thatthe coefficients ofthe x=x,()= De) mpi, .Q) | polynomial are represented by the respective elements of the i=0 impulse response. For example, for the path corresponding to the boas top adder of Fig. 6.1() itis given that, iar output sequence xis given by, 5 “ . y usu Sequence xis given by Pca sae nt? = Sa? mpi 0) 10 Then these bit sequences are multiplexed (selected ‘alternately) with the help of the commutator switch to produce the mee following output. 
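The time-domain convolutions just described can be mechanized directly: each output stream is the mod-2 convolution of the message with one generator sequence, and the two streams are interleaved by the commutator. The generator sequences (1,1,1) and (1,0,1) below correspond to the usual two-adder, rate-1/2, constraint-length-3 encoder of Fig. 6.1(a); treat them as an assumption if the figure's taps differ.

```python
# Hedged sketch of a rate-1/2 convolutional encoder (generators assumed
# to be g(1) = (1,1,1) and g(2) = (1,0,1), as in the standard K = 3 case).
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    state = [0, 0]                    # two-stage shift register
    out = []
    for b in bits:
        window = [b] + state          # current bit + register contents
        for gseq in gens:             # one output bit per mod-2 adder
            out.append(sum(w & c for w, c in zip(window, gseq)) % 2)
        state = [b, state[0]]         # shift the register
    return out

# The impulse response reproduces the generator sequences, interleaved:
assert conv_encode([1, 0, 0]) == [1, 1, 1, 0, 1, 1]
```

Feeding a single 1 followed by zeros outputs the pairs (1,1), (1,0), (1,1) — exactly the interleaved generator sequences, which is the defining property of the time-domain description.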
Input-top adder-output path Grimms Digital Communication (E&Tc - MU) ‘Therefore the input-top-adder-output path of the encoder of above figure can be expressed in terms ofthe polynomial as : Po =a ckeasty ascend G@) = 1+D+D* 43) sad peal GD ae gD HOD +. Dt, DL Similarly the polynomial corresponding to the input-bottom ‘adder-output path forthe encoder shown in Fig, 6.1(a) is given by, 10s, Inputbotom adoroupet pa, GD) = ep +e, D+E,D re) Substiuting ag? = 1, gy =Oandg) =1 weet, PD) = 1+D* 8) ‘The general form of polynomial is given by, ee 2 G@)=8, +8, +8, 7+ +8, DE 9) ‘The polynomials G (D) and G® (D) of Equations (5) and (8) are called asthe “generator polynomials” of the code. From the generator polynomials, we can obtain the codeword polynomials as follows : Codeword polynomial corresponding to top adder is given by, x©@) = G@)-mO) ‘where m (D) = Message polynomial (10) and the codeword polynomial corresponding to the bottom adder is given by, xD) = G7D)-m@) ~ Foro input ==> Fort input (ss Fig. 67(¢) : Trellis diagram WW Digta! Communication (E&Te- MU) De-39 Chapter 7 : Digital Modulation Techniques @.1 Explain the coherent and non coherent digital ‘modulation techniques. CTE Ans ‘The digital modulation techniques are classified into two categories as: 1. Coherent techniques 2. Noncoherent techniques. 1. Coherent techniques : In the coherent digit modulation techniques, we have to use 1 phase synchronized locally generated carier at the receiver 10 recover the information signal. The frequency and phase of this carrier produced at the receiver should be perfectly synchronized with that at the transmitter. Coherent techniques are complex but guarantee better performance, 2. Noncoherent techniques : In the noncoherent techniques, no phase synchronized local carrier is needed at the receiver. These techniques are less ‘complex. But the performance is not as good as that of coherent techniques. 
Q.2 Explain the following terms in digital modulation techniques : Probability of error, Power spectra, Bandwidth efficiency.

Ans. :

1. Probability of error (Pe) :

The most important goal of a passband data transmission system is to design a receiver having the minimum average probability of error in the presence of additive white Gaussian noise (AWGN). The value of the error probability Pe of a system indicates its performance in the presence of AWGN. The value of Pe should be as small as possible.

2. Power spectra :

The features of every modulation method can be completely understood only by studying the power spectrum of the modulated signal. This is a graph of power spectral density (on the Y-axis) plotted versus frequency (on the X-axis). It gives us information about the bandwidth requirement and co-channel interference.

3. Bandwidth efficiency :

The channel bandwidth and the transmitted power are the two primary communication resources, so every communication system should be spectrally efficient. The bandwidth efficiency, denoted by ρ, is defined as the ratio of the data rate (bits/sec) to the effectively utilized channel bandwidth. It depends on the following factors :

1. The multilevel encoding technique.
2. The time-domain pulse shaping.

Q.3 What is the significance of Euclidean distance ?

Ans. :

We know that a BPSK signal is mathematically expressed as,

V_BPSK(t) = b(t) √(2Pb) cos ωo t    ...(1)

Represent this signal in terms of one orthonormal signal u1(t), which is defined as,

u1(t) = √(2/Tb) cos ωo t    ...(2)

Note that u1(t) is the same as the basis function φ1(t). Substituting this value into Equation (1) we get,

V_BPSK(t) = b(t) × √(2Pb) × √(Tb/2) × u1(t) = [b(t) √(Pb Tb)] u1(t)    ...(3)

But b(t) = ±1,

V_BPSK(t) = ± √(Pb Tb) u1(t)    ...(4)

That means V_BPSK(t) is either + √(Pb Tb) u1(t) or − √(Pb Tb) u1(t). We can show these two values as two distinct message points on the single axis u1(t). The first point is located at + √(Pb Tb) while the second point is located at − √(Pb Tb), as shown in Fig. 7.1.

Fig. 7.1 : Geometrical representation of the BPSK signal (two message points separated by the Euclidean distance)

When the BPSK signal is received at the receiver, − √(Pb Tb) is identified as a logic 0 and + √(Pb Tb) is identified as a logic 1. The distance d between these signals is given by,

d = + √(Pb Tb) − (− √(Pb Tb)) = 2 √(Pb Tb)    ...(5)

But Pb Tb = Eb, i.e. the signal energy,
7.1, “eos at = Mossago Message point 2 4 Point (Euclidean distance) “Fats Pate Sud or 60 (78) Fig. 7.1 : Geometrical representation of BPSK signal When the BPSK signal is received at the receiver, the = VP.Ty represents (oF identified as) a logic 0 signal and + \P.Tyis identified as a logic 1 signal, The distance “* ‘between these signals is given by +VPR-(-VP)= 2h, 9) But P,T,=E, ie. the signal energy ace armmmmmn Communication (E8T d =245, Oy ‘This distance is also called as Euclidean stance. Thus itis the dlstance between the adjacent message points. The importance of this distance is that, the eror probability of BPSK fs dependent onthe value of d. The enor probability decreases with increase in the valve of “”. The geometric representations ls called as signal space representation. @.4 Derive the expression for error probability of BPSK system with coherent detection. Ans. : ‘The steps that we are going to follow in order to obtain the ‘expression for the error probability, are exactly same as those followed to obtain error probability ofan ASK system. ‘We know tha the BPSK signal is represented as follows Binay1: x,(t) = ZF, cot 0, Binay0: x(t) = —fZP,c0s0,¢ Therefore x(t) = —x (2) We are going to use the matched iter for detection of BPSK signal. The expression for error probability of an optimum ‘iter is, A) ‘The expression forthe signal to noise ratio ofa matched filter is given by, fae’ = 2 Finte Using the Rayleigh energy theorem, is Bi t [ixcppar = Jemaefema —.c) Be =a 3 ‘The limits of integration ofthe lat term in Equation (3) are 0 to T because x ( t ) is present only over one bit interval 7. ‘Substituting Equation (3 into Equation (2) we get, iE , sats? 
s T x] Pwe ~() Butx(t) = x,(t)-x(1) and for BPSK, x,(t) = —x (t) X(t) = 2x, (t)=2 Substituting this value ofx (t) into Bguation (4) we get, TP, cos 0, pe-40 faceacty” | = rata 8) s Lt cos2o,t But cos*o,t = SASL Substitute this into Equation ow eet, ae 1st ee fomzat t{jand 0 fat eghcinaa ol} Spe)» ‘The value of second term in the RHS of Equation (6) is zero. [actacsacny” at a = nergy. %u(T)=xa(T)]? _ ae E 1 But P, Taking the aguae rot of both sides pasa. rn ‘Substitute this expression into Equation (1) to get the error probability for BPSK as ‘This is the expression for error probability of BPSK with matched filter receiver. This is the expression for the bit error probability Py. I indicates thatthe probability of error depends on ‘the energy contents of the signal “E". It does not depend on the shape of the signal. As the energy increases, the value of erfo function will decrease and the probability of error will also reduce, This result can be expressed in terms ofthe Q function as Py = fee This is because the relation between ef and Q functions 9) =Ferfe (AB) and ere (0)= 2-3) 9) (40) ommmmnmn ‘Communication (E&Te - MU) s, _ w=bnare-T,) 1 Phase a f 3. Higher than BPSK, 4 Higher than BPSK DPSK, 5. | Detection ‘Synchronous | Synchronous method 6, | Bffect ofnoise | Low Higher than BPSK 7. | Needof Needed Not needed synchronous 8. | Bit Basedon | Based on signal determination at | single bit | received in two thereceiver | interval successive bit intervals @.6 Explain the transmitter and receiver of DEPSK ‘system with block diagram, why error occur in pairs in DEPSK system ? Give suitable example. Ans. : DEPSK transmitter and receiver : ‘The transmitter of a DEPSK system is identical to the DPSK transmitter shown in Fig. 7.22), but the receiver is completely different. ‘The block diagram of DEPSK receiver is shown in Fig. 72(2, It shows that the signal b ( ) is recovered from the ‘received signal, using the synchronous demodulation technique. 
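The closed-form result of this derivation, Pe = (1/2) erfc(√(Eb/N0)), is easy to evaluate numerically; the check below confirms the key qualitative conclusion that the error probability depends only on the bit energy and falls monotonically as Eb/N0 rises (the numeric values are computed here, not taken from the text):

```python
import math

# BPSK bit-error probability: Pe = (1/2) erfc(sqrt(Eb/N0)).
def bpsk_pe(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)            # dB -> linear Eb/N0
    return 0.5 * math.erfc(math.sqrt(ebn0))

assert abs(bpsk_pe(0) - 0.5 * math.erfc(1.0)) < 1e-15
assert bpsk_pe(9) < bpsk_pe(6) < bpsk_pe(3)   # Pe falls as Eb/N0 rises
```

The same numbers follow from the Q-function form Pe = Q(√(2Eb/N0)), since Q(x) = (1/2) erfc(x/√2).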
sje Batcand cde a epee 237 Fig. 7.2(a) : Recever block diagram of DEPSK system ‘This is same asthe BPSK detection. Once the signal b (1) 8 recovered, i s applied to one input of an EX-OR gate. The signal 1b (is also applied to atime delay circuit and the delayed signal ' (t=, is applied to the other input ofthe EX-OR gate as shown, in Fig. 72), fb (t)=b (tT, then ouput ofthe EX-OR gate willbe 0. A(t) = 0 ifB(H) BCT). And ifb (t)= "DC then output ofthe EX-OR gate willbe 1. a(t) = 1 ..ifb(t)= BGT) Errors in DEPSK System : ‘We have soen that in DPSK there isa tendency for bt erors to ocour in pairs but the single bit erors are also possible. However in DEPSK the errors will always occur in pars. This is shown in Fig. 72(0). In Fig. 7.2(6) the signals b(K), b(k— 1) and d (k) = bk) ‘bk 1) are error free signals, whereas signal bk ) i the same signal b() with one error. ‘Therefore its delayed version b’(k — 1 ) will also have one ‘error. When b'(k ) and b’ (k ~ 1) are added together (modulo-2 ‘addtion) in an EX-OR gate the resultant signal d'(k) bas two errors ‘as compared to the original eror fee signal d(k ). ‘ime bw: 01161107 bie}: or tori0d 3) =D) @ Bk=1): TOT O10 ¢-DEPSK ouput One er, bw: 011 tit 00 but): or MH to 0 6) =D(K)@ b(e-1): +1 OO] O10 +DEPEK ouput Two eror (30 Fig. 7.2(0): Errors in differentially encoded PSK occur in pairs QE Communication (E&Tc - MU) roa en ee geegae carne fBFSK and explain the working. Denke asd etbell otiecorectrrich orthogonal non-orthogonal signals. (el bel Sores DR rca signal haa Genaaton of BESK: ‘The block diagram of a BFSK generator is shown in Fig. 7.3(@. It consists of two oscillators which produce caries at frequencies fj, and f, respectively. The oscillator outputs are applied to the inputs of the multipliers (balance modulator). 
The other input tothe two multipliers are the signals p(t) and p, (t)- “These signals are derived from the data bts d (t) as follows ‘Value of [ Putt) [ PLC) a(t) Ss any || SUT |S4 av ape ‘The other inputs to the two multipliers are the reference signals u,(t) or @ (t)and w(t) or (t). which are generated by the two oscillators. y(t) = $)(t)= 27 Ty cos ot and y(t) = 4 (t)= V27Tpoos ot ‘The multiplier outputs are then added together to get the [BESK signal, given by Equation (1) of . 11. Thus when a binary “0” isto be transmited, p(t) = 1 and P(t) =0 and for a binary “I” tobe transmitted, py (+) = 1 and py (1) "0. Hence the transmitted signal will have a frequency of either fy off. 8 = 4,00 JETT) coe yt [Paterno =e) = JET ov8 yt Jones Perot (€-4») Fig. 7.3(a) : BFSK generation BFSK Receiver (Coherent Receiver) : ‘The BFSK receiver block diagram is as shown in Fig. 7.3(b). Its supposed to regenerate the original digital data signal ftom the [BESK signal atts input @s\ De-42 “The received BESK signal denoted by x(t). Tis applied to the two correlators. These corelatrs are supplied with locally aenerated coherent reference signals, (or uy ) and gy (or ty) “These ae th basis fietions OiCt)oru,(t) = 27H cosayt -) and (1) = u(t)=V27R coset 4) x) rooaived BFSK ‘signal Threshdld of 0 vots| ‘Choose 1 x5>0 att or (0 Choose Oifx5 <0 (410 Fig. 7.3(0) : Coherent BFSK receiver ‘The outputs of the two correlator are then subtracted to get a signal x, such that, % 7 am ‘he signal x, is then applied to a decision deviee which ‘compares it with a threshold level of zero volts. fx, > 0, then the receiver decides that a 1 was transmitted ‘whereas ifx, <0 then the decision i that a0 was transmitted @.8 Draw power spectra for BFSK modulated signal and state bandwidth requirement for transmission of signal. Ans. 
:

The BFSK output in terms of the variables p_H(t) and p_L(t) is given by,

V_BFSK(t) = √(2P_s) p_H(t) cos(ω_H t + θ_H) + √(2P_s) p_L(t) cos(ω_L t + θ_L)   ...(1)

Each term in Equation (1) looks like the signal √(2P_s) b(t) cos ω_0 t which we have used in BPSK. The difference between BPSK and BFSK is that in BPSK the signal b(t) is bipolar, taking the two values +1 and −1, whereas in BFSK the two signals p_H(t) and p_L(t) are unipolar, changing their values between 0 and +1 only. So we now write p_H and p_L as sums of a constant and a bipolar variable as follows :

p_H(t) = 1/2 + (1/2) p'_H(t)   ...(2)
p_L(t) = 1/2 + (1/2) p'_L(t)   ...(3)

Here p'_H(t) and p'_L(t) are bipolar, taking the values +1 and −1. They are also complementary signals, i.e. when p'_H = +1 then p'_L = −1, and vice versa. Substituting the values of p_H(t) and p_L(t) from Equations (2) and (3) into the expression for BFSK in Equation (1) we get :

V_BFSK(t) = √(P_s/2) cos(ω_H t + θ_H) + √(P_s/2) cos(ω_L t + θ_L)
          + √(P_s/2) p'_H(t) cos(ω_H t + θ_H) + √(P_s/2) p'_L(t) cos(ω_L t + θ_L)   ...(4)

This expression will help us to draw the spectrum of BFSK.

Fig. 7.4 : Spectrum of BFSK (power spectral density)

The first term in Equation (4) produces a power spectral density consisting of an impulse at f_H. Similarly, the second term produces a power spectral density consisting of an impulse at f_L. The third and fourth terms in Equation (4) together produce the spectra of two BPSK-like signals, centred at f_H and f_L respectively. Each of these BPSK-like spectra has a main-lobe bandwidth of 2f_b; hence, with the typical spacing f_H − f_L = 2f_b, the transmission bandwidth of BFSK is approximately BW = 4f_b.

For the bipolar NRZ (AMI) format, evaluating E[a_k a_(k+n)] we find that the autocorrelation of the amplitude sequence is :

R(n) = A²/2    ... for n = 0
R(n) = −A²/4   ... for n = ±1   ...(13)
R(n) = 0       ... for |n| > 1

The basic pulse p(t) has the Fourier transform given by,

P(f) = T_b sinc(f T_b)   ...(14)

Hence substitute Equations (13) and (14) into Equation (7), which states that

S(f) = (1/T_b) |P(f)|² Σ_n R(n) e^(−j2πnfT_b)

to get the power spectral density of the bipolar NRZ format as follows :

S(f) = T_b sinc²(f T_b) [ A²/2 − (A²/4) e^(j2πfT_b) − (A²/4) e^(−j2πfT_b) ]
     = (A²T_b/2) sinc²(f T_b) [1 − cos(2πf T_b)]
     = A²T_b sinc²(f T_b) sin²(πf T_b)   ...(15)

The normalized form of this equation is plotted as curve "c" in Fig. 8.3.

Q.4 Consider a binary data sequence 1111101111.
Draw the waveforms for the given binary data sequence using bipolar AMI RZ and Manchester coding.

Ans. :

The waveforms are as shown in Fig. 8.4 (bipolar AMI RZ and Manchester coding of the sequence 1111101111).

Fig. 8.4

Q.5 Consider a binary data sequence 10101010. Draw the waveforms for the given binary data sequence using unipolar RZ and split-phase Manchester coding.

Ans. :

The waveforms are as shown in Fig. 8.5 (unipolar RZ and split-phase Manchester coding of the sequence 10101010).

Fig. 8.5

Q.6 Write a short note on intersymbol interference (ISI).

Ans. :

In a communication system, when data is transmitted in the form of pulses (bits), the output produced at the receiver by the other bits or symbols interferes with the output produced by the desired bit. This is called intersymbol interference (ISI). Intersymbol interference introduces errors in the detected signal at the receiver.

Consider Fig. 8.6, which shows a baseband binary PAM system. The input signal consists of a binary data sequence {b_k} with a bit duration of T_b seconds. This sequence is applied to a pulse generator which produces a discrete PAM signal (line code) given by :

x(t) = Σ_k a_k v(t − kT_b)   ...(1)

where v(t) denotes the basic pulse, normalized such that v(0) = 1. The first block of the system, i.e. the pulse amplitude modulator, converts the input sequence into polar form as follows : if b_k = 1 then a_k = +1, and if b_k = 0 then a_k = −1.

The PAM signal x(t) is then passed through a transmitting filter. The output of the transmitting filter is transmitted over a transmission channel whose impulse response is h(t). Random noise is added to the transmitted signal as it travels over the transmission channel. Thus the signal received at the receiving end is contaminated with noise.

Fig. 8.6 : Baseband binary data transmission system

The channel output is applied to a receiving filter. This filter output is sampled synchronously with the transmitter.
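The line-code waveforms asked for in Q.4 and Q.5 above can also be generated programmatically. Below is a minimal sketch with assumed pulse levels of ±1 and two samples per bit; note that the Manchester polarity convention shown is one common choice and differs between references.

```python
# Sketch of three of the line codes from Q.4/Q.5 (levels +/-1 assumed,
# two samples per bit: first half, second half). Helper names are mine.

def unipolar_rz(bits):
    # '1' -> +1 for the first half-bit, 0 for the second; '0' -> 0 throughout
    return [s for b in bits for s in ((1, 0) if b else (0, 0))]

def manchester(bits):
    # split-phase: '1' -> +1 then -1; '0' -> -1 then +1 (one common convention)
    return [s for b in bits for s in ((1, -1) if b else (-1, 1))]

def ami_rz(bits):
    # alternate mark inversion: successive 1s take alternating polarity, RZ pulses
    out, polarity = [], 1
    for b in bits:
        if b:
            out += [polarity, 0]
            polarity = -polarity
        else:
            out += [0, 0]
    return out

bits = [1, 0, 1, 0, 1, 0, 1, 0]       # the Q.5 sequence
print(unipolar_rz(bits)[:4])          # -> [1, 0, 0, 0]
print(manchester(bits)[:4])           # -> [1, -1, -1, 1]
print(ami_rz(bits))                   # marks alternate +1, -1, +1, -1, so no DC
```

The AMI output sums to zero over the sequence, which reflects the DC-free property that motivates the bipolar format.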
The sampling instants are determined by a clock or timing signal which is extracted from the receiving filter output. A sequence of samples is obtained at the output of the receiving filter, and is used to reconstruct the original data sequence with the help of a decision-making device.

Each sample is compared with a predetermined threshold level in the decision-making device. If the amplitude of the sample is higher than the threshold level, it is decided that a symbol "1" was received. On the other hand, if the amplitude is lower than the threshold, the decision is that a "0" was received.

The receiving filter output can be written as,

y(t) = μ Σ_k a_k p(t − kT_b) + n(t)   ...(3)

where μ is a scaling factor and the noise n(t) is the noise produced at the output of the receiving filter due to the channel-added noise. The term p(t − kT_b) represents the combined impulse response of the system up to the receiving filter.

The receiving filter output y(t) is sampled at the time instants t_i = iT_b, with i = 0, 1, 2, .... This results in the sampled version :

y(t_i) = μ a_i + μ Σ_(k ≠ i) a_k p(iT_b − kT_b) + n(t_i)   ...(4)

This is the receiver output y(t_i) at the instant t = t_i. Equation (4) has two terms :

1. The first term μa_i is produced by the i-th transmitted bit. Theoretically, only this term should be present at the receiver output, but practically this is not so.

2. The second term represents the collective residual effect of all the other transmitted bits at the sampling instant t = t_i. This residual effect is known as "intersymbol interference" (ISI).

The ISI results because the overall frequency response of the system is never perfect, and pulse spreading is bound to take place. When a short pulse of duration T_b seconds is transmitted through a bandlimited transmission system, the various frequency components present in the input pulse are differentially attenuated and, more importantly, differentially delayed by the system.
Due to this, the pulse appearing at the output of the system is "dispersed" over an interval longer than T_b seconds, as shown in Fig. 8.6(a). Because of this dispersion, adjacent symbols interfere with each other in the time domain when transmitted over the communication channel, resulting in intersymbol interference (ISI). The transmitted pulse of duration T_b seconds and the dispersed pulse of duration longer than T_b seconds are shown in Fig. 8.6(a).

Q.7 Write a short note on : Eye pattern.

Ans. :

An eye pattern is a pattern displayed on the screen of a cathode ray oscilloscope (C.R.O.). The shape of this pattern is very similar to the shape of the human eye, and it is therefore called an eye pattern. The eye pattern is used for studying intersymbol interference (ISI) and its effects on various communication systems.

The eye pattern is obtained on the C.R.O. by applying the received signal to the vertical deflection plates (Y-plates) of the C.R.O. and a sawtooth wave at the transmission symbol rate (1/T_b) to the horizontal deflection plates (X-plates), as shown in Fig. 8.7(b). The received digital signal and the corresponding oscilloscope display are shown in Figs. 8.7(a) and (c) respectively. The resulting oscilloscope display shown in Fig. 8.7(c) is called the "eye pattern", due to its resemblance to the human eye.

The region inside the eye pattern is called the eye opening. The eye pattern provides very important information about the performance of the system. The information obtainable is as follows (see Fig. 8.7(d)).

Fig. 8.7(d) : Interpretation of the eye pattern

Information obtained from the eye pattern :

1. The width of the eye opening defines the time interval over which the received wave can be sampled without an error due to ISI. The best instant of sampling is when the eye opening is maximum.

2. The sensitivity of the system to timing error is determined by observing the rate at which the eye closes as the sampling rate is varied.

3.
The height of the eye opening at a specified sampling time defines the margin over noise.

4. When the effect of ISI is severe, the eye is completely closed, and it is impossible to avoid errors due to the combined effect of ISI and noise in the system.

(a) Distorted binary wave  (b) Oscilloscope connections  (c) Eye pattern seen on the C.R.O. screen
Fig. 8.7 : Obtaining the eye pattern

Q.8 What is equalization ? Explain with a block diagram a tapped delay line equalizer.

Ans. :

Whenever a signal is passed through a communication channel, distortion is introduced. To compensate for the linear distortion, we can use a network called an "equalizer" connected in cascade with the channel or system, as shown in Fig. 8.8.

Fig. 8.8 : Block diagram of equalization

Principle of the Equalizer :

The equalizer is designed in such a way that, within the operating frequency band, the overall amplitude and phase
Practical realization of an equalizer: -@) ‘The equalizers can be practically realized using the structure of the “tapped-delay-fne filter”. Tapped Delay Line Filter : Consider a time invariant filer with an impulse response (0, We assume that, h(=0 fort <0 ie. the filter is causal. ‘The impulse response of the filter is of finite duration h(@)=0fort2T, We decespee to kles mand y © so-enal ee (0 spd itn flows T; v0 = [ no-x0-26 Q G) 2. Let the input x°(, impulse response h (Q) and output ¥ (be uniformly sampled, at a rate of V/At samples per second. t= nde =) and (5) 3. Where n and k both are integers and At is the sampling period. 4, If the value of Ax is very small then the product h (0) x (¢~ 2) of Equation (3) will remain constant in the range k Ac StS (k + 1) At forall values of k and t. Then we ean ‘approximate Equation (3) by the convolution sum as follows. <= kar 1-59 N= y@A) = E h(kAd-x(@Ar-kAr)Ar 6) k=0 when, NAt = Ty Substitute, h (k A®) At = w; into Equation (6) to get, NI yudy = E wex(nde-kar) 1) k=0 Equation (7) can be realized using the circuit shown in Fig. £8(a) which is called a5 a tappeddeley-ine fer or transversal fier, Because if we expand Equation (7) then we get, (ns) = Wax (ads) +w, x(n Ae Ar) * wx (ade —2 A) Foca +My yxnde-(N—D At] 8) “This expression is realized as shown in Fig. 8.8(a). xia a0) ay ar289) ‘upatyin a2) (e-mm) Fig. 8 8(a) : Tapped-delay-lin filter or transversal filter @9 Show that duobinary signaling suffers from error propagation while precoded duobinary signaling ‘does not. Explain with encoder and decoder block diagrams and decoding logic. Ans. : Duobinary - Encoder with Precoder (Differential Encoder) : ‘The modified duobinary system has an important limitation, ‘which is that if there is an error in the received signal then it will propagate in the other values of ( k ). The duobinary system is five from such a problem. The block diagram of duobinary encoder ‘with precoder is as shown in Fig. 8.92). 
J+ Moaied Duotinary Enooder Preooder or X(T) shown in Fig. 910), Asmime dt x, (1) wes the ‘itis fe fm ping rong T) is postive and lger than the vohage irene (1) x97 3 [22] a 7) show in Fig 9.19 then esp ome rere ey wee ty the receives. 3. Similarly if x, ( 1) was transmitted and at the time of sampling the noise n, (T') is negative and larger than the oes ee eT ies wil be nrodce. For eror introduction, 1) y(t acy s BO -@) 4. The noise is assumed to have a Gaussian distribution and we have already derived the expression for the PDF of n, (0) 1 Sin = Spee “This ploted as shown in Fig. 9.10). 5. The probability of error is equal to the probability that the noise n, ( T ) will have a magnitude larger than (oe eee Set I. ees: 4) ~ fone] = o[,<(@ OS™) ©) “Te probably of err is represented bythe shaded area in Fig 9.100). (€45 Fig. 9.1(0 : PDF of output noise n, (0) 6. Therefore probability of error is, f filme (TIM (t)] XT) = (7) z f eo mcr/2c (7) =e (T) 2 [01 CO} wa » Sle. : ie 2 2 d{a,@Q) = Vode Forn,() > = Ke 7 mS ie ee ee 2 (T) ‘Substituting all these values into Equation (6) we get, re faar” eu Or Xo Xe e180, 2 feo Rearrangng this exreston we get, ere Pe 2 Te my (T) xg (7) med aio ‘he expression inside the square bracket is the complementary eror fron "ny (T) =n (7 ro doe Ba “This isthe expres forthe err probity of en opium fier. ‘Conclusions from Equation (9) : 1. The complementary eor fimcion “er” is = monotonic decreasing fiction. Therefore the enor probably will reduce with increase in the difference [x,, (T ) —X,2(T )]- Digital Communication (E&TC-MU) 2. The error probability will reduce with decrease in the rms noise voltage 0. (The mean square value of noise is equal to ‘standard deviation «” since the mean value of Gaussian noise is zero). 3. Thus the optimum filter must maximize the ratio (CEC) ew tint te er probably P, 2 Derive the transfer function H () for an optimum far’ Wan a optimum tar con be called inated ter? 
ERATE TERE roa Te tani oon 4) fo ote tr mus be whee ae etate tals Xo (T) (7) 9 = MD) F 0) However for mathematical convenience, we shall actually maximize p* rather than p. ‘Ascumpions : ‘The seumptins made to obtain the expen forthe transfer fnctionH (Cas fllows 1 ag (T) xa (T= (1) (T) p= 30 2 2. un CT) be te somalzed ouput sgl power and f= TCT) = Mean square vale ofa, (1) =Nomalized ae power. 3, Three in order 19 minimize the probably of ere we wall have to maximize the rio, 2(T) 47) 5 Z -@) aC) 4. Let the input signal 0 the optimum fer be x(t) =x; (t)— x, (0) and let the corresponding signal output be yO =X) Xa Derivation : 1. The relation between the output and input of the optimum filter is given by, %O = xOrh® Where h() = Impulse response of te optimum fie. 2, Taking the Fourier transform of both the sides of Equation (4) we get, Grins xO} X,(f) = Xe Where H1() = Transfer function ofthe optimum filter ‘X,(f) = FT. [x, (0) and X(f) =F-T xO] a ‘To obtain x, (T) : To obtain the ratio p* we need to obtain the expression for x, (T). (1) = TEX (6) ) = J xw-emar 9 Substttng the value of, (£) we at x(T) = J a(x. eae O A. To obtain o : The noise at the filter input isn (0). Let the power spectral density of input noise be S, ( f). The output noise of the optimum filter is n, (1). Let the power spectral density of output noise be S,, ( f ). The relation between Sof) and $4 (1) is, Sa (1) = [HOP -SuCf) -®) “Therefore the nomalized noise powers given by, [Normalized noise power=o7 = JS, (f)af tO} “This is because normalized power is equal to the area under the PSD curve. Substituting the expression for Sy (f ) from Equation () into Equation (9) we get, o = J mer sna (10) 5. Subetinte the values of % ( T ) and fom uations (7) and (10) into Equation (3) gt, xt) io J ncnx(y- 2" at] ~an J icp? sscnae “To get the maximum value of p* we need to use the Schwarz inequality 6. The Schwarz inequality sates that given arbitrary complex functions Y (8 and 2. 
(9 of a common variable f, then Digital Communication (E&Te - MU) < J ivcpret J izcpar..a2y, ‘The equal sign applies when ¥(f) = KZ"(f) (3) Where K is an arbitrary constant and 2" (fis the complex ‘conjugate ofZ (f). 7. Now apply the Schwarz inequality to Equation (11) by substituting ¥@) = VO@H(E) Zt) = Fe ee J vcp-zcpacty J vcn-2ceyae] J ivcoper f jzcepar Ifand any if (£)= KZ" (£) ‘Substituting the values of ¥ (f) and Z (f) we get, 1 EHH) = Su(f)-H(f) Ye KX) per macs) face “This isthe expression fo the transfer function ofan optimum filer which ensures minimum probability of eror, Corespondingy, the maximum ratios given by, x (Ty xc? - AL ine aa x (Hem . H(t) = 18) 19) y= 14) e ie? Ee. Q.3 What is matched filter ? How it differs from J ivcpper ‘optimum filter 2 Mention two properties of i ‘matched fiter. Applying Schwarz nequaliy tothe numerator We ge, = 7 Ane. J ivenfer J izcypar Definition : se = An optimum filter which yields a maximum ratio Taser [E51)6? Js cated as the matched fier when the input noise -0 hit. Then power spectral density (pd ofthe input white nose es J izcopar nS) ‘Now substitute the value of Z ( f ) to get, cots shy Fcc fer 06 But [e*T| = i Q@xfT)+sin’ (2x1) }"? = 09 sary Tixente 8 We want the value of pto be maximum. To maximize the value of p* we have to conser the sign of equality in Equation (17). = D5 Oha a 4 But this is possible only when the condition for equality in ‘Schwarz inequality stated in Equation (13) is satisfied is given by Suit) = N/2 tt) Impulse Response of a Matched Filter : The impulse response h (}) of a matched filter ean be obtained fiom its transfer function H ( ), by taking an inverse Fourier transform (IFT). Use the expression for the transfer function of optimum fier as starting pint. 1 Transfer funtion H (of an optimum ters given by, KX), csr Saf) “® (Referring to Equation (18) 0fQ.2. For the matched filter the input noise (is assumed 19 be white noise. 
Therefore substitute, (£) =N,/2 0 get H(A) = (p= BED poe os 2. The conjugate property of Fourie transform stats that, x'(f) = Xf) 3) Digital Communication (E&Te - MU) Dc-65 Substituting hin Equation 2) we gt, H® = ae (nich lA) 3. The impulse response () =IFT TH (F] no = wr[Rexco-e** | O) 4. The inverse Fourier transform of X (~ is x (~1) and the term &* represents a time shif of T sec. FERED) = XCF) and FT[xXT-9] = XCf)ePT Therefore (= FELxC=1)) “6 S. But he input signal x (i given by, x0) = x,0-%0 2K ho = 3 ty@-1)-m-o) o ‘This is the required expression for the impulse response of ‘matched filter. This expression shows that the impulse response of ‘a matched filter isa time reversed and delayed version ofthe input signal. This is as shown in Fig. 92. AT == 040-00 © x) =f C0-1460) (4) xC=9 = (T= -24(7-0]=10, _ the impolse response of « matched filter E49) Fig. 92 Q.4 Derive expression for the probability of error of ‘the matched filter. Prove that an integrator Is a ‘Ans. : ‘The probeblity of error which resuts when we use the ‘matched filter, can be obtained by evaluating the maximum X(T sant w me a2 sven in Equation (19) of 2 forthe optimum filer, xT. Fixe) (@]_ = fae © 2. Fora matched fie, pad of input noise signal is Sy ( £) =N,/2 EAL (22ole -@) But x(T) = %(T)-xa(T) -@) Peeo 2] IxcePar a) 3. According othe Parsevals theorem we have, © © a J ixcnper = 7pan tame ) In the last integral of Equation (8) the limits are Oto T, this is ‘because x (1 persists for only atime T. Butx() = x1 0-m(9 +. Equation (4) can be written as, @ T J ixcorer = J x,0-nore ig ° T = [[Xo+Ge@-2% ome] a ° Boose eee + [Roa [Zoa-frnonoa Cea ase J ixcyPar = 5,+8)-28y OF Where, = Eneray of, () E, = Energy due to corelation between (and x (0. 4, Tfwe selet x, (0 =—x; (then we find that, E, = E,=-E,=E Communication (E&Te- MU) Substituting this into Equation (6) we get, J ixcnfar = p+E+2E=48 roy 5. Now substitute this value nto Equation (4) 10 8, (T)-ng(T)]}? 2B . Daa)" - 2a ~®) po Ene = 6. 
The expression for the error probability of an optimum filter is given by, Equation (9) to get the minimum error probability of a matched filter as Erna ol 2yt Peony = Zee VETR,] This is the required expression for the error probability of a ‘matched filter. cain) = 2 10) Conciusons from Equation (10): 1. The error probability depends only on the signal energy and noton the shape of he signal 2. The enor probability of matched filter is same as that ofthe integrate and dump receiver. Therefore integrate and dump receiver i matched fer 5 Mention two properties of matched filter. [ESE Ans. : From the impulse response derived forthe matched fier we ‘may state that @ filter which is matched to an input signal x (t) is characterized inthe ine domain by the impulse response h@ = xr- ) Here note that we have neglected the term ‘Equation (6) of Q. 5 for convenience. Thus the impulse response of ‘a matched filter isa time reversed and delayed version ofthe input signal x (0. Taking the Fourier transform of both the sides of ‘Equation (1) we obtain the transfer function of the matched filter as, H(f) = ET{hO]=FT[x(T-1)] Hi) = xe time | Commer ‘of matched fiter -Complex conjugate of Xi") eta ~@) “The transfer fmction ofthe matched filter is thus the complex ‘conjugate of the spectrum of the input signal X ( f.), except for the delay factor indicated by the term &>*". Based on these relations we are ging to derive some important properties of the matched Site. Property 1: ‘The spectrum of the ousput signal of a matched filter is, except fora time delay factor, proportional to the energy spectral density of the input signal. That means CB) = F(t Proof : Let the output signal be x, (t) and its Fourier transform be X, (£). Le the input signal be x ) and its Fourier teansform be X (1). 
XC) = Hx) -@) But H(f) = X'(f)-e*™ KC) = XC Px) = x(-x(Hem™ = XC) = 1x(nPee* Aa) But [X(£)P = (£)=Energy spectral density of inp signal x (0) 2 KC) = eee Oy Property 2: ‘The output signal x, (1) of a matched filter is proportional to the shifted version of the autocorrelation function of the input x (0 to which the iter is matched. That means x@ = R¢-1) Proof: Consider Bqution (5) which stats tht, x = #(f)emt ‘We can obtain the output signal x, (t) rom this expression by ‘aking the inverse Fourier transform. x0 = ETIX@l= J x00" of Substituting the value of X, (9 we et, x =f v(t ae 6) Gramm er eee ee Communication (ET. ‘Auto-correlation fancton R (and energy spectral density ‘¥ (£) form a Fourier transform pair. te nce) B ve R@)= J ¥(f)-0* ae ~O ‘That means Comparing tis equation with Equation (6) we concide that, % (0 = R(v)with =t-T 40 = RET) Property 3: “The output signal to noise ratio of a matched filter depends only onthe ratio ofthe signal energy to the power spectral density ofthe white noise at the iter input. That means SNR. RYE Proof “Te signal to noite ratio of an optimum ‘ter is given by ‘Equation (7) which states that, 2 _ PCTs pixie me Yu LaF eu Where S4(f) = pat of input noise to the filter For matched filer, the input noise is white noise with a pad Su(0)=N,/2 _ Signal to nose ratio a the matched Site output —| = Plixwmpe But [|X (f) df= Energy E ..AS per the Rayleigh’s + apa gations = baat 0 pe oop te oe cin optim i pene a ng ts thes Ban) emi her os tpn lng ft nt eet tie y be matin fret ety ono oe ‘input signal. Proved, @.6 A polar NRZ signal is applied at the input of ‘matched filter. The binary 1 is represented by a rectangular pulse of amplitude A and duration T ‘and binary 0 is represented by a rectangular pulse of amplitude — A and duration T. Obtain the impulse response of the matched filter and sketch it Ans. 
: From the information given in the question, x, = +A —forostsT and x,() = —A for 0 pCqim) foralli ek where p(q/m,) = Probebilty of observing 4 when mi transmitted and P(a/m,) ~ Probability of observing q when mis transmitted, ‘Thus this receiver will choose that signal which when transmitted will maximize the likelihood (probability) of observing the received signal “q’, Therefore this receiver is called as ‘maximum likelihood receiver. The practical implementation of the ‘maximum likelihood receiver is as shown in Fig 9.5. Parallel bank of — ‘The incoming signal is applied to a parallel bank of matched filters, The output ofeach fiter is sampled at instant t= Ty Then a constant (0, ay ns yy) i8 added to the filter output sample and the resultant are applied to @ comparator. The decision is made in favour ofthe signal for which the comparator output is largest. @.10 State and explain maximum likelihood decision rule. Explain the function of correlator receiver. ‘Ans. : For an AWGN. channel and when the transmitted signals 8) Wy 8 (ne Sy (Date equally likely, the optimum receiver consists of two subsystems which are shown in Figs. 9.6(a) and (b). J ut (a) Detector or demodulator Interproduct calovlator ® Su (b) Signal transmission decoder (©-1395) Fig, 9.6 : Two subsystems of an optimum receiver Fig. 9.6(a) shows the detector or demodulator part of the ‘optimum receiver. It is made of N number of correlators. The required N orthogonal basis functions 6. ) dy (8) are generated locally. This correlator bank operates on the input signal (0) to produce the observation vector X. Fig. 9.6(b) shows the other part of this receiver called signal transmission decoder. It is implemented in the form of maximum likelihood detector. The input to signal transmission decoder is the observation vector X and it produces an estimate fof the transmitted symbol m, with i= 1, 2 smM in such a way that ‘would minimize the average probability of symbol error. The ‘optimum receiver of Fig. 
9.6 is often called ss a correlation receiver. digital Communicaton (E&Te- MU) D (ie) Digital Communication (MU) Statistical Analysis Chapter 1 20Marks | 21 Marks Chapter 2 : : Chapter 3 15 Marks | 21 Marks Chapter 4 20 Marks | 21 Marks Chapter 5 06 Marks | 10 Marks Chapter 6 ~ | 24 mars Chapter 7 39 Marks | 10 Marks Chapter 8 16 Marks | 21 Marks 20Marks | - =| 22 Marks: Chapter 4 : Probability and Random Variables [Total Marks : 20] Q.1(a) Explain autocorrelation and covariance of random variable. (5 Marks) Ans. : ‘Autocorrelation of process X (t): ‘The autocorrelation of process X (1) is defined as the expected value ofthe product of two random variables X (t,) ‘and X ({). These two variables are obtained by observing the sglven process at t=, and t; respectively. ‘The autocorrelation function is denoted by Ry (yt) and it is ‘expressed mathematically as fllows Reet) = EKG)XG] Sele 1% Bay fay Oy %) dx dx, (1) = Where fx, fy) (hy %)= Sevond onder probeblity density funtion. ‘Autocovariance function of a stationary process : The sutocovariance function of a stationary proces is mathematically expressed as follows, Celt) = BLK) my ()—mgI = Ret my (2) From Equation (2) we conclude that similar to. the ‘tutocorrelation function, the autocovariance function of & stationary process X ({) is dependent only on the time difference (t,~t,). — From this equation it is possible to calculate the ‘utocovariance ifthe mean and autocorrelation of the random process are known. ‘The mean and autocorrelation of random process are thus sufficient to describe the first two moments of a random Process but they only provide a partial description of the distribution ofa random process X (t). It is important to note that the mean and autocorrelation function are not sufficient to guarante that the given random process X (tis stationary. “ies eae Ya pp eS iste Q.1(b) What are the properties of CDF? (5 Marks) Ans. 
:

Property 1 : The CDF always has a value between 0 and 1 :

0 ≤ F_X(x) ≤ 1   ...(1)

As per the definition of the CDF, it is a probability function P(X ≤ x), and any probability must have a value between 0 and 1. Therefore the CDF always has a value between 0 and 1.

Property 2 : This property states that,

F_X(∞) = 1   ...(2)

Proof : Here F_X(∞) = P(X ≤ ∞). This includes the probabilities of all the possible outcomes or events. The event X ≤ ∞ is thus a "certain event" and therefore has a 100% probability.

Property 3 : This property states that,

F_X(−∞) = 0   ...(3)

Proof : Here F_X(−∞) = P(X ≤ −∞). The random variable X cannot have any value which is less than or equal to −∞. Thus X ≤ −∞ is a null event, and therefore its probability is equal to zero.

Property 4 : This property states that F_X(x) is a monotone non-decreasing function, i.e.

F_X(x_1) ≤ F_X(x_2)   for x_1 < x_2   ...(4)

Q.4(f) Justify / Contradict : Syndrome depends on error pattern and received code word. (5 Marks)

Ans. :

Error vector (E) :

Let E represent the error vector which defines the corresponding error pattern. The relation between X, Y and E is as follows :

Y = X ⊕ E   ...(1)

where Y is the received codeword and X is the transmitted codeword.

In the absence of errors (received codeword same as the transmitted one), the error vector E contains all zeros. A "1" in the error vector indicates the presence of an error in the corresponding location in the received codeword.
8 = YH 2) — Ifthe received codeword is exactly same as the transmitted ‘on (no error) then the value of syndrome vector is zero. ite, S$ = 0... [fno errors present in the received codeword. = Rectived codeword (¥) has an order (1% n) and His ofthe order n x n~k, therefore the syndrome vector (S) will have (a) bits IY #X then, -@) = This expression shows that the syndrome is dependent ONLY on the error patter E. Q. 3{a) Linear block code having following parity check equations - a= dh + da + dh, co = dh + da, Ce = dy + ds. Calculate G and H matrix, error detection and correction capacity of the code, decode the received codeword 101100 (10 Marks) Ans. : Step 1 : To obtain the parity matrix P and generator matrix G ‘The relation Zbetween the check (parity) bits, message bits ‘and the parity matrix P is given by = [CxCsCalixa = [disda dslres Phas a) Puy Pia Poa [Cus] = teat Pa Bl ~@) 1 Pa Pas Cy = Prd @ Pai dr @ Py dy Cs = Pnd@Pnd@Pady } =) Cy = Pyd @Py dO Py dy Comparing Equation 3) withthe given equations for C, C;, Ce set, P, Pa=1 Py Pu=l Py@l PynO Piz] PyeO Py Diu ne the pry ai is shown below e- [tt 4| tou... ‘his te requied party main. The generar mati i sven by : G = th:Pl=0h:Pyed Tooritt a= [aio:ite O01:101 ‘Step 2: To obtain the code words : thas been given that, GO h@d C= @d, C= Od Using these equations we can obtain the check bits for various combinations of the bits dy, da, and ds. After that the ‘corresponding code words are obtained as shown in Table 1-Q.3Xa). Ford; dyd = 001 C= 40404-0000 © = 424-00 0-0 G = 484-06 1-1 4 C4CsCy = 101 and the code word is given by : Code word ford, dd, = 001 [ole Titer Data bits Chock bits eas) ‘Similarly the other code words are obtained. They are listed in Table 1-Q. (aX). ‘Table 1- Q.3(a)(a): Code words r [ ie kK ST. 
| Message | Code word | Weight |
|   000   |  000 000  |   0    |
|   001   |  001 101  |   3    |
|   010   |  010 110  |   3    |
|   011   |  011 011  |   4    |
|   100   |  100 111  |   4    |
|   101   |  101 010  |   3    |
|   110   |  110 001  |   3    |
|   111   |  111 100  |   4    |

Step 3 : Error detecting and correcting capacity :

The error correcting capacity depends on the minimum distance d_min. From Table 1-Q.3(a), d_min = 3 (the smallest non-zero codeword weight).

Number of detectable errors : d_min ≥ s + 1, i.e. 3 ≥ s + 1, so s ≤ 2. At the most, two errors per word can be detected.

Number of correctable errors : d_min ≥ 2t + 1, i.e. 3 ≥ 2t + 1, so t ≤ 1. At the most, one error per word can be corrected.

Step 4 : Decoding table :

To build the decoding table we calculate the syndrome S = E Hᵀ. With H = [Pᵀ : I3], the transpose of the parity check matrix is :

Hᵀ = | 1 1 1 |
     | 1 1 0 |
     | 1 0 1 |
     | 1 0 0 |
     | 0 1 0 |
     | 0 0 1 |

The error vector E is a 1 × 6 vector. Assuming the second bit is in error, E = [0 1 0 0 0 0], and the syndrome is S = E Hᵀ = [1 1 0]. This is the syndrome corresponding to the second bit in error; observe that it is the same as the second row of Hᵀ. The other syndromes are obtained directly from the rows of Hᵀ : the syndrome for a single error in the i-th bit equals the i-th row of Hᵀ.

Step 5 : Decoding of the received words :

The first given code word is Y1 = [1 0 1 1 0 0]. Its syndrome is S1 = Y1 Hᵀ = [1 1 0], which is the same as the second row of Hᵀ in the decoding table. Hence the corresponding error pattern is E = [0 1 0 0 0 0], and the corrected word is X1 = Y1 ⊕ E = [1 1 1 1 0 0]. This is the correct transmitted word.

Similarly we can perform the decoding of 000110. Let Y2 = [0 0 0 1 1 0] be the second received code word.
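Before computing the second syndrome by hand, the whole decoding procedure of Steps 4 and 5 (S = Y·Hᵀ over GF(2), single-error correction using the rows of Hᵀ) can be checked programmatically. A minimal sketch; the helper names are assumptions, and the Hᵀ rows are exactly those computed above.

```python
# Syndrome decoding for the (6, 3) code of Q.3(a). GF(2) arithmetic via XOR.

HT = [[1, 1, 1],   # syndrome for an error in bit 1
      [1, 1, 0],   # ... bit 2
      [1, 0, 1],   # ... bit 3
      [1, 0, 0],   # ... bit 4
      [0, 1, 0],   # ... bit 5
      [0, 0, 1]]   # ... bit 6

def syndrome(y):
    """S = y * H^T over GF(2): XOR the H^T rows selected by the 1s of y."""
    s = [0, 0, 0]
    for bit, row in zip(y, HT):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return s

def correct(y):
    """Single-error correction: the syndrome equals the H^T row of the bad bit."""
    s = syndrome(y)
    if s == [0, 0, 0]:
        return y[:]                       # no detectable error
    i = HT.index(s)                       # position of the single-bit error
    return [b ^ (k == i) for k, b in enumerate(y)]

print(syndrome([1, 0, 1, 1, 0, 0]))       # -> [1, 1, 0]
print(correct([1, 0, 1, 1, 0, 0]))        # -> [1, 1, 1, 1, 0, 0]
print(correct([0, 0, 0, 1, 1, 0]))        # -> [0, 1, 0, 1, 1, 0]
```

The two outputs reproduce the hand calculations: both received words yield the syndrome [1 1 0], so both are corrected by flipping the second bit.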
The syndrome for this can be obtained as :

                              | 1 1 1 |
                              | 1 1 0 |
S2 = Y2·H^T = [0 0 0 1 1 0] · | 1 0 1 | = [1 1 0]
                              | 1 0 0 |
                              | 0 1 0 |
                              | 0 0 1 |

The error pattern corresponding to this syndrome is obtained from the decoding table as :

E = [0 1 0 0 0 0]

Therefore the correct code word is given by :

X2 = Y2 ⊕ E = [0 1 0 1 1 0]

This is the correct transmitted word.

Chapter 5 : Cyclic Codes [Total Marks : 06]

Q. 6 Write a short note on : Systematic and non-systematic block codes. (6 Marks)

Ans. :
Non-systematic block codes :
- The code words of non-systematic block codes do not have a clear separation between message bits and parity bits. That means the message and parity bits can get mixed together.
- The non-systematic code words can be obtained by multiplication of the message polynomial with the generator polynomial. The various code word polynomials X1(D), X2(D), etc. can be obtained as follows using the generator polynomial G(D) :
X1(D) = M1(D)·G(D)
X2(D) = M2(D)·G(D)
X3(D) = M3(D)·G(D)

Features :
Some important features of non-systematic codes are as follows :
1. No clear division between the message and parity bits.
2. Code word polynomial X(D) = M(D)·G(D).
3. The complexity of building the encoder is high.
4. The decoding complexity is high.
5. Computation of the code word is complex, as the message bits are scrambled with the parity bits.

Systematic block codes :
An (n, k) systematic code is one in which the first k symbols of the n-symbol code word are the message symbols (m0, m1, ...) and the remaining (n − k) symbols are the parity symbols (b0, b1, ...), as shown in Fig. 1-Q.6(b).

Fig. 1-Q.6(b) : Code word structure of a systematic linear code (k message bits followed by (n − k) parity bits)

The message sequence is encoded in a systematic form, and a code word consists of separate message bits and parity bits. The code word polynomial of a systematic block code is of the following pattern :

Code word polynomial X(D) = [D^(n−k) M(D)] ⊕ R(D)

where M(D) is the message polynomial and R(D) is the remainder polynomial.

Features :
Some important features of systematic codes are as follows :
1. There is a clear division between the message and parity bits.
2. Code word polynomial X(D) = [D^(n−k) M(D)] ⊕ R(D).
3. The complexity of building the encoder is less.
4. The decoding complexity is low.
5. Computation of the code word is easy, as only the parity bits are to be computed.

Chapter 7 : Digital Modulation Techniques [Total Marks : 39]

Q. 4(d) Give a comparison between the basic digital modulation techniques (ASK, FSK and PSK). (8 Marks)

Ans. : Comparison of ASK, FSK and PSK :

| Sr. No. | Parameter | ASK | FSK | PSK |
|---|---|---|---|---|
| 1 | Performance in presence of noise | Bad | Better than ASK | Better than FSK |
| 2 | Complexity | Simple | Moderately complex | Very complex |
| 3 | Applications | Used at low bit rates | Used in MODEMs | Used in MODEMs |

Q. 2(b) Explain the working of the Minimum Shift Keying modulator and demodulator with the help of block diagram and waveform. (10 Marks)

Ans. : MSK is Minimum Shift Keying. MSK is basically QPSK with two major differences :
1. In the MSK system, the baseband signal which is used to multiply the quadrature carriers is a "smooth" signal and not a rectangular signal as used for QPSK.
2. The waveforms of MSK have an important property called phase continuity. That means there are no abrupt changes in phase in MSK, unlike QPSK. Due to this feature of MSK, the intersymbol interference caused by nonlinear amplifiers is avoided completely.

Waveforms of MSK :
The waveforms of MSK are as shown in Fig. 1-Q.2(b).

Fig. 1-Q.2(b) : MSK waveforms

Generation and detection of MSK :
The MSK transmitter is as shown in Fig. 2-Q.2(b).

Fig. 2-Q.2(b) : MSK transmitter

Operation :
The carrier signal sin ωc·t is multiplied with cos Ωt in a balanced modulator (multiplier) to produce the following output :

Output of the first multiplier = sin ωc·t · cos Ωt
    = (1/2) sin (ωc + Ω)t + (1/2) sin (ωc − Ω)t    ...(1)

Equation (1) shows that the multiplier output contains the sum and difference components of ωc and Ω. This is applied to two bandpass filters with centre frequencies at (ωc + Ω) and (ωc − Ω) respectively. The outputs of the two bandpass filters are given by :

Output of BPF1 = (1/2) sin (ωc + Ω)t
Output of BPF2 = (1/2) sin (ωc − Ω)t

Both these outputs are applied to an adder and a subtractor. At the output of the adder and subtractor we get the signals x(t) and y(t) respectively, which are given by :

x(t) = (1/2) sin (ωc + Ω)t + (1/2) sin (ωc − Ω)t = sin ωc·t cos Ωt    ...(2)

y(t) = (1/2) sin (ωc + Ω)t − (1/2) sin (ωc − Ω)t
     = (1/2)(sin ωc·t cos Ωt + cos ωc·t sin Ωt) − (1/2)(sin ωc·t cos Ωt − cos ωc·t sin Ωt)
     = cos ωc·t sin Ωt    ...(3)

x(t) is then multiplied with √(2Ps)·bo(t), while y(t) is multiplied with √(2Ps)·be(t). The outputs of these multipliers are then added together to produce the MSK signal given by :

V_MSK(t) = √(2Ps) bo(t) sin ωc·t cos Ωt + √(2Ps) be(t) cos ωc·t sin Ωt    ...(4)

Thus the transmitter of Fig. 2-Q.2(b) generates the MSK signal.

MSK receiver :
The block diagram of an MSK receiver is as shown in Fig. 3-Q.2(b). This is the synchronous type of detection. This type of detection is performed by multiplication and integration over the symbol interval.

Fig. 3-Q.2(b) : MSK receiver

Operation :
- The signals x(t) and y(t) are regenerated at the receiver. Then the incoming MSK signal is multiplied by the signals x(t) and y(t) in the two balanced modulators.
- The bit bo(t) is determined from the multiplier-integrator chain which uses the signal x(t), and the bit be(t) is obtained from the multiplier-integrator chain which uses the signal y(t).
- Both integrators will integrate over the symbol duration of 2Tb seconds. At the end of each integration interval, the integrator outputs are sampled and stored.
- The switch S at the output will then switch between positions 1 and 2 at a rate equal to the bit rate, so as to regenerate the original data bit stream d(t).

Q. 6(b) Justify that the distance of 16-QAM is greater than 16-ary PSK and less than QPSK. (10 Marks)

Ans. : Geometrical representation of QASK (16-QAM) :
- The geometrical representation is also called the signal space representation.
- Assume that using QASK we want to transmit a symbol consisting of 4 bits. That means N = 4 and there are 2^4 = 16 different possible symbols. Hence the QASK system should be able to generate 16 different distinguishable signals.

Fig. 1-Q.5(b) : Signal space of a QASK system (16-QAM)

In the geometric representation of Fig. 1-Q.5(b), each signal point is equally distant from its nearest neighbours. This distance is d = 2a.

Now assume that all the 16 signals are equally likely. As these signals are placed symmetrically, we can determine the energy associated with a signal by considering the four signals in the first quadrant. The average normalized energy of each signal is given by the average of the energies associated with the signals in the first quadrant :

Es = (E1 + E2 + E3 + E4)/4    ...(1)

Looking at Fig. 1-Q.5(b), the four first-quadrant points are (a, a), (a, 3a), (3a, a) and (3a, 3a), so that :

E1 = 2a², E2 = 10a², E3 = 10a², E4 = 18a²
Es = (2a² + 10a² + 10a² + 18a²)/4 = 10a²

where Es is the normalized symbol energy. Therefore,

a = √(0.1 Es)    ...(2)
and d = 2a = 2√(0.1 Es)    ...(3)

In this system, because each symbol consists of 4 bits, the normalized symbol energy is given by :

Es = 4Eb    ...(4)

where Eb is the normalized energy per bit. Substituting Equation (4) into Equations (2) and (3) we get :

a = √(0.4 Eb) and d = 2√(0.4 Eb) ≈ 1.26√Eb    ...(5)

Equation (5) shows that the Euclidean distance d for the QASK system is less than the Euclidean distance between adjacent QPSK signals, where d = √(2Es) = 2√Eb. But this distance is greater than that of 16-ary PSK, where :

d = 2√Es · sin(π/16) = 2√(4Eb) · sin(π/16) ≈ 0.78√Eb    ...(6)

This proves that the distance for 16-QAM is less than that for QPSK and greater than that for 16-PSK.

Q. 6(c) Write a short note on power spectral density and bandwidth of 16-ary PSK. (7 Marks)

Ans. : Power spectral density of 16-ary PSK :
The expression for the power spectral density of the baseband QPSK signal is :

S_QPSK(f) = 2Ps·Tb [sin(2πf·Tb)/(2πf·Tb)]²

QPSK is a special case of M-ary PSK with M = 4. Therefore we can use the above expression to write the expression for the PSD of the baseband M-ary PSK system. The only modification required is that we have to substitute Ts = N·Tb in the above equation :

S(f) = 2Ps·N·Tb [sin(πf·N·Tb)/(πf·N·Tb)]²    ...(1)

Using Equation (1) we can plot the PSD of the baseband M-ary PSK signal as shown in Fig. 1-Q.6(c).

Fig. 1-Q.6(c) : PSD of baseband M-ary PSK signal

Bandwidth of 16-ary PSK :
From the plot of the PSD in Fig. 1-Q.6(c), the bandwidth is :

BW = 2/Ts

But Ts = N·Tb and fb = 1/Tb.

∴ BW = 2/(N·Tb) = 2fb/N    ...(2)

The bandwidth of a BPSK system is 2fb. The above expression tells us that with an increase in the number of bits per message, the bandwidth reduces; for 16-ary PSK (N = 4), BW = fb/2.

Q. 6(d) Write a short note on coherent and non-coherent digital detection techniques. (7 Marks)

Ans. : Coherent and non-coherent digital detection techniques :
The digital detection techniques are classified into two categories :
1. Coherent techniques  2. Non-coherent techniques.

1. Coherent techniques :
- In the coherent digital detection techniques, we have to use a phase-synchronized, locally generated carrier at the receiver to recover the information signal from the received signal, as shown in Fig. 1-Q.6(d).

Fig. 1-Q.6(d) : A coherent detector

- The frequency and phase of this carrier produced at the receiver should be perfectly synchronized with that at the transmitter. The reception will be unsuccessful if this synchronization is lost.
- Coherent techniques are complex but guarantee better performance. The error probability with a coherent technique is less than that with a non-coherent technique.

Examples : BPSK, QPSK, coherent BFSK, etc.

2. Non-coherent techniques :
- In the non-coherent techniques, no phase-synchronized local carrier is needed at the receiver, as shown in Fig. 2-Q.6(d).

Fig. 2-Q.6(d) : A non-coherent detector

- The non-coherent detector consists of a bandpass filter followed by an envelope detector and a regenerator.
- These techniques are less complex, but the performance is not as good as that of the coherent techniques.

Examples : non-coherent BFSK, non-coherent BASK, etc.

Chapter 8 : Baseband Modulation & Transmission [Total Marks : 16]

Q. 4(a) Discuss the problem of inter-symbol interference (ISI). Explain the measures to be taken to reduce ISI. How to study ISI using the eye pattern ? (10 Marks)

Ans. : Problem of inter-symbol interference :
- If ISI and noise are totally absent, then the transmitted bit can be decoded correctly at the receiver. However, errors will be introduced due to the presence of ISI at the receiver output. Due to this, the receiver can make an error in deciding whether it has received a logic 1 or a logic 0.
- Another effect of ISI is crosstalk, which may take place due to overlapping of the adjacent pulses caused by pulse spreading.
- It is necessary to use special filters called equalizers in order to reduce ISI and its effects.

Remedy to reduce the ISI :
- It has been proved that the function which produces zero inter-symbol interference is a "sinc function". Thus, instead of a rectangular pulse, if we transmit a sinc pulse then the ISI can be reduced to zero.
- Using the sinc pulse for transmission is known as "Nyquist pulse shaping". The sinc pulse transmitted to have zero ISI is shown in Fig. 1-Q.4(a).

Fig. 1-Q.4(a) : (a) Ideal pulse shape (sinc) for zero ISI; (b) Frequency response of the reconstruction filter

- We know that the Fourier transform of a sinc pulse is a rectangular function. Therefore, to preserve all the frequency components, the frequency response of the filter must be exactly flat in the passband and zero in the attenuation band, as shown in Fig. 2-Q.4(a).
- This type of filter is practically not realizable. Therefore, practically the frequency response of the filter is modified with different roll-off factors "α" to obtain practically achievable filter response curves, as shown in Fig. 2-Q.4(a).

Fig. 2-Q.4(a) : Practical filter characteristics

Eye pattern :
- The eye pattern is a pattern displayed on the screen of a cathode ray oscilloscope (C.R.O.). The shape of this pattern is very similar to the shape of a human eye; therefore it is called the eye pattern.
- The eye pattern is used for studying the intersymbol interference (ISI) and its effects on various communication systems.
- The eye pattern is obtained on the C.R.O. by applying the received signal to the vertical deflection plates (Y-plates) of the C.R.O. and a sawtooth wave at the transmission symbol rate, i.e. (1/Tb), to the horizontal deflection plates (X-plates), as shown in Fig. 3-Q.4(a).
- The received digital signal and the corresponding oscilloscope display are as shown in Fig. 3-Q.4(a). The resulting oscilloscope display is called the "eye pattern", due to its resemblance to the human eye.

Fig. 3-Q.4(a) : Obtaining the eye pattern (received signal on the Y-plates, sawtooth at the symbol rate on the X-plates; the eye pattern is seen on the C.R.O. screen)

- The region inside the eye pattern is called the eye opening. The eye pattern provides very important information about the performance of the system, as follows (see Fig. 4-Q.4(a)).

Fig. 4-Q.4(a) : Interpretation of the eye pattern (best sampling time; time interval over which the wave can be sampled)

Information obtained from the eye pattern :
- The width of the eye opening defines the time interval over which the received wave can be sampled without an error due to ISI. The best instant of sampling is when the eye opening is maximum.
- The sensitivity of the system to timing error is determined by observing the rate at which the eye is closing as the sampling rate is varied.
- The height of the eye opening at a specified sampling time defines the margin over noise.
- When the effect of ISI is severe, the eye is completely closed and it is impossible to avoid errors due to the combined effect of ISI and noise in the system.

Q. 6(a) Write a short note on : Nyquist criterion for zero ISI. (6 Marks)

Ans. : Nyquist's criterion for zero ISI :
If ISI is not present, then only the first term in the expression for the receiver output would be present. That means :

y(ti) = μ ai    ...(1)

This expression shows that under these conditions the i-th transmitted bit can be decoded correctly.
- In order to minimize the effects of ISI, we have to design the transmitting and receiving filters properly.
- The transfer function of the channel and the shape of the transmitted pulse are generally specified, so this becomes the first step towards the design of the filters. From this information we have to determine the transfer functions of the transmitting and receiving filters, to reconstruct the transmitted data sequence (bk).
This is achieved by first "extracting" and then "decoding" the corresponding sequence of weights from the output y(t). We have :

y(t) = μ Σk ak p(t − kTb)    ...(2)

This shows that the output y(t) is dependent on the received pulse p(t) and the scaling factor μ.

Extraction : Extraction is basically the process of sampling. The signal y(t) is sampled at the instants t = i·Tb, where i is an integer.

Decoding : The decoding should be such that the contribution of the weighted pulse, i.e. ak p(i·Tb − k·Tb), for i ≠ k is free from ISI. This can be stated mathematically as :

p(i·Tb − k·Tb) = 1 for i = k
              = 0 for i ≠ k    ...(3)

where p(0) = 1 due to normalization. This is the condition for zero ISI. If p(t), i.e. the received pulse, satisfies the above equation, then the receiver output given by Equation (2) reduces to :

y(ti) = μ ai    ...(4)

which indicates zero ISI in the absence of noise.

Chapter 9 : Optimum Reception of Digital Signals [Total Marks : 20]

Q. 3(b) Derive the expression for the probability of error of the matched filter. (10 Marks)

Ans. : Probability of error of the matched filter :
The probability of error which results when we use the matched filter can be obtained by evaluating the maximum signal-to-noise ratio [xd²(T)/σ²]_max for the optimum filter :

[xd²(T)/σ²]_max = (2/N0) ∫ |Xd(f)|² df    ...(1)

since the power spectral density of white noise is Sn(f) = N0/2    ...(2)

But xd(T) = x1(T) − x2(T)    ...(3)

According to Parseval's theorem we have :

∫ |Xd(f)|² df = ∫ xd²(t) dt = ∫[0 to T] xd²(t) dt    ...(4)

In the last integral of Equation (4) the limits are 0 to T; this is because xd(t) persists for only a time T. But xd(t) = x1(t) − x2(t), so Equation (4) can be written as :

∫ |Xd(f)|² df = ∫[0 to T] [x1(t) − x2(t)]² dt
             = ∫[0 to T] [x1²(t) + x2²(t) − 2 x1(t) x2(t)] dt
             = ∫[0 to T] x1²(t) dt + ∫[0 to T] x2²(t) dt − 2 ∫[0 to T] x1(t) x2(t) dt

∴ ∫ |Xd(f)|² df = E1 + E2 − 2E12    ...(5)

where E1 = energy of x1(t), E2 = energy of x2(t), and E12 = energy due to the correlation between x1(t) and x2(t).

If we select x2(t) = −x1(t), then we find that E1 = E2 = −E12 = E. Substituting this into Equation (5) we get :

∫ |Xd(f)|² df = E + E + 2E = 4E

Now substituting this value into Equation (1) we get :

[xd²(T)/σ²]_max = (2/N0)·4E = 8E/N0    ...(6)

The expression for the error probability of an optimum filter is given by :

Pe = (1/2) erfc [xd(T)/(2√2 σ)]    ...(7)

Substituting the maximum value of xd(T)/σ from Equation (6) into Equation (7), we get the minimum error probability of a matched filter as :

Pe(min) = (1/2) erfc [√(8E/N0)/(2√2)] = (1/2) erfc [√(E/N0)]

This is the required expression for the error probability of a matched filter.

Q. 4(b) The generator vectors of a convolution encoder are g1 = 101, g2 = 110, g3 = 011. Draw the encoder, state table, state diagram and code trellis. Calculate the code word for the message vector 101011. (10 Marks)

Ans. : Given : generator vectors g1 = (101), g2 = (110), g3 = (011).

Step 1 : Draw the encoder :
With the shift register contents (m, m1, m2), where m is the current input bit, m1 the previous bit and m2 the bit before that, the three adder outputs are :
v1 = m ⊕ m2   (g1 = 101)
v2 = m ⊕ m1   (g2 = 110)
v3 = m1 ⊕ m2  (g3 = 011)
The required encoder is as shown in Fig. 1-Q.4(b)(a).

Fig. 1-Q.4(b)(a) : Encoder

Step 2 : Write the state table :
Taking the state as (m1 m2), the state table is as shown in Table 1-Q.4(b).

Table 1-Q.4(b) : State table

| Input m | Current state (m1 m2) | Output (v1 v2 v3) | Next state |
|---|---|---|---|
| 0 | 00 | 000 | 00 |
| 1 | 00 | 110 | 10 |
| 0 | 01 | 101 | 00 |
| 1 | 01 | 011 | 10 |
| 0 | 10 | 011 | 01 |
| 1 | 10 | 101 | 11 |
| 0 | 11 | 110 | 01 |
| 1 | 11 | 000 | 11 |

Step 3 : Draw the state diagram :
Refer to Table 1-Q.4(b) and draw the state diagram as shown in Fig. 1-Q.4(b)(c).

Fig. 1-Q.4(b)(c) : State diagram

Step 4 : Trellis diagram :
The trellis diagram (current state → next state, with the corresponding output on each branch) is as shown in Fig. 1-Q.4(b)(d).

Fig. 1-Q.4(b)(d) : Code trellis

Step 5 : Find the code word :
Message vector M = 101011. Tracing the message through the state table (Table 2-Q.4(b)) we get the output sequence as :

Output = 110 011 011 011 011 101    ...Ans.

Chapter 1 : Probability and Random Variables [Total Marks : 21]

Q. 1(a) Stating the relationship between PDF and CDF, give the properties of PDF. (4 Marks)

Ans. : The relationship between PDF and CDF :
The probability density function f(x) is defined as the derivative of the cumulative distribution function F(x), i.e. f(x) = dF(x)/dx.

Fig. 1-Q.1(b) : State diagram (branches for 0 input and for 1 input)

Q. 5(b) Consider a convolution encoder with the constraint length K = 3 and g(1) = {1,0,1} and g(2) = {0,1,1}. Find the code vector for the message stream 11010 using the time domain approach. Verify the code vector using the transform approach. (10 Marks)

Ans. : Part I : Time domain approach :

Step 1 : Write the generator vectors and the message :
g(1) = (g0(1), g1(1), g2(1)) = (1, 0, 1)
g(2) = (g0(2), g1(2), g2(2)) = (0, 1, 1)
Message = (m0, m1, m2, m3, m4) = (1, 1, 0, 1, 0)

Fig. 1-Q.5(b) shows the encoder.

Fig. 1-Q.5(b) : Convolutional encoder
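Before tracing the bit streams by hand in Steps 2 and 3, note that both steps carry out the discrete convolution Xi = Σj gj·m(i−j) (mod 2). The sketch below (illustrative only, not part of the original solution) performs that convolution for both adders and interleaves the outputs into the final code word:

```python
def conv_encode(msg, gens):
    # Rate 1/len(gens) convolutional encoder: each output stream is the
    # mod-2 discrete convolution of the message with one generator vector.
    K = max(len(g) for g in gens)
    padded = msg + [0] * (K - 1)          # flush the shift register
    out = []
    for i in range(len(padded)):
        for g in gens:                    # interleave the adder outputs
            out.append(sum(g[j] * padded[i - j]
                           for j in range(len(g)) if i - j >= 0) % 2)
    return out

# Q.5(b): K = 3, g(1) = (1,0,1), g(2) = (0,1,1), message 11010
bits = conv_encode([1, 1, 0, 1, 0], [[1, 0, 1], [0, 1, 1]])
print(''.join(map(str, bits)))   # -> 10111001011100
```

The printed sequence matches the code word worked out by hand in the steps that follow.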
3(0)() : Signal space diagram of a 16-PSK system armmmnn® W_digial communication ETE -MU) M(19)-9 For PSD & BW : Please refer Q. 6(c) of Dec. 2018, Error probability : The symbol error probability is denoted by P, (M)If the energy to noise ratio ofan M-ary PSK signal is large then the symbol error probability using the coherent psk is given by, =e] 0 Forte (x43) redet[ Foon FE sn] oe [VEIN one] In Equstions (1) and 2), Paw) = But Qa) PM) P.M) PM) = BR 'M_ = Number of messages being transmitted = 2" Probability ofsymboleror Energy per symbol = By (log; M) N= Number of bits per symbol ‘Equations (1) and (2) indicates thatthe probability of symbol error depends on the ration/E,7Ny and M. The error probability decreases i.e. error performance improves with increases in the value of -fE,7Ny_ but error probability increases with inerease in the value of M 2. For 16-QAM system : Geometrical Representation of QASK (16-QAM) : Please refer Q. 5(b) of Dec. 2018. Bandwidth of QASK system : The spectrum of QASK is shown in Fig. 1-0. 3¢0)(ywhieh is quite similar to that of a M-ary PSK. Power special density a Main Lobe ZA ZN “a, “ 2 4 a, jew 2, — (407 Fig. 1-Q. 3(0)(b): Frequency spectrum of QASK — From the Fig. 1-Q. 3(b)b), it is evident that main lobe ofthe frequency spectrum extends from ~ f, to + f, Therefore the Frequency ‘bandwidth of QASK is given by, 6) ~ Thus the bandwidth of QASK system is same a thet of an ‘Mary PSK system, Error probability of 16 QAM (16 QASK) : = The signal space diagram of 16 QAM is shown in Fig. 1-Q. 3(b}@)Caleulate the error probability for the symbol such as 5 in Fig. 1-Q. 3(6X¢) which is located at @a. ve) (@887 Fig. 1-Q. 3(b\(6) : Signal space of 16 QAM ‘This signal as the largest probability of error. The ‘> Sands, also will have the largest error probability. 
‘The minimum distance is given by, feos ae Fe rasa re 2 5 achee[ SB] ~ 2ee[orB]” Chapter 8 : Baseband Modulation & ‘Transmission [Total Marks : 21] ‘{@) Over a long transmission line draw the following data format for the binary sequence10011101011. (i) Unipolar NRZ_ (li) Polar RZ. (ii) Manchester, Select the best and justify the answer. (4 Marks) Aen NW _vigtal communication (E8Te- MU) M(ig}-10 Ans. : ‘The required waveforms are shown in Fig. 1-2. 1(c). Peedi SS EE! TE RE A SR GR I x 3 unpour nee a a Aa aaa Polar RZ, a2] Manchester | e204) Fig. 1-0. 10) @.5(a) Discuss the problem of intersymbol | Operation : Interference (ISI). Explain the measures to be taken to reduce ISI. How to study IS! using eye pattern 2 (10 Marks) ‘Ans, : Please refer Q. 4(a) of Dec. 2018. @.6(a) Explain with the required diagrams modified duo-binary encoder (7 Marks) Ans. ‘Modified duobinary encoder : = The modified duobinary encoder is as shown in Fig. 1-0. 6(2) In this encoder, the correlation between binary digits takes place over the two bit duration i. 2 T, instead of Te (©: Fig. 1-Q. 6(a) : Modified duobinary encoder = From Fig. 1-0. 6(a) the output ofthe modified duobinary encoders given by. Vo(k) = b(k)-b(k-2) -) = In Equation (1), the bipolar NRZ Sequence b (will ave values £ 1, The ouput sequence Vp (k ) i a corelated sequence. As b (is a bipolar NRZ. sequence, Equation (1) ves three posible values of Vp (k ) caresponding to various combinations of (nd (k—2) = 168 (k) represents the estimate of b (k ) then its value is given by, Bae) = Vocwy+b (k-2) 2) = This equation canbe implemented by using the decoder of the modified duobinary system as shown in Fig. 1-Q.6(4) ¢Bue-2)) (310 Fig. 1-Q. 6a) Modified duobinary decoder Equation (2) indicates that if there isan error in Vk) then ‘his error will propagate in other values of ® (). 
og armnmon ital Communication (E&Tc - MU Q-4 Question Papers —— ————— Q.1 Answer the following (any Four): (20 Marks) (2) Explain autocorrelation and covariance of random variable. (b) What are the properties of CDF? (€) What is entropy of an information source ? When Is entropy maximum 7 Give a comparison between the basic digital ‘modulation techniques (ASK, FSK and PSK). Explain role of hamming distance in error detection and correction ? (f) Justify / Contradiet : Syndrome depends on error pattern and received code word. (a) @ @.2 (a) The nine symbols viz. As, Az, As... As have ‘corresponding probability of occurrences as 0.12, 02, 0.08, 0.25, 0.02, 0.04, 0.06, 0.13, 0.1 Determine the Huffman code, calculate the average code word length, entropy and coding efficiency. (10 Marks) Q.2. (b) Explain the working of Minimum Shift Keying, ‘modulator and demodulator with the help of block. diagram and waveform. (10 Marks) Q.3. (@) Linear block code having following parity check ‘equations - (10 Marks) Ir + de + ds, cs = dh + da, Co = dh + ds. Calelote Gand H mattis, eror deocton and correction capacity of the code, decode the received codeword 101100 (10 Marks) (b) Drive the expression for the probability of error of the matched fier. (10 Marks) {a) Discuss the problem of inter symbol interference (SD. Explain the measures to be taken to reduce ISI, How to study ISI using eye pattern 2 (10 Marks) (b) Generator vectors of convolution encoder are 91 = 101, g2 = 110, gs = 011. Draw encoder, state table, state diagram and code trellis. Calculate the ccode word for the message vector 101011 (10 Marks) a4 aa (a) What are the random processes ? Explain central limit theorem. (10 Marks) 5 (b) Justify that distance of 16-QAM is greater than 16-Ary PSK andless than QPSK. (10 Marks) Poieasy solutions) Q.6 Write ashortnote on (Any Three): (20 Marks) (a) Nyquist criterion for zero ISI (b) Systematic and non-systematic block codes. 
(c) Power spectral density and bandwidth of 16-Ary PSK. {@) Coherent and non-coherent digital detection techniques. ago (@) Stating the relationship between PDF and CDF, ‘ive the properties of POF (20 Marks) (b) Define entropy of an information source ? When is the entropy maximum (@) Over a long transmission line draw the following data format forthe binary sequence 10011101011. () Unipolar NRZ (i) Polar RZ (ji) Manchester ‘Select the best and justify the answer. (@) Explain the role of Hamming distance in error detection & correction ? For impulse responses g' = (1,10), = {01.0}, g°= (1.1.1) design the state diagram AA discrete memoryless source has an alphabet of ‘six symbol with their probabilities as shown (10 Marks) Me | Me | Me 0.08 | 0.10 ©) @ ‘Symbol_| Ms Probability | 0.3 Me 0.25 My 0.18: 012. () Determine the Minimum Variance Huffman 1nd average code-word length and hence find entropyof the system. code-words (i) Verify the average code-word length using ‘Shannon Fano. (il) Compare and comment on the results of both. (b) A convolution encoder has a constraint length of 3 and code rate of 1/3. The impulses for each are g’ = 100 o* g° = 111. Draw :() encoder (i) state diagram (ii) code transfer function (10 Marks) 01, Q.3. (a) State and prove the conditional probability. (10 Marks) WF _digta! Communication (E8Te- MU) (b) Draw the signal space diagram for 16-PSK and | @.8 (a) Discuss the problem of inter symbol interference 16-QAM and find their error probability. Also draw (IS). Explain the measures to be taken to reduce their PSD and determine bandwidth. (10 Marks) ISI. How to study ISI using eye pattern 7 Q.4 (a) A parity check matrix of a (7,4) Hamming code is (10 Marks) SVS ws Sona iyo Maries) (b) Consider a convolution encoder with the we FELTOESS coavaha Mayers ota ght ee * latotoo4 9° = (0,1,1). Find the code vector for the message stream 11010 using time domain approach. 
Verify (Find generator matrix, using which find out - és the code-words of 1100 and 0101, baer cars dearth ay (ii) Determine the error detecting and correcting tia capability of system, Q.6 Explain with the required diagrams (Any Three) : (i) Draw the encoder forthe above block code. (20 Marks) (b) Sketch the encoder and syndrome calculator for (2) Modified duo-binary encoder. the generator polynomial g(x) = 1 + x’ + x’ and (b) Shannon Hartley theorem for channel capacity. ‘obtain the syndrome for the received code-word a 1101011, (10 Marks) {e) See (@) Define the folowing terms and give their significance : () Mean (i) Central moment (i) Variance (v) Standard deviation. ago ari
