Itc Unit 1
class notes
Uploaded by Rohit AGARWAL
There are 5 bits in a symbol ... (4.9.1)
Block length in bits = 31 × 5 = 155 bits ... (4.9.2)
n − k = 2t, so 2t = 31 − 15 = 16, i.e. t = 8 symbols ... (4.9.3)
Minimum distance d_min = 2t + 1 = 2 × 8 + 1 = 17 symbols ... (4.9.4)

4.10 Introduction to Convolutional Codes [SPPU : Dec. 11, 13, 16, May 12, 13, 17]

4.10.1 Definition of Convolutional Coding

Convolutional coding is done by combining a fixed number of input bits. The input bits are stored in a fixed-length shift register and are combined with the help of mod-2 adders. This operation is equivalent to binary convolution, hence the name convolutional coding. The concept is illustrated with the simple example below. Fig. 4.10.1 shows a convolutional encoder; it operates as follows.

Operation: Whenever a message bit is shifted to position m, the new values of v1 and v2 are generated depending upon m, m1 and m2. Here m1 and m2 store the previous two message bits and the current bit is present in m. Thus we can write,

v1 = m ⊕ m1 ⊕ m2 ... (4.10.1)
v2 = m ⊕ m2 ... (4.10.2)

The output switch first samples v1 and then v2. The shift register then shifts: the content of m1 moves to m2 and that of m to m1. The next input bit is then taken and v1, v2 are generated for the new combination of m, m1 and m2 (equations (4.10.1) and (4.10.2)). The output switch again samples v1 then v2. Thus the output bit stream for successive input bits is,

v = v1 v2 v1 v2 v1 v2 ... and so on ... (4.10.3)

Note that for every input message bit, two encoded output bits v1 and v2 are transmitted. In other words, for a single message bit the encoded code word is two bits, i.e. for this convolutional encoder,

Number of message bits, k = 1
Number of encoded output bits for one message bit, n = 2

4.10.1.1 Code Rate of Convolutional Encoder

The code rate of this encoder is,

r = k/n = 1/2 ... (4.10.4)
In the encoder of Fig. 4.10.1, observe that whenever a particular message bit enters the shift register, it remains there for three shifts:

[Fig. 4.10.1: Convolutional encoder with K = 3, k = 1 and n = 2]

First shift: the message bit is entered in position m.
Second shift: the message bit is shifted to position m1.
Third shift: the message bit is shifted to position m2.

At the fourth shift the message bit is discarded, i.e. simply lost by overwriting. We know that v1 and v2 are combinations of m, m1 and m2. Since a single message bit remains in m during the first shift, in m1 during the second shift and in m2 during the third shift, it influences the outputs v1 and v2 for three successive shifts.

4.10.1.2 Constraint Length (K)

The constraint length of a convolutional code is defined as the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits. For the encoder of Fig. 4.10.1 the constraint length is K = 3 bits, because a single message bit influences the encoder output for three successive shifts; at the fourth shift the message bit is lost and has no effect on the output.

4.10.1.3 Dimension of the Code

The dimension of the code is given by n and k. Here k is the number of message bits taken at a time by the encoder and n is the number of encoded output bits for one message bit. Hence the dimension of the code is (n, k), and such an encoder is called an (n, k) convolutional encoder. For example, the encoder of Fig. 4.10.1 has dimension (2, 1).

4.10.2 Time Domain Approach to Analysis of Convolutional Encoder

Let the sequence {g_0^(1), g_1^(1), g_2^(1), ...} denote the impulse response of the adder which generates v1 in Fig. 4.10.1. Similarly, let {g_0^(2), g_1^(2), g_2^(2), ...} denote the impulse response of the adder which generates v2. These impulse responses are also called generator sequences of the code. Let the incoming message sequence be {m0, m1, m2, ...}.
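The generator sequences are simply the encoder's impulse responses, so they can be obtained by running the mod-2 update of equations (4.10.1) and (4.10.2) on a unit impulse. A minimal sketch (the function name `encode` is ours):

```python
def encode(bits):
    """Run the Fig. 4.10.1 encoder: state is (m1, m2), with mod-2 adders
    v1 = m XOR m1 XOR m2 and v2 = m XOR m2."""
    m1 = m2 = 0
    v1_seq, v2_seq = [], []
    for m in bits:
        v1_seq.append(m ^ m1 ^ m2)
        v2_seq.append(m ^ m2)
        m1, m2 = m, m1          # shift register moves right by one position
    return v1_seq, v2_seq

# Feeding the unit impulse 1 0 0 yields the generator sequences.
g1, g2 = encode([1, 0, 0])
print(g1, g2)   # [1, 1, 1] [1, 0, 1]
```

The two printed sequences are exactly the generator sequences g^(1) = (111) and g^(2) = (101) used in the rest of this section.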
The encoder generates the two output sequences v^(1) and v^(2). These are obtained by convolving the generator sequences with the message sequence; hence the name convolutional code. The sequence v^(1) is given as,

v_i^(1) = Σ_{l ≥ 0} g_l^(1) m_{i−l},  i = 0, 1, 2, ... ... (4.10.5)

Here m_{i−l} = 0 for all l > i. Similarly, the sequence v^(2) is given as,

v_i^(2) = Σ_{l ≥ 0} g_l^(2) m_{i−l},  i = 0, 1, 2, ... ... (4.10.6)

Note: all additions in the above equations follow mod-2 addition rules.

As shown in Fig. 4.10.1, the two sequences v^(1) and v^(2) are multiplexed by the output switch. Hence the output sequence is,

{v} = {v_0^(1) v_0^(2) v_1^(1) v_1^(2) v_2^(1) v_2^(2) ...} ... (4.10.7)

Here v^(1) = {v_0^(1) v_1^(1) v_2^(1) ...} and v^(2) = {v_0^(2) v_1^(2) v_2^(2) ...}. Observe that bits from the two sequences are multiplexed alternately in equation (4.10.7). The sequence {v} is the output of the convolutional encoder.

Note: Some authors define the constraint length as the number of output bits influenced by a single message bit, i.e. constraint length = nM bits, where n is the number of encoded output bits for every input bit and M is the number of positions in the shift register. For the encoder of Fig. 4.10.1 this gives constraint length = 2 × 3 = 6 bits.

Example for Understanding

Ex. 4.10.1: For the convolutional encoder of Fig. 4.10.2 determine the following:
i) Dimension of the code
ii) Code rate
iii) Constraint length
iv) Generating sequences (impulse responses)
v) Output sequence for the message sequence m = (10011)

Sol.: In Fig. 4.10.2 observe that the input of flip-flop 1 is the current message bit m, the output of flip-flop 1 is the previous message bit m1, and the output of flip-flop 2 is the previous-to-previous message bit m2.

[Fig. 4.10.2: Convolutional encoder of Ex. 4.10.1]
[Fig. 4.10.3: Convolutional encoder of Fig. 4.10.1 redrawn alternately]

Observe that this encoder is exactly the same as that of Fig. 4.10.1.

i) Dimension of the code: The encoder takes one message bit at a time, hence k = 1. It generates two bits for every message bit, hence n = 2.
Hence, Dimension = (n, k) = (2, 1)

ii) Code rate: The code rate is r = k/n = 1/2.

iii) Constraint length: Every message bit affects the output bits for three successive shifts. Hence, constraint length K = 3 bits.

iv) Generating sequences: From Fig. 4.10.3, v1, i.e. v^(1), is generated by adding all three bits. Hence its generating sequence is,

g^(1) = (1 1 1) ... (4.10.9)

Here g_0^(1) = 1 represents the connection of bit m, g_1^(1) = 1 the connection of bit m1, and g_2^(1) = 1 the connection of bit m2.

v2, i.e. v^(2), is generated by adding the first and last bits. Hence its generating sequence is,

g^(2) = (1 0 1) ... (4.10.10)

Here g_0^(2) = 1 represents the connection of bit m, g_1^(2) = 0 represents that m1 is not connected, and g_2^(2) = 1 represents the connection of bit m2. These sequences are also called impulse responses.

v) To obtain the output sequence: The given message sequence is,

m = (m0 m1 m2 m3 m4) = (1 0 0 1 1)

To obtain the output due to adder 1, from equation (4.10.5) we can write,

v_i^(1) = Σ_l g_l^(1) m_{i−l} ... (4.10.11)

With i = 0: v_0^(1) = g_0^(1) m0 = 1 × 1 = 1
With i = 1: v_1^(1) = g_0^(1) m1 ⊕ g_1^(1) m0 = (1×0) ⊕ (1×1) = 1 (note that the additions are mod-2)
With i = 2: v_2^(1) = g_0^(1) m2 ⊕ g_1^(1) m1 ⊕ g_2^(1) m0 = (1×0) ⊕ (1×0) ⊕ (1×1) = 1
With i = 3: v_3^(1) = g_0^(1) m3 ⊕ g_1^(1) m2 ⊕ g_2^(1) m1 = (1×1) ⊕ (1×0) ⊕ (1×0) = 1
With i = 4: v_4^(1) = g_0^(1) m4 ⊕ g_1^(1) m3 ⊕ g_2^(1) m2 = (1×1) ⊕ (1×1) ⊕ (1×0) = 0
With i = 5: v_5^(1) = g_1^(1) m4 ⊕ g_2^(1) m3 = (1×1) ⊕ (1×1) = 0 (since m5 is not available)
With i = 6: v_6^(1) = g_2^(1) m4 = 1 × 1 = 1 (since m5 and m6 are not available)

Thus the output of adder 1 is,

v^(1) = (1 1 1 1 0 0 1)
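The hand computation above can be cross-checked with a short mod-2 convolution implementing equation (4.10.5) (a sketch; the name `conv_mod2` is ours):

```python
def conv_mod2(g, m):
    """Mod-2 convolution: v_i = XOR over l of g_l * m_(i-l), eq. (4.10.5).
    g and m are lists of 0/1 bits."""
    out = []
    for i in range(len(g) + len(m) - 1):
        bit = 0
        for l, gl in enumerate(g):
            if 0 <= i - l < len(m):
                bit ^= gl & m[i - l]    # mod-2 accumulation
        out.append(bit)
    return out

print(conv_mod2([1, 1, 1], [1, 0, 0, 1, 1]))   # [1, 1, 1, 1, 0, 0, 1]
```

The printed sequence matches v^(1) = (1111001) obtained by hand; the same function with g = (101) reproduces the adder-2 output derived next.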
To obtain the output due to adder 2, similarly from equation (4.10.6),

v_i^(2) = Σ_l g_l^(2) m_{i−l}, with m_{i−l} = 0 for all l > i.

With i = 0: v_0^(2) = g_0^(2) m0 = 1 × 1 = 1
With i = 1: v_1^(2) = g_0^(2) m1 ⊕ g_1^(2) m0 = (1×0) ⊕ (0×1) = 0
With i = 2: v_2^(2) = g_0^(2) m2 ⊕ g_1^(2) m1 ⊕ g_2^(2) m0 = (1×0) ⊕ (0×0) ⊕ (1×1) = 1
With i = 3: v_3^(2) = g_0^(2) m3 ⊕ g_1^(2) m2 ⊕ g_2^(2) m1 = (1×1) ⊕ (0×0) ⊕ (1×0) = 1
With i = 4: v_4^(2) = g_0^(2) m4 ⊕ g_1^(2) m3 ⊕ g_2^(2) m2 = (1×1) ⊕ (0×1) ⊕ (1×0) = 1
With i = 5: v_5^(2) = g_1^(2) m4 ⊕ g_2^(2) m3 = (0×1) ⊕ (1×1) = 1
With i = 6: v_6^(2) = g_2^(2) m4 = 1 × 1 = 1

Thus the sequence v^(2) is,

v^(2) = (1 0 1 1 1 1 1) ... (4.10.12)

The two sequences v^(1) and v^(2) are multiplexed as per equation (4.10.7) to get the final output, i.e.,

v = (11, 10, 11, 11, 01, 01, 11)

4.10.3 Code Tree, Trellis and State Diagram for Convolutional Encoder

Now let us study the operation of the convolutional encoder with the help of the code tree, trellis and state diagram. Consider again the convolutional encoder of Fig. 4.10.1; it is reproduced in Fig. 4.10.4 for convenience.

[Fig. 4.10.4: Convolutional encoder with k = 1 and n = 2]

4.10.3.1 States of the Encoder

Here the previous two successive message bits m1 and m2 represent the state. The input message bit m affects the state of the encoder as well as the outputs v1, v2 during that state. Whenever a new message bit is shifted to m, the contents of m1 and m2 define a new state, and v1, v2 change according to the new state m1 m2 and the message bit m. Let us define the states as shown in Table 4.10.1. Let the initial values of the bits stored in m1 and m2 be zero, i.e. m1 m2 = 00 initially, so the encoder starts in state a.

Table 4.10.1: States of the encoder of Fig. 4.10.4
m1 m2 | State of encoder
0 0 | a
1 0 | b
0 1 | c
1 1 | d

4.10.3.2 Development of the Code Tree

Let us consider the development of the code tree for the message sequence m = 110. Assume m1 m2 = 00 initially.

1) When m = 1, i.e. first bit: The first message input is m = 1. With this input, v1 and v2 are calculated as follows:

v1 = m ⊕ m1 ⊕ m2 = 1 ⊕ 0 ⊕ 0 = 1
v2 = m ⊕ m2 = 1 ⊕ 0 = 1

The values v1 v2 = 11 are transmitted to the output and the register contents are shifted right by one bit position.
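The single-bit update used in this development can be written as a tiny next-state function (states labelled as in Table 4.10.1; a sketch):

```python
STATE_NAME = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}

def step(state, m):
    """One shift of the Fig. 4.10.4 encoder. state = (m1, m2).
    Returns the new state and the transmitted pair (v1, v2)."""
    m1, m2 = state
    v1, v2 = m ^ m1 ^ m2, m ^ m2
    return (m, m1), (v1, v2)

state = (0, 0)                      # start in state 'a'
for m in [1, 1, 0]:                 # the message 110 traced above
    state, (v1, v2) = step(state, m)
    print(m, f"{v1}{v2}", STATE_NAME[state])
# 1 11 b
# 1 01 d
# 0 01 c
```

The three printed rows reproduce steps 1) to 3) of the code-tree development below.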
[Fig. 4.10.5: Code tree from node a — the label on a branch indicates the output transmitted while going from node to node]

Thus the new state of the encoder is m1 m2 = 10, i.e. b, and the outputs transmitted are v1 v2 = 11. This shows that if the encoder is in state a and the input is m = 1, then the next state is b and the outputs are v1 v2 = 11. The first row of Table 4.10.2 illustrates this operation; the last column of that table shows the code tree diagram. The code tree starts at node (state) a, as reproduced in Fig. 4.10.5. Observe that if m = 1 we go downward from node a; otherwise, if m = 0, we go upward. It can be verified that if m = 0 the next node (state) is again a. Since m = 1 here, we go downward toward node b and the output on this branch is 11.

2) When m = 1, i.e. second bit: Now let the second message bit be 1. With the register contents m = 1, m1 = 1, m2 = 0,

v1 = 1 ⊕ 1 ⊕ 0 = 0
v2 = 1 ⊕ 0 = 1

These values v1 v2 = 01 are transmitted to the output and the register contents are shifted right by one bit; the oldest bit is discarded. Thus the new state of the encoder is m1 m2 = 11, i.e. d, and the outputs transmitted are v1 v2 = 01. The encoder goes from state b to state d when the input is 1, with transmitted output v1 v2 = 01. This operation is illustrated in the second row of Table 4.10.2. In the code tree, an upward arrow indicates message bit m = 0 and a downward arrow indicates m = 1.

3) When m = 0, i.e. third bit: Similarly, the third row of Table 4.10.2 illustrates the operation of the encoder for the third input message bit m = 0: the path of the tree goes upward from node d toward node c. Now observe the code tree in the last column.
[Table 4.10.2: Analysis of the convolutional encoder of Fig. 4.10.4 — for each input bit the table lists the current state, the input m, the outputs v1 v2, the new state and the corresponding portion of the code tree; an upward path indicates input m = 0 and a downward path indicates m = 1]

[Fig. 4.10.6: Code tree for the convolutional encoder of Fig. 4.10.4]

Complete code tree for the convolutional encoder: Fig. 4.10.6 shows the code tree for this encoder. The code tree starts at node a. If the input message bit is 1, the path of the tree goes down toward node b and the output is 11. Otherwise, if the input at node a is m = 0, the path goes upward toward node a and the output is 00. Similarly, depending upon the input message bit, the path of the tree goes upward or downward. The nodes are marked with their states a, b, c or d, and the outputs are shown on the paths between nodes. We have verified the part of this code tree for the first three message bits 110.

If you carefully observe the code tree of Fig. 4.10.6, you will find that the branch pattern begins to repeat after the third bit. The repetition starts after the third bit because a particular message bit is stored in the shift register of the encoder for three shifts. If the length of the shift register were increased by one bit, the pattern of the code tree would repeat after the fourth message bit.

4.10.3.3 Code Trellis (Represents Steady-State Transitions)

The code trellis is a more compact representation of the code tree. We know that the code tree has four states (nodes), and every state goes to some other state depending upon the input bit.
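Before drawing the trellis, all steady-state transitions can be enumerated from the same mod-2 update rule, giving the content of the trellis branches (a sketch):

```python
def transitions():
    """List (current state, input m, output v1v2, next state) for the
    Fig. 4.10.1 encoder, i.e. the branches of the code trellis."""
    name = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}
    rows = []
    for (m1, m2), cur in sorted(name.items(), key=lambda kv: kv[1]):
        for m in (0, 1):
            v1, v2 = m ^ m1 ^ m2, m ^ m2
            rows.append((cur, m, f"{v1}{v2}", name[(m, m1)]))
    return rows

for row in transitions():
    print(row)
```

The eight printed rows (two branches per state) are exactly the transitions drawn in the code trellis that follows.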
The trellis represents all such transitions in a single diagram. Fig. 4.10.7 shows the code trellis diagram.

[Fig. 4.10.7: Code trellis of the convolutional encoder of Fig. 4.10.4]

The nodes on the left denote the four possible current states and those on the right the next states. A solid transition line represents input m = 0 and a broken line represents input m = 1. Along each transition line the output v1 v2 during that transition is shown. For example, let the encoder be in current state a. If the input is m = 1, the next state will be b with outputs v1 v2 = 11. Thus the code trellis is a compact representation.

4.10.3.4 State Diagram

If we combine the current and next states, we obtain the state diagram. For example, consider that the encoder is in state a, i.e. 00. If the input is m = 0, the next state is again a (i.e. 00) with outputs v1 v2 = 00; this is shown by the self-loop at node a in the state diagram. If the input is m = 1, the state diagram shows that the next state is b with outputs v1 v2 = 11.

[Fig. 4.10.8: State diagram for the convolutional encoder of Fig. 4.10.4]

Comparison between code tree and trellis diagram: Table 4.10.3 compares the code tree and the trellis diagram as graphical means to represent and decode convolutional codes.

Table 4.10.3: Comparison between code tree and trellis diagram
Sr. No. | Code tree | Trellis diagram
1 | The code tree indicates the flow of the coded signal along the nodes of the tree. | The trellis diagram indicates transitions from current to next states.
2 | The code tree is a lengthy way of representing the coding process. | The code trellis is a shorter, compact way of representing the coding process.
3 | Decoding using the code tree is very simple. | Decoding using the trellis diagram is a little complex.
4 | The code tree repeats after a number of stages, depending on the shift-register length used in the encoder. | The trellis diagram repeats at every stage; in steady state it has only one stage.
5 | The code tree is complex to implement in programming. | The trellis diagram is simpler to implement in programming.

Ex. 4.10.2: A convolutional encoder is defined by three generator polynomials g1(x), g2(x) and g3(x), the highest degree among them being 3.
i) What is the constraint length of this code?
ii) How many states are in the trellis diagram of this code?
iii) What is the code rate of this code?

Sol.: There are three modulo-2 adders, one each for g1(x), g2(x) and g3(x). Hence for every message bit, three output bits are generated, so k = 1 and n = 3.

To obtain the code rate: Code rate = k/n = 1/3.

To obtain the constraint length: The highest degree of g(x) is 3, so each message bit is used during 4 successive shifts, i.e. it can affect the output bits for 4 successive shifts.

∴ Constraint length, K = 4 bits

To obtain the number of states in the trellis diagram: The state is formed by the previous input bits available in the storage locations. Since the constraint length is 4, there are four bit positions, of which the first contains the present input; the remaining 3 bits represent the state.

∴ Number of states = 2³ = 8

4.11 Polynomial Description of Convolutional Codes

In the previous section we observed that a convolution of the generating sequence and the message sequence takes place. These calculations can be simplified by transforming the sequences. Let the impulse responses be represented by polynomials, i.e.,

g^(1)(x) = g_0^(1) + g_1^(1) x + g_2^(1) x² + ... + g_M^(1) x^M ... (4.11.1)

Similarly,

g^(2)(x) = g_0^(2) + g_1^(2) x + g_2^(2) x² + ... + g_M^(2) x^M ... (4.11.2)

Generating polynomials can be written in the same way for the other adders. The variable x in the above equations is the unit-delay operator; its power represents the time delay of the bits in the impulse response. Similarly, we can write the polynomial for the message sequence,

m(x) = m0 + m1 x + m2 x² + ... + m_{L−1} x^{L−1} ... (4.11.3)

where L is the length of the message sequence. The convolution sums are converted to polynomial multiplications in the transform domain, i.e.,

v^(1)(x) = g^(1)(x) · m(x)
v^(2)(x) = g^(2)(x) · m(x) ... (4.11.4)

Here v^(1)(x) and v^(2)(x) are the output polynomials of the sequences v^(1) and v^(2). Note: all additions in the above equations follow mod-2 addition rules.
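Equation (4.11.4) is ordinary polynomial multiplication with coefficient addition done mod-2. A minimal sketch, with polynomials held as coefficient lists (constant term first; the function name is ours):

```python
def gf2_poly_mul(a, b):
    """Multiply two polynomials over GF(2), given as coefficient lists
    with the constant term first; coefficient additions are XOR."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g1 = [1, 1, 1]           # g1(x) = 1 + x + x^2
m  = [1, 0, 0, 1, 1]     # m(x)  = 1 + x^3 + x^4
print(gf2_poly_mul(g1, m))   # [1, 1, 1, 1, 0, 0, 1]
```

The result is the coefficient list of 1 + x + x² + x³ + x⁶, i.e. the same sequence v^(1) = (1111001) obtained by time-domain convolution in Ex. 4.10.1.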
Examples for Understanding

Ex. 4.11.1: Repeat part (v) of example 4.10.1 using transform-domain calculations (polynomial multiplications).

Sol.:
a) To obtain the generating polynomial for adder 1: The first generating sequence is given by equation (4.10.9), i.e. g^(1) = (111). Hence its polynomial (equation (4.11.1)) is,

g^(1)(x) = 1 + 1·x + 1·x² = 1 + x + x² ... (4.11.5)

b) To obtain the generating polynomial for adder 2: The second generating sequence is given by equation (4.10.10), i.e. g^(2) = (101). Hence its polynomial (equation (4.11.2)) is,

g^(2)(x) = 1 + 0·x + 1·x² = 1 + x² ... (4.11.6)

c) To obtain the message polynomial: The message sequence is m = (10011). Hence its polynomial (equation (4.11.3)) is,

m(x) = 1 + 0·x + 0·x² + 1·x³ + 1·x⁴ = 1 + x³ + x⁴ ... (4.11.7)

d) To determine the output due to adder 1: v^(1)(x) is obtained from equation (4.11.4), i.e.,

v^(1)(x) = g^(1)(x) · m(x) = (1 + x + x²)(1 + x³ + x⁴) = 1 + x + x² + x³ + x⁶

The above polynomial can also be written as,

v^(1)(x) = 1 + (1·x) + (1·x²) + (1·x³) + (0·x⁴) + (0·x⁵) + (1·x⁶)

Thus the output sequence v^(1) is,

v^(1) = (1 1 1 1 0 0 1)

e) To determine the output due to adder 2: Similarly,

v^(2)(x) = g^(2)(x) · m(x) = (1 + x²)(1 + x³ + x⁴) = 1 + x² + x³ + x⁴ + x⁵ + x⁶

Thus the output sequence v^(2) is,

v^(2) = (1 0 1 1 1 1 1)

f) To determine the multiplexed output sequence:

v = (11, 10, 11, 11, 01, 01, 11)

Note that very few calculations are involved in the transform domain.

Ex. 4.11.2: Construct a convolutional encoder for the following specifications: rate efficiency = 1/2, constraint length = 4. The connections from the shift registers to the modulo-2 adders are described by the equations g1(x) = 1 + x and g2(x) = x. Determine the output codeword for the input message (1 1 1 0).

Sol.:
• Here g1(x) and g2(x) have only two inputs each. Hence we must interpret the constraint length as nM.
• Rate efficiency = k/n = 1/2, so n = 2.
• Since the constraint length is given as 4, nM = 4, i.e. 2 × M = 4, so M = 2. Thus there are two storage locations.
Fig. 4.11.1 shows the convolutional encoder as per the above requirements.

[Fig. 4.11.1: Convolutional encoder]

In the figure, v1 is generated from g1(x) = 1 + x; hence both m and m1 are connected to its modulo-2 adder. And v2 is generated from g2(x) = x; hence only m1 is connected.

To obtain the output codeword:

g^(1)(x) = 1 + x and g^(2)(x) = x
m = (1 1 1 0), so m(x) = 1 + x + x²
v^(1)(x) = g^(1)(x) · m(x) = (1 + x)(1 + x + x²) = 1 + x³, i.e. v^(1) = (1 0 0 1 0)
v^(2)(x) = g^(2)(x) · m(x) = x(1 + x + x²) = x + x² + x³, i.e. v^(2) = (0 1 1 1 0)

Hence the output sequence after multiplexing v^(1) and v^(2) is,

v = (10, 01, 01, 11, 00)

4.12 Generator Matrix in the Time Domain

1. Convolutional codes can also be generated by multiplying the information sequence by a generator matrix.
2. Let u^(1), u^(2), ..., u^(k) be the information sequences and v^(1), v^(2), ..., v^(n) the output sequences.
3. Arrange the information sequences as

u = (u^(1)_0, u^(2)_0, ..., u^(k)_0, u^(1)_1, u^(2)_1, ..., u^(k)_1, ..., u^(1)_l, ..., u^(k)_l, ...) = (u_0, u_1, ..., u_l, ...)

and the output sequences as

v = (v^(1)_0, v^(2)_0, ..., v^(n)_0, v^(1)_1, v^(2)_1, ..., v^(n)_1, ..., v^(1)_l, ..., v^(n)_l, ...) = (z_0, z_1, ..., z_l, ...)

4. v is called a codeword or code sequence.
5. The relation between v and u is characterized as

v = u · G

where G is the generator matrix of the code.
6. The generator matrix has the band structure

G = | G_0 G_1 ... G_m                 |
    |     G_0 G_1 ... G_m             |
    |         G_0 G_1 ... G_m         |
    |             ...                 |

with k × n submatrices G_l.
7. The elements g^(l)_{i,j} of G_l, for i ∈ [1, k] and j ∈ [1, n], are the impulse response coefficients.

4.13 Decoding Methods of Convolutional Codes

The following methods are used for decoding of convolutional codes: the Viterbi algorithm, sequential decoding and feedback decoding. Let us consider them in detail in the subsequent sections.

4.13.1 Viterbi Algorithm for Decoding of Convolutional Codes (Maximum Likelihood Decoding)

Let us represent the received signal by Y. Convolutional encoding operates continuously on the input data; hence there are no code vectors and blocks as such. Let us assume that the transmission error probability is the same for symbols 1 and 0.
Let us define an integer variable called the metric:

Metric: the discrepancy between the received signal Y and the decoded signal at a particular node. The metric can be accumulated over the nodes of a particular path.

Surviving path: the path of the decoded signal with minimum metric.

In Viterbi decoding a metric is assigned to each surviving path. (The metric of a particular path is obtained by adding the individual metrics of the branches along that path.) Y is decoded as the surviving path with the smallest metric. Consider the following example of Viterbi decoding. Let the received signal be encoded by the encoder of Fig. 4.10.1, for which we obtained the code trellis of Fig. 4.10.7. Let the first six received bits be

Y = 11 01 11

a) Decoding of the first message bit, for Y = 11: Note that for a single input bit the encoder transmits two output bits (v1 v2). These outputs are received at the decoder and represented by Y; thus the Y given above represents the outputs for three successive message bits. Assume that the decoder is at state a0. The code trellis of Fig. 4.10.7 shows that if the current state is a, the next state will be a or b. This is shown in Fig. 4.13.1: two branches emerge from a0, one to the next node a1 representing decoded output 00 (the branch for m = 0), the other to b1 representing decoded output 11 (the branch for m = 1).

[Fig. 4.13.1: Viterbi decoder results for the first message bit]

The branch from a0 to b1 represents decoded output 11, which is the same as the received signal at that node, i.e. 11. There is thus no discrepancy between the received and decoded signals, and the metric of that branch is zero; it is shown in brackets along the branch. The metric of the branch from a0 to a1 is two. The encircled number near a node shows the cumulative path metric reaching that node.
b) Decoding of the second message bit, for Y = 01: When the next pair of bits Y = 01 is received, then from nodes a1 and b1 the four possible next states a2, b2, c2 and d2 are reached. Fig. 4.13.2 shows all these branches, their decoded outputs and the branch metrics corresponding to those outputs. The encircled numbers near a2, b2, c2 and d2 indicate the path metrics of the paths arriving there. For example, the path metric of path a0 a1 a2 is three, and the path metric of path a0 b1 d2 is zero.

[Fig. 4.13.2: Viterbi decoder results for the second message bit]

c) Decoding of the third message bit, for Y = 11: Fig. 4.13.3 shows the trellis diagram for all six bits of Y, with the path metrics of the nodes marked on the right-hand side at the end of the sixth bit. Two paths arrive at node d3, one with metric 5 and the other with metric 2. Similarly, two paths arrive at each of the other nodes. According to Viterbi decoding, only the path with the lower metric is retained at each node. As shown in Fig. 4.13.3, the paths marked with × (cross) are cancelled because they have higher metrics than the other path arriving at that node. The four paths with lower metrics are stored in the decoder, and decoding continues with the next received bits.

[Fig. 4.13.3: Paths and their metrics for Viterbi decoding]

d) Further explanation of Viterbi decoding for 12 message bits: Fig. 4.13.4 shows the continuation of Fig. 4.13.3 for a 12-bit message. In this figure the received bits Y are marked at the top, the decoded value of the output, i.e. Y + E, is marked at the bottom, and the decoded message signal is also marked. Only the path with the lower metric is kept at each node; if two paths have the same metric, either one of them is continued. Observe that at node a12 only one path arrives, with metric two. This path is shown by a thick line. Since it has the lowest metric it is the surviving path, and Y is decoded from it.
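The add-compare-select procedure described above can be sketched in a few lines for this rate-1/2 encoder, using the hard-decision Hamming metric; the trellis branches are derived from the Fig. 4.10.1 equations (all function names are ours):

```python
def branch(state, m):
    """One trellis branch: state = (m1, m2); returns next state and (v1, v2)."""
    m1, m2 = state
    return (m, m1), (m ^ m1 ^ m2, m ^ m2)

def encode(bits):
    s, out = (0, 0), []
    for m in bits:
        s, v = branch(s, m)
        out.append(v)
    return out

def viterbi(pairs):
    """Hard-decision Viterbi decode of a list of received (y1, y2) pairs,
    keeping one surviving path (minimum metric) per state."""
    INF = float('inf')
    metric, paths = {(0, 0): 0}, {(0, 0): []}
    for y1, y2 in pairs:
        new_metric, new_paths = {}, {}
        for s in metric:
            for m in (0, 1):
                nxt, (v1, v2) = branch(s, m)
                cand = metric[s] + (v1 != y1) + (v2 != y2)  # branch discrepancy
                if cand < new_metric.get(nxt, INF):         # compare-select
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[s] + [m]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

# Encode a message, flip one received bit, and decode.
rx = encode([1, 1, 0, 0, 0])
rx[1] = (1 ^ rx[1][0], rx[1][1])    # introduce a single channel error
print(viterbi(rx))                  # [1, 1, 0, 0, 0]
```

Despite the flipped bit, the surviving path with the smallest metric recovers the original message, which is exactly the maximum-likelihood behaviour described above.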
All the decoded output values are taken from the outputs along this path. Wherever this path follows a broken (dotted) branch it indicates message bit m = 1, and wherever it follows a continuous branch it indicates m = 0 between the two nodes. This completes the explanation of Viterbi decoding; the method used is called maximum likelihood decoding.

e) Surviving paths: During decoding, the Viterbi decoder has to store one surviving path per node:

Surviving paths = 2^((K−1)k) ... (4.13.1)

Here K is the constraint length and k is the number of message bits taken at a time. For the encoder of Fig. 4.10.1, K = 3 and k = 1,

∴ Surviving paths = 2^((3−1)×1) = 4

Thus the Viterbi decoder always has to store four surviving paths. If the number of message bits to be decoded is very large, the storage requirement is also large, since the decoder has to store multiple (in the present example four) paths. To reduce this storage, the metric divergence effect is used.

f) Metric divergence effect: For two surviving paths originating from the same node, the running metric of the less likely path tends to increase more rapidly than that of the other path within about 5(K−1) branches from the common node. This is called the metric divergence effect. For example, consider the two paths coming from node b1 in Fig. 4.13.2. One path arrives at a5 and the other at d5. The path at a5 is less likely and hence its metric is higher than that of the path at d5. Hence at node d5 only the survivor path is selected and the message bits are decoded; fresh paths are started from d5. Because of this, the memory storage is reduced, since the complete path need not be stored.

[Fig. 4.13.4: Viterbi decoding — the maximum likelihood path]

4.13.2 Sequential Decoding for Convolutional Codes

Sequential decoding uses the metric divergence effect. Fig. 4.13.5(a) shows the code trellis for the convolutional encoder of Fig. 4.10.4 (the same code trellis seen in the last subsection). The following are the important points about sequential decoding:

1) Decoding starts at a0.
The decoder follows a single path by taking the branch with the smallest metric. For example, as shown in Fig. 4.13.5(a), the path for the first three nodes is a0 b1 d2, since its metric is the lowest.

2) If there are two or more branches from the same node with the same metric, the decoder selects any one branch and continues decoding.

3) If the selected path is then found to be unlikely, its metric increasing rapidly, the decoder cancels that path, goes back to that node and selects the other path emerging from it. For example, observe in Fig. 4.13.5(a) that two branches with the same metric emerge from node d2. One of them (the path marked B) reaches metric 3 at a5; the decoder therefore drops this path and follows the other path.

4) The decision about dropping a path is based on the expected value of the running metric at a given node. The expected running metric at the j-th node is,

Running metric = j n α ... (4.13.2)

where j is the node at which the metric is calculated, n is the number of encoded output bits for one message bit, and α is the transmission error probability per bit. The sequential decoder abandons a path whenever its running metric exceeds (j n α + Δ), where Δ is the allowed threshold above j n α at the j-th node. Fig. 4.13.5(b) shows the running metric plotted against the node number. The two dotted lines show the range of the threshold Δ above j n α. Observe that since the metric of path B exceeds the threshold at the 5th node, the path is abandoned and the decoder starts from node 2 again. Similarly, path A is also abandoned.

5) If the running metric of every path goes out of the threshold limits, the value of the threshold Δ is increased and the decoder tries again. In Fig. 4.13.5(b) the value of α = 1/16, and for the encoder of Fig. 4.10.1 we know that n = 2. Let us calculate j n α at the 8th node: j n α = 8 × 2 × 1/16 = 1, and the value of Δ = 2.
Therefore the threshold at the 8th node will be,

Threshold = j n α + Δ = 1 + 2 = 3

Similarly, the thresholds at other nodes can be calculated.

[Fig. 4.13.5: Sequential decoding]

The computations involved in sequential decoding are fewer than in Viterbi decoding, but the backtracking in sequential decoding is complex, and the output error probability is higher in the case of sequential decoding. Both Viterbi decoding and sequential decoding can be implemented efficiently with the help of computer software.

4.13.3 Free Distance and Coding Gain

For block and cyclic codes we have seen that the error correction or detection power depends upon the minimum distance between the code vectors. A convolutional encoder does not divide the encoded output into separate code vectors; the complete transmitted sequence can be considered a single code vector. Let X represent the transmitted sequence. We know that the minimum distance between code vectors is equal to the minimum weight of the code vectors. The free distance is defined as the minimum distance between code vectors, and since this equals the minimum weight we can write,

Free distance (df) = minimum distance between code vectors
= minimum weight of the code vectors, i.e.,

df = [w(X)]min, for non-zero X ... (4.13.3)

Here [w(X)]min is the minimum weight of a code vector. For convolutional coding the free distance df represents the error control power.

Coding gain: Coding gain is used as a basis of comparison for different coding methods. To achieve the same bit error rate, the coding gain is defined as,

A = (Eb/N0)uncoded / (Eb/N0)coded ... (4.13.4)

For convolutional coding the coding gain is bounded as,

A ≤ r df / 2 ... (4.13.5)

where r is the code rate and df is the free distance.

Examples for Understanding

Ex. 4.13.1: The figure below depicts a rate 1/2, constraint length K = 2 convolutional code encoder. Sketch the code tree for the same.

[Fig. 4.13.6: Convolutional encoder with input binary sequence, stages s1, s2 and outputs v1, v2]

Sol.:
a) Define the states of the given convolutional encoder: The constraint length is K = 2. The rate of 1/2 means that for a single message bit input, two bits v1 and v2 are encoded at the output. Here s1 holds the input message bit and s2 stores the previous message bit. Since only one previous message bit is stored, this encoder can have two states depending upon this stored bit. Let us represent,

s2 = 0 : state a, and s2 = 1 : state b

b) Outputs of the encoder: Let us assume that the contents of s1 and s2 are zero initially. Reading the mod-2 adder connections of Fig. 4.13.6 as v1 = s1 and v2 = s1 ⊕ s2, the outputs are,

v1 = s1
v2 = s1 ⊕ s2 ... (4.13.6)

c) Prepare the state diagram: Before drawing the code tree, we first prepare the state diagram for this encoder; the state diagram is a compact version of the code tree. Table 4.13.1 shows the present and next states corresponding to the different inputs of the message signal. The first row of the table shows that if the present state of the encoder (defined by s2) is a and the input is 0, the outputs are v1 v2 = 00 and the next state remains a. The second row shows that in present state a, if the input is 1 then the contents are s1 s2 = 10, the outputs are v1 v2 = 11 and the next state of the encoder is b. Similarly, the other two rows show how the encoder operates when its state is b.

[Table 4.13.1: Operation of the encoder of Fig. 4.13.6]

Based on the results of Table 4.13.1, we can draw the state diagram shown in Fig. 4.13.7.

[Fig. 4.13.7: State diagram for the encoder of Fig. 4.13.6 — a continuous line represents input 0 and a dotted line represents input 1]

As shown in the state diagram, the encoder remains in state a if the input is zero. The arrow on a line shows the transition toward the next state, and the numbers marked on a line represent the outputs v1 v2 during that transition.
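Using the same reading of the adder connections (v1 = s1, v2 = s1 ⊕ s2, which is our assumption about Fig. 4.13.6, consistent with the a-to-b transition 11 in Table 4.13.1), the two-state behaviour can be traced with a tiny sketch:

```python
def step_k2(s2, m):
    """One shift of the assumed K = 2 encoder: v1 = s1, v2 = s1 XOR s2,
    where s1 is the current bit m and s2 the previously stored bit.
    (The connection reading is an assumption, not taken from the figure.)"""
    v1, v2 = m, m ^ s2
    return m, (v1, v2)          # new stored bit, transmitted outputs

s2, trace = 0, []               # start in state 'a' (s2 = 0)
for m in [1, 0, 1]:
    s2, v = step_k2(s2, m)
    trace.append(v)
print(trace)   # [(1, 1), (0, 1), (1, 1)]
```

The trace shows the a-to-b transition producing 11, then b-to-a producing 01, matching the state-diagram construction above.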
In the above diagram, continuous lines are used when the input message bit is '0' and dotted lines are used when the message bit is '1'. The arrow on each line shows the transition towards the next state. The numbers marked on a line represent the outputs v1 v2 during that transition.

d) To obtain the code tree

The code tree derived from the state diagram of Fig. 4.13.7 is shown in Fig. 4.13.8. Observe that the branch pattern of the code tree repeats after two successive message bits. This is because any message bit remains stored in the encoder register for two successive shifts.

Fig. 4.13.8 Code tree for encoder of Fig. 4.13.6

Ex. 4.13.2 : Draw the trellis diagram for the following encoder.

Fig. 4.13.9

Sol. :

Step 1 : Determine specifications

Number of input message bits taken at a time, k = 1. For one input bit there are two output bits, n = 2.

    Code rate, R = k/n = 1/2

Here the output is influenced by three successive bits, hence the constraint length K = 3. The encoder can be redrawn as in Fig. 4.13.9(a).

Fig. 4.13.9(a) Convolutional encoder drawn alternately

Step 2 : Logic table

    Sr. No.   Current state   Input m   Outputs v1 v2   Next state
    1           a = 00           0           00           00, i.e. a
                                 1           10           01, i.e. b
    2           b = 01           0           11           10, i.e. c
                                 1           01           11, i.e. d
    3           c = 10           0           01           00, i.e. a
                                 1           11           01, i.e. b
    4           d = 11           0           10           10, i.e. c
                                 1           00           11, i.e. d

Table 4.13.2 Logic table of encoder in Fig. 4.13.9

Step 3 : To obtain the trellis diagram

Fig. 4.13.9(b) Trellis diagram

Ex. 4.13.3 : For the convolutional encoder arrangement shown in Fig. 4.13.10, draw the state diagram and hence the trellis diagram. Determine the output digit sequence for the data digits 11 01 01 00. What are the dimensions of the code (n, k) and the constraint length ? Use Viterbi's algorithm to decode the sequence 100 110 111 101 001 101 001 010.

Fig. 4.13.10 Encoder of Ex. 4.13.3

Sol. :

i) To obtain the dimensions of the code : Observe that one message bit is taken at a time in the encoder of Fig. 4.13.10. Hence k = 1. There are three output bits for every message bit.
Hence n = 3. Therefore the dimension of the code is (n, k) = (3, 1).

Constraint length : Every message bit affects three output bits. Hence the constraint length is K = 3 bits.

ii) To determine the output sequence

a) Determine generator polynomials

The generating sequence for v(1) can be written from Fig. 4.13.10 as,

    g(1) = (1, 0, 0)    since only m is connected.

Similarly, the generating sequence for v(2) is,

    g(2) = (1, 0, 1)    since m and m2 are connected.

And the generating sequence for v(3) is,

    g(3) = (1, 1, 0)    since m and m1 are connected.

Hence the corresponding generator polynomials can be written as,

    g(1)(x) = 1
    g(2)(x) = 1 + x^2
    g(3)(x) = 1 + x

b) Determine the message polynomial

The given message sequence is m = (1 1 0 1 0 1 0 0). Hence the message polynomial is,

    m(x) = 1 + x + x^3 + x^5

c) Obtain output for g(1)

The first sequence v(1) is given as,

    v(1)(x) = g(1)(x) · m(x) = 1 + x + x^3 + x^5

Hence the corresponding sequence is v(1) = (1 1 0 1 0 1 0 0).

d) Obtain output for g(2)

The second sequence v(2) is given as,

    v(2)(x) = g(2)(x) · m(x) = (1 + x^2)(1 + x + x^3 + x^5) = 1 + x + x^2 + x^7

Hence the corresponding sequence is v(2) = (1 1 1 0 0 0 0 1).

e) Obtain output for g(3)

The third sequence v(3) is given as,

    v(3)(x) = g(3)(x) · m(x) = (1 + x)(1 + x + x^3 + x^5) = 1 + x^2 + x^3 + x^4 + x^5 + x^6

Hence the corresponding sequence is v(3) = (1 0 1 1 1 1 1 0).

iii) To obtain the code trellis and state diagram

Fig. 4.13.11 shows the code trellis of the given encoder. The nodes in the code trellis can be combined to form the state diagram as shown in Fig. 4.13.12.

Fig. 4.13.11 Code trellis of encoder of Fig. 4.13.10

Fig. 4.13.12 State diagram of the encoder of Fig. 4.13.10

iv) To multiplex the three output sequences

The three sequences v(1), v(2) and v(3) are made equal in length, i.e. 8 bits, by appending zeros where necessary. These sequences are,

    v(1) = (1 1 0 1 0 1 0 0)
    v(2) = (1 1 1 0 0 0 0 1)
    v(3) = (1 0 1 1 1 1 1 0)

The bits from the above three sequences are multiplexed to give the output sequence, i.e.
    {v} = (111 110 011 101 001 101 001 010)

v) Viterbi algorithm to decode the given sequence

Fig. 4.13.13 (see next page) shows the diagram based on Viterbi decoding. It shows the received sequence at the top. The decoded (Y + E) sequence and the decoded message sequence are shown at the bottom. The dark line shows the maximum-likelihood path; it has the lowest running metric, i.e. 3. Other paths are also shown for reference. At any point only four paths are retained. The decoded message sequence is,

    m = (1 1 0 1 0 1 0 0)

Ex. 4.13.4 : A convolutional encoder has a single shift register with two stages, three modulo-2 adders and an output multiplexer. The following generator sequences are combined by the multiplexer to produce the encoder output :

    g1 = (1, 0, 1) ;  g2 = (1, 1, 0) ;  g3 = (1, 1, 1)

i) Draw the block diagram of the encoder.
ii) For the message sequence (1 0 0 1 1), determine the encoded sequence.
iii) If the above hardware is enhanced by increasing the number of stages in the shift register and the number of mod-2 adders respectively, what is the effect on a) the generated output sequence and b) the periodicity of the code tree ?

Sol. :

i) To obtain the block diagram of the encoder

The shift register has two stages, but every output g1, g2 and g3 combines three inputs. Fig. 4.13.14 shows the encoder.

Fig. 4.13.14 Block diagram of the convolutional encoder of Ex. 4.13.4

ii) To obtain the output sequence for m = (1 0 0 1 1)

a) Obtain generator polynomials

The polynomials of g1, g2 and g3 can be written as,

    g1 = (1 0 1)  ⇒  g1(x) = 1 + x^2
    g2 = (1 1 0)  ⇒  g2(x) = 1 + x
    g3 = (1 1 1)  ⇒  g3(x) = 1 + x + x^2

b) Obtain the message polynomial

The message polynomial becomes,

    m = (1 0 0 1 1)  ⇒  m(x) = 1 + x^3 + x^4

c) Output sequence due to g1

    v1(x) = g1(x) · m(x) = (1 + x^2)(1 + x^3 + x^4) = 1 + x^2 + x^3 + x^4 + x^5 + x^6

Hence v1 = (1 0 1 1 1 1 1).

d) Output sequence due to g2

    v2(x) = g2(x) · m(x) = (1 + x)(1 + x^3 + x^4) = 1 + x + x^3 + x^5

Hence v2 = (1 1 0 1 0 1).

e) Output sequence due to g3
    v3(x) = g3(x) · m(x) = (1 + x + x^2)(1 + x^3 + x^4) = 1 + x + x^2 + x^3 + x^6

Hence v3 = (1 1 1 1 0 0 1).

Fig. 4.13.13 Viterbi decoding for example 4.13.3

f) Multiplexing the sequences due to g1, g2 and g3

The multiplexer multiplexes the bits of v1, v2 and v3 as follows :

    Output sequence = (111 011 101 111 100 110 101)

Note that v2 contains only 6 output digits; hence its 7th output digit is assumed zero in the above multiplexed sequence.

iii) If the hardware is enhanced by adding shift register stages and adders

a) Effect on the generated output sequence : For each input message bit, three output bits are generated (see Fig. 4.13.14), because there are three mod-2 adders in the encoder. If the number of mod-2 adders is increased, then the number of output bits generated for every message bit increases, and therefore the length of the coded sequence increases.

b) Effect on the periodicity of the code tree : The periodicity of the code tree is related to the number of stages in the shift register. In Fig. 4.13.14, observe that three message bits are present in the shift register at any time. Hence the code tree becomes periodic after the fourth message bit. If the number of stages is increased, then the period of the code tree also increases.

Ex. 4.13.5 : A rate 1/3 convolutional encoder has generating vectors g1 = (1 0 0), g2 = (1 0 1) and g3 = (1 1 0). Sketch the encoder. Use the Viterbi algorithm to decode 100 110 111 101 001 101 001 101 001 010.

Sol. :

i) To obtain the encoder diagram

Rate R = 1/3, k = 1 and n = 3. There will be a three-stage register containing m, m1 and m2 :

    first output v1,  since g1 = (100), v1 = m
    second output v2, since g2 = (101), v2 = m ⊕ m2
    third output v3,  since g3 = (110), v3 = m ⊕ m1

Fig. 4.13.15 shows the encoder diagram based on the above description.

ii) To obtain the state table

    Sr. No.   Current state (m1 m2)   Input m   Outputs v1 v2 v3   Next state
    1           a = 00                   0            000            00, i.e. a
                                         1            111            10, i.e. b
    2           b = 10                   0            001            01, i.e. c
                                         1            110            11, i.e. d
    3           c = 01                   0            010            00, i.e. a
                                         1            101            10, i.e. b
    4           d = 11                   0            011            01, i.e. c
                                         1            100            11, i.e. d

Table 4.13.3 Logic table

iii) To obtain the decoded sequence
using the Viterbi algorithm :

Fig. 4.13.16 shows the diagram based on Viterbi decoding. It shows the received sequence at the top. The decoded (Y + E) sequence and the decoded message sequence are shown at the bottom. The dark line shows the maximum-likelihood path; it has the lowest running metric, i.e. 3. Other paths are also shown for reference. At any point only four paths are retained. The decoded message sequence is,

    m = (1 1 0 1 0 1 0 0)

Fig. 4.13.16 Viterbi decoding for Ex. 4.13.5

Ex. 4.13.6 : Draw the convolutional encoder with O1 = 110 and O2 = 011, draw the state table and use the Viterbi algorithm to decode the coded sequence.

Ans. :

i) To obtain the encoder diagram

Rate R = 1/2, k = 1 and n = 2. There will be a three-stage register containing m, m1 and m2 :

    first output v1,  since O1 = 110, v1 = m ⊕ m1
    second output v2, since O2 = 011, v2 = m1 ⊕ m2

Fig. 4.13.17(a) shows the encoder diagram based on the above discussion.

Fig. 4.13.17 Encoder of Ex. 4.13.6

ii) State table :

    Sr. No.   Current state (m1 m2)   Input m   Outputs v1 v2   Next state
    1           a = 00                   0           00            00, i.e. a
                                         1           10            10, i.e. b
    2           b = 10                   0           11            01, i.e. c
                                         1           01            11, i.e. d
    3           c = 01                   0           01            00, i.e. a
                                         1           11            10, i.e. b
    4           d = 11                   0           10            01, i.e. c
                                         1           00            11, i.e. d

Table 4.13.4 State table of Ex. 4.13.6

iii) To obtain the decoded output :

Using the Viterbi algorithm, the decoded output is obtained as shown in Fig. 4.13.18 (see next page).

Ex. 4.13.7 : For the following convolutional encoder, find the coded output if the input message is 10110000.

Fig. 4.13.19

Sol. :

1) Specifications : The convolutional encoder has two parallel inputs, therefore k = 2. It has three outputs, therefore n = 3.

    Code rate, R = k/n = 2/3

There are two shift registers, each having a constraint length of two, therefore K = 2 + 2 = 4. The specification of the code is thus (n, k) = (3, 2) with K = 4.

2) Outputs : From the encoder diagram, the outputs v1, v2 and v3 are mod-2 combinations of the current input pair and the bits stored in the two shift registers.

3) Logic table : The encoder can be in any one of the four states 00, 01, 10 and 11. From each state there are four outgoing paths, one for each possible input pair. The logic table is shown in Table 4.13.5.
For each of the four current states and each of the four input pairs, Table 4.13.5 gives the three output bits v1 v2 v3 and the next state.

Table 4.13.5 Logic table

4) The state diagram : It is as shown below.

Fig. 4.13.20 State diagram

Note : A label such as 00/000 on a line means 00 is the input pair and 000 is the corresponding output.

5) Encoding of the sequence : Input 10110000

    Input pair    Output
       10           100
       11           110
       00           101
       00           000

Therefore the output is 100 110 101 000.

Review Question

1. Explain Viterbi decoding and its uses.

4.14 Transfer Function of the Convolutional Code

The state diagram of the convolutional code gives information about the distance properties and error rate performance of the code. Consider the state diagram shown in Fig. 4.14.1. We will use this state diagram to obtain the distance properties of the convolutional code. Let us label the branches of this state diagram as D^0, D, D^2 or D^3. Here the exponent of D represents the number of 1s in the output corresponding to that branch. Thus the exponent of D is equivalent to the Hamming distance of that branch output with respect to the all-zero output. The reorganized state diagram with labels D, D^2 and D^3 is shown in Fig. 4.14.2.

Fig. 4.14.1 The state diagram of a rate 1/3, K = 3 convolutional code

Fig. 4.14.2 State diagram of Fig. 4.14.1 with distance labels on the branches

The self loop at node 'a' has an all-zero output, i.e. D^0 = 1. Hence it is not shown in Fig. 4.14.2. This self loop with all-zero output does not contribute to the distance properties of a code sequence relative to the all-zero code sequence. The node 'a' is split into two nodes : one node is called 'a' only and acts as the input node, and the other node is 'e' and acts as the output node of the state diagram (see Fig. 4.14.2).
Along the branches between the nodes, D with the proper exponent is written. For example, there is one branch from 'a' to 'c' with output 111; hence D^3 is written along this branch. Similarly, the branch from 'b' to 'a' has output 011 in Fig. 4.14.1. This is shown as a branch from 'b' to 'e' (the output node obtained by splitting 'a') with label D^2, as shown in Fig. 4.14.2. All the other branches are labeled in Fig. 4.14.2 in the same way.

There are five nodes in Fig. 4.14.2. The four state equations can be written for the state diagram of Fig. 4.14.2 as follows :

    Xc = D^3 Xa + D Xb
    Xb = D Xc + D Xd
    Xd = D^2 Xc + D^2 Xd
    Xe = D^2 Xb                                                   ... (4.14.1)

The state equation for a node is written from the branches incident upon that node; for example, the equation for Xb is written from the branches incident from nodes 'c' and 'd'. The transfer function of the code is defined as,

    T(D) = Xe / Xa                                                ... (4.14.2)

On solving the state equations of equation (4.14.1), the above transfer function becomes,

    T(D) = D^6 / (1 - 2D^2) = D^6 + 2D^8 + 4D^10 + ...            ... (4.14.3)

The first term of the transfer function is D^6. It means there is a single path of Hamming distance d = 6 between nodes 'a' and 'e'. As shown in Fig. 4.14.2, this path is a c b e. The second term in the above equation is 2D^8. It means there are two paths of distance d = 8 between nodes 'a' and 'e' in Fig. 4.14.2; these two paths are a c d b e and a c b c b e. Actually, a path from node 'a' to 'e' means a path starting from node 'a' and coming back to node 'a', since 'e' is the split copy of 'a'. The distances of the various paths obtained above are the Hamming distances from the all-zero output path that remains at node 'a'. Thus the distance properties of the convolutional code can be obtained from the transfer function.

The minimum distance of the code is called the minimum free distance. It is denoted by dfree. In this example dfree = 6.

University Questions with Answers

Q.1 Consider the decoding of the (15, 5) error correcting BCH code with generator polynomial g(x) having α, α^2, α^3, α^4, α^5, α^6 as roots.
The roots α, α^2 and α^4 have the same minimum polynomial,

    φ1(X) = φ2(X) = φ4(X) = 1 + X + X^4

The roots α^3 and α^6 have the same minimum polynomial,

    φ3(X) = φ6(X) = 1 + X + X^2 + X^3 + X^4

The minimum polynomial of α^5 is,

    φ5(X) = 1 + X + X^2

i) Find g(x) as LCM {φ1(X), φ3(X), φ5(X)}.

ii) Let the received word be (0 0 0 1 0 1 0 0 0 0 0 0 1 0 0), that is r(x) = x^3 + x^5 + x^12. Find the syndrome components.

iii) If the iterative procedure gives the error location polynomial 1 + X + α^5 X^3, what are the error location numbers and the error pattern e(x) ? (Refer sections 4.2 and 4.3.)

Q.2 A (15, 5) BCH triple error correcting code has the generator polynomial

    g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10

Find the corrected code word if the received code word is x^7 + x^2. Take the primitive polynomial x^4 + x + 1. (Refer section 4.3.)

Q.3 Design a (15, 11) RS code for the given message polynomial. (Refer section 4.5.)

Q.4 Explain the advantages of an RS code over a BCH code. (Refer Ex. 4.5.3.)

Q.5 Design the generator polynomial for a BCH triple error correcting code and find the transmitted code vector for the message bits 110110. Take the primitive polynomial x^4 + x + 1. (Refer section 4.2.)

Q.6 Design a (7, 3) RS double error correcting code using GF(8) with primitive polynomial x^3 + x + 1, and find the systematic code for the message (011 001 110). (Refer section 4.5.)
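The generator polynomial in the BCH questions above can be verified by multiplying the three minimal polynomials over GF(2): since the φi are pairwise coprime, their product equals LCM{φ1, φ3, φ5}, which is how g(x) is defined in part i). A small sketch, with polynomials represented as integers whose bit i holds the coefficient of X^i (the helper name `gf2_mul` is illustrative only):

```python
def gf2_mul(a, b):
    """Carry-less (mod-2) polynomial multiplication.

    a and b are integers; bit i is the coefficient of X^i.
    """
    result = 0
    while b:
        if b & 1:
            result ^= a      # add (XOR) a shifted copy of a
        a <<= 1
        b >>= 1
    return result

# Minimal polynomials of the (15, 5) BCH code from the question:
phi1 = 0b10011           # 1 + X + X^4
phi3 = 0b11111           # 1 + X + X^2 + X^3 + X^4
phi5 = 0b111             # 1 + X + X^2

g = gf2_mul(gf2_mul(phi1, phi3), phi5)
print(bin(g))            # 0b10100110111, i.e. g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10
assert g.bit_length() - 1 == 15 - 5   # degree n - k = 10, as required
```

The same carry-less multiplication also performs the v(x) = g(x)·m(x) products used in the convolutional encoding examples of section 4.13.
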
