Performance Comparison of LDPC Codes and Turbo Codes
1. Introduction
Low-Density Parity-Check (LDPC) codes were introduced by Gallager [3]. It has been shown that LDPC codes can achieve performance close to the channel capacity at low complexity when iterative decoding is used. Recently, LDPC codes have drawn increasing attention due to their superior error-correction capability and low complexity [4]. Furthermore, they are well suited to implementations that make heavy use of parallelism. Gallager considered only regular LDPC codes, whose parity check matrix has a fixed number of 1s in each row and a fixed number of 1s in each column. It has been shown that the performance of LDPC codes can be improved using an irregular scheme [2]. Turbo codes were first introduced in 1993 by Berrou, Glavieux, and Thitimajshima, and reported in [5, 6], where a scheme is described that achieves a bit-error probability of 10^-5 using a rate
1/2 code over an additive white Gaussian noise (AWGN) channel with BPSK modulation at an Eb/N0 of 0.7 dB. The codes are constructed by using two or more component codes on different interleaved versions of the same information sequence. This paper compares the error performance of LDPC codes with message-passing decoding against Turbo codes for various receive diversity orders. The remainder of the paper is organised as follows. Section II gives an overview of LDPC codes, including their basic representations and types. Section III describes the encoding of LDPC codes, and section IV describes the various decoding schemes for LDPC codes. Section V presents the concepts of Turbo codes. Experimental results comparing the error performance of LDPC codes with Turbo codes are given in section VI. Finally, conclusions and further work are given in section VII.
2. Overview of LDPC Codes
2.1.1. Matrix Representation
An LDPC code is defined by a sparse parity check matrix. An example parity check matrix H for an (8, 4) code is

H = | 0 1 0 1 1 0 0 1 |
    | 1 1 1 0 0 1 0 0 |
    | 0 0 1 0 0 1 1 1 |
    | 1 0 0 1 1 0 1 0 |                         (1)
2.1.2. Graphical Representation
Tanner introduced an effective graphical representation for LDPC codes. Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm.
Figure 1: Tanner graph corresponding to the parity check matrix in equation (1).
Tanner graphs are bipartite graphs: the nodes of the graph are separated into two distinct sets, and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes). Figure 1 is an example of such a Tanner graph and represents the same code as the matrix in equation (1). The creation of such a graph is rather straightforward. It consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword). Check node fi is connected to variable node cj if the element hij of H is a 1.

2.2. Regular and Irregular LDPC Codes
An LDPC code is called regular if wc is constant for every column and wr = wc (n/m) is also constant for every row. The example matrix from equation (1) is regular with wc = 2 and wr = 4. The regularity of this code is also visible in the graphical representation: every v-node has the same number of incident edges, as does every c-node. If H is low density but the number of 1s in each row or column is not constant, the code is called an irregular LDPC code.

2.3. Constructing LDPC Codes
Several different algorithms exist for constructing suitable LDPC codes. Gallager himself introduced one, and MacKay proposed a method to semi-randomly generate sparse parity check matrices. This is quite interesting, since it indicates that constructing well-performing LDPC codes is not a hard problem; in fact, completely randomly chosen codes are good with high probability. The problem that arises is that the encoding complexity of such codes is usually rather high.
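As a small illustration of the regularity condition just described, the following sketch checks the column weight wc and row weight wr of a parity check matrix. The 4x8 example matrix is an assumption of ours for illustration; any H with constant column weight and constant row weight wr = wc (n/m) passes the test.

```python
# Sketch: check the regularity condition for a parity check matrix.
# The example matrix below is illustrative, chosen so that w_c = 2
# and w_r = 4 as in the regular example discussed in the text.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
m, n = len(H), len(H[0])
col_weights = [sum(row[j] for row in H) for j in range(n)]  # w_c per column
row_weights = [sum(row) for row in H]                       # w_r per row
regular = len(set(col_weights)) == 1 and len(set(row_weights)) == 1
print(regular, col_weights[0], row_weights[0])  # True 2 4
```

Note that wr = wc (n/m) holds automatically here: 2 * (8/4) = 4.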
3. LDPC Encoding
Low-density parity-check (LDPC) codes have been adopted by high-speed communication systems due to their near-Shannon-limit error-correcting capability. In order to achieve the desired bit error rate (BER), longer LDPC codes with higher code rate are preferred in practice. As in the case of block codes, we define a generator matrix G and a parity check matrix H. In order to obtain a systematic LDPC code, G must be of the form

G = [ I_k  P ]                                  (2)

where I_k is an identity matrix and P defines the parity bits. In some cases a code may be specified by only the H matrix, and it becomes necessary to solve for the G matrix. The H matrix is often in an arbitrary format and must be converted into the echelon canonical form

H = [ P^T  I_{n-k} ]                            (3)

where I_{n-k} is an identity matrix [7]. Typically, encoding consists of using the G matrix to compute the parity bits, and decoding consists of using the H matrix with soft-decision decoding. This conversion can be accomplished with the assistance of a computer program; afterwards, the G matrix can be obtained by inspection.

In the encoding stage, the main task is identifying the positions of the fixed bits. In a systematic LDPC code the transmitted codeword contains the message word unchanged, so some codeword bits can be fixed in the encoder's codeword. Consider a binary LDPC code: H = [h_{i,j}]_{M x N} is the check matrix, C = {c_1, c_2, ..., c_N} is the encoder's codeword, and the set of fixed bits is S = {c_{i_1}, c_{i_2}, ..., c_{i_j}, ..., c_{i_tp}}, 0 <= i_j <= N (S is chosen randomly), with each fixed bit set to c_{i_j} = 0 or c_{i_j} = 1. The encoding method is then
(4)
That is to say, we do not translate the messages of the variable nodes and check nodes in the positions c_{i_j} in S; we only translate the fixed value 0 or 1. After this step, both the encoder and the decoder know the values of the fixed bits, and the decoder can use these known values to improve the accuracy of decoding.
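The systematic construction of equations (2) and (3) can be sketched in a few lines. The parity block P below is an arbitrary illustrative choice of ours, not taken from the paper; all arithmetic is over GF(2).

```python
# Sketch: systematic encoding with G = [I_k | P] and parity checking
# with H = [P^T | I_{n-k}], over GF(2). P is an assumed example block.
k, n = 4, 8
P = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
]

def encode(u):
    # parity bit j is the GF(2) inner product of u with column j of P
    parity = [sum(u[i] * P[i][j] for i in range(k)) % 2 for j in range(n - k)]
    return u + parity          # systematic codeword = [message | parity]

def syndrome(c):
    # row j of H = [P^T | I_{n-k}] checks parity bit j against column j of P
    return [(sum(c[i] * P[i][j] for i in range(k)) + c[k + j]) % 2
            for j in range(n - k)]

c = encode([1, 0, 1, 1])
print(c, syndrome(c))  # [1, 0, 1, 1, 0, 1, 0, 0] and an all-zero syndrome
```

A corrupted codeword produces a nonzero syndrome, which is what the decoder exploits.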
4. LDPC Decoding
Low-density parity-check codes are usually decoded iteratively using the belief propagation algorithm, also known as the message-passing algorithm. The message-passing algorithm operates on a factor graph, where soft messages are exchanged between variable nodes and check nodes. The algorithm can be formulated as follows: in the first step, variable nodes are initialized with the prior log-likelihood ratios (LLRs) defined in (5) using the channel outputs y_i, where sigma^2 represents the channel noise variance. This formulation assumes the information bits take on the values 0 and 1 with equal probability.
L_pr(x_i) = log [ Pr(x_i = 0 | y_i) / Pr(x_i = 1 | y_i) ] = (2 / sigma^2) y_i    (5)
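As a minimal sketch of this initialization, assuming BPSK with bit 0 mapped to +1 and bit 1 to -1 (the sign convention implied by (5)), and with an illustrative noise variance and received samples of our choosing:

```python
# Prior LLRs L_pr(x_i) = 2*y_i / sigma^2, as in equation (5).
# sigma2 and the received samples y are assumed example values.
sigma2 = 0.5
y = [0.9, -1.2, 0.3, -0.1]
llr = [2.0 * yi / sigma2 for yi in y]
# positive LLR: bit 0 more likely; negative LLR: bit 1 more likely
hard = [0 if l >= 0 else 1 for l in llr]
print(llr, hard)  # [3.6, -4.8, 1.2, -0.4] [0, 1, 0, 1]
```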
The variable nodes send messages to the check nodes along the edges defined by the factor graph. The LLRs are recomputed based on the parity constraints at each check node and returned to the neighboring variable nodes. Each variable node then updates its decision based on the channel output and the extrinsic information received from all the neighboring check nodes. The marginalized posterior information is used as the variable-to-check message in the next iteration.

4.1. Sum-Product Algorithm
The sum-product algorithm is a common form of the message-passing algorithm. Variable-to-check and check-to-variable messages are computed using equations (6) and (7),

L(q_ij) = Sum_{j' in Col[i]\j} L(r_ij') + L_pr(x_i)    (6)

L(r_ij) = Phi( Sum_{i' in Row[j]\i} Phi(|L(q_i'j)|) ) x Prod_{i' in Row[j]\i} sgn(L(q_i'j))    (7)

where Phi(x) = -log(tanh(x/2)), x > 0. The messages q_ij and r_ij refer to the variable-to-check and check-to-variable messages, respectively, that are passed between the ith variable node and the jth check node. In representing the connectivity of the factor graph, Col[i] refers to the set of all the check nodes adjacent to the ith variable node and Row[j] refers to the set of all the variable nodes adjacent to the jth check node. The posterior LLR is computed in each iteration using the update (8).
L_ps(x_i) = Sum_{j' in Col[i]} L(r_ij') + L_pr(x_i)    (8)
A hard decision is made based on the posterior LLR in every iteration. The iterative decoding algorithm runs until the hard decisions satisfy all the parity check equations or an upper limit on the number of iterations is reached, whichever occurs first.
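The full sum-product loop, including the stopping rule just described, can be sketched compactly. The parity check matrix and LLRs below are illustrative examples of ours, not taken from the paper; the loop follows equations (6), (7), and (8) as stated above.

```python
import math

# Compact sum-product decoder sketch (flooding schedule).
def phi(x):
    x = max(x, 1e-12)                        # guard against the pole at x = 0
    return -math.log(math.tanh(x / 2.0))

def decode(H, llr, max_iter=20):
    m, n = len(H), len(H[0])
    r = [[0.0] * n for _ in range(m)]        # check-to-variable messages
    x = [0 if l >= 0 else 1 for l in llr]
    for _ in range(max_iter):
        # equation (6): q_ij = L_pr(x_i) + sum of the other r messages
        q = [[llr[i] + sum(r[jj][i] for jj in range(m)
                           if H[jj][i] and jj != j) for i in range(n)]
             for j in range(m)]
        # equation (7): check-to-variable update
        for j in range(m):
            cols = [i for i in range(n) if H[j][i]]
            for i in cols:
                others = [q[j][ii] for ii in cols if ii != i]
                sign = 1
                for v in others:
                    if v < 0:
                        sign = -sign
                r[j][i] = sign * phi(sum(phi(abs(v)) for v in others))
        # equation (8): posterior LLR, then a hard decision
        post = [llr[i] + sum(r[j][i] for j in range(m) if H[j][i])
                for i in range(n)]
        x = [0 if p >= 0 else 1 for p in post]
        if all(sum(H[j][i] * x[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            break                            # all parity checks satisfied
    return x

H = [[0, 1, 0, 1, 1, 0, 0, 1],
     [1, 1, 1, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 1, 1],
     [1, 0, 0, 1, 1, 0, 1, 0]]
# all-zero codeword sent; bit 2 received with a weak, wrong-sign LLR
llr = [2.0] * 8
llr[2] = -0.5
print(decode(H, llr))  # the decoder corrects bit 2: all zeros
```

Note that Phi is its own inverse, which is what makes the check-node update in (7) work.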
4.2. Min-Sum Approximation
Equation (7) can be simplified by observing that the magnitude of L(r_ij) is usually dominated by the minimum |L(q_i'j)| term, so this minimum term can be used as an approximation of the magnitude of L(r_ij). The magnitude computed with this min-sum approximation is usually overestimated, and correction terms are introduced to reduce the approximation error. The correction can take the form of an offset beta, as in the update (9):

L(r_ij) = max( min_{i' in Row[j]\i} |L(q_i'j)| - beta, 0 ) x Prod_{i' in Row[j]\i} sgn(L(q_i'j))    (9)
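A single offset min-sum check-node update as in (9) is short enough to show directly. The offset value beta = 0.15 is an assumption for illustration; the paper does not give a numeric value.

```python
# Offset min-sum check-node update, equation (9).
# q_others: variable-to-check messages from all other neighbors of the
# check node. beta is an assumed example offset.
def min_sum_check(q_others, beta=0.15):
    sign = 1
    for v in q_others:
        if v < 0:
            sign = -sign
    mag = max(min(abs(v) for v in q_others) - beta, 0.0)
    return sign * mag

print(min_sum_check([1.8, -0.6, 2.3]))  # approximately -(0.6 - 0.15) = -0.45
```

The max(..., 0) clamp prevents the offset from flipping the sign of a very small message, which would inject noise instead of extrinsic information.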
4.3. Reordered Schedule
The above equations can also be rearranged by taking into account the relationship between consecutive decoding iterations. A variable-to-check message of iteration n can be computed by subtracting the corresponding check-to-variable message from the posterior LLR of iteration n-1, as in (10), while the posterior LLR of iteration n can be computed by updating the posterior LLR of the previous iteration with the check-to-variable messages of iteration n, as in (11):

L_n(q_ij) = L_ps_{n-1}(x_i) - L_{n-1}(r_ij)    (10)

L_ps_n(x_i) = L_ps_{n-1}(x_i) + Sum_{j in Col[i]} [ L_n(r_ij) - L_{n-1}(r_ij) ]    (11)
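The per-variable bookkeeping of (10) and (11) can be sketched as follows. The function name and the small numeric values are ours, for illustration only; the point is that the posterior is refreshed by the difference between new and old check-to-variable messages rather than recomputed from scratch.

```python
# Sketch of the reordered schedule, equations (10) and (11), for one
# variable node. r_prev / r_new map neighboring check node index -> message.
def reordered_variable_update(post_prev, r_prev, r_new, cols):
    # (10): q_ij = L_ps_{n-1}(x_i) - L_{n-1}(r_ij)
    q = {j: post_prev - r_prev[j] for j in cols}
    # (11): L_ps_n(x_i) = L_ps_{n-1}(x_i) + sum_j (L_n(r_ij) - L_{n-1}(r_ij))
    post = post_prev + sum(r_new[j] - r_prev[j] for j in cols)
    return q, post

q, post = reordered_variable_update(2.0, {0: 0.5, 1: -0.3},
                                    {0: 0.7, 1: 0.1}, [0, 1])
print(q, post)  # post = 2.0 + (0.2 + 0.4) = 2.6, up to float rounding
```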
5. Turbo Codes
The first turbo code, based on convolutional encoding, was introduced in 1993 by Berrou et al. [5]. Since then, several schemes have been proposed, and the term "turbo codes" has been generalized to cover block codes as well as convolutional codes. Simply put, a turbo code is formed from the parallel concatenation of two codes separated by an interleaver. The generic design of a turbo code is depicted in Fig. 3. Although the general concept allows for a free choice of the encoders and the interleaver, most designs follow the ideas presented in [5]: the two encoders used are normally identical; the code is in systematic form, i.e. the input bits also occur in the output (see Fig. 3); and the interleaver reads the bits in a pseudo-random order.
Figure 3: The Generic Turbo Encoder
The choice of the interleaver is a crucial part in the turbo code design. The task of the interleaver is to scramble bits in a (pseudo-)random, although predetermined fashion. This serves two purposes. Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, and there is a smaller chance of producing an output with very
low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, a divide-and-conquer strategy can be employed for decoding. If the input to the second decoder is scrambled, its output will also be different from, and largely uncorrelated with, the output of the first encoder, which means the two decoders gain more from exchanging information. Decoding of turbo codes uses two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information after appropriate reordering of values. The information exchange can be iterated a number of times to enhance performance. At each round, the decoders re-evaluate their estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes.
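The key property of the interleaver described above, pseudo-random but predetermined, so that encoder and decoder agree on the same permutation, can be sketched as follows. The seed value and helper names are assumptions of ours for illustration.

```python
import random

# Sketch: a pseudo-random but fixed interleaver. Both encoder and decoder
# derive the same permutation from a shared seed, so the reordering is
# predetermined even though it looks random.
def make_interleaver(length, seed=42):
    perm = list(range(length))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

perm = make_interleaver(8)
u = [1, 0, 1, 1, 0, 0, 1, 0]
v = interleave(u, perm)
print(v, deinterleave(v, perm) == u)  # the permutation is invertible
```

In a turbo decoder the same permutation (and its inverse) reorders the soft values exchanged between the two component decoders.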
6. Experimental Results
Figure 5 shows the bit error rates (BERs) of the LDPC code under message-passing decoding with various receive diversity counts. Also shown are those of a Turbo code of comparable codeword length and code rate under iterative decoding. All the message-passing decoding schemes
are assigned 20 iterations. It can be seen that the LDPC code outperforms the Turbo code for any number of receive diversity branches.
Figure 5: Performance comparison between LDPC codes and Turbo codes
References
[1] X. Y. Hu, E. Eleftheriou, and D. M. Arnold, 2005. Regular and irregular progressive edge-growth Tanner graphs. IEEE Trans. Inf. Theory, 51(1), pp. 386-398.
[2] D. J. C. MacKay and R. M. Neal, 1996. Near Shannon limit performance of low-density parity-check codes. Electron. Lett., 32, pp. 1645-1646.
[3] R. Gallager, 1962. Low-density parity-check codes. IRE Trans. Information Theory, pp. 21-28.
[4] D. MacKay and R. Neal, 1995. Good codes based on very sparse matrices. In Cryptography and Coding, 5th IMA Conf., C. Boyd, Ed., Lecture Notes in Computer Science, pp. 100-111, Berlin, Germany: Springer.
[5] C. Berrou, A. Glavieux, and P. Thitimajshima, 1993. Near Shannon limit error-correcting coding and decoding: Turbo codes. IEEE Proceedings of the Int. Conf. on Communications, Geneva, Switzerland, pp. 1064-1070.
[6] C. Berrou and A. Glavieux, 1996. Near optimum error correcting coding and decoding: Turbo-codes. IEEE Trans. on Communications, vol. 44, no. 10, pp. 1261-1271.
[7] Aaron E. Cohen and Keshab K. Parhi, 2009. A low-complexity hybrid LDPC code encoder for IEEE 802.3an (10GBase-T) Ethernet. IEEE Transactions on Signal Processing, vol. 57, no. 10, pp. 845-856.
[8] Zhengya Zhang, Venkat Anantharam, Martin J. Wainwright, and Borivoje Nikolic, 2010. An efficient 10GBASE-T Ethernet LDPC decoder design with low error floors. IEEE Journal of Solid-State Circuits, vol. 45, no. 4, pp. 843-855.
[9] Tadashi Wadayama, Keisuke Nakamura, Masayuki Yagita, Yuuki Funahashi, Shogo Usami, and Ichi Takumi, 2010. Gradient descent bit flipping algorithms for decoding LDPC codes. IEEE Transactions on Communications, vol. 58, no. 6, pp. 1610-1614.
[10] Yeong-Luh Ueng, Chung-Jay Yang, Kuan-Chieh Wang, and Chun-Jung Chen, 2010. A multimode shuffled iterative decoder architecture for high-rate RS-LDPC codes. IEEE Transactions on Circuits and Systems I, vol. 57, no. 10, pp. 2790-2803.
[11] Enrico Paolini, Marc P. C. Fossorier, and Marco Chiani, 2010. Generalized and doubly generalized LDPC codes with random component codes for the binary erasure channel. IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1651-1672.
[12] Shu-Tao Xia and Fang-Wei Fu, 2008. Minimum pseudoweight and minimum pseudocodewords of LDPC codes. IEEE Transactions on Information Theory, vol. 54, no. 1, pp. 480-485.
[13] Thomas J. Richardson and Rüdiger L. Urbanke, 2001. Efficient encoding of low-density parity-check codes. IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 638-656.
[14] Thomas J. Richardson, M. Amin Shokrollahi, and Rüdiger L. Urbanke, 2001. Design of capacity-approaching irregular low-density parity-check codes. IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 619-637.
[15] Hossein Pishro-Nik and Faramarz Fekri, 2007. Results on punctured low-density parity-check codes and improved iterative decoding techniques. IEEE Transactions on Information Theory, vol. 53, no. 2, pp. 599-614.