Practical Error-Correction Design For Engineers, Second Edition, 1991
If you wish to receive updates to this book, please copy this form, complete it and
send it to the address below.
Name _____________________________
Title _____________________________
Organization ______________________
Address ___________________________
Published By:
Cirrus Logic - Colorado
Broomfield, Colorado 80020
(303) 466-5228
Cirrus Logic believes the information contained in this book to be accurate. However,
neither Cirrus Logic nor its authors guarantee the accuracy or completeness of any
information published herein and neither Cirrus Logic nor its authors shall be respon-
sible for any errors, omissions or damages arising out of use of this information. Cirrus
Logic does not assume any responsibility or liability arising out of the application or
use of any software, circuit, method, technique, or algorithm described herein, nor does
the purchase of this book convey a license under any patent rights, copyrights, tradem-
ark rights, or any other of the intellectual property or trade secret rights of Cirrus
Logic or third parties.
This work is published with the understanding that Cirrus Logic and its authors are
supplying information but are not attempting to render engineering or other professional
services. If such services are required, the assistance of an appropriate professional
should be sought.
No part of this book may be reproduced in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information storage and re-
trieval system, without the prior written permission of DATA SYSTEMS TECHNOLOGY,
CORP. or CIRRUS LOGIC, INC.
Published by:
CIRRUS LOGIC - COLORADO
INTERLOCKEN BUSINESS PARK
100 Technology Drive, Suite 300
Broomfield, Colorado 80021
Phone (303) 466-5228 FAX (303) 466-5482
Second Printing 1990.
ISBN #0-927239-00-0
To my children, Rhonda, Karen, Sean, and Robert.        Neal Glover
To the memory of my parents, Robert and Constance.      Trent Dudley
PREFACE
The study of error-correcting codes is now more than forty years old. There are
several excellent texts on the subject, but they were written mainly by coding theorists
and are based on a rigorous mathematical approach. This book is written from a more
intuitive, practical viewpoint. It is intended for practicing engineers who must specify,
architect, and design error-correcting code hardware and software. It is an outgrowth
of a series of seminars presented during 1981 and 1982 on practical error-correction
design.
An engineer who must design an error-control system to meet data recoverability,
data accuracy, and performance goals must become familiar with the characteristics and
capabilities of different types of EDAC codes as well as their implementation alterna-
tives, including tradeoffs between hardware and software complexity, speed/space/cost,
etc. Our goal is to provide this information in a concise manner from a practical
engineering viewpoint. Numerous examples are used throughout to develop familiarity
and confidence in the methods presented. Most proofs and complex derivations have
been omitted; these may be found in theoretical texts on error correction coding.
We would like to thank our friends for their assistance and advice. The engineers
attending DST's seminars also deserve thanks for their suggestions.
Neal Glover
Trent Dudley
Broomfield, Colorado
August 1988
ABOUT CIRRUS LOGIC - COLORADO
Cirrus Logic - Colorado was originally founded in 1979 as Data Systems Technology
(DST) and was sold to Cirrus Logic, Inc., of Milpitas, California, on January 18, 1990.
Cirrus Logic - Colorado provides error detection and correction (EDAC) products and
services to the electronics industries. We specialize in the practical implementation of
EDAC, recording and data compression codes to enhance the reliability and efficiency of
data storage and transmission in computer and communications systems, and all aspects
of error tolerance, including framing, synchronization, data formats, and error manage-
ment.
Cirrus Logic - Colorado also develops innovative VLSI products that perform
complex peripheral control functions in high-performance personal computers, worksta-
tions and other office automation products. The company develops advanced standard
and semi-standard VLSI controllers for data communications, graphics and mass storage.
Cirrus Logic - Colorado licenses EDAC software and discrete and integrated circuit
designs for various EDAC codes, offers books and technical reports on EDAC and recor-
ding codes, and conducts seminars on error tolerance and data integrity as well as
EDAC, recording, and data compression codes.
PRODUCTS
CONSULTING SERVICES
Consulting services are offered in the following areas:
• Semiconductor memories and large cache memories
• Magnetic disk devices
• Magnetic tape devices
• Optical storage devices using read-only, write-once, and erasable
media
• Smart cards
• Communications
Consulting services offered include:
• Code selection
• Design of discrete hardware and integrated circuits
• Design of software
• Advice in the selection of synchronization, header, and defect man-
agement strategies
• Complex MTBF computations
• Analysis of existing codes and/or designs
• Establishing EDAC requirements from defect data
• Assistance in system integration of integrated circuits implementing
Cirrus Logic's EDAC technology.
PROLOGUE
THE COMING REVOLUTION
IN ERROR CORRECTION TECHNOLOGY
INTRODUCTION
The changes that are occurring today in error detection and correction, error tol-
erance, and failure tolerance are indeed revolutionary. Two major factors are driving
the revolution: need and capability. The need arises from more stringent error and
failure tolerance requirements due to changes in capacity, throughput, and storage
technology. The capability is developing due to continuing increases in VLSI density
and decreases in VLSI cost, along with more sophisticated error-correction techniques.
This prologue discusses the changes in requirements, technology, and techniques that are
presently occurring and those that are expected to occur over the next few years.
Some features of today's error-tolerant systems would have been hard to imagine a
few years ago.
Some optical storage systems now available are so error tolerant that user data is
correctly recovered even if there exists a defect situation so gross that the sector
mark, header and sync mark areas of a sector are totally obliterated along with dozens
of data bytes.
Magnetic disk drive array systems under development today are so tolerant to
errors and failures that simultaneous head crashes on two magnetic disk drives would
neither take the system down nor cause any loss of data. Some of these systems will
also be able to detect and correct many errors that today go undetected, such as tran-
sient errors in unprotected data paths and buffers and even software errors that result
in the transfer of the wrong sector. Some magnetic disk drive array systems specify
mean time between data loss (MTBDL) in the hundreds of thousands of hours.
The contrast with prior-generation computer systems is stark. Before entering de-
velopment I spent some time on a team maintaining a large computer at a plant in Cali-
fornia that developed nuclear reactors. I will never forget an occasion when the head
of computer operations pounded his fist on a desk and firmly stated that if we saw a
mushroom cloud over Vallecito it would be the fault of our computer. The mainframe's
core memory was prone to intermittent errors. The only checking in the entire com-
puter was parity on tape. Punch card decks were read to tape twice and compared.
By the mid-seventies, the computer industry had come a long way in improving
data integrity. I had become an advisory engineer in storage-subsystem development,
and in 1975 I was again to encounter a very unhappy operations manager when a micro-
code bug, which I must claim responsibility for, intermittently caused undetected erro-
neous data to be transferred in a computer system at an automobile manufacturing plant.
Needless to say, the consequences were disastrous. This experience taught me the im-
portance of exhaustive firmware verification testing and has influenced my desire to
incorporate data-integrity features in Cirrus Logic's designs that are intended to detect
and in some cases even correct for firmware errors as well as hardware errors.
Changes in hardware and software data-integrity protection methods are occurring
today at a truly revolutionary rate and soon the weaknesses we knew of in the past and
those that we live with today will be history forever.
THE CHANGING TECHNOLOGY
VLSI density continues to increase, allowing us to incorporate logic on a single in-
tegrated circuit today that a few years ago would have required several separate boards.
This allows us to implement very complex data-integrity functions within a single IC.
Cirrus Logic's low-cost, high-performance, Reed-Solomon code IC's for optical storage
devices are a good example. As VLSI densities increase, such functions will occupy a
small fraction of the silicon area of a multi-function IC. The ability to place very
complex functions on a single IC and further to integrate multiple complex functions on
a single IC opens the door for greater data integrity. Our ability to achieve greater
data integrity at reasonable cost is clearly one of the forces behind the revolution in
error and failure tolerant technology.
Even with the development of cheaper, higher density VLSI technology, it is often
more economical to split the implementation of high-performance EDAC systems between
hardware and software. Using advanced software algorithms and buffer management
techniques, nearly "on-the-fly" correction performance can be achieved at lower cost
than using an all-hardware approach.
IBM has announced a new version of its 3380 magnetic disk drive that employs
multiple-burst error detection and correction, using Reed-Solomon codes, to achieve
track densities significantly higher than realizable with previous technology. Single-
burst error correction can handle modest defect densities, but defect densities increase
exponentially with track density. On-the-fly, multiple-burst error correction and error-
tolerant synchronization are required to handle these higher defect densities. On earlier
models of the 3380, IBM corrected a single burst in a record of up to several thousand
bytes. Using IBM's 3380K error-correction code, under the right circumstances it would
be possible to correct hundreds of bursts in a record. A unique feature of the 3380K
code is that it can be implemented to perform on-the-fly correction with a data delay
that is roughly 100 bytes.
The impact of this IBM announcement, coupled with the general push toward high-
er track densities, the success of high-performance error detection and correction on
optical storage devices, and the availability of low-cost, high-performance EDAC IC's,
will stimulate the use of high-performance EDAC codes on a wide range of magnetic
disk products. Cirrus Logic - Colorado itself is currently implementing double-burst
correcting, Reed-Solomon codes on a wide range of magnetic disk products, ranging
from low-end designs which process one bit per clock edge to high-end designs which
process sixteen bits per clock edge.
• Failed hardware, such as a failed error latch that never flags an error.
It is important to understand that no error-correction code is perfect; all are
subject to miscorrection when an error event occurs that exceeds the code's guarantees.
However, it is also important to understand that the miscorrection probability for a
code can be reduced to any arbitrarily low level simply by adding enough redundancy.
As VLSI costs go down, more redundancy is being added to error-detection and error-
correction codes to achieve greater detectability of error events exceeding code guaran-
tees. New single-burst error-correction code designs use polynomials of degree 48, 56,
and 64 to accomplish the same correctability achieved with degree 32 codes several
years ago, but with significantly improved detectability. If correctability is kept the
same, detectability is improved more than nine orders of magnitude in moving from a
degree 32 code to a degree 64 code.
Error-detection codes are not perfect either; they are subject to misdetection.
Like miscorrection, misdetection can be reduced to any arbitrarily low level by adding
enough redundancy. Unfortunately, the industry has not, in general, increased the level
of detectability of implemented error-detection codes significantly in the last twenty-
five years. Two degree 16 polynomials, CRC-16 and CRC-CCITT, have been in wide use
for many years. For many storage device applications, there are degree 16 polynomials
with superior detection capability, and moreover, the requirements of many applications
would be better met by error-detection polynomials of degree 32 or greater.
In the last few years, the industry has been doing a better job of avoiding pattern
sensitivities of error-detection and error-correction codes. Cirrus Logic - Colorado
avoids using the Fire code because of its pattern sensitivity, and we use 32-bit auxiliary
error detection codes in conjunction with our Reed-Solomon codes in order to overcome
their interleave pattern sensitivity.
Auxiliary error-detection codes that are used in conjunction with ECC codes to en-
hance detectability have special requirements. The error-detection code check cannot
be made until after correction is complete. It is undesirable to run corrected data
through error-detection hardware after performing correction due to the delay involved.
It is also not feasible to perform the error-detection code check as data is transferred
to the host after correction, since some standard interfaces have no provision for a
device to flag an uncorrectable sector after the transfer of data has been completed.
To meet these requirements, some error-detection codes developed over the last few
years are specially constructed so that their residues can be adjusted as correction
occurs. When correction is complete, the residue should have been adjusted to zero.
Cirrus Logic - Colorado has been using such error-detection codes since 1982, and such
a code is included within Cirrus Logic - Colorado Reed-Solomon code IC's for optical
storage. IBM's 3380K also uses such an auxiliary error-detection code.
As the requirements for data integrity have increased, Cirrus Logic - Colorado has
tightened its recommendations accordingly. One of the areas needing more attention in
the industry is synchronization framing error protection. To accomplish this protection,
Cirrus Logic - Colorado now recommends either the initialization of EDAC shift regist-
ers to a specially selected pattern or the inversion of a specially selected set of EDAC
redundancy bits.
The magnetic disk drive array segment of the industry is making significant gains
in detectability. Some manufacturers are adding two redundant drives to strings of ten
data drives in order to handle the simultaneous failure of any two drives without losing
data. The mean time between data loss (MTBDL) for such a system computed from the
MTBF for individual drives may be in the millions of hours. In order for these systems
to meet such a high MTBDL, all sources of errors and transient failures that could
dominate and limit MTBDL must be identified, and means for detection and correction of
such errors and failures must be developed. For these systems, Cirrus Logic - Colorado
recommends that a four-byte error-detection code be appended and checked at the host
adapter. We also recommend that the logical block number and logical drive number be
included in this check. This allows the detection with high probability of a wide vari-
ety of errors and transient failures, including the transfer of a wrong sector or transfer
of a sector from the wrong drive.
CHANGES IN SELF-CHECKING
As data integrity requirements increase, it becomes very important to detect tran-
sient hardware failures. New designs for component IC's for controller implementations
are carrying parity through the data paths of the part when possible, rather than just
checking and regenerating parity. Cirrus Logic - Colorado sees this as a step forward,
but we also look beyond, to the day when all data paths are protected by CRC as well.
It is especially important to detect transient failures in EDAC hardware. Some
companies have implemented parity-predict circuitry to continuously monitor their EDAC
shift registers for proper operation.
When possible, Cirrus Logic - Colorado has incorporated circuitry to divide codew-
ords on write by a factor of the code generator polynomial and check for zero remaind-
er. This function is performed as close to the recording head as possible.
Cirrus Logic - Colorado's 8520 IC uses dynamic cells for the major EDAC shift
registers. To detect transient failures in the shift registers themselves, we incorporated
a feature whereby the parity of all bits going into a shift register is compared with the
parity of all bits coming out of the shift register.
CONTENTS
Preface . . . . . . . . . . . . v
Prologue. . . . . . . . . . . viii
CHAPTER 1 - INTRODUCTION. 1
CHAPTER 4 - APPLICATION CONSIDERATIONS . . . 213
4.1 Raw Error Rates and Nature of Error . . . 213
4.2 Decoded Error Rates . . . 215
4.3 Data Recoverability . . . 223
4.4 Data Accuracy . . . 230
4.5 Performance Requirements . . . 240
4.6 Pattern Sensitivity . . . 241
4.7 K-Bit-Serial Techniques . . . 243
4.8 Synchronization . . . 250
4.9 Interleaved, Product and Redundant Sector Codes . . . 270
SUPPLEMENTARY PROBLEMS . . . 370
APPENDIX A. PRIME FACTORS OF 2^k - 1 . . . 374
APPENDIX B. METHODS OF FINDING LOGARITHMS AND
EXPONENTIALS OVER A FINITE FIELD . . . 375
ABBREVIATIONS . . . 403
GLOSSARY . . . 404
BIBLIOGRAPHY . . . 419
INDEX . . . 463
CHAPTER 1 - INTRODUCTION
A byte, word, vector, or data stream is said to have odd parity if the number of
'1's it contains is odd. Otherwise, the byte, word, vector, or data stream is said to
have even parity. Parity may be determined with combinational or sequential logic.
The parity of two bits may be determined with an EXCLUSIVE-OR (XOR) gate.
The circled '+' symbol is understood to represent XOR throughout this book.
[Figure: XOR-gate circuits computing the odd parity of two, three, and four data bits (d0-d3), shown in both combinational-tree and cascaded forms.]
Parity of a bit stream may be determined by a single shift register stage and one
XOR gate. The shift register is assumed to be initialized to zero. The highest num-
bered bit is always transmitted and received first.
[Figure: bits d3 d2 d1 d0 shifted through a single register stage with one XOR gate to produce parity P.]
P = d3 + d2 + d1 + d0   or   P = d3 ⊕ d2 ⊕ d1 ⊕ d0
The circuit below determines parity across groups of data stream bits.
d6 d5 d4 d3 d2 d1 d0
P0 = d0 + d3 + d6
P1 = d1 + d4
P2 = d2 + d5
The circuit below will also determine parity across groups of data stream bits.
[Figure: three-stage feedback shift register processing the stream d6 d5 d4 d3 d2 d1 d0.]
P0 = d4 + d3 + d2 + d0
P1 = d5 + d2 + d1 + d0
P2 = d6 + d3 + d2 + d1
The contribution of each data bit to the final shift register state is shown below.
Each data bit affects a unique combination of parity checks.
contribution
Data Bit P2 P1 PO
d6 100
d5 010
d4 001
d3 101
d2 111
d1 110
dO 011
The contributions to the final shift register state made by several strings of data
bits are shown below.
contribution
string P2 P1 PO
d6,d4 => 101
d3,d2,dO => 001
d4,dO => 010
- 2 -
The contribution to the final shift register state by each string is the XOR sum of
contributions from individual bits of the string, because the circuit is linear. For a
linear function f:
f(x+y) = f(x)+f(y)
The parity function P is linear, and therefore
P(x+y) = P(x)+P(y)
Circuits of this type are the basis of many error-correction systems.
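The behavior of circuits like these is easy to model in software. The short Python sketch below (ours, not from the book) encodes the contribution table for the three-stage circuit above and uses it to reproduce the string contributions and the linearity property; the names CONTRIB and final_state are illustrative only.

# Sketch: model the three-stage parity circuit from its contribution table.
# Contribution of each data bit to (P2, P1, P0), copied from the table above.
CONTRIB = {
    "d6": 0b100, "d5": 0b010, "d4": 0b001, "d3": 0b101,
    "d2": 0b111, "d1": 0b110, "d0": 0b011,
}

def final_state(bits_set):
    """XOR together the contributions of the data bits that are '1'."""
    state = 0
    for name in bits_set:
        state ^= CONTRIB[name]
    return state

if __name__ == "__main__":
    print(format(final_state({"d6", "d4"}), "03b"))        # 101
    print(format(final_state({"d3", "d2", "d0"}), "03b"))  # 001
    print(format(final_state({"d4", "d0"}), "03b"))        # 010
    # Linearity: the state for the XOR (symmetric difference) of two strings
    # equals the XOR of their individual states.
    x, y = {"d6", "d4"}, {"d4", "d0"}
    assert final_state(x ^ y) == final_state(x) ^ final_state(y)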
1.1.2 A FIRST LOOK AT ERROR CORRECTION
This discussion presents an introduction to single-bit error correction using a code
that is intuitive and simple. Consider the two-dimensional parity-check code defined
below.
Check-Bit Generation                 Syndrome Generation
P0 = d0 + d4 + d8 + d12              S0 = d0 + d4 + d8 + d12 + P0
P1 = d1 + d5 + d9 + d13              S1 = d1 + d5 + d9 + d13 + P1
P2 = d2 + d6 + d10 + d14             S2 = d2 + d6 + d10 + d14 + P2
P3 = d3 + d7 + d11 + d15             S3 = d3 + d7 + d11 + d15 + P3
P4 = d12 + d13 + d14 + d15           S4 = d12 + d13 + d14 + d15 + P4
P5 = d8 + d9 + d10 + d11             S5 = d8 + d9 + d10 + d11 + P5
P6 = d4 + d5 + d6 + d7               S6 = d4 + d5 + d6 + d7 + P6
P7 = d0 + d1 + d2 + d3               S7 = d0 + d1 + d2 + d3 + P7
[Figure: the sixteen data bits arranged in a 4x4 array with row checks P7-P4 and column checks P0-P3:

    d0   d1   d2   d3   | P7
    d4   d5   d6   d7   | P6
    d8   d9   d10  d11  | P5
    d12  d13  d14  d15  | P4
    ---------------------
    P0   P1   P2   P3

Also shown: the row circuit generating S7 = d0 + d1 + d2 + d3 + P7.]
On write, each row check bit is selected to make the parity of its row even.
Each column check bit is selected to make the parity of its column even. The data bits
and the parity bits together are called a codeword.
On read, row syndrome bits are generated by checking parity across each row,
including the row check bit. Column syndrome bits are generated in a similar fashion.
Syndrome means symptom of error. For this code, syndrome bits can be viewed as the
XOR differences between read checks and write checks. If there is no error, all syndr-
ome bits are zero.
When a single-bit error occurs, one row and one column will have inverted syndro-
me bits (odd parity). The bit in error is at the intersection of this row and column.
The circuit above shows the logic necessary for generating the write-check bit and
the syndrome bit for one row. For parallel decoding, this logic is required for each
row and column. Also, 16 AND gates are required for detecting the intersections of
inverted row and column syndrome bits. In addition, 16 XOR gates are required for
inverting data bits. The correction circuit for one particular data bit is shown below.
[Figure: correction circuit for d10. The raw data bit d10 is XORed with the AND of S2, S5, and ALLOW CORRECTION to produce the corrected d10.]
Two data bits in error will cause either two rows, two columns, or both to have
inverted syndrome bits (odd parity). This condition can be trapped to give the code the
capability to detect double-bit errors in data.
All single check-bit errors are detected, but not all double check-bit errors. One
row and one column check bit in error will result in miscorrection (false correction). If
an overall check bit across data is added, the code is capable of detecting all double-bit
errors in data and check bits. This includes the case where one data bit and one parity
bit are in error. The overall check bit can be generated by forming parity across all
row or all column check bits. With the overall check bit added, all double-bit errors
are detectable but uncorrectable.
Miscorrection occurs when three bits are in error on three corners of a rectangle.
For example:
[Figure: 4x4 data array with overall check, row checks, and column checks; three errors lie on three corners of a rectangle.]
The three errors which are illustrated above cause the decoder to respond as if
there were a single-bit error at the fourth corner of the rectangle. Miscorrection does
not result for all combinations of three bits in error, only for those where there are
errors on three corners of a rectangle.
Miscorrection probability for three-bit errors is the ratio of three-bit error com-
binations that result in miscorrection to all possible three bit-error combinations.
Misdetection (error condition not detected at all) occurs when four bits are in
error on the corners of a rectangle. For example:
[Figure: 4x4 data array with overall check and column checks; four errors lie on the corners of a rectangle.]
This error condition leaves all syndrome bits equal to zero.
Misdetection does not result for all combinations of four bits in error, only those
where there are errors on four corners of a rectangle. Misdetection probability for
four-bit errors is the ratio of four-bit error combinations that result in misdetection to
all possible four-bit error combinations.
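The complete encode/decode cycle for this two-dimensional code is small enough to model directly. The Python sketch below (ours, not the book's) builds the row and column checks from the equations given earlier, computes syndromes on read, and corrects a single-bit error at the intersection of the flagged row and column; the helper names are illustrative.

# Sketch: two-dimensional parity-check code over 16 data bits d0..d15.
ROWS = [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11), (12, 13, 14, 15)]   # checked by P7..P4
COLS = [(0, 4, 8, 12), (1, 5, 9, 13), (2, 6, 10, 14), (3, 7, 11, 15)]   # checked by P0..P3

def parity(bits, idxs):
    p = 0
    for i in idxs:
        p ^= bits[i]
    return p

def encode(d):
    """Return (row checks, column checks) for the 16 data bits."""
    return [parity(d, r) for r in ROWS], [parity(d, c) for c in COLS]

def decode(d, row_chk, col_chk):
    """Correct a single-bit error in d in place; return False if not correctable here."""
    row_syn = [parity(d, r) ^ row_chk[i] for i, r in enumerate(ROWS)]
    col_syn = [parity(d, c) ^ col_chk[i] for i, c in enumerate(COLS)]
    if not any(row_syn) and not any(col_syn):
        return True                              # no error
    if row_syn.count(1) == 1 and col_syn.count(1) == 1:
        d[4 * row_syn.index(1) + col_syn.index(1)] ^= 1   # intersection of flagged row and column
        return True
    return False

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1]
    rchk, cchk = encode(data)
    received = data[:]
    received[10] ^= 1                            # single-bit error in d10
    assert decode(received, rchk, cchk) and received == data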
Check bits
Syndromes
Codeword
Correctable error
Detectable error
Miscorrection
Misdetection
Miscorrection probability
Misdetection probability
PROBLEMS
1. [Figure: two-stage parity circuit processing the stream d6 d5 d4 d3 d2 d1 d0.]
   Write the parity check equations P0 and P1 for the circuit shown.
2. [Figure: three-stage parity circuit processing the stream d6 d5 d4 d3 d2 d1 d0.]
   Write the parity check equations P0, P1, and P2 for the circuit shown.
3. Generate a chart showing the contribution of each data bit to the final shift
   register state for the circuits shown above.
   If the data stream is zeros except for d3 and d1, what is the final shift register
   state?
1.2 MATHEMATICAL FOUNDATIONS
Theorem 1.2.1. Every integer a>1 can be expressed as a product of primes (with
at least one factor).
Examples: 3 = 3
          6 = 2*3
          15 = 3*5
Definition 1.2.5. Integers a and b are relatively prime if their greatest common
divisor is 1.
Examples: 3, 7
3, 4
15, 77
Theorem 1.2.2. Let integers a, b, and c be relatively prime in pairs, then a*b*c
divides d if, and only if, each of a, b, and c divides d.
Examples: 3|15, 5|15, but 7∤15; therefore (3*5*7)∤15
          3|210, 5|210, 7|210; therefore (3*5*7)|210
Theorem 1.2.3. Let an integer a be prime, then a divides b*c*d if, and only if, a
divides b or c or d.
Definition 1.2.7. Let x and y be any real numbers. x modulo y, written as x MOD
y, is defined as follows:
    x MOD y = x - y*INT(x/y)
Examples: 5 MOD 3 = 2
          9 MOD 3 = 0
          -5 MOD 7 = 2
1.2.2 SOME DEFINITIONS, THEOREMS AND ALGORITHMS FOR POLYNOMIALS
Definition 1.2.9. The greatest common divisor of two polynomials is the monic
polynomial of greatest degree which divides both.
Definition 1.2.10. The least common multiple of a(x) and b(x) is some c(x) divisible
by each of a(x) and b(x), which itself divides any other polynomial that is divisible by
each of a(x) and b(x).
Definition 1.2.11. If the greatest common divisor of two polynomials is 1, they are
said to be relatively prime.
Theorem 1.2.4. Let a(x), b(x), and c(x) be relatively prime in pairs, then
a(x)·b(x)·c(x) divides d(x) if, and only if, a(x) and b(x) and c(x) divide d(x).
Theorem 1.2.5. Let a(x) be irreducible, then a(x) divides b(x)·c(x)·d(x) if, and only
if, a(x) divides b(x) or c(x) or d(x).
Definition 1.2.13. A function f is said to be linear if the properties stated below
hold: f(x+y) = f(x) + f(y) and f(c·x) = c·f(x).
1.2.3 THE CHINESE REMAINDER METHOD
There are times when integer arithmetic in a modular notation is preferred to a
fixed radix notation. The integers are represented by residues modulo a set of relative-
ly prime moduli.
[Figure: counting circles for MODULUS = 3 and MODULUS = 5.]

Integer (k)    Residues (r0, r1)
     0              0 0
     1              1 1
     2              2 2
     3              0 3
     4              1 4
     5              2 0
     6              0 1
     7              1 2
     8              2 3
     9              0 4
    10              1 0
    11              2 1
    12              0 2
    13              1 3
    14              2 4
    15              0 0
    16              1 1
Notice that the integer k has a unique representation in residues from k =0 through
k=14. The integer k=15 has the same representation as k=O. In this case, the total
number of integers that have unique representation is 15. In general, the total number
of integers n having unique representation is given by the equation:
    n = LCM(e0, e1, ...)
where the ei are the moduli.
There are also times when an integer d must be determined if its residues modulo
a set of moduli are given. This can be accomplished with the Chinese Remainder Me-
thod. This method is based on the Chinese Remainder Theorem. See any number theory
text.
METHOD
ei = Moduli (The ei must be relatively prime in pairs)
ri = Residues
EXAMPLE
n = LCM(3,5) = 15
m0 = n/e0 = 15/3 = 5
m1 = n/e1 = 15/5 = 3          (This calculation is performed at development time.)
A0*5 MOD 3 = 1, therefore A0 = 2
A1*3 MOD 5 = 1, therefore A1 = 2

d/e0 = n0 + r0/e0    and simultaneously    d/e1 = n1 + r1/e1
Rearranging gives

    d = r0 + e0 + e0 + ··· + e0   =   r1 + e1 + e1 + ··· + e1
            (e0 added n0 times)           (e1 added n1 times)
A procedure for finding d based on the relationship above is detailed in the fol-
lowing flowchart.
[Flowchart: iterative procedure for finding d; e0 is added repeatedly (d0 = d0 + e0) until the residue conditions are satisfied.]
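The same recovery can be done with the standard Chinese Remainder combination d = (r0·A0·m0 + r1·A1·m1 + ···) MOD n, which uses exactly the constants mi and Ai computed at development time in the example above. The Python sketch below is ours (it is not a transcription of the flowchart) and its function names are illustrative.

# Sketch: Chinese Remainder Method using precomputed constants.
def crt_constants(moduli):
    """For pairwise relatively prime moduli: n, m_i = n/e_i, and A_i with A_i*m_i MOD e_i = 1."""
    n = 1
    for e in moduli:
        n *= e                       # product equals the LCM for pairwise relatively prime moduli
    m = [n // e for e in moduli]
    A = [next(a for a in range(1, e) if (a * mi) % e == 1) for e, mi in zip(moduli, m)]
    return n, m, A

def crt_combine(residues, moduli):
    """Recover the integer d from its residues."""
    n, m, A = crt_constants(moduli)
    return sum(r * a * mi for r, a, mi in zip(residues, A, m)) % n

if __name__ == "__main__":
    print(crt_constants([3, 5]))          # (15, [5, 3], [2, 2]) as in the example
    print(crt_combine([2, 4], [3, 5]))    # 14, matching the residue table
    print(crt_combine([0, 1], [3, 5]))    # 6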
1.2.4 MULTIPLICATION BY SHIFTING, ADDING, AND SUBTRACTING
Many 8-bit processors do not have a multiply instruction. This discussion de-
scribes techniques to minimize the complexity of multiplying a variable by a constant,
when these processors are used. These techniques provide another alternative for
accomplishing the multiplications required in performing the Chinese Remainder Method.
On an 8-bit processor any shift that is a multiple of 8 bits can be accomplished
with register moves. Therefore, multiplying by a power of 2 that is a multiple of 8 can
be accomplished by register moves. Any string of ones in a binary value can be repre-
sented by the power of 2 that is just greater than the highest power of 2 in the string
minus the lowest power of 2 in the string. These results can be used to minimize the
complexity of multiplying a variable by a constant using register moves, shifts, adds and
subtracts.
Examples: In all examples, x is less than 256. The results are shown in a
form where register moves and shifts are identifiable.
y = 255*x
  = (2^8 - 1)*x
  = 2^8*x - x

y = 257*x
  = (2^8 + 1)*x
  = 2^8*x + x

y = 992*x
  = (2^9 + 2^8 + 2^7 + 2^6 + 2^5)*x
  = (2^10 - 2^5)*x
  = 2^10*x - 2^5*x

y = 32131*x
  = (2^14 + 2^13 + 2^12 + 2^11 + 2^10 + 2^8 + 2^7 + 2^1 + 2^0)*x
  = (2^15 - 2^9 - 2^7 + 2^1 + 2^0)*x
  = 2^15*x - 2^9*x - 2^7*x + 2^1*x + 2^0*x
  = 2^8*(2^7*x) - (2^7*x) - 2^8*(2^1*x) + (2^1*x) + x
In the last example, only two unique shift operations are required even though
the original constant contains nine powers of 2. This particular example is from the
Chinese Remainder Method when moduli 255 and 127 are used.
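A sketch of the same idea in software is shown below (ours, not from the book). Each run of consecutive one bits in the constant is replaced by one shifted addition and one shifted subtraction, so only a few shifts are needed.

# Sketch: multiply x by a constant expressed as runs of one bits.
def multiply_by_constant(x, runs):
    """runs: list of (high, low) bit positions; a run of ones from bit 'low'
    through bit 'high' contributes 2^(high+1)*x - 2^low*x."""
    y = 0
    for high, low in runs:
        y += (x << (high + 1)) - (x << low)
    return y

if __name__ == "__main__":
    x = 200                                                   # any x < 256
    assert multiply_by_constant(x, [(7, 0)]) == 255 * x       # 2^8*x - x
    assert multiply_by_constant(x, [(9, 5)]) == 992 * x       # 2^10*x - 2^5*x
    # 32131 has one-bit runs 14..10, 8..7 and 1..0
    assert multiply_by_constant(x, [(14, 10), (8, 7), (1, 0)]) == 32131 * x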
PROBLEMS
1. Find the GCD of 70 and 15.
a = INT(7/3) =
a = -INT(1/3) =
a = INT(-1/3) =
a = 10 MOD 3 =
a = -3 MOD 15 =
a = 254 MOD 255 =
9. Is 2·x^2 + 1 a monic polynomial?
10. Write the residues modulo the moduli 5 and 7 of the integer 8.
11. The residues for several integers modulo 5 and 7 are listed below. Compute the Ai
of the Chinese Remainder Method. Then use the Chinese Remainder Method to
determine the integers.
a MOD 5 = 4, a MOD 7 = 6, a = ?
a MOD 5 = 3, a MOD 7 = 5, a = ?
a MOD 5 = 0, a MOD 7 = 4, a = ?
What is the total number of unique integers that can be represented by residues
modulo 5 and 7?
12. Define a fast division algorithm for dividing by 255 on an 8-bit processor that does
not have a divide instruction. The dividend must be less than 65536.
13. What is the total number of unique integers that can be represented by residues
modulo 4 and 11?
1.3 POLYNOMIALS AND SHIFT REGISTER SEQUENCES
Example #2:

      x^3 + x + 1              1011
    *       x + 1     -or-   *   11
    -------------            ------
      x^4 + x^3               11
            x^2 + x            11
                  x + 1         11
    ---------------------    ------
    x^4 + x^3 + x^2 + 1      11101

In Example #2, unlike in ordinary polynomial multiplication, the two x terms cancel.
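In software, binary polynomials are conveniently held as integers, with bit i representing the coefficient of x^i. The sketch below (ours) multiplies two such polynomials, combining partial products with XOR, and reproduces Example #2.

# Sketch: GF(2) polynomial multiplication with integer-coded polynomials.
def gf2_mul(a, b):
    result, shift = 0, 0
    while b:
        if b & 1:
            result ^= a << shift      # add (XOR) the shifted partial product
        b >>= 1
        shift += 1
    return result

if __name__ == "__main__":
    # Example #2: (x^3 + x + 1)(x + 1) = x^4 + x^3 + x^2 + 1, i.e. 1011 * 11 = 11101.
    assert gf2_mul(0b1011, 0b11) == 0b11101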
DIVISION OF POLYNOMIALS
Division is just like ordinary division of polynomials, except that the addition of coeffi-
cients is accomplished with the XOR operation (modulo-2 addition).
Example #1:

                   x^2 + 1                             101
                ---------------                     ---------
  x^3 + x + 1 ) x^5         + 1      -or-    1011 ) 100001
                x^5 + x^3 + x^2                     1011
                ---------------                     ------
                      x^3 + x^2 + 1                   1101
                      x^3 + x   + 1                   1011
                      -------------                   ----
                            x^2 + x                   0110
Example #2:

                   x^2 + 1                             101
                ---------------                     ---------
  x^3 + x + 1 ) x^5 + x^2 + 1        -or-    1011 ) 100101
                x^5 + x^3 + x^2                     1011
                ---------------                     ------
                      x^3       + 1                   1001
                      x^3 + x   + 1                   1011
                      -------------                   ----
                                x                     0010
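Division can be modeled the same way; the sketch below is ours, not the book's. The divisor is repeatedly XORed in, aligned with the leading term of the running dividend, exactly as in the long divisions above.

# Sketch: GF(2) polynomial division returning quotient and remainder.
def gf2_divmod(dividend, divisor):
    dlen = divisor.bit_length()
    quotient = 0
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        quotient |= 1 << shift
        dividend ^= divisor << shift          # subtract (XOR) the aligned divisor
    return quotient, dividend

if __name__ == "__main__":
    # Example #1: (x^5 + 1)/(x^3 + x + 1) = x^2 + 1 remainder x^2 + x.
    assert gf2_divmod(0b100001, 0b1011) == (0b101, 0b110)
    # Example #2: (x^5 + x^2 + 1)/(x^3 + x + 1) = x^2 + 1 remainder x.
    assert gf2_divmod(0b100101, 0b1011) == (0b101, 0b10)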
1.3.2 INTRODUCTION TO SHIFT REGISTERS
A linear sequential circuit (LSC) is constructed with three building blocks. Any
connection is permissible as long as a single output arrow of one block is mated to a
single input arrow of another block.
[Figure: the three LSC building blocks: latches, modulo-2 adders, and constant multipliers, each with a single input and single output.]
Latches are clocked by a synchronous clock. The output of a latch at any point
in time is the binary value that appeared on its input one time unit earlier.
The output of a modulo-2 adder at any point in time is the modulo-2 sum of the
inputs at that time.
For now, a constant multiplier '·a' will be either '·1' or '·0'. If such a constant
multiplier is '·1', a connection exists. No connection exists for a constant multiplier of
'·0'.
AN EXAMPLE OF AN LSC
A linear sequential circuit of the above form is also called a linear feedback shift
register (LFSR), a linear shift register (LSR) or simply a shift register (SIR).
AN EQUIVALENT CIRCUIT WHERE a1 = a2 = a3 = 1
SHIFT REGISTER IMPLEMENTATION OF MULTIPLICATION
Polynomial multiplication can be implemented with a linear shift register.
The circuit below will multiply any input bit stream (input polynomial) by (x + 1).
The product appears on the output line. The number of shifts required is equal to the
sum of the degrees of the input polynomial and the multiplier polynomial plus one.
[Figure: one-stage shift register circuit that multiplies the input bit stream by (x + 1).]
Example #1: Assume the input polynomial to be (x^5 + x^3 + 1).

Input        Shift Reg      Output
Bit          State          Bit
1 (x^5)      1              1 (x^6)
0            0              1 (x^5)
1 (x^3)      1              1 (x^4)
0            0              1 (x^3)
0            0              0
1 (1)        1              1 (x)
0            0              1 (1)
NOTE: The shift register state is shown after the indicated input bit is clocked.
The circuits below will multiply any input bit stream (input polynomial) by
(x^3 + x + 1).
[Figure: two equivalent multiply circuits, Shift Register "A" and Shift Register "B".]
Example #1: Assume the input polynomial to be x^3.
NOTE: The shift register state is shown after the indicated input bit is clocked.
A GENERAL MULTIPLICATION CIRCUIT
[Figure: the general multiplication circuit.]
The circuit shown above multiplies any input polynomial D(x) by a fixed polynomial
P(x). The product appears on the output line.
The number of shifts required is equal to the sum of the degrees of the input
polynomial and multiplier polynomial, plus one.
MULTIPLY CIRCUIT EXAMPLES
[Figure: example circuits to multiply by x^2 + 1, by x^4 + x^3 + 1, and by x^5 + x^3 + x^2 + 1.]
SHIFT REGISTER IMPLEMENTATION OF DIVISION
Polynomial division can be implemented with an LSR.
The circuit below will divide any input bit stream by (x + 1). One shift is re-
quired for each input bit. The quotient appears on the output line. The final state of
the LSR represents the remainder.
[Figure: one-stage shift register circuit that divides the input bit stream by (x + 1).]
Example: Assume the input polynomial to be x^6 + x^5 + x^4 + x^3 + x + 1.
Input        Shift Reg      Output
Bit          State          Bit
1 (x^6)      1 (1)          0
1 (x^5)      0              1 (x^5)
1 (x^4)      1 (1)          0
1 (x^3)      0              1 (x^3)
0            0              0
1 (x)        1 (1)          0
1 (1)        0              1 (1)
The circuit below will divide any input bit stream by (x^3 + x + 1).
[Figure: three-stage shift register divide circuit.]
NOTE: The shift register state is shown after the indicated input bit is clocked.
A GENERAL DIVISION CIRCUIT
[Figure: the general division circuit.]
The circuit above divides any input polynomial D(x) by a fixed polynomial P(x).
The quotient appears on the output line. The remainder is the final shift register state.
[Figure: example circuits to divide by x^2 + 1, by x^4 + x^2 + 1, and by x^6 + x^5 + x^4 + x^3 + 1.]
SHIFT REGISTER IMPLEMENTATION OF SIMULTANEOUS
MULTIPLICATION AND DIVISION
It is possible to use a shift register to accomplish simultaneous multiplication and
division. The circuit below will multiply any input bit stream (input polynomial) by x^3
and simultaneously divide by (x^3 + x + 1). The number of shifts required is equal to
the degree of the input polynomial plus one. The quotient appears on the output line.
The remainder is the final state of the shift register.
[Figure: shift register circuit that multiplies by x^3 and divides by (x^3 + x + 1).]
Example #1: Assume the input polynomial to be (x^5 + 1).
Input Shift Reg Output
Bit State Bit
NOTE: The shift register state is shown after the indicated input bit is clocked.
A CIRCUIT TO MULTIPLY AND DIVIDE SIMULTANEOUSLY
A general circuit to accomplish simultaneous multiplication by a polynomial h(x) of
degree three and division by a polynomial g(x) of degree two is shown below. The
multipliers are all '·1' (connection) or '·0' (no connection).
[Figure: circuit for simultaneous multiplication by h(x) and division by g(x).]
A GENERAL CIRCUIT FOR SIMULTANEOUS MULTIPLICATION AND DIVISION
[Figure: general circuit for simultaneous multiplication and division.]
The circuit above multiplies any input polynomial by P1(x) and simultaneously divides by
P2(x).
[Figure: example circuits to multiply by x^3 + 1 and divide by x^4 + x^2 + 1, and to multiply by x^5 + 1 and divide by x^5 + x^3 + x^2 + 1.]
SIMULTANEOUS MULTIPLICATION AND DIVISION
WHEN THE MULTIPLIER POLYNOMIAL HAS A HIGHER DEGREE
The circuit below shows how to construct a shift register to multiply and divide
simultaneously when the multiplier polynomial has a higher degree. The number of
shifts required is equal to the degree of the input polynomial, plus the degree of the
multiplier polynomial, minus the degree of the divider polynomial, plus one. Register
states are labeled below for the multiply polynomial and above for the divide polynomial.
[Figure: shift register for simultaneous multiplication and division; stages labeled x^2, x, 1.]
The circuit below will multiply an input polynomial a(x) by a fixed polynomial x^3 +
x + 1 and simultaneously multiply an input polynomial b(x) by the fixed polynomial x^2 +
1 and sum the products. The sum of the products appears on the output line. The
number of shifts required is equal to the sum of the degrees of the input polynomial of
the highest degree and the fixed polynomial of the highest degree plus one.
[Figure: circuit with inputs a(x) and b(x) whose output is a(x)·(x^3 + x + 1) + b(x)·(x^2 + 1).]
a(x)       b(x)       Shift Reg   Output
Input      Input      State       Bit
0          1 (x^5)    101         0
0          0          010         1 (x^7)
1 (x^3)    1 (x^3)    010         1 (x^6)
0          0          100         0
0          0          000         1 (x^4)
0          1 (1)      101         0
0          0          010         1 (x^2)
0          0          100         0
0          0          000         1 (1)
NOTE: The shift register state is shown after the indicated input bit is clocked.
SHIFT REGISTER IMPLEMENTATION TO COMPUTE A SUM OF PRODUCTS
MODULO A DIVISOR
A single shift register can be used to compute the remainder of the sum of pro-
ducts of different variable polynomials with different fixed polynomials when divided by
another polynomial, e.g., [a(x)·h1(x) + b(x)·h2(x)] MOD g(x).
The circuit below will multiply an input polynomial a(x) by a fixed polynomial x^2
+ x + 1 and simultaneously multiply an input polynomial b(x) by the fixed polynomial x^2
+ 1 and sum the products. The sum of the products is reduced modulo the divisor
polynomial g(x).
The shift register contents at the end of the operation is the result. The number
of shifts required is equal to the degree of the input polynomial of the highest degree
plus one.
[Figure: circuit with inputs a(x) and b(x) computing [a(x)·(x^2 + x + 1) + b(x)·(x^2 + 1)] MOD g(x).]
OTHER FORMS OF THE DIVISION CIRCUIT
The circuit examples below are implemented using the internal-XOR form of shift
register.
[Figure: two internal-XOR shift register circuits. The first premultiplies by x^3 and divides by x^3 + x + 1; the second divides by x^3 + x + 1 without premultiplying.]
The circuit shown below can accomplish the circuit function of either of the
circuits shown above. If the gate is enabled for the entire input polynomial, the circuit
function is to premultiply by x^3 and divide by (x^3 + x + 1). However, if the gate is
disabled for the last m (m is 3 in this case) bits of the input polynomial, the circuit
function is to divide by (x^3 + x + 1) without premultiplying. In the following general
discussion, g(x) is the division polynomial and m is the degree of the division polyno-
mial.
[Figure: gated internal-XOR shift register circuit.]
EXTERNAL-XOR FORM OF SHIFT REGISTER DIVIDE CIRCUIT
There is another form of the shift register divide circuit called the external-XOR
form that in many cases can be implemented with less logic than the internal-XOR form.
An example is shown below.
[Figure: external-XOR form of the shift register divide circuit, shown in two variants.]
The external-XOR form of the shift register can be implemented two ways.
1. The shift register input is enabled during the entire read of the input polynomial.
In this case, the circuit function is premultiply by xm and divide by g(x).
2. The shift register input is disabled during the last m bits of the input polynomial.
In this case, the circuit function is divide by g(x).
Example #1. Input to shift register enabled during entire read of input polynomial.
[Figure: external-XOR divide circuit and shift register contents for Example #1; the final register contents are the remainder.]
2. After read is complete, disable the gate and clock m more times to place the
remainder on the output line.
                 x^5 + x^3 + x^2 + x
              ---------------------------
x^3 + x + 1 ) x^8                            (x^8 because of premultiply)
              x^8 + x^6 + x^5
              ---------------
                    x^6 + x^5
                    x^6 + x^4 + x^3
                    ---------------
                          x^5 + x^4 + x^3
                          x^5 + x^3 + x^2
                          ---------------
                                x^4 + x^2
                                x^4 + x^2 + x
                                -------------
                                          x     (remainder)
Example #2. Input to shift register disabled during last m bits of input polynomial.
Circuit function = a(x)/g(x),
where a(x) = x^5 and g(x) = x^3 + x + 1.
[Figure: external-XOR divide circuit for Example #2.]
Output
1. Up to the last m bits, the output is the quotient.
2. During the last m bits, the output is the remainder.
                 x^2 + 1
              -------------
x^3 + x + 1 ) x^5
              x^5 + x^3 + x^2
              ---------------
                    x^3 + x^2
                    x^3 + x + 1
                    -----------
                    x^2 + x + 1     (remainder)
PERFORMING POLYNOMIAL MULTIPLICATION AND DIVISION
WITH COMBINATORIAL LOGIC
Computing parity across groups of data bits using the circuit below was previously
studied.
a(x) = d6·x^6 + d5·x^5 + d4·x^4 + d3·x^3 + d2·x^2 + d1·x + d0
[Figure: three-stage shift register circuit with the parity outputs:]
P0 = d4 + d3 + d2 + d0
P1 = d5 + d2 + d1 + d0
P2 = d6 + d3 + d2 + d1
Now that polynomials have been introduced, the function of this circuit can be
restated. It premultiplies the input polynomial by x^3 and divides by (x^3 + x + 1).
Obviously, the parity check equations can be implemented with combinatorial logic.
Therefore, the circuit function can be implemented with combinatorial logic.
[Figure: combinatorial XOR trees computing P0 from d4, d3, d2, d0; P1 from d5, d2, d1, d0; and P2 from d6, d3, d2, d1.]
The combinatorial logic circuit above computes the remainder from premultiplying a
7-bit input polynomial by x^3 and dividing by (x^3 + x + 1).
THE SHIFT REGISTER AS A SEQUENCE GENERATOR
Consider the circuit below:
If this circuit is initialized to '001' and clocked, the sequence below will be gener-
ated.
The sequence repeats every seven shifts. The length of the sequence is seven.
The maximum length that a shift register can generate is 2^m - 1, where m is the shift
register length. Shift registers do not always generate the maximum length sequence.
The sequence length depends on the implemented polynomial. It will be a maximum
length sequence only if the polynomial is primitive.
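A software model of the sequence generator is given below (a sketch of ours, assuming an internal-XOR arrangement with feedback polynomial x^3 + x + 1). Because that polynomial is primitive, the three-stage register steps through all 2^3 - 1 = 7 nonzero states before repeating.

# Sketch: a shift register as a sequence generator.
def lfsr_sequence(poly, start):
    """Internal-XOR LFSR: shift left; when a bit falls off the top, XOR in the
    low-order terms of the polynomial.  Returns the cycle containing 'start'."""
    degree = poly.bit_length() - 1
    mask = (1 << degree) - 1
    states, state = [], start
    while True:
        states.append(state)
        state <<= 1
        if state > mask:
            state = (state & mask) ^ (poly & mask)
        if state == start:
            return states

if __name__ == "__main__":
    seq = lfsr_sequence(0b1011, 0b001)        # x^3 + x + 1, initial state 001
    print([format(s, "03b") for s in seq])    # seven distinct states, then the cycle repeats
    assert len(seq) == 7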
1.3.3 MORE ON POLYNOMIALS
Polynomial Period. The period of a polynomial P(x) is the least positive integer e
such that (x^e + 1) is divisible by P(x).
A PROPERTY OF RECIPROCAL POLYNOMIALS
The reciprocal polynomial can be used to generate a sequence in reverse of that genera-
ted by the forward polynomial.
Shift Register "A"
Clock    Contents
0        001
1        010
2        100
3        011
4        110
Transfer the contents of shift register "A" to shift register "B" and clock four times.
Shift Register "B"
Clock    Contents
0        110
1        011
2        100
3        010
4        001
Shift register "B" retraces in the reverse direction the states of shift register "A".
The property of reciprocal polynomials described above will be used later for decoding
some types of error-correcting codes.
DETERMINING THE PERIOD OF AN IRREDUCIBLE POLYNOMIAL
WITH BINARY COEFFICIENTS
The algorithm described below for determining the period of an irreducible polyno-
mial g(x) with binary coefficients requires a table. The table is used in determining the
residues of powers of x up to (2^m - 1).
The table is a list of the residues of x, x^2, x^4, ..., x^(2^(m-1)) modulo g(x), where m
is the degree of g(x). Each entry in the table can be computed by squaring the prior
entry and reducing modulo g(x). The justification is as follows.
    x^(2a) MOD g(x) = (x^a · x^a) MOD g(x)
                    = {[x^a MOD g(x)] · [x^a MOD g(x)]} MOD g(x)
                    = [x^a MOD g(x)]^2 MOD g(x)
The example below illustrates the use of the table for determining the residue of
x^50 modulo g(x).
    x^50 MOD g(x) = [x^(32+16+2)] MOD g(x) = [x^32 · x^16 · x^2] MOD g(x)
                  = {[x^32 MOD g(x)] · [x^16 MOD g(x)] · [x^2 MOD g(x)]} MOD g(x)
(Select these residues from the table.)
The period of an irreducible polynomial of degree m must be a divisor of (2^m - 1).
For each e that is a divisor of 2^m - 1, compute the residue of x^e modulo g(x) by
multiplying together and reducing modulo g(x) an appropriate set of residues from the
table.
The period of the polynomial is the least e such that the residue of x^e modulo
g(x) is one. If the period is 2^m - 1, the polynomial is primitive.
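The procedure translates directly into software. The sketch below (ours) builds the table of residues of x, x^2, x^4, ... by repeated squaring and then tests each divisor e of 2^m - 1; the helper names are illustrative.

# Sketch: period of an irreducible binary polynomial via the squared-residue table.
def gf2_mod(a, g):
    glen = g.bit_length()
    while a.bit_length() >= glen:
        a ^= g << (a.bit_length() - glen)
    return a

def gf2_mulmod(a, b, g):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return gf2_mod(r, g)

def poly_period(g):
    """Least divisor e of 2^m - 1 with x^e MOD g(x) = 1 (m = degree of g)."""
    m = g.bit_length() - 1
    order = (1 << m) - 1
    table = [gf2_mod(0b10, g)]                    # residue of x
    for _ in range(1, m):
        table.append(gf2_mulmod(table[-1], table[-1], g))   # square the prior entry
    for e in sorted(d for d in range(1, order + 1) if order % d == 0):
        r, bit = 1, 0
        while (1 << bit) <= e:                    # multiply the table entries selected by e's bits
            if e & (1 << bit):
                r = gf2_mulmod(r, table[bit], g)
            bit += 1
        if r == 1:
            return e
    return None

if __name__ == "__main__":
    print(poly_period(0b1011))     # x^3 + x + 1 -> 7 (primitive)
    print(poly_period(0b11111))    # x^4 + x^3 + x^2 + x + 1 -> 5 (not primitive)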
DETERMINING THE PERIOD OF A COMPOSITE POLYNOMIAL
WITH BINARY COEFFICIENTS
Let fi(x) represent the irreducible factors of f(x) and ei their periods. If
    f(x) = [f1(x)]^m1 · [f2(x)]^m2 · [f3(x)]^m3 ···
the period e of f(x) is given by:
    e = k·LCM(e1, e2, e3, ...)
where k is the least power of two which is not less than any of the mi. If there are
no repeating factors, all mi = 1, k = 1, and e = LCM(e1, e2, e3, ...).
Example: The period of (x^3 + x^2 + x + 1) = (x + 1)^3 is 4.
An alternative method is to initialize a shift register implementing f(x) to the '00···01'
state and clock the shift register until it returns to the '00···01' state. The number of
clocks required is the period of f(x).
This method can be used to compute the period of composite as well as irreducible
polynomials. However, it can be very time consuming when the period is large.
NUMBER OF PRIMITIVE POLYNOMIALS OF GIVEN DEGREE
The divisors (factors) of (x^(2^m - 1) + 1) are the polynomials with period 2^m - 1 or
whose period divides 2^m - 1. This may include polynomials of degree less than or greater
than m.
The divisors (factors) of (x^(2^m - 1) + 1) that are of degree m are the primitive poly-
nomials of degree m.
The number of primitive polynomials of degree m is given by:

    φ(2^m - 1) / m

where φ(x) is Euler's phi function, the number of positive integers equal to or
less than x that are relatively prime to x:

    φ(x) = Π (pi)^(ei - 1) · (pi - 1)
            i
where
    pi = the prime factors of x
    ei = the powers of the prime factors pi
Example: There are 30 positive integers that are equal to or less than 31 and rela-
tively prime to 31. Therefore, there are 6 primitive polynomials of degree
5.
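A short sketch (ours) of this count, using the product formula for Euler's phi function, is shown below.

# Sketch: number of primitive polynomials of degree m = phi(2^m - 1) / m.
def euler_phi(x):
    """phi(x) = product over prime factors p of p^(e-1) * (p - 1)."""
    phi, n, p = 1, x, 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            phi *= p ** (e - 1) * (p - 1)
        p += 1
    if n > 1:
        phi *= n - 1                  # remaining factor is prime with exponent 1
    return phi

def primitive_poly_count(m):
    return euler_phi((1 << m) - 1) // m

if __name__ == "__main__":
    print(euler_phi(31))              # 30
    print(primitive_poly_count(5))    # 6, matching the example above
    print(primitive_poly_count(3))    # 2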
SHIFT REGISTER SEQUENCES USING A NONPRIMITIVE POLYNOMIAL
Previously, a maximum length sequence generated by a primitive polynomial was
studied. Nonprimitive polynomials generate multiple sequences.
The state sequence diagram shown below is for the irreducible nonprimitive polyno-
mial
    x^4 + x^3 + x^2 + x + 1
[State sequence diagram: the nonzero states form cycles of length five; one such cycle is
0101, 1010, 1011, 1001, 1101.]
The state sequence diagram shown below is for the reducible polynomial
    x^4 + x^3 + x^2 + 1 = (x + 1)·(x^3 + x + 1)
Each of the four sequences directly above contains states with either an odd num-
ber of one bits or an even number of one bits, but not both. This is caused by the
(x + 1) factor.
REDUCTION MODULO A FIXED POLYNOMIAL
It is frequently necessary to reduce an arbitrary polynomial modulo a fixed polyno-
mial, or it may be necessary to reduce the result of an operation modulo a fixed poly-
nomial.
The arbitrary polynomial could be divided by the fixed polynomial and the remain-
der retained as the modulo result.
For example, x^4 reduced modulo (x^3 + x + 1) is x^2 + x.
Other examples of arbitrary polynomials reduced modulo (x^3 + x + 1) are shown
below.
x^4 + x^2 = (x^2 + x) + x^2
          = x

x^9 = x^7 + x^6
    = (x^5 + x^4) + (x^4 + x^3)
    = x^5 + x^3
    = (x^3 + x^2) + x^3
    = x^2
DIVIDING BYA COMPOSITE POLYNOMIAL
Sometimes it is necessary to divide a received polynomial C'(x) by a composite
polynomial p(x) = p1(x)·p2(x)·p3(x)···, where p1(x), p2(x), p3(x), ... are relatively prime
in pairs. Assume the remainder is to be checked for zero.
The remainder could be checked for zero after dividing the received polynomial by
the composite polynomial. However, dividing the received polynomial by the individual
factors of the composite polynomial and checking all individual remainders for zero
would be equivalent.
[Figure: the received polynomial is divided separately by each factor of the composite polynomial, producing individual remainders r1(x) and r2(x).]
The received polynomial could be divided directly by each factor of the composite
polynomial to get individual remainders. However, the following two-step procedure
would be equivalent.
In many cases, a slower process can be used in step 2 than in step 1 because
fewer cycles are required in dividing the composite remainder.
The diagram below shows an example of computing individual remainders from a
composite remainder using combinatorial logic.
Example #2
[Figure: combinatorial logic deriving the individual remainders from the composite remainder r(x).]
It is also possible to compute a composite remainder from individual remainders, as
shown below.
Example #3
[Figure: combinatorial logic deriving the composite remainder from the individual remainders r1(x) and r2(x).]
In the examples above, the factors of the composite polynomial are assumed to be
relatively prime. If this is the case, the Chinese Remainder Theorem for polynomials
guarantees a one-to-one mapping between composite remainders and sets of individual
remainders.
To understand how the connections in circuit Examples #2 and #3 were determined,
study the mappings below. To generate the first mapping, the individual remainders
corresponding to each composite remainder are determined by dividing each possible
composite remainder by the factors of the composite polynomial. For the second mapp-
ing, the composite remainder corresponding to each set of individual remainders is
determined by rearranging the first mapping.
The boxed areas of the first mapping establish the circuit connections for Example
#2. The boxed areas of the second mapping establish the circuit connections for Ex-
ample #3. There are other ways to establish these mappings. The method shown here
has been selected for simplicity. However, in a practical sense it is limited to polyno-
mials of a low degree.
FIRST MAPPING
Corresponding
Composite Individual
Remainder Remainders
0000 000 0
0001 001 1
0010 010 1
0011 011 0
0100 100 1
0101 101 0
0110 110 0
0111 111 1
1000 011 1
1001 010 0
1010 001 0
1011 000 1
1100 111 0
1101 110 1
1110 101 1
1111 100 0
SECOND MAPPING
Corresponding
Individual Composite
Remainders Remainder
000 0 0000
000 1 1011
001 0 1010
001 1 0001
010 0 1001
010 1 0010
011 0 0011
011 1 1000
100 0 1111
100 1 0100
101 0 0101
101 1 1110
110 0 0110
110 1 1101
111 0 1100
111 1 0111
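The first mapping can be regenerated by simply dividing every possible composite remainder by each factor of the composite polynomial. The figures for this example do not survive, so the sketch below (ours) assumes the factor polynomials p1(x) = x^3 + x + 1 and p2(x) = x + 1, which are consistent with every row of the tables above.

# Sketch: regenerate the first mapping from the assumed factor polynomials.
def gf2_mod(a, g):
    """Remainder of binary polynomial a(x) modulo g(x)."""
    glen = g.bit_length()
    while a.bit_length() >= glen:
        a ^= g << (a.bit_length() - glen)
    return a

P1, P2 = 0b1011, 0b11        # assumed factors of the composite polynomial

if __name__ == "__main__":
    for composite in range(16):
        r1 = gf2_mod(composite, P1)
        r2 = gf2_mod(composite, P2)
        print(f"{composite:04b}   {r1:03b} {r2:01b}")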
PROBLEMS
1. Write the sequence for the circuit below.
   x^3 + x + 1        x^3 + x^2 + x + 1        x^5 + 1
   x + 1              x^3 + 1                  x^3 + x + 1
4. Perform the division operations below. Show the quotient and the remainder.
   x^3 + x + 1 ) x^6 + x + 1                   x^3 + x + 1 ) x^3 + x
5. Determine the period of the following polynomials:
   x^3 + 1,  x^3 + x^2 + 1
6. Show a circuit to multiply by (x^3 + 1).
7. Show a circuit to divide by (x^3 + 1).
8. Show a circuit to compute a remainder modulo (x^3 + x^2 + 1) using combinatorial
   logic. The input polynomial is 7 bits in length.
9. Is (x^2 + x + 1) reducible?
10. Compute the reciprocal polynomial of (x^4 + x + 1).
11. How many primitive polynomials are of degree 4?
CHAPTER 2 - ERROR DETECTION
AND CORRECTION FUNDAMENTALS
[Figure: three-stage internal-XOR shift register divide circuit.]
Property #1
If the shift register above is receiving a stream of bits, the last m bits (in this
case three) must match the shift register contents in order for the final shift register
state to be zero. This is because a difference between the input data bit and the high
order shift register stage causes at least the low order stage to be loaded with '1'.
Assume an all-zeros data record. Any burst of length m or fewer bits will leave
the shift register in a nonzero state. If an error burst of length greater than m bits is
to leave the shift register in its zero state, the last m bits of the burst must match the
shift register contents created by the error bits which preceded the last m bits of the
burst.
Property #2
Assume the shift register is zero. Receiving an error burst of length m or fewer
bits has the same effect as placing the shift register at the state represented by the
sequence of error bits.
When reading an all-zeros data record, an error burst of length m or fewer bits
sets the shift register to a state on its sequence that is b shifts away from the state
representing the error burst, where b is the length of the burst.
SELECTING CHECK BITS
Property #1 implies that for all-zero data, any burst of length m or fewer bits is
guaranteed to be detected. Property #2 indicates that for all-zero data, it may be
possible to correct some bursts of length less than m bits by clocking the shift register
along its sequence until the error burst is contained within the shift register.
Clearly, we must find a way to extend these results to cases of nonzero data if
they are to be of any use. The following discussion describes intuitively how check bits
must be selected so that on read, the received polynomial leaves the shift register at
zero in the absence of error.
Assume a shift register configuration that premultiplies by x^m and divides by g(x).
On write, after clocking for all data bits has been completed, the shift register will
likely be in a nonzero state if nonzero data bits have been processed. If we transmit
as check bits following the data bits, the contents of the shift register created by
processing the data bits, then on read in the absence of error, the received data bits
will create the same pattern in the shift register, and the received check bits will
match this pattern, leaving the shift register in its zero state.
The concatenation of the data bits and their associated check bits is called a
codeword polynomial or simply a codeword. A codeword C(x) generated in the manner
outlined above by a shift register implementing a generator polynomial g(x) has the
property:
C(x) MOD g(x) = 0
This is a mathematical restatement of the condition that processing a codeword
must leave the shift register in its zero state.
Theorem 2.1.1. The Euclidean Division Algorithm. If D(x) and g(x) are polynomials
with coefficients in a field F, and g(x) is not zero, there exist polynomials q(x) (the
quotient) and r(x) (the remainder) with coefficients in F such that
D(x) = q(x)·g(x) + r(x), where the degree of r(x) is less than the degree of g(x).
The Euclidean Division Algorithm provides a formal justification for the method of
producing check bits outlined above. By the Euclidean Division Algorithm,
D(x) = q(x)·g(x) + r(x)
where
D(x) = Data polynomial
g(x) = Generator polynomial
q(x) = Quotient polynomial
r(x) = Remainder polynomial
Rearranging gives
    D(x) + r(x) = q(x)·g(x)
so the sum of the data polynomial and its remainder is exactly divisible by g(x).
The following symbology will be used in our discussion of error detection and
correction codes:
D(x)  = Data polynomial
k     = Number of information symbols = degree of D(x) + 1
g(x)  = Code generator polynomial
m     = Number of check symbols = degree of g(x)
W(x)  = Write redundancy (check) polynomial
      = x^m·D(x) MOD g(x)
C(x)  = Transmitted codeword polynomial
      = x^m·D(x) + W(x) = x^m·D(x) + [x^m·D(x) MOD g(x)]
n     = Record length = degree of C(x) + 1 = k + m
E(x)  = Error polynomial
C'(x) = Received codeword polynomial
      = C(x) + E(x)
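These relations are easy to exercise in software. The sketch below (ours) uses the running example g(x) = x^3 + x + 1 and integer-coded polynomials to form W(x) and C(x) and to confirm the codeword property C(x) MOD g(x) = 0.

# Sketch: check-bit generation per the symbology above.
def gf2_mod(a, g):
    glen = g.bit_length()
    while a.bit_length() >= glen:
        a ^= g << (a.bit_length() - glen)
    return a

def encode(data, g):
    """Append m check bits to the data polynomial."""
    m = g.bit_length() - 1
    shifted = data << m              # x^m * D(x)
    w = gf2_mod(shifted, g)          # W(x)
    return shifted ^ w               # C(x)

if __name__ == "__main__":
    g = 0b1011                          # g(x) = x^3 + x + 1, so m = 3
    c = encode(0b1101001, g)            # 7 data bits -> 10-bit codeword
    assert gf2_mod(c, g) == 0           # codeword property C(x) MOD g(x) = 0
    assert gf2_mod(c ^ 0b100, g) != 0   # a single-bit error leaves a nonzero remainder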
An implementation of the encoding process using the internal-XOR form of shift
register circuit is shown below. This particular example premultiplies by x^3 and divides
by (x^3 + x + 1).
[Figure: internal-XOR encoder; DATA feeds the shift register and a MUX drives the WRITE DATA/CHECK BITS line.]
After all DATA bits have been clocked into the shift register, the CHECK_BIT_TIME
signal is asserted. The AND gate then disables feedback, allowing the check
bits to be shifted out of the shift register, and the MUX passes the check bits to the
device.
[Figure: alternate encoder implementation using an ODD parity tree and a MUX driving the WRITE DATA/CHECK BITS line.]
After all DATA bits have been clocked into the shift register, the CHECK_BIT_TIME
signal is asserted. The upper AND gate then disables feedback and the
lower AND gate blocks extraneous DATA input to the ODD parity tree, whose output
the MUX passes as check bits to the device.
SINGLE-BURST DETECTION SPAN FOR AN ERROR-DETECTION CODE
The single-burst detection span for a detection-only code is equal to the shift
register length. This is obvious from Property #1 discussed earlier. Assume a shift
register configuration that premultiplies by x^m and divides by g(x). Assume the shift
register to be initialized to zero and assume an all zeros data record. The only '1' bits
to enter the shift register will be from an error burst. The first bit of the burst sets
certain shift register bits to 1, including the low order bit.
In order to set the shift register to zero, the next m error burst bits must match
the shift register contents. Therefore, in order for an error burst to set the shift
register to zero, it must be longer than the length of the shift register.
This can also be demonstrated mathematically. It must be shown that the
length of an error burst required to leave the shift register at zero is greater than m
bits. For an error burst to leave the shift register at zero, it must be divisible by the
generator polynomial. It must be shown that to be divisible by the polynomial, a burst
must be greater than m bits in length.
Let E(x) contain a single error burst of length m or fewer bits. Let the lowest-
order nonzero coefficient of E(x) be the coefficient of the x^j term of C'(x). Then:
    E(x) = x^j·b(x)
where the lowest-order nonzero coefficient of b(x) is that of x^0 and the length of the
burst is equal to the degree of b(x) plus one. It is clear that x^j and g(x) are relatively
prime, so if g(x) is to divide E(x) it must divide b(x). This is impossible, since if the
burst is of length m or fewer bits, b(x) is a polynomial of degree at most (m-1) and is
clearly not divisible by g(x), which is of degree m.
- 53 -
THEOREMS FOR ERROR-DETECTION CODES
Theorem 2.1.2. All single-bit errors will be detected by any code whose generator
polynomial has more than one term. The simplest example is the code generated by the
polynomial (x + 1).
Theorem 2.1.3. All cases of an odd number of bits in error will be detected by a
code whose generator polynomial has (x^c + 1), where c is greater than zero, as a factor.

The check bit generated by (x + 1) is simply an overall parity check. All polyno-
mials of the form (x^c + 1) are divisible by (x + 1). Therefore, any code whose generator
polynomial has a factor of the form (x^c + 1) automatically includes an overall parity
check.
Theorem 2.1.4. A code will detect all single- and double-bit errors if the record
length (including check bits) is no greater than the period of the generator polynomial.
Theorem 2.1.5. A code will detect all single-, double-, and triple-bit errors if its
generator polynomial is of the form (x^c + 1)·P(x) and the record length (including check
bits) is no greater than the period of the generator polynomial.
Theorem 2.1.6. A code generated by a polynomial of degree m detects all single
burst errors of length no greater than m. Note that a burst of length b is defined as
any error pattern for which the number of bits between and including the first and last
bits in error is b.
Theorem 2.1.7. A code with a generator polynomial of the form (x^c + 1)·P(x) has
a guaranteed double-burst detection capability provided the record length (including
check bits) is no greater than the period of the generator polynomial. It will detect
any combination of double bursts when the length of the shorter burst is no greater
than the degree of P(x) and the sum of the burst lengths is no greater than (c+1).
This theorem allows selection of a code by structure for accomplishing double-burst
detection. Codes which do double-burst detection can also be selected by a computer
evaluation of random polynomials.
Theorem 2.1.8. The misdetection probability Pmd, defined as the fraction of error
bursts of length b>m (where m is the degree of the generator polynomial) that go un-
detected, is:

Pmd = 1/2^(m-1)    if b = (m+1)

When all errors are assumed to be possible and equally probable, Pmd is given by:

Pmd ≈ 1/2^m
If some particular error bursts are more likely to occur than others (which is
generally the case), then the misdetection probability depends on the particular poly-
nomial and the nature of the errors.
- 54 -
MULTIPLE-SYMBOL ERROR DETECTION
An error-detection code can be constructed from the binary BCH or Reed-Solomon
codes to achieve multiple-bit or multiple-symbol error detection. See Sections 3.3 and
3.4.
CAPABILITY OF A PARTICULAR ERROR-DETECTION CODE: CRC-CCITT CODE

The generator polynomial for the CRC-CCITT code is:

x^16 + x^12 + x^5 + 1 = (x + 1)·(x^15 + x^14 + x^13 + x^12 + x^4 + x^3 + x^2 + x + 1)
When the code is used with a 2088-bit record, it has a guaranteed detection capa-
bility for the following double bursts:
Length of         Length of
First Burst       Second Burst
    1                1 to 6
    2                1 to 5
    3                1 to 4
    4                1 to 4
    5                1 to 2
    6                1
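For reference, a bitwise software computation of the CRC-CCITT check bits can be written directly from the generator polynomial. The sketch below is an illustration under stated assumptions: the initial register value (zero), the bit ordering (most significant bit of each byte first), and the absence of any final inversion are choices made here for simplicity, not properties taken from the text.

#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC for x^16 + x^12 + x^5 + 1; 0x1021 encodes the polynomial
   with the x^16 term implied.                                           */
uint16_t crc_ccitt(const uint8_t *msg, size_t nbytes)
{
    uint16_t reg = 0x0000;                 /* assumed initial value       */
    size_t i;
    int bit;

    for (i = 0; i < nbytes; i++) {
        for (bit = 7; bit >= 0; bit--) {   /* MSB of each byte first      */
            unsigned in = (msg[i] >> bit) & 1u;
            unsigned fb = ((reg >> 15) & 1u) ^ in;
            reg = (uint16_t)(reg << 1);
            if (fb)
                reg ^= 0x1021;             /* x^12 + x^5 + 1 feedback     */
        }
    }
    return reg;                            /* 16 check bits               */
}

int main(void)
{
    uint8_t record[3] = { 0xC1, 0x02, 0x5A };      /* arbitrary sample    */
    printf("CRC-CCITT check bits: 0x%04X\n", crc_ccitt(record, 3));
    return 0;
}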
- 55 -
2.2 CORRECTION FUNDAMENTALS
This section introduces single-bit and single-burst error correction from the view-
point of shift register sequences.
The examples given use very short records and small numbers of check bits. How-
ever, the same techniques apply to longer records and greater numbers of check bits as
well.
Both the encode and decode shift registers premultiply by x^m and divide by g(x).
Again m is three and g(x) = x^3 + x + 1.
ENCODE CIRCUIT
[Circuit diagram: encode shift register with WRITE DATA bits d3 d2 d1 d0 entering through a gate and a MUX selecting data or check bits for output.]
For encoding, the shift register is first cleared. Data bits d3, d2, dl, and dO are
processed and simultaneously passed through the MUX to be sent to the storage device
or channel.
After data bits are processed, the gate is disabled and the MUX is switched from
data bits to the high order shift register stage. The shift register contents are then
sent to the storage device or channel as check bits.
- 56 -
DECODE CIRCUIT
[Circuit diagram: decode circuit with a 7-bit FIFO buffer and syndrome shift register; RAW DATA in, CORRECTED DATA out.]
Decoding takes place in two cycles; the buffer load cycle and the buffer unload
cycle. A syndrome is generated by the shift register circuit as the buffer is loaded.
Correction takes place as the buffer is unloaded. The shift register is cleared just
prior to the buffer load cycle.
The shift register sequence (nonzero states):

001
010
100
011
110
111
101
Assume an all-zeros data record. Assume data bit d1 is in error. The contents of
the decode shift register during buffer load would be as shown below.
- 57 -
The syndrome remains in the shift register as the buffer unload cycle begins. The
shift register is clocked as data bits are unloaded from the buffer. As each clock
occurs, the shift register clocks through its sequence. Simultaneously, the gate mon-
itors the shift register contents for the '100' state. Correction takes place on the next
clock after the '100' state is detected.
The shift register contents during the buffer load and buffer unload cycles are
shown below.

Buffer load cycle:

011   d1 clock   The error forces the SIR to this point on the sequence.
110   dO clock   Advances the SIR to this point on the sequence.
111   p2 clock   "
101   p1 clock   "
001   pO clock   The final state of the SIR = the syndrome.

Buffer unload cycle:

001   (syndrome)
010   d3 clock
100   d2 clock   *
                 (d1 is corrected on the next clock)

* The three-input gate enables after this clock because the '100' state is detected.
- 58-
Since the data record is all zeros, the shift register remains all zeros until the
error bit d1 is clocked. The shift register is then set to the '011' state. As each new
clock occurs, the shift register advances along its sequence. There is an advance for
dO, p2, p1, and pO. After the pO clock, the shift register is at state '001'. This is the
syndrome for the assumed error.
When the error bit occurs, it has the same effect on the shift register as loading
the shift register with '100' and clocking once. Regardless of where the error occurs,
the first nonzero state of the shift register is '011'.
Error displacement from the end of the record is the number of states between the
'100' state and the syndrome. It is determined by the number of times the shift reg-
ister is clocked between the error occurrence and the end of record.
Consider what happens on the shift register sequence during the buffer unload
cycle. The number of states between the syndrome and '100' state represents the error
displacement from the front of the record. To determine when to correct, it is suffi-
cient to monitor the shift register for state '100'. Correction occurs on the next clock
after this state is detected.
Consider the case when the data is not all zero. The check bits would have been
selected on write such that when the record (data plus check bits) is read without
error, a syndrome of zero results. When an error occurs, the operation differs from the
all-zeros data case, only while the syndrome is being generated. A given error results
in the same syndrome, regardless of data content because the code is linear. Once a
syndrome is computed, the operation is the same as previously described for the all-
zeros data case.
The code discussed above is a single-error correcting (SEC) Hamming code. It can
be implemented with combinatorial logic as well as sequential logic.
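A software model of the sequential decoding procedure just described can make the two cycles concrete. The C sketch below is an illustration only; it assumes the seven-bit record and g(x) = x^3 + x + 1 used in the example, forms the syndrome during the buffer load pass, and corrects the bit that follows detection of the '100' state during the buffer unload pass.

#include <stdio.h>

#define M    3
#define N    7
#define TAPS 0x3
#define MASK 0x7

/* One clock of the internal-XOR register (premultiply by x^3, divide by g). */
static unsigned clock_sr(unsigned sr, unsigned bit)
{
    unsigned fb = ((sr >> (M - 1)) & 1u) ^ bit;
    sr = (sr << 1) & MASK;
    return fb ? (sr ^ TAPS) : sr;
}

int main(void)
{
    /* All-zeros record with data bit d1 in error, as in the text:
       positions are d3 d2 d1 d0 p2 p1 p0.                                */
    unsigned buf[N] = { 0, 0, 1, 0, 0, 0, 0 };
    unsigned sr = 0;
    int i, fix;

    for (i = 0; i < N; i++)                 /* buffer load: form syndrome */
        sr = clock_sr(sr, buf[i]);
    printf("syndrome = %u%u%u\n", (sr >> 2) & 1u, (sr >> 1) & 1u, sr & 1u);

    fix = (sr == 0x4);                      /* '100': first bit in error  */
    for (i = 0; i < N; i++) {               /* buffer unload              */
        if (fix)
            buf[i] ^= 1u;                   /* correct on the next clock  */
        sr = clock_sr(sr, 0);               /* advance along the sequence */
        fix = (sr == 0x4);
    }

    printf("corrected record:");
    for (i = 0; i < N; i++) printf(" %u", buf[i]);
    printf("\n");
    return 0;
}

With the assumed error the program prints syndrome '001' and an all-zeros corrected record, matching the shift register states listed above.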
- 59 -
SINGLE-BIT ERROR CORRECTION AND DOUBLE-BIT ERROR DETECTION
If an (x + 1) factor is combined with the polynomial of the previous example, the
resulting polynomial
g(x) = (x + 1)·(x^3 + x + 1) = x^4 + x^3 + x^2 + 1
can be used to correct single-bit errors and detect double-bit errors on seven-bit rec-
ords (three data bits and four check bits). Double-bit errors are detected regardless of
the separation between the two error bits.
g(x) has four sequences: two sequences of length one and two sequences of length
seven.

SEQ A     SEQ B
0001      0011
0010      0110
0100      1100
1000      0101
1101      1010
0111      1001
1110      1111
The circuit below could be used for decoding. Encoding would be performed with
a shift register circuit premultiplying by x^m and dividing by g(x).
GATE A
- 60 -
Gate A detects the '1000' state on the clock prior to the clock that corrects the
error. Gate B blocks the shift register feedback on the clock following detection of the
'1000' state. This causes the shift register to be cleared.
If three bit-errors occur, the syndrome will be on sequence A. During the buffer
unload cycle, the shift register state '1000' is detected and a data bit is falsely cleared
or set. This is miscorrection because the bit affected is not one of the bits in error.
This code corrects a single-bit error. It detects all occurrences of an even num-
ber of bits in error. When more than one bit is in error and the total number of bits
in error is odd, miscorrection results.
- 61 -
CORRECTION OF LONGER BURSTS
The concepts discussed above can be extended to correction of longer bursts as
well.
To construct such a code, select a reducible or irreducible polynomial meeting the
following requirements.
1. Each correctable burst must be on a separate sequence.
2. The sequence length must be equal to or greater than the record length (in
bits, including check bits) for sequences containing a correctable burst.
3. Any burst that is to be guaranteed detectable must not be on a sequence
containing a correctable burst.
Assume a polynomial with multiple sequences and that the bursts '1', '11', '101',
and '111' are all on separate sequences of equal length. There may be other sequences
as well.
Such a code has at least the following capability: its correction span can be
selected to be one, two, or three bits. In any case, its detection span is guaranteed
to be at least three.
Primitive polynomials can also be used for single-burst correction. In this case,
the polynomial requirements are:
1. The polynomial period must be equal to or greater than the record length (in
bits, including check bits).
2. Correctable bursts must be separated from each other on the sequence by a
number of states equal to or greater than the record length (in bits, including
check bits).
3. Any burst that is to be guaranteed detectable must be separated from correc-
table bursts by a number of states equal to or greater than the record length
(in bits, including check bits).
It is also possible to state more general requirements for a single-burst correcting
code. Any polynomial satisfying either of the two previous sets of requirements would
satisfy the more general requirements. Many other polynomials would meet the general
requirements as well. .
- 62 -
The more general requirements for a single-burst correcting code are:
1. If more than one correctable burst is on a given sequence, these bursts must
be separated by a number of states equal to or greater than the record
length (in bits, including check bits).
Additionally, if i>j then we require i>(j+s-L1) and i≥(j+L2), while if i<j then we require
i≤(j-L1) and i<(j-s+L2).
DST uses special hardware and software to find codes that satisfy these require-
ments.
- 63 -
SINGLE-BURST CORRECTION VIA STRUCTURED CODES
Fire codes achieve single-burst correction capability by their structure. These
codes are generated by the general polynomial form:
g(x) = c(x)·p(x) = (x^c + 1)·p(x)

where p(x) is any irreducible polynomial of degree z and period e, and e does not divide
c. These codes are capable of correcting single bursts of length b and detecting bursts
of length d≥b provided z≥b and c≥(d+b-1). The maximum record length in bits, including
check bits, is the least common multiple (LCM) of e and c. This is also the period of
the generator polynomial g(x).
The structure of Fire code polynomials causes them to have multiple sequences.
Each correctable burst is on a separate sequence. Burst error correction with polyno-
mials of this type was discussed earlier in this section. See Section 3.1 for more infor-
mation on Fire codes.
- 64 -
The result of having a particular pattern (or state) in the shift register is the
same as if the same pattern were an input-error burst. It follows that the list of shift
register states near the correctable patterns also represents a list of error bursts, of
length m or less, that may result in miscorrection.
The search software shifts a simulated shift register more than n times forward
and reverse from each correctable pattern. After each shift, the burst length in the
shift register is determined. One less than the minimum burst length found over the
entire process represents the single-burst detection span.
PROBABILITY OF MISCORRECTION
Let
b = correction span
n = record length including check bits
m = number of check bits
The total number of possible syndromes is then 2^m. The total number of valid syn-
dromes must be equal to the total number of correctable bursts, which is n·2^(b-1).

Assume that all error bursts are possible and equally probable and that when
random bursts are received, one syndrome is just as likely as another. If all syndromes
have equal probability and there are n·2^(b-1) valid syndromes out of 2^m total possible
syndromes, then the probability of miscorrection for bursts exceeding the code's guaran-
teed detection capability is:

          n·2^(b-1)
Pmc  ≈  -----------
             2^m
This equation provides a measure for comparing the effect that record length,
correction span, and number of check bits have on miscorrection probability.
One must be careful using this equation. A very simple assumption is made, which
is that all error bursts are possible and equally probable. This is unlikely to be the
case except for particular types of errors such as synchronization errors. To accurately
calculate the probability of miscorrection requires a detailed knowledge of the types of
errors that occur and detailed information on the capability and characteristics of the
polynomial.
- 65 -
PATTERN SENSITIVITY OF A BURST-CORRECTING CODE
Some burst-correcting codes have pattern sensitivity. The Fire code, for example,
has a higher miscorrection probability on short double bursts than on all possible error
bursts.
Pattern sensitivity is discussed in greater detail in Sections 4.4 and 4.6.
- 66 -
2.3 DECODING FUNDAMENTALS
The following pages show various examples of decoding single-burst-error-correct-
ing codes. These points will help in understanding the examples.
1. Forward displacements are counted from the first data bit to the first bit in error.
The first data bit is counted as zero.
2. Reverse displacements are counted from the last check bit to the first bit in error.
The last check bit is counted as zero.
3. If a negative displacement is computed, add the record length (seven in all ex-
amples) to the displacement. If a displacement greater than the record length
minus one is computed, subtract the record length from the displacement.
6. In these simple examples, check bits are corrected as well as data bits.
7. In these examples, only the read decode circuit is shown. The write circuit always
premultiplies by x^m and divides by g(x).
8. Each suffix A example is the same as the prior example, except that a different
error has been assumed.
10. In these simple examples, if an error occurs that exceeds the correction capability
of the code, miscorrection results. In a real world implementation, excess redun-
dancy would be added to keep miscorrection probability low.
- 67 -
Example #1:
GATE 'A'
- 68 -
Example #2:
GATE 'A'
- 69 -
Example #3:
GATE 'A'
(BUFFER UNLOAD CYCLE)
GATE 'B'
- 70 -
Example #3A:
GATE 'B'
- 71 -
Example #4:
GATE 'A'
GATE 'B'
- 72 -
Example #4A:
GATE 'A'
GATE 'B'
- 73 -
Example #5:
RAW DATA
RAM BUFFER
GATE 'B'
- 74 -
Example #5A:
RAW DATA
RAM BUFFER
GATE 'B'
- 75 -
Example #6:
RAW DATA
RAM BUFFER
- 76 -
Example #6A:
RAW DATA
RAM BUFFER
- 77 -
Example #7:
- 78 -
Example #7A:
RAW DATA
RAM BUFFER
ECC ERROR FLAG TO µP
SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Reverse displacement to first bit in error is clock count.
3. Pattern is in left-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.
- 79 -
Example #8:
RAW DATA
RAM BUFFER
- 80 -
Example #8A:
RAW DATA
RAM BUFFER
ECC ERROR FLAG TO µP
- 81 -
2.4 DECODING SHORTENED CYCLIC CODES
In the decoding examples of the previous section, the record length was equal to
the polynomial period. The method discussed in this section allows forward clocking to
be used in searching for the correctable pattern when the record length is shorter than
the polynomial period. Shortening does not change code properties.
The method assumes that the error pattern is detected when it is justified to the
high order end of the shift register. If this is not the case, the method must be mod-
ified.
Let,
- 82 -
EXAMPLES OF COMPUTING THE MULTIPLIER POLYNOMIAL
FOR SHORTENED CYCLIC CODES
g(x) = x^4 + x + 1,    g'(x) = x^4 + x^3 + 1

r    x^r MOD g(x)        r    x^r MOD g'(x)
0    0001                0    0001
1 0010 1 0010
2 0100 2 0100
3 1000 3 1000
4 0011 4 1001
5 0110 5 1011
6 1100 6 1111
7 1011 7 0111
8 0101 8 1110
9 1010 9 0101
10 0111 10 1010
11 1110 11 1101
12 1111 12 0011
13 1101 13 0110
14 1001 14 1100
- 83 -
CORRECTION EXAMPLE FOR A SHORTENED CODE
The code is single-bit correcting only.
GATE A
READ SECTOR SKIPPED SECTOR
(READ CYCLE) (CORRECT CYCLE)
ERR SR 0010
d3 0 0000 d3 0100
d2 0 0000 d2 1000
d1 1 1110 d1 0011
*
dO 0 1111   dO 0110
**
p3 0 1101   p3 1100
p2 0 1001   p2 1011
p1 0 0001   p1 0101
pO 0 0010 pO 1010
- 84 -
CORRECTION EXAMPLE FOR A SHORTENED BURST-LENGTH-TWO CODE

g(x) = (x + 1)·(x^4 + x + 1) = x^5 + x^4 + x^2 + 1
g'(x) = x^5 + x^3 + x + 1
n = 9, e = 15, m = 5
Pmult(x) = x^3 + x^2 + x

Tables of x^r MOD g(x) and x^r MOD g'(x)

r    x^r MOD g(x)        r    x^r MOD g'(x)
0 00001 0 00001
1 00010 1 00010
2 00100 2 00100
3 01000 3 01000
4 10000 4 10000
5 10101 5 01011
6 11111 6 10110
7 01011 7 00111
8 10110 8 01110
9 11001 9 11100
10 00111 10 10011
11 01110 11 01101
12 11100 12 11010
13 01101 13 11111
14 11010 14 10101
- 85 -
[Circuit diagram: decoder for the shortened code with a 9-bit FIFO buffer.]
- 86 -
2.5 INTRODUCTION TO FINITE FIELDS

A knowledge of finite fields is required for the study of many codes, including
BCH and Reed-Solomon codes.
Before discussing finite fields, the definition of a field must be stated. This def-
inition is reprinted from NTIS document AD717205.

DEFINITION OF A FIELD. A field is a set F of at least two elements together with a
pair of operations, (+) and (·), which have the following properties:

a. Closure: For all x and y ∈ F, (x + y) ∈ F and (x·y) ∈ F.

b. Commutativity: For all x and y ∈ F, x + y = y + x and x·y = y·x.

c. Associativity: For all x, y, and z ∈ F, (x + y) + z = x + (y + z) and (x·y)·z = x·(y·z).

d. Distributivity: For all x, y, and z ∈ F, x·(y + z) = (x·y) + (x·z).

e. Identities: There exist elements 0 and 1 in F such that x + 0 = x and x·1 = x for all x ∈ F.

f. Inverses: For each x ∈ F, there exists a unique element y ∈ F such that

x + y = 0

and for each non-zero x ∈ F, there exists a unique element y ∈ F such that

x·y = 1
The set of positive and negative rational numbers together with ordinary addition
and multiplication comprise a field with an infinite number of elements, therefore it is
called an infinite field. The set of positive and negative real numbers together with
ordinary addition and multiplication and the set of complex numbers together with
complex addition and multiplication also comprise infinite fields.
- 87 -
FINITE FIELDS

Fields with a finite number of elements are called finite fields. These fields are
also called Galois fields, in honor of the French mathematician Evariste Galois.

The order of a finite field is the number of elements it contains. A finite field of
order p^n, denoted GF(p^n), exists for every prime p and every positive integer n. The
prime p of a finite field GF(p^n) is called the characteristic of the field. The field
GF(p) is referred to as the ground field and GF(p^n) is called an extension field of
GF(p). The field GF(p^n) can also be denoted GF(q), where q = p^n.

Let β represent an arbitrary field element, that is, an arbitrary power of a. Then
the order e of β is the least positive integer for which β^e = 1. More simply, the order
of β is the number of terms in the sequence (β, β^2, β^3, ...) before it begins to repeat.
Elements of order 2^n - 1 in GF(2^n) are called primitive elements. They are also called
generators of the field. Do not confuse the order of a field element with the order of
a field, which is defined in the previous paragraph.
Two fields are said to be isomorphic if one can be obtained from the other by
some appropriate one-to-one mapping of elements and operations. Any two finite fields
with the same number of elements (the same order) are isomorphic. Therefore, for
practical purposes there is only one finite field of order p^n.
(x + y + ···)^2 = x^2 + y^2 + ···
(x + y + ···)^4 = x^4 + y^4 + ···
(x + y + ···)^(2^k) = x^(2^k) + y^(2^k) + ···
(x + y + ···)^(1/2^k) = x^(1/2^k) + y^(1/2^k) + ···
These identities will be helpful in performing finite field computations in fields
GF(2n).
- 88 -
GENERATION OF A FIELD

The finite field GF(2) has only two elements (0,1). Larger fields can be defined by
polynomials with coefficients from GF(2).

Let p(x) be a polynomial of degree n with coefficients from GF(2). Let a be a
root of p(x). If p(x) is primitive, the powers of a up through 2^n - 2 will all be unique.
Appropriately selected operations of addition and multiplication together with the field
elements:

0, 1, a, a^2, ···, a^(2^n - 2)

define a field of 2^n elements, GF(2^n).

Assume a finite field is defined by p(x) = x^3 + x + 1. Since a is a root of p(x),
p(a) = 0. Therefore,

a^3 + a + 1 = 0   and   a^3 = a + 1
The field elements for this field are:
0     MOD (a^3 + a + 1)                                   = 0
a^0    "                                                  = 1
a^1    "                                                  = a
a^2    "                                                  = a^2
a^3    "                                                  = a + 1
a^4    "   = a·a^3 = a·(a + 1)                            = a^2 + a
a^5    "   = a·a^4 = a·(a^2 + a) = a^3 + a^2              = a^2 + a + 1
a^6    "   = a·a^5 = a·(a^2 + a + 1) = a^3 + a^2 + a      = a^2 + 1
a^7    "                                                  = a^0
a^8    "                                                  = a^1
The elements of the field can be represented in binary fashion by using one bit to
represent each of the three powers of a whose sum comprises an element. For the field
constructed above, we generate the following table:
a2 a1 aO
0 0 0 0
aO 0 0 1
a1 0 1 0
a2 1 0 0
a3 0 1 1
a4 1 1 0
a5 1 1 1
a6 1 0 1
Figure 2.5.1
- 89-
This list can also be viewed as the zero state plus the sequential nonzero states of
a shift register implementing the polynomial
x^3 + x + 1
The number of elements in the field of Figure 2.5.1, including the zero element, is
eight. This field is called GF(8) or GF(2^3).
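The field table of Figure 2.5.1 can be generated mechanically from the relation a^3 = a + 1. The short C sketch below (an illustration, not from the text) builds the antilog table by repeatedly multiplying by a; this is exactly the state sequence of the shift register mentioned above.

#include <stdio.h>

int main(void)
{
    unsigned alog[7];            /* alog[i] = vector form of a^i          */
    unsigned v = 1;              /* a^0 = '001'                           */
    int i;

    for (i = 0; i < 7; i++) {
        alog[i] = v;
        v <<= 1;                 /* multiply by a                         */
        if (v & 0x8)
            v ^= 0xB;            /* a^3 -> a + 1: reduce with '1011'      */
    }

    printf("  i    a2 a1 a0\n");
    for (i = 0; i < 7; i++)
        printf("  %d     %u  %u  %u\n", i,
               (alog[i] >> 2) & 1u, (alog[i] >> 1) & 1u, alog[i] & 1u);
    return 0;
}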
Addition: In GF(2^n), addition is bit-wise modulo-2 addition (EXCLUSIVE-OR) of the
vector representations, e.g.:

a^0 + a^3 = '001' + '011' = '010' = a^1
Subtraction: In GF(2^n), subtraction is the same as addition, since each ele-
ment is its own additive inverse. This is not the case in all finite fields.
Multiplication: If either factor is zero, the product is zero. Otherwise, add
exponents modulo seven, e.g.:

0·a^4 = 0
a^3·a^5 = a^((3+5) MOD 7) = a^1
Division: If the divisor is zero, the quotient is undefined. If the dividend is
zero, the quotient is zero. Otherwise subtract exponents modulo seven, e.g.:

a^5/a^3 = a^(5-3) = a^2
a^3/a^5 = a^(3-5) = a^-2 = a^(-2+7) = a^5
By convention, multiplication and division take precedence over addition and sub-
traction except where parentheses are used.
- 90-
FINITE FIELD COMPUTATION
From the list of field elements above, a^3 represents the vector '011' and a^5 rep-
resents the vector '111'. The integer 6 is the exponent of a^6.

The log function in this field produces an exponent from a vector while the an-
tilog function produces a vector from an exponent. The log of a^4 ('110') is 4. The
antilog of 3 is a^3 ('011'). The familiar properties of logarithms hold.
Finite field computation is frequently performed by a computer. At times, field
elements are stored in the computer in vector form. At other times, the logs of field
elements are stored instead of the field elements themselves. For example, consider
finite field math implemented on a computer with an eight-bit wide data path. Assume
the finite field of Figure 2.5.1. If a memory location storing a^4 is examined, the binary
value '0000 0110' is observed. This binary value represents the vector '110' or a^2 + a.
If a memory location storing the log of a^4 is examined, the binary value '0000 0100' is
observed. This value represents the integer 4, which is the exponent and log of a^4.
Finite field computers frequently employ log and antilog tables to convert from one
representation to the other.
Finite field addition for this field is modulo-2 addition (bit-wise EXCLUSIVE-OR
operation). The sum of a^4 ('110') and a^5 ('111') is a^0 ('001'). The sum of a^3 ('011')
and a^6 ('101') is a^4 ('110'). Subtraction in this field, as in all finite fields of char-
acteristic two, is the same as addition. The '+' symbol will be used to represent
modulo-2 addition (bit-wise EXCLUSIVE-OR operation). The '+' symbol will also con-
tinue to be used for ordinary addition, such as adding exponents. In most cases, when
'+' represents modulo-2 addition, it will be preceded and followed by a space, and when
used to represent ordinary addition, its operands will immediately precede and follow it.
Usage should be clear from the context.
There are two basic ways to accomplish finite field multiplication for the field of
Figure 2.5.1. The vectors representing the field elements can be multiplied and the
result reduced modulo (x^3 + x + 1). Alternatively, the product may be computed by
first taking logs of the finite field elements being multiplied, then taking the antilog of
the sum of the logs modulo 7 (field size minus one). The '·' symbol will be used to
represent finite field multiplication. The '*' symbol will be used to represent ordinary
multiplication, such as for multiplying an exponent, which is an ordinary number and not
a finite field element, by another ordinary number.
- 91 -
The examples below multiply a^4 ('110') times a^5 ('111') using the methods described
above.
Example #1
1. Multiply the vectors '110' (a^4) and '111' (a^5) to get the vector '10010'.
2. Reduce the vector '10010' modulo x^3 + x + 1 to get the vector '100' (a^2).
Example #2
1. Take the logs base a of a^4 and a^5 to get exponents 4 and 5.
2. Add exponents 4 and 5 modulo 7 to get the exponent 2.
3. Take the antilog of the exponent 2 to get the vector a^2 ('100').
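Both methods are easy to mechanize. The C sketch below (illustration only) uses the second method, with small log and antilog tables for the field of Figure 2.5.1; division subtracts logs instead of adding them.

#include <stdio.h>

/* Log/antilog multiplication and division in GF(8), a^3 = a + 1.
   Exponent arithmetic is performed modulo 7 (field size minus one).     */
static const unsigned alog[7] = { 1, 2, 4, 3, 6, 7, 5 };  /* a^0 .. a^6  */
static int logtab[8];

static unsigned gf_mul(unsigned x, unsigned w)
{
    if (x == 0 || w == 0) return 0;
    return alog[(logtab[x] + logtab[w]) % 7];
}

static unsigned gf_div(unsigned x, unsigned w)   /* w must be nonzero    */
{
    if (x == 0) return 0;
    return alog[(logtab[x] - logtab[w] + 7) % 7];
}

int main(void)
{
    int i;
    for (i = 0; i < 7; i++) logtab[alog[i]] = i; /* build the log table  */

    printf("a^4 * a^5 = %u   ('100' = a^2 expected)\n", gf_mul(6, 7));
    printf("a^2 / a^5 = %u   ('110' = a^4 expected)\n", gf_div(4, 7));
    return 0;
}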
The multiplicative inverse of a^j is:

1/a^j = a^((-j) MOD 7)

The inverse of the zero element is undefined. a^0 is its own inverse.

Inversion Example:

1/a^3 = a^((-3) MOD 7) = a^4

Division Example:

a^2/a^3 = a^2·(1/a^3) = a^2·a^4 = a^6
- 92 -
Examples of finite field computation in the field of Figure 2.5.1 are shown below.
To provide greater insight, some examples use different approaches than others, with
various levels of detail shown. Note that all operations on exponents are per-
formed modulo 7 (field size minus one).
y = a^3 + a^4 = '011' + '110' = '101' = a^6

y = a^1·a^4 = '010'·'110' = (a^1)·(a^2 + a^1) = a^3 + a^2 = (a + 1) + a^2 = a^2 + a + 1 = '111' = a^5

y = a^2·a^6 = a^((2+6) MOD 7) = a^1

y = 1/a^4 = a^((-4) MOD 7) = a^3

y = a^2/a^5 = a^((2-5) MOD 7) = a^4

(x + a^0)·(x + a^1) = x^2 + (a^0 + a^1)·x + a^0·a^1 = x^2 + a^3·x + a^1
The modulo operations shown above for adding and subtracting exponents are
understood for finite field computation and will not be shown for the remainder of the
book.
- 93 -
Other examples are:

y = a^2·a^6 = a^(2+6) = a^1

y = a^1 + a^2 = a^4

y = LOGa[a^2/a^5] = 4

y = a^3/(a^2·(a^4 + a^3)) = a^3/(a^2·a^6) = a^3/a^1 = a^(3-1) = a^2

y = (a^3)^3 = a^(3*3) = a^2

y = (x + a^0)·(x + a^1)·(x + a^2) = x^3 + a^5·x^2 + a^6·x + a^3
- 94-
FIELD PROPERTY EXAMPLES

ASSOCIATIVITY

(x + y) + z = x + (y + z)
(a^2 + a^3) + a^4 = a^2 + (a^3 + a^4)
('100' + '011') + '110' = '100' + ('011' + '110')
'111' + '110' = '100' + '101'
'001' = '001'

(x·y)·z = x·(y·z)
(a^4·a^5)·a^6 = a^4·(a^5·a^6)
a^((4+5) MOD 7)·a^6 = a^4·a^((5+6) MOD 7)
a^2·a^6 = a^4·a^4
a^((2+6) MOD 7) = a^((4+4) MOD 7)
a^1 = a^1

COMMUTATIVITY

x + y = y + x
a^3 + a^4 = a^4 + a^3
'011' + '110' = '110' + '011'
'101' = '101'

x·y = y·x
a^5·a^6 = a^6·a^5
a^((5+6) MOD 7) = a^((6+5) MOD 7)
a^4 = a^4

DISTRIBUTIVITY

x·(y + z) = (x·y) + (x·z)
a^4·(a^5 + a^6) = (a^4·a^5) + (a^4·a^6)
a^4·('111' + '101') = a^((4+5) MOD 7) + a^((4+6) MOD 7)
a^4·'010' = a^2 + a^3
a^4·a^1 = '100' + '011'
a^((4+1) MOD 7) = '111'
a^5 = a^5
- 95 -
SIMULTANEOUS LINEAR EQUATIONS IN A FIELD

Simultaneous linear equations in GF(2^n) can be solved by determinants. For ex-
ample, given:

a·x + b·y = c
d·x + e·y = f

where x and y are the unknowns and a, b, c, d, e, and f are constants, then:

       | c  b |
       | f  e |      c·e + b·f
x  =  ---------  =  -----------
       | a  b |      a·e + b·d
       | d  e |

       | a  c |
       | d  f |      a·f + c·d
y  =  ---------  =  -----------
       | a  b |      a·e + b·d
       | d  e |
POLYNOMIALS IN A FIELD
Polynomials can be written with variables and coefficients from GF(2^n) and manip-
ulated in much the same manner as polynomials involving rational or real numbers.
x + a
a^5·x^2 + a^6·x + a^2
a^4·x^3 + a^5·x^2 + a^1·x
a^4·x^3 + a^5·x + a^2
+ a^4·x^2 + a^1·x + a^2
+ a^3·x^2 + a^1·x

Thus
- 96 -
QUADRATIC SOLUTION DIFFICULTY IN A FIELD OF CHARACTERISTIC 2

The correspondence between algebra in a finite field of characteristic 2 and algebra in-
volving real numbers does not extend to the quadratic formula:

        -b ± √(b^2 - 4·a·c)
x  =  ---------------------
               2·a

because in a field of characteristic 2:

2·a = a + a = 0
d(x^2)/dx = 2·x = x + x = 0
etc.
- 97 -
FINITE FIELDS AND SHIFT REGISTER SEQUENCES
The shift register below implements the polynomial x3 + x + 1, which defines the
field of Figure 2.5.1.
Figure 2.5.2
This shift register has two sequences, a sequence of length seven and the zero se-
quence of length one.
STATE NUMBER SHIFT REGISTER CONTENTS
(zero sequence)   000

0    001
1    010
2    100
3    011
4    110
5    111
6    101
Notice the similarity of the sequences above to the field definition of Figure 2.5.1.
The consecutive shift register states correspond to the consecutive list of field ele-
ments. The state numbers correspond to the exponents of powers of a.
Advancing the shift register once is identical to multiplying its contents by a.
Advancing the shift register twice is identical to multiplying its contents by a^2, and so
on.
COMPUTING IN A SMALLER FIELD

We have been representing powers of a by components. For example, in the field
of Figure 2.5.1, the components of a^3 are a and 1. The components of a^4 are a^2 and a.
An arbitrary power of a can also be represented by its components. Let X represent
any arbitrary power of a from the field of Figure 2.5.1; then

X = X2·a^2 + X1·a + X0

The coefficients X2, X1, and X0 are from GF(2), the field of two elements, 0 and
1.
In performing finite field operations in a field such as GF(2^3), it is frequently
- 98 -
necessary to perform multiple operations in a smaller field such as GF(2). For example,
multiplication of an arbitrary field element X by a might be accomplished as follows:

Y = a·X
  = a·(X2·a^2 + X1·a + X0)
  = X2·a^3 + X1·a^2 + X0·a

But a^3 = a + 1, so

Y = X1·a^2 + (X2 + X0)·a + X2

and therefore

Y2 = X1
Y1 = X2 + X0
Y0 = X2
These results have been used to design the combinatorial logic circuit shown below.
This circuit uses a compute element (modulo-2 adder) from GF(2) to construct a circuit
to multiply any arbitrary field element from the field of Figure 2.5.1 by a.
°a
X2 Y2
X Xl Yl Y aoX
Xo Yo
- 99 -
ANOTHER LOOK AT THE SHIFT REGISTER
The shift register of Figure 2.5.2 has been redrawn below to show that it contains
a circuit to multiply by a.
Original Circuit
Let β represent the primitive element a^2 from the field of Figure 2.5.1. The field
can be redefined as follows:

       a2  a1  a0
0       0   0   0
β^0     0   0   1
β^1     1   0   0
β^2     1   1   0
β^3     1   0   1
β^4     0   1   0
β^5     0   1   1
β^6     1   1   1
β^2·β^4 = (a^2 + a)·(a)
        = a^3 + a^2

But a^3 = a + 1, so

β^2·β^4 = a^2 + a + 1
        = '111'
        = β^6
- tOO-
This definition of the field could be viewed as having been generated by the
circuit below.
- 101 -
COMPUTING IN GF(2)
An element of this field is either 0 or 1. The result of a multiplication is either 0
or 1. The result of raising any element to a power is either 0 or 1, and so on.
Let a and b represent arbitrary elements of this field; then:

a + a·b = a·(1 + b)
a + a·b + b = a ∨ b

The INCLUSIVE-OR function a ∨ b can therefore be realized with GF(2) add and
multiply elements as a + a·b + b.
- 102 -
2.6 FINITE FIELD CIRCUITS FOR FIELDS OF CHARACTERISTIC 2

This section introduces finite field circuits for finite fields of characteristic 2 with
examples. The notation for various GF(8) finite field circuits is shown below. The field
of Figure 2.5.1 is assumed.
GF(8) Inverter:    Y = 1/X         Multiplicative inversion
GF(8) Square:      Y = X^2         Square an arbitrary field element
GF(8) Cube:        Y = X^3         Cube an arbitrary field element
GF(8) Log:         j = loga(X)     Compute loga of an arbitrary field element
- 103-
GF(8) Antilog:            Y = antiloga(j)            Compute antiloga of an arbitrary integer
GF(8) MOD (x + a^i):      Y = O(x) MOD (x + a^i)     Compute the remainder from dividing O(x) by (x + a^i)
- 104 -
COMBINING FINITE FIELD CIRCUITS
Finite field circuits can be combined for computing. For illustration, assume that:

Y = (X + W^3)/W^3
[Block diagram: GF(8) Cube, Invert, and Multiply circuits (with an adder) combined to compute Y = (X + W^3)/W^3 from inputs X and W.]

          X + W^3       X              X
    Y  =  --------  =  ----- + 1  =  ----- + a^0
            W^3         W^3           W^3
[Block diagram: equivalent circuit computing Y = X/W^3 + a^0, using a GF(8) Cube circuit and Invert circuit on W, a Multiply circuit, and addition of the constant a^0.]
[Circuit diagram: a multiply-by-a circuit and an adder combined so that]

Y = a·X + X = (a + 1)·X = a^3·X
This example shows how a circuit to multiply by the fixed field element a^3 can be
constructed using two other GF(2^3) circuits: a circuit to add two arbitrary field ele-
ments and a circuit to multiply an arbitrary field element by a. Later, circuits will be
shown that accomplish this type of operation with GF(2) circuits.
- 105 -
Still another example of combining finite field circuits follows:
[Circuit diagram: array multiplier forming y = w·x; the components x2, x1, x0 of x gate copies of w (premultiplied by powers of a), which are then summed.]
This circuit is called an array multiplier and is based on the following finite field
math:
y = x·w
- 106 -
IMPLEMENTING GF(8) FINITE FIELD CIRCUITS WITH GF(2) CIRCUITS

Fixed field element adder to add a^3 ('011') to an arbitrary field element x:

Y = x + a^3

y2 = x2
y1 = x1 + 1
y0 = x0 + 1
This is realized by the following circuit:
x2
-0 • Y2
x Xl
Xo
f ~
G=:: f
Y =x + a3
0 1 1
I I
a3
A simpler fixed field element adder:

Y = x + a^3
  = (x2·a^2 + x1·a + x0) + (a + 1)
  = x2·a^2 + (x1 + 1)·a + (x0 + 1)

But (x1 + 1) = x1' and (x0 + 1) = x0', where ' denotes the complement, so:

Y = x2·a^2 + x1'·a + x0'

Again expressing Y in component form, we have:

y2·a^2 + y1·a + y0 = x2·a^2 + x1'·a + x0'

and equating coefficients of like powers of a gives:

y2 = x2
y1 = x1'
y0 = x0'
- 107-
which is realized by the following circuit:
x2 Y2
x xl [>0 YI Y = x + a3
Xo [>0 YO
[Circuit diagrams: arbitrary field element adder forming Y = x + w with three XOR gates, one per component (y2 = x2 + w2, y1 = x1 + w1, y0 = x0 + w0).]
- 108 -
Fixed field element multiplier to multiply by a.
Y = aox
Y2 = xl
Y1 = x2 + Xo
Yo = x2
Y2
x Y1 Y = aox
'----.... YO
- 109-
Fixed field element multiplier to multiply by a^-1.

Y = a^-1·x
  = a^6·x
  = a^6·(x2·a^2 + x1·a + x0)
  = x2·a^8 + x1·a^7 + x0·a^6
  = x0·a^2 + x2·a + (x1 + x0)

Expressing y in component form:

y2·a^2 + y1·a + y0 = x0·a^2 + x2·a + (x1 + x0)

Equating coefficients:

y2 = x0
y1 = x2
y0 = x1 + x0
oa- l
Y2
x Yl
1---.- Yo
- lID-
Fixed field element multiplier to multiply by a^2. The finite field math for this circuit
is similar to the math for the a and a^-1 multipliers above.
------~--~ + r-----. Y2
Yl
~--------~-? Yo
Fixed field element multiplier to multiply by a 2 using two circuits that multiply by a:
Y2
Yl Y = a 2 ·x
Xo ~---+' YO
- 111 -
Fixed field element multiplier to multiply by a using bit serial techniques.

Y = a·X

[Circuit diagram: X register, Y register, and feedback connections implementing serial multiplication by a.]
PROCEDURE:
1. Clear the Y register.
2. Load the X register.
3. Apply three clocks. In GF(2n) apply n clocks.
4. Accept the result from the Y register.
PROCEDURE:
Same as above.
- 112 -
Finite field circuit to compute Y = a·X + a^4·W using bit serial techniques.

Y = a·X + a^4·W
~ 113-
Arbitrary field element multiplier using combinatorial logic.
[Circuit diagram: combinatorial multiplier forming Y = X·W from components X2, X1, X0 and W2, W1, W0.]

Y = X·W
  = (X2·a^2 + X1·a + X0)·(W2·a^2 + W1·a + W0)
  = (X2·W2)·a^4 + (X2·W1 + X1·W2)·a^3
    + (X2·W0 + X1·W1 + X0·W2)·a^2 + (X1·W0 + X0·W1)·a + X0·W0

But a^4 = a^2 + a and a^3 = a + 1, so

Y = (X2·W2 + X2·W0 + X1·W1 + X0·W2)·a^2
    + (X2·W2 + X2·W1 + X1·W2 + X1·W0 + X0·W1)·a
    + (X2·W1 + X1·W2 + X0·W0)

Expressing Y in component form and equating coefficients on
like powers of a gives:

Y2 = X2·W2 + X2·W0 + X1·W1 + X0·W2
Y1 = X2·W2 + X2·W1 + X1·W2 + X1·W0 + X0·W1
Y0 = X2·W1 + X1·W2 + X0·W0
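These component equations translate directly into logic, or into a few lines of software. The C sketch below (illustration only) evaluates them with AND for GF(2) multiplication and XOR for GF(2) addition.

#include <stdio.h>

/* GF(8) multiplication from the component equations above (a^3 = a + 1). */
static unsigned gf8_mul_comb(unsigned x, unsigned w)
{
    unsigned x2 = (x >> 2) & 1, x1 = (x >> 1) & 1, x0 = x & 1;
    unsigned w2 = (w >> 2) & 1, w1 = (w >> 1) & 1, w0 = w & 1;

    unsigned y2 = (x2 & w2) ^ (x2 & w0) ^ (x1 & w1) ^ (x0 & w2);
    unsigned y1 = (x2 & w2) ^ (x2 & w1) ^ (x1 & w2) ^ (x1 & w0) ^ (x0 & w1);
    unsigned y0 = (x2 & w1) ^ (x1 & w2) ^ (x0 & w0);

    return (y2 << 2) | (y1 << 1) | y0;
}

int main(void)
{
    unsigned y = gf8_mul_comb(0x6, 0x7);   /* a^4 ('110') times a^5 ('111') */
    printf("110 * 111 = %u%u%u\n", (y >> 2) & 1, (y >> 1) & 1, y & 1);
    return 0;                              /* expected result: 100 = a^2   */
}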
- 114-
Array multiplier - another arbitrary field element multiplier using combinatorial logic.
W2
W WI
Wo
X2
X Xl
Xo ..,
Y2
Yl Y
Yo
Y = X·W
  = X2·a^2·W + X1·a·W + X0·W
  = X2·(a^2·W) + X1·(a·W) + X0·(W)
- 115 -
Arbitrary field element multiplier using bit serial techniques.
Y = X-w
The X register is a shift register. The W register is composed of flip-flops that hold
their value until reloaded.
PROCEDURE:
DEVELOPMENT
Y = X·W
  = (X2·a^2 + X1·a + X0)·W
- 116 -
Another arbitrary field element multiplier using bit serial techniques.
y = Xow
PROCEDURE
DEVELOPMENT
Y = X·W
  = (X2·a^2 + X1·a + X0)·W
  = X2·a^2·W + X1·a·W + X0·W
  = X2·(a^2·W) + X1·(a·W) + X0·(W)
- 117-
Arbitrary field element multiplier using log and antilog tables.
[Block diagram: LOG ROMs for X and W feed a binary adder MOD (2^n - 1)*, whose output addresses an ANTILOG ROM to produce Y = X·W; ZERO DETECT circuits on X and W drive the OUTPUT ENABLE so that Y = 0 when either input is zero.]
DEVELOPMENT
Y = X·W

IF (X=0) OR (W=0) THEN
    Y = 0
ELSE
    Y = ANTILOGa[ LOGa(X) + LOGa(W) ]
END IF
* For n-bit symbols, 2^n is the field size of GF(2^n), so (2^n - 1) is the field size
minus one.
- 118-
Circuit to cube an arbitrary field element.
Xo --------~--------~ Yo
DEVELOPMENT

Y = X^3
  = (X2·a^2 + X1·a + X0)^3
  = (X2·a^2 + X1·a + X0)^2·(X2·a^2 + X1·a + X0)
  = [(X2·a^2)^2 + (X1·a)^2 + (X0)^2]·(X2·a^2 + X1·a + X0)
  = (X2·a^4 + X1·a^2 + X0)·(X2·a^2 + X1·a + X0)
  = X2·a^6 + X1·X2·a^5 + X0·X2·a^4 + X1·X2·a^4
    + X1·a^3 + X0·X1·a^2 + X0·X2·a^2 + X0·X1·a + X0
  = X2·(a^2 + 1) + X1·X2·(a^2 + a + 1) + X0·X2·(a^2 + a)
    + X1·X2·(a^2 + a) + X1·(a + 1) + X0·X1·(a^2)
    + X0·X2·(a^2) + X0·X1·(a) + X0
  = (X2 + X0·X1)·a^2
    + (X0·X2 + X1 + X0·X1)·a
    + (X2 + X1·X2 + X1 + X0)

Y2 = X2 + X0·X1
Y1 = X0·X2 + X1 + X0·X1 = X0·X2 + X1·(1 + X0)
   = X0·X2 + X1·X0'
Y0 = X2 + X1·X2 + X1 + X0 = (X2 v X1) + X0

where v is the INCLUSIVE-OR operator and ' denotes the complement.
- 119-
IMPLEMENTING GF(2^n) FINITE FIELD CIRCUITS WITH ROMS

In many cases, finite field circuits can be implemented with ROMs. For example, a
GF(256) inverter is an 8-bit-in, 8-bit-out function and can be implemented with a 256:8
ROM.
Other examples:
1. The square function in GF(256) can be implemented with a 256:8 ROM. The
   same is true for any power or root function in GF(256).

2. A GF(16) arbitrary field element multiplier can be implemented with a 256:4
   ROM. A GF(256) arbitrary field element multiplier can be implemented with a
   65536:8 ROM. It is also possible to implement a GF(256) multiplier with four
   256:4 ROMs and several finite field adders. (See Section 2.7.)

3. A GF(256) fixed field element multiplier can be implemented with a 256:8
   ROM.
When back-to-back functions are required, it is sometimes possible to combine
them in a single ROM. For example, the equation:
Y = [1/X]^3 · a^2

in GF(256) can be solved for Y when X is known with a single 256:8 ROM.
- 120 -
SOLVING FINITE FIELD EQUATIONS
Finding a power of a finite field element results in a single solution, but the same
solution may be obtained by raising other finite field elements to the same power.

Finding the root(s) of a finite field element may result in a single solution, multi-
ple solutions, or no solution.

Finding the root(s) of a finite field equation may result in a single solution, multi-
ple solutions, or no solution.
x + σ1 = 0
x^2 + σ1·x + σ2 = 0
x^3 + σ1·x^2 + σ2·x + σ3 = 0
One way to find the roots of such an equation is to substitute all possible finite
field values for x. The equation evaluates to zero for any finite field elements that are
roots.
Two methods which perform the substitution will be discussed. The first method
uses "brute force", and is shown only to illustrate the idea of substitution.
The second method is the Chien search. This is a practical method that can be
used to find the roots of equations of a low degree or high degree.
After discussing the Chien search, alternatives will be explored for finding roots of
nonlinear equations of a low degree.
- 121 -
SUBSTITUTION METHOD - BRUTE FORCE
Assume the roots of x^3 + σ1·x^2 + σ2·x + σ3 = 0 must be found. The circuit below
could be used:

[Block diagram: SQUARE and MULTIPLY circuits, constant multipliers for σ1, σ2, and σ3, an adder, and a ZERO DETECT circuit evaluating the polynomial for a trial value of x.]
Each possible finite field value must be substituted for x while checking the output
of the zero detector.
- 122 -
SUBSTITUTION METHOD - CHIEN SEARCH
x^3 + σ1·x^2 + σ2·x + σ3 = 0

[Block diagram: Chien search circuit with registers holding the terms of the polynomial, fixed multipliers, an XOR summing network, and a ZERO DETECT circuit.]
This method uses less complex circuits than the "brute force" method.
The example circuit above finds roots of finite field equations of degree three.
The circuit can be extended in a logical fashion to find the roots of equations of a
higher degree.
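A software model of the Chien search may help make the register/multiplier structure concrete. The C sketch below is an illustration only: it assumes the GF(8) field of Figure 2.5.1 and an error locator polynomial whose roots are a^0, a^1, and a^2; each term register is multiplied by a fixed power of a on every clock, and the terms are summed and tested for zero.

#include <stdio.h>

static const unsigned alog[7] = { 1, 2, 4, 3, 6, 7, 5 };
static int logtab[8];

static unsigned gf_mul(unsigned x, unsigned w)
{
    if (x == 0 || w == 0) return 0;
    return alog[(logtab[x] + logtab[w]) % 7];
}

int main(void)
{
    unsigned s1, s2, s3, t0, t1, t2, t3, sum;
    int i;

    for (i = 0; i < 7; i++) logtab[alog[i]] = i;

    /* Assumed example: (x + a^0)(x + a^1)(x + a^2)
       = x^3 + a^5·x^2 + a^6·x + a^3                                      */
    s1 = 7;  s2 = 5;  s3 = 3;

    t0 = 1; t1 = s1; t2 = s2; t3 = s3;      /* terms evaluated at a^0     */
    for (i = 0; i < 7; i++) {
        sum = t0 ^ t1 ^ t2 ^ t3;            /* sum of the term registers  */
        if (sum == 0)
            printf("a^%d is a root\n", i);
        t0 = gf_mul(t0, alog[3]);           /* x^3 term: multiply by a^3  */
        t1 = gf_mul(t1, alog[2]);           /* x^2 term: multiply by a^2  */
        t2 = gf_mul(t2, alog[1]);           /*   x term: multiply by a    */
    }
    return 0;
}

The program reports a^0, a^1, and a^2 as roots. For an equation of higher degree, one more register and one more fixed multiplier are added per additional term.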
- 123 -
RECIPROCAL ROOTS
There are times when the reciprocals of the roots of finite field equations are re-
quired. If

x^3 + G1·x^2 + G2·x + G3 = 0

is an equation whose roots are the reciprocals of the roots of the first equation, the
Chien search circuit below can be used to find the reciprocal roots.

[Block diagram: Chien search circuit with ZERO DETECT; the XOR inputs are taken from the multiplier outputs rather than the registers.]

In this circuit, the inputs to the XOR circuit are from the multipliers instead of
the registers because the equation is evaluated at a^1 first.
- 124 -
FINDING ROOTS OF EQUATIONS OF DEGREE 2
AN EXAMPLE

The equation to be solved is y^2 + y = C, with y and C elements of the GF(8) field
of Figure 2.5.1. First generate the antilog table for the field. Next construct a table
giving C when y is known. Then construct a table giving y when C is known (Table A
below).
Antilog Table
Exponent    Vector
  -          000
  0          001
  1          010
  2          100
  3          011
  4          110
  5          111
  6          101
Table giving C when 'y' is known:

  y        C
000 000
001 000
010 110
011 110
100 010
101 010
110 100
111 100
y^2 + y = a^2  =>  y = 110, 111
y^2 + y = a^5  =>  no solution
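The table lookup is easy to build in software. The C sketch below (illustration only) constructs the inverse table for the GF(8) field of Figure 2.5.1 and lists the solutions of y^2 + y = C for every C; its output agrees with the values above.

#include <stdio.h>

static const unsigned alog[7] = { 1, 2, 4, 3, 6, 7, 5 };
static int logtab[8];

static unsigned gf_mul(unsigned x, unsigned w)
{
    if (x == 0 || w == 0) return 0;
    return alog[(logtab[x] + logtab[w]) % 7];
}

static void pbits(unsigned v)
{
    printf("%u%u%u", (v >> 2) & 1u, (v >> 1) & 1u, v & 1u);
}

int main(void)
{
    int tbl_a[8], y, c;

    for (y = 0; y < 7; y++) logtab[alog[y]] = y;
    for (c = 0; c < 8; c++) tbl_a[c] = -1;       /* -1 marks "no solution" */
    for (y = 7; y >= 0; y--)
        tbl_a[gf_mul(y, y) ^ y] = y;             /* C = y^2 + y            */

    for (c = 0; c < 8; c++) {
        pbits(c);
        if (tbl_a[c] < 0) {
            printf(" : no solution\n");
        } else {
            printf(" : y = "); pbits(tbl_a[c]);
            printf(" or ");    pbits(tbl_a[c] ^ 1);
            printf("\n");
        }
    }
    return 0;
}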
- 125-
FINITE FIELD PROCESSORS

[Block diagram: finite field processor with a work buffer, working registers, an XOR/adder unit, and LOG and ANTILOG ROM tables.]
- 126 -
Adding two finite field elements from the work buffer consists of the following
steps.
1. Transfer the first element to the A register.
2. Transfer the second element to the B register.
3. XOR the contents of the A and B registers and set the result in the C reg-
ister.
4. Transfer the C register to the work buffer.
For the processor under consideration, logs must be added modulo 255. Eight-bit
binary adders add modulo 256. They can be used to add modulo 255 by connecting
"carry out" to "carry in". For the antilog table, the contents of location 255 are the
same as location zero.
Finite field division is accomplished with the same steps used for finite field mult-
iplication, except logs are subtracted.
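A small C sketch of the end-around-carry adder described above (illustration only):

#include <stdint.h>
#include <stdio.h>

/* Add logs modulo 255 with an 8-bit add whose carry out feeds carry in.
   A result of 255 is acceptable because the antilog table holds the same
   value at location 255 as at location 0.                                */
static uint8_t add_mod255(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;    /* 0 .. 508               */
    sum = (sum & 0xFF) + (sum >> 8);             /* add the carry back in  */
    return (uint8_t)sum;
}

int main(void)
{
    printf("%u\n", add_mod255(200, 100));   /* 300 mod 255 = 45            */
    printf("%u\n", add_mod255(254, 1));     /* 255: same antilog as 0      */
    return 0;
}

For division, the logs are subtracted modulo 255 in the same manner.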
The log operation could be implemented as follows:
1. Load the finite field value in register G.
2. Move the log of the finite field value from the ROM tables to register H.
3. Store register H in the work buffer.
There are many design options available when designing a finite field processor.
The options selected depend on the logic family to be used, cost, performance and other
design considerations. The options selected for an LSI design would differ from those
selected for a discrete design.
- 127 -
A partial list of operations that have been implemented on real world finite-field
processors is shown below.
- Finite field addition
- Finite field multiplication
- Finite field division
- Logarithm
- Antilogarithm
- Fetch one root of the equation y2 + y + C =0
- Take cube root
- Compare finite field values
- Branch unconditional
- Branch conditional
Non-finite-field operations that may be implemented include:
- Binary addition and subtraction
- Logical AND and inclusive-OR operations
- Operations for controlling error-correction hardware.
A finite field processor implementing subfield multiplication is shown in Section 5.4.
- 128 -
2.7 SUBFIELD COMPUTATION
In this section, a large field. GF(22*n), generated by a smail field, GF(2 n), is dis-
cussed. Techniques are developed to accomplish operations in the large field by per-
formmg several operations in the smail field.
Let elements of the smail field be reDresented by powers of 11. Let elements of
the large tield be represented by powers of a. .
The small field is' defmed by a specially selected polynomial of degree n over
GF(:!). The iarge tield is detined by the polynomial:
x2 + x + 11
over the smail field.
Each element of the large field, GF(2^(2n)), can be represented by a pair of ele-
ments from the small field, GF(2^n). Let x represent an arbitrary element from the large
field. Then:

x = x1·a + x0

where x1 and x0 are elements from the small field, GF(2^n). The element x from the
large field can be represented by the pair of elements (x1,x0) from the small field.
This is much like representing an element from the field of Figure 2.5.1 with three
elements from GF(2), (x2,x1,x0).
Let a be any primitive root of:

x^2 + x + β

Then:

a^2 + a + β = 0

Therefore:
- 129 -
The elements of the large field, GF(2^(2n)), can be defined by the powers of a. For
example:

a^2 = a + β
a^3 = a·a^2
    = a·(a + β)
    = a^2 + a·β
    = a + β + a·β
    = (β + 1)·a + β

        x1      x0
0        0       0
a^0      0       1
a^1      1       0
a^2      1       β
a^3     β+1      β
The large field, GF(2^(2n)), can be viewed as being generated by the following shift
register. All paths are n bits wide.
Methods for accomplishing finite field operations in the large field by performing
several simpler operations in the small field are developed below.
- 130 -
ADDITION

Let x and w be arbitrary elements from the large field. Then:

y = x + w = (x1 + w1)·a + (x0 + w0)

so y1 = x1 + w1 and y0 = x0 + w0, requiring two additions in the small field.
MULTIPLICATION
The multiplication of two elements from the large field can be accomplished with
several multiplications and additions in the small field. This is illustrated below:
y = x·w
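A sketch of the expansion, derived here from the defining relation a^2 = a + β (the grouping below is ours, not reproduced from the original page): with x = x1·a + x0 and w = w1·a + w0,

y = x·w = x1·w1·a^2 + (x1·w0 + x0·w1)·a + x0·w0
        = (x1·w0 + x0·w1 + x1·w1)·a + (x0·w0 + β·x1·w1)

so that y1 = x1·w0 + x0·w1 + x1·w1 and y0 = x0·w0 + β·x1·w1, a handful of multiplications and additions in the small field, one of them a multiplication by the constant β.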
INVERSION
y = l/x
             x1                                      x1 + x0
y  =  ------------------------------ · a  +  ------------------------------
       (x1)^2·β + x1·x0 + (x0)^2              (x1)^2·β + x1·x0 + (x0)^2
- 131 -
LOGARITHM
L = LOGa(x)
Let,
Then,
BEGIN
  Set table location f1(0) = 0
  FOR I = 2 to 2^n
    Set f1(Y0/Y1) = I
  NEXT I
END
- 132 -
ANTILOGARITHM

x = ANTILOGa(L)
  = ANTILOGβ(INT(L/(2^n+1)))          if [L MOD (2^n+1)] = 0
  = [ANTILOGβ(INT(L/(2^n+1)))]·a      if [L MOD (2^n+1)] = 1
  = x1·a + x0                         if [L MOD (2^n+1)] > 1

where x1 and x0 are determined as follows. Let

a = ANTILOGβ[ L MOD (2^n - 1) ]
b = f2[ (L MOD (2^n+1)) - 2 ]

Then,

x1 = [ a/(b^2 + b + β) ]^(1/2)
x0 = b·x1
The function f2 can be accomplished with a table of 2^n entries. This table can be
generated with the following algorithm.

BEGIN
  Set f2(2^n - 1) = 0
  FOR I = 0 to 2^n - 2
    Calculate the GF(2^(2n)) element Y = a^(I+2) = Y1·a + Y0
    Calculate the GF(2^n) element Y0/Y1
    Set f2(I) = Y0/Y1
  NEXT I
END
- 133 -
APPLICATIONS
In this section, techniques were introduced for performing operations in a large
field, GF(2^(2n)), by performing several simpler operations in a small field, GF(2^n).
One application of these techniques is for computing in a very large finite field.
Assume that it is necessary to perform computation in GF(65536). A multiplication
operation might be accomplished by fetching logs from a log table; adding logs modulo
65535; and fetching an antilog. The log and antilog tables would each be 65536 loca-
tions of 16 bits each. The total storage space required for these tables would be one
quarter million bytes. An alternative is to define GF(65536) as described in this section
and to perform operations in GF(65536) by performing several simpler operations in
GF(256). These GF(256) operations could be performed with 256 byte log and antilog
tables.
Another application is for performing finite field multiplication directly with ROMs
for double-bit-memory correction. Instead of using one ROM with 2^(4n) locations, use
four ROMs each with 2^(2n) locations. An example application to multiplier ROMs is
shown below.
- 134 -
CHAPTER 3 - CODES AND CIRCUITS

3.1 FIRE CODES

Fire codes are single-burst-correcting codes generated by a polynomial of the form
g(x) = c(x)·p(x) = (x^c + 1)·p(x). Let:

c = Degree of the c(x) factor of g(x)
e = Period of the p(x) factor of g(x)
z = Degree of the p(x) factor of g(x)
m = Degree of g(x) = total number of check bits = c + z
n = Record length in bits including check bits; n ≤ LCM(e,c)
b = Guaranteed single-burst correction span in bits
d = Guaranteed single-burst detection span in bits
The maximum record length in bits, including check bits, is equal to the period of
g(x) , which is the least common multiple of e and c. The guaranteed single-burst
correction and detection spans for the Fire codes are subject to the following ine-
qualities:
b ≤ z
b ≤ d
b + d ≤ c + 1
These inequalities provide a lower bound for d. When the record length is much
less than the period of the polynomial, this bound for d is conservative. In this case,
the true detection span should be determined by a computer search.
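A few lines of code can check a candidate set of Fire code parameters against these inequalities. The C sketch below is an illustration only; the example values of z, e, c, and b are assumptions, not parameters taken from the text.

#include <stdio.h>

static long gcd(long a, long b)
{
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

int main(void)
{
    long z = 11, e = 2047, c = 21, b = 6;   /* assumed example parameters   */
    long m = c + z;                         /* total check bits             */
    long period = e / gcd(e, c) * c;        /* LCM(e,c) = max record length */
    long d = c + 1 - b;                     /* largest d with b + d <= c+1  */

    if (b <= z && b <= d && (c % e) != 0)   /* e must not divide c          */
        printf("OK: m = %ld check bits, n <= %ld bits, "
               "corrects bursts of %ld, detects bursts of %ld\n",
               m, period, b, d);
    else
        printf("parameters violate the Fire code conditions\n");
    return 0;
}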
Given a fixed and limited total number of check bits, selecting the degrees of p(x)
and c(x) will involve a tradeoff. Increasing the degree of p(x) will provide more
protection against miscorrection on double-bit errors (less pattern sensitivity), while
increasing the degree of c(x) will provide a greater correction span and/or detection
span. The degree of c(x) should not be used to adjust the period of a Fire code unless
the effects of pattern sensitivity are fully understood.
Overall miscorrection probability for a Fire code for bursts exceeding the guaran-
teed detection capability is given by the equation below, assuming all errors are pos-
sible and equally probable:
          n·2^(b-1)
Pmc  ≈  -----------
             2^m
Miscorrection probability for double-bit errors separated by more than the guaran-
teed detection span, assuming all errors of this type are possible and equally probable,
is:
- 135 -
              (b-1)·2·n
Pmcdb  ≈  ----------------
            c·c·(2^z - 1)
This equation is applicable only when the product of Pmcdb and the number of possible
double-bit errors is much greater than one. When this is not true, a computer search
should be used to determine the actual Pmcdb.
An advantage of the Fire Code is simplicity. A disadvantage is pattern sensitivity.
The (xC + 1) factor of the Fire Code generator polynomial causes the code to be sus-
ceptible to miscorrection on short double-bursts. The Pmcdb equation given above
provides a number for this susceptibility for one particular short double-burst (the
double-bit error). For more information on the Fire code's pattern sensitivity see
Sections 4.4 and 4.6.
The pattern sensitivity of the Fire Code can be reduced to any arbitrary level by
adding sufficient redundancy to the p(x) factor.
There are at least five ways to perform the correction step:
1. Clock around the full period of the polynomial.
2. Shorten the code by performing simultaneous multiplication and division of
the data polynomial. A computer search may be required to minimize the
complexity of feedback logic. The period after shortening can be selected to
be precisely the required period.
3. Select a nonprimitive polynomial for p(x). This method yields a less complex
feedback structure than method 2. However, it is only possible to select a
period that is close to the required period. A computer search is required.
4. Perform the correction function with the reciprocal polynomial. This requires
that either a self-reciprocal polynomial be used, or that the feedback terms
be modified during correction. In addition, the contents of the shift register
must be flipped end-for-end before performing the corrections.
This method differs from methods 1 through 3 because the maximum number
of shifts during correction depends on the record length instead of the poly-
nomial period. Therefore, correction is faster for the case when the record
length is shorter than the polynomial period.
5. Decode using the Chinese Remainder Method. This method requires only a
fraction of the number of shifts required by the other methods. Thus, sig-
nificant improvements in decoding speed can be obtained.
Any of the methods above may be implemented in hardware or software. However,
for software, methods 4 and 5 are the most applicable. Methods 4 and 5 are more
flexible for handling variable record lengths than the other methods.
The Fire Code may be implemented with bit-serial, byte-serial or k-bit-serial logic.
See Section 4.1 for k-bit serial techniques.
- 136 -
BIT SERIAL
Fire-code circuit implementations using bit-serial techniques are less complex than
those using byte-serial techniques.
Less logic is required for the shift register as well as for detecting the correctable
pattern.
BYTE SERIAL
- 137 -
DECODING ALTERNATIVES FOR THE FIRE CODE
The Fire code can be decoded with the methods described in Section 2.3. Two ex-
amples of real world decoding of the Fire code are discussed in Sections 5.2.2 and 5.2.3.
The internal-XOR or external-XOR forms of shift registers may be used for im-
plementing Fire codes. The decoding methods of Section 5.2 apply to the Fire code as
well as to computer-generated codes.
In many cases, logic can be saved by using sequential logic to determine if the
shift register is nonzero at the end of a read.
It is possible to use a counter to detect the correctable pattern. The counter
counts the number of zeros preceding the error pattern. For the internal-XOR form of
shift register the counter can monitor the high order shift register stage. A one clears
the counter. A zero bumps the counter. The counter function can also be accomplished
by a software routine commanding shifts and monitoring the high order shift register
stage.
It is harder to detect the correctable pattern for byte-serial implementations than
for bit-serial implementations. The second flowchart of Section 5.3.3 shows a software
algorithm for detecting the correctable pattern for a byte-serial software implementa-
tion. The following page shows a method for accomplishing this for a byte-serial
hardware implementation.
- 138 -
[Circuit diagram: byte-serial correctable-pattern detector; an OR gate and a NOR gate monitor the shift register stages along the direction of shift and assert CORRECTABLE PATTERN FOUND when the error pattern is justified within the monitored stages.]
- 139 -
3.2 COMPUTER-GENERATED CODES
Computer-generated codes are based on the fact that if a large number of poly-
nomials of a particular degree are picked at random, some will meet previously defined
specifications, provided the specifications are within certain bounds.
There are equations that predict the probability of success when evaluating poly-
nomials against a particular specification.
where,
b = Guaranteed single burst correction span in bits
n = Record length in bits including check bits
m = Total number of check bits
- 140 -
COMPUTER SEARCH RUN
This run evaluates polynomials for use with 512-byte records and correction spans
to 8 bits. This run is for illustration only. The polynomials below which have a good
single-burst detection span may not test well against other criteria.
Single-burst detection
spans for given
Polynomial correction span of:
(octal) 1 2 3 4 5 6 7 8
40001140741 18 18 18 16 16 16 16 12
41040103211 19 19 19 15 14 14 13 13
42422242001 19 19 19 17 17 12 12 12
42010100127 21 21 16 16 16 15 15 12
42200301203 20 20 19 17 17 15 12 12
40110425041 19 19 17 17 17 17 10 10
40442115001 18 18 18 18 17 16 16 14
44104042501 19 19 16 16 12 12 10 10
40030201415 18 18 18 15 15 13 13 13
40030070211 19 19 18 18 13 11 11 11
40006241441 20 19 18 18 15 15 15 14
40430250401 15 15 15 15 15 13 12 11
44401144041 20 20 20 16 16 14 14 13
41442001203 22 21 20 18 17 16 14 11
44431120001 17 17 17 17 16 15 11 11
40056110021 20 20 15 15 15 9 9 9
40200211701 20 20 20 18 18 9 9 9
40001201163 18 18 18 15 15 14 12 12
40410423003 21 18 17 16 16 16 14 12
42000027421 17 17 17 16 13 13 13 13
40001741005 18 17 17 17 11 11 11 11
42000045065 20 20 17 16 14 14 14 10
41114210201 20 19 19 18 18 16 16 14
44011511001 20 20 18 18 16 13 13 11
41200103203 18 18 15 15 15 15 15 14
43140224001 18 18 18 18 17 7
- 141 -
COMPUTER SEARCH RUN (CONTINUED)
Single-burst detection
spans for given
Polynomial correction span of:
(octal) 1 2 3 4 5 6 7 8
40000074461 14 14 14 14 14 13 13 13
40527200001 16 16 16 16 16 16 16 10
40342100221 19 18 18 18 18 16 11 11
40400264411 16 16 16 16 16 13 13 13
44001140305 17 17 17 17 17 13 13 13
41450040051 19 19 18 18 18 16 14 14
40060405013 20 19 19 19 17 14 13 10
41030210031 18 18 18 18 17 17 17 9
40201202131 17 17 17 17 16 16 16 15
41024021025 21 19 19 19 16 12 12 12
40006052403 18 18 18 18 16 15 13 12
40152014401 19 19 18 18 14 14 14 13
46200002341 19 19 19 19 17 14 14 10
44501404011 19 19 16 16 14 14 13 13
40250002053 20 20 18 18 17 17 15 14
43012104011 19 18 18 18 18 17 12 12
42012430201 21 17 17 17 15 15 12 12
42114023001 21 21 20 16 16 11 11 10
43300020241 15 15 15 15 14 14 14 13
40001403207 18 18 18 18 17 16 9 9
40214020503 20 20 20 16 16 16 10 10
40260302005 20 20 19 18 17 7
40252200241 20 20 20 13 13 13 12 12
40004560111 16 16 16 16 14 14 14 14
40000404347 15 15 15 15 15 15 14 13
42200036011 15 15 15 15 11 11 11 10
42202210241 20 18 18 17 10 10 10 10
40504100431 16 16 15 15 15 12 12 12
42012401111 19 17 17 15 15 14 14 14
43041105001 21 20 17 17 17 14 14 12
40022044225 18 18 18 11 11 11 11 11
40500001465 19 18 18 15 15 15 15 14
- 142 -
SPECTRUM OF DETECTION SPANS FOR COMPUTER SEARCH RUN
[Histograms: count of polynomials versus single-burst detection span (7 to 17) for the computer search run.]

CORRECTION SPAN 6: AVERAGE DETECTION SPAN = 13.7
CORRECTION SPAN 7: AVERAGE DETECTION SPAN = 12.9
CORRECTION SPAN 8: AVERAGE DETECTION SPAN = 11.9
- 143 -
MOST PROBABLE DETECTION SPAN
The equation below gives an approximation for the most likely single-burst detec-
tion span of a single polynomial picked at random.
PROBABILITY OF SUCCESS
The equation below gives an approximation for the probability that a single poly-
nomial picked at random will meet specified criteria.
- 144 -
3.3 BINARY BCH CODES
Binary BCH codes correct random bit errors. Coefficients of the data polynomial
and check symbols are from GF(2), i.e. they are binary '0' or '1', but computation of
error locations and values is performed using w-bit symbols in a finite field GF(2^w),
where w is greater than one.
- 145 -
ENCODING
Encoding for a binary BCH code can be performed with a bit-serial shift register
implementing the generator polynomial of the form shown below. All paths and storage
elements are bit-wide. Multipliers comprise either a connection or no connection.
[Figure: bit-serial shift-register encoder, with GATE controlling the feedback path.]

[Figure: combinatorial-logic encoder built from parity trees. The parity tree for a
check bit Wj includes each data bit Dk (k = 0 to n-m-1) for which the coefficient of
the x^j term of [x^(m+k)] MOD g(x) is one.]
An example of a combinatorial~logic encoder is given in the BINARY BCH CODE
EXAMPLE below.
- 146 -
DECODING
Decoding generally requires 5 steps:
   S1 = a^L1       + a^L2       + ...
   S3 = a^(3·L1)   + a^(3·L2)   + ...
   S5 = a^(5·L1)   + a^(5·L2)   + ...
   Sk = a^(k·L1)   + a^(k·L2)   + ...
It is possible to compute the syndromes directly from the received codeword poly-
nomial C' (x) with the following equation.
   Si = C'(a^i)
The above equation can be implemented with either sequential or combinatorial logic.
The syndromes can also be computed by computing the residues of the received
codeword when divided by each factor of the generator polynomial. Let ri(x) be the
residue of C'(x) modulo the factor mi(x); then:

   Si = ri(a^i)
The above equations can be implemented sequentially, combinatorially, or with a mixture
of sequential and combinatorial logic.
An example of each of the above methods is shown in the BINARY BCH CODE EX-
AMPLE below.
- 147 -
COMPUTING COEFFICIENTS OF ERROR LOCATOR POLYNOMIALS
The error locator polynomial has the following form.
The coefficients of the error locator polynomial are related to the syndromes by
the following system of linear equations, called Newton's identities.
   σ1                                                        = S1
   σ1·S2     + σ2·S1     + σ3                                = S3
   σ1·S4     + σ2·S3     + σ3·S2   + σ4·S1   + σ5            = S5
     ...
   σ1·S(2t-2) + σ2·S(2t-3) + ··· + σ(2t-2)·S1 + σ(2t-1)      = S(2t-1)
For error locator polynomials of low degree, the coefficients of the error locator
polynomial are computed by solving Newton's identities using determinants. For error
locator polynomials of high degree, the coefficients are computed by solving Newton's
identities with Berlekamp's iterative algorithm.
For a single error the root is x = σ1. For two errors the error locator polynomial is

   x^2 + σ1·x + σ2 = 0

This equation can be solved using a precomputed look-up table by first applying a
substitution to transform it into the following form (see Sections 2.6 and 3.4 for more
details):

   y^2 + y + c = 0
There are similar approaches to solving other low degree error locator polynomials.
The Chien search is used to solve error locator polynomials of high degree.
- 148 -
BINARY BCH CODE EXAMPLE

Assume a two-error-correcting code over GF(2^4). The generator polynomial is the
product of the minimal polynomials of a and a^3:

   g(x) = (x^4 + x + 1)·(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1
[Figure: bit-serial shift-register encoder for g(x); D(x) in, C(x) out through a MUX
under GATE control.]
   D6   [x^8 · x^6] MOD g(x)    1 1 1 0 1 0 0 0
   D5   [x^8 · x^5] MOD g(x)    0 1 1 1 0 1 0 0
   D4   [x^8 · x^4] MOD g(x)    0 0 1 1 1 0 1 0
   D3   [x^8 · x^3] MOD g(x)    0 0 0 1 1 1 0 1
   D2   [x^8 · x^2] MOD g(x)    1 1 1 0 0 1 1 0
   D1   [x^8 · x^1] MOD g(x)    0 1 1 1 0 0 1 1
   D0   [x^8 · x^0] MOD g(x)    1 1 0 1 0 0 0 1
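As a software cross-check on the table above, the residues [x^8 · x^k] MOD g(x) can be
computed by straightforward polynomial division over GF(2). The sketch below assumes
the generator g(x) = x^8 + x^7 + x^6 + x^4 + 1 reconstructed above; the constant G_POLY
and the function names are illustrative only.

#include <stdio.h>

#define G_POLY 0x1D1u   /* x^8 + x^7 + x^6 + x^4 + 1 (bit 8 .. bit 0), assumed g(x) */

/* Reduce a GF(2) polynomial (stored as a bit vector) modulo g(x). */
static unsigned mod_g(unsigned poly)
{
    int bit;
    for (bit = 14; bit >= 8; bit--)        /* x^(8+k) has degree at most 14 here */
        if (poly & (1u << bit))
            poly ^= G_POLY << (bit - 8);   /* subtract (XOR) g(x) * x^(bit-8) */
    return poly;                           /* remainder of degree <= 7 */
}

int main(void)
{
    int k, b;
    for (k = 6; k >= 0; k--) {
        unsigned r = mod_g(1u << (8 + k)); /* [x^8 * x^k] MOD g(x) */
        printf("D%d  ", k);
        for (b = 7; b >= 0; b--)           /* print x^7 .. x^0 coefficients */
            printf("%d ", (r >> b) & 1);
        printf("\n");
    }
    return 0;
}

Each printed row should match the corresponding row of the table above; for example,
x^14 MOD g(x) reduces to x^7 + x^6 + x^5 + x^3, the D6 row.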
- 149 -
SYNDROME GENERATION
[Figures: sequential syndrome circuits dividing the received codeword C'(x) by the
minimal polynomials of a and a^3 to produce S1 = C'(a) and S3 = C'(a^3).]
- 150 -
COMBINATORIAL LOGIC SYNDROME CIRCUITS
The parity tree for a coefficient Sij of the x^j term of syndrome Si includes each
received codeword bit Ck for which the coefficient of the x^j term of

   [x^(k·i)] MOD mi(x)

is one.

[Figure: combinatorial syndrome generation using parity trees over the received bits Ck'.]
         | S1   0  |
         | S3   S1 |      (S1)^2
   σ1 =  -----------  =  --------  =  S1
         | 1    0  |        S1
         | S2   S1 |

         | 1    S1 |
         | S2   S3 |      S3 + S1·S2      S3 + S1·(S1)^2      S3 + (S1)^3
   σ2 =  -----------  =  ------------  =  ---------------  =  ------------
         | 1    0  |          S1                S1                 S1
         | S2   S1 |
- 151 -
FINDING ROOTS OF THE TWO-ERROR LOCATOR POLYNOMIAL
The algorithm below defines a fast method for finding roots of the error locator
polynomial in the two-error case. This algorithm can be performed by a finite field
processor. For double-bit memory correction, it can be performed by combinatorial logic.

   x^2 + σ1·x + σ2 = 0

where

   σ1 = S1    and    σ2 = (S3 + (S1)^3)/S1

Substitute

   x = σ1·y

to obtain

   y^2 + y + C = 0

where

        σ2        (S1)^3 + S3
   C = ------  =  -----------
       (σ1)^2       (S1)^3

Fetch Y1 from TBLA (see Section 2.6) using C as the index. Then form

   Y2 = Y1 + a^0

Reverse the substitution (X1 = σ1·Y1, X2 = σ1·Y2) and take logs to obtain the error
locations:

   L1 = LOG_a(X1)
   L2 = LOG_a(X2)

For a binary BCH code, the error values are by definition equal to '1'.
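A software model of the procedure above is sketched below for a GF(2^4) field generated
by x^4 + x + 1 (an assumption consistent with the example code of this section). The
look-up table TBLA of Section 2.6 is modeled by a brute-force search for one root of
y^2 + y + C; all function and variable names are illustrative.

#include <stdio.h>

static int expt[15], logt[16];                 /* GF(16), x^4 + x + 1 assumed */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 15; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 0x10) e ^= 0x13;               /* reduce by x^4 + x + 1 */
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 15] : 0; }
static int gf_div(int a, int b)                /* b != 0 assumed */
{ return a ? expt[(logt[a] - logt[b] + 15) % 15] : 0; }

/* Solve x^2 + s1*x + s2 = 0 from syndromes S1, S3; returns the error
   locations L1, L2 (logs of the roots) or -1 if no valid root pair exists. */
static int find_two_error_locations(int S1, int S3, int *L1, int *L2)
{
    int s1 = S1;                                            /* sigma1 = S1 */
    int s2 = gf_div(S3 ^ gf_mul(S1, gf_mul(S1, S1)), S1);   /* (S3+S1^3)/S1 */
    int C  = gf_div(s2, gf_mul(s1, s1));                    /* sigma2/sigma1^2 */
    int y, Y1 = 0;

    for (y = 1; y < 16; y++)                   /* stand-in for the TBLA look-up */
        if ((gf_mul(y, y) ^ y) == C) { Y1 = y; break; }
    if (Y1 == 0) return -1;                    /* no roots: uncorrectable */

    *L1 = logt[gf_mul(s1, Y1)];                /* X1 = sigma1*Y1 */
    *L2 = logt[gf_mul(s1, Y1 ^ 1)];            /* Y2 = Y1 + a^0 */
    return 0;
}

int main(void)
{
    int L1, L2, S1, S3;
    build_tables();
    /* Hypothetical case: bit errors at locations 3 and 9 of a 15-bit codeword. */
    S1 = expt[3] ^ expt[9];
    S3 = expt[(3 * 3) % 15] ^ expt[(3 * 9) % 15];
    if (find_two_error_locations(S1, S3, &L1, &L2) == 0)
        printf("error locations %d and %d\n", L1, L2);
    return 0;
}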
- 152 -
BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #1

[Figure: LOG(S3) and -LOG(S1^3) feed a binary adder (*binary addition modulo field
size minus one); zero-detect and an '=0' test drive the ALARM, and a^0 is added to Y1
to form the second root.]
- 153 -
BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #2

[Figure: a CUBE/LOG block produces LOG(S3) and -LOG(S1^3); a binary adder*, zero-detect,
and '=0' OR gating drive the ALARM, and binary adders* produce LOG(Y2) and LOG(Y1).]

*Binary addition modulo field size minus one.

This example uses the same approach as Example #1 but several functions have
been combined.
This example uses the same approach as Example #1 but several functions have
been combined.
- 154 -
BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #3

[Figure: correction circuit using a GF multiplier and zero-detect ('=0' ALARM) to
produce Y2 and Y1; all paths are w bits wide.]

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #4

[Figure: correction circuit using a CUBE/INVERT block, zero-detect ('=0' ALARM), and
an adder to produce Y1 and the error location vectors X1 and X2; all paths are w bits
wide.]
- 156 -
BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #5

The mathematical basis for this example is developed by operating on the error
locator polynomial:

   x^2 + σ1·x + σ2 = 0

Finally, rearrange and combine the underlined terms to obtain a useful relation:

   (S1 + x)^3 + (S3 + x^3) = 0

[Figure: a cuber and zero-detect implement the relation above.]
One such circuit is required for each bit of the memory word.
- 157 -
3.4 REED-SOLOMON CODES

We shall use the Galois field with eight elements (i.e., GF(8) or GF(2^3)) introduced
in Section 2.5 in illustrating the properties and implementation of Reed-Solomon codes.
- 158 -
REED-SOLOMON CODE SUMMARY
Let
   w     = Number of bits per symbol; each symbol is an element of GF(2^w)
   m     = Degree of generator polynomial = number of check symbols
   n     = Selected record length in symbols, including check symbols
   d     = Minimum Hamming distance of the code
   t     = Number of symbol errors correctable by the code
   ec    = Selected number of symbol errors to be corrected
   ed    = Number of symbol errors which the code is capable of de-
           tecting beyond the number selected for correction
   b     = Burst length in bits
   c     = Number of bursts correctable by the code
   A(x)  = Any polynomial in the field
   G(x)  = The code generator polynomial
   gi(x) = Any of the m factors of G(x) = (x + a^i) when m0 = 0
   D(x)  = Data polynomial
   W(x)  = Write redundancy polynomial = [x^m·D(x)] MOD G(x)

   n  ≤ 2^w - 1
   d  = m + 1
   ec ≤ t = INT[(d-1)/2] = INT[m/2]
   ed = d - 2·ec - 1 = m - 2·ec
   b  ≤ (ec - 1)·w + 1
   c  = ec/(1 + INT[(b+w-2)/w])
- 159 -
REED-SOLOMON CODE SUMMARY (CONT.)
   A(x) MOD gi(x) = [A(x) MOD G(x)] MOD gi(x)                        (1)

   A(x) MOD gi(x) = A(x) evaluated at x = a^i = A(a^i)               (2)

   C(x) MOD G(x)  = 0                                                (3)

   C(x) MOD gi(x) = 0                                                (4)

   R(x) = C'(x) MOD G(x)
        = [C(x) + E(x)] MOD G(x)              {by definition of C'}
        = E(x) MOD G(x)                       {by equation (3)}      (5)

   Si   = C'(x) MOD gi(x)
        = [C(x) + E(x)] MOD gi(x)             {by definition of C'}
        = E(x) MOD gi(x)                      {by equation (4)}
        = E1·a^(i·L1) + E2·a^(i·L2) + ···     {by equation (2)}
        = [E(x) MOD G(x)] MOD gi(x)           {by equation (1)}
        = R(x) MOD gi(x)                      {by equation (5)}
- 160 -
CONSTRUCTING THE CODE GENERATOR POLYNOMIAL

The generator polynomial of a Reed-Solomon code is given by:

          m-1
   G(x) = PROD (x + a^(m0+i))
          i=0

where m is the number of check symbols and m0 is an offset, often zero or one. In
the interest of simplicity, we take m0 equal to zero for the remainder of the discussion.
Note that many expressions derived below must be modified for cases where m0 is not
zero. Let m=4; the code will be capable of correcting:
t = INT(m/2) = 2
          3
   G(x) = PROD (x + a^i)
          i=0
        = (x + a^0)·(x + a^1)·(x + a^2)·(x + a^3)
        = x^4 + (a^0 + a^1 + a^2 + a^3)·x^3
              + (a^0·a^1 + a^0·a^2 + a^0·a^3 + a^1·a^2 + a^1·a^3 + a^2·a^3)·x^2
              + (a^0·a^1·a^2 + a^0·a^1·a^3 + a^0·a^2·a^3 + a^1·a^2·a^3)·x + (a^0·a^1·a^2·a^3)
        = x^4 + a^2·x^3 + a^5·x^2 + a^5·x + a^6
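The multiplication of the four factors can be checked with a short program over the
GF(8) field of Figure 2.5.1 (generated by x^3 + x + 1). The sketch below is illustrative
only (table-building and helper names are assumptions); it should reproduce the
coefficients a^2, a^5, a^5, and a^6 derived above.

#include <stdio.h>

static int expt[7], logt[8];                   /* GF(8), x^3 + x + 1 */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 7; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 8) e ^= 0xB;                   /* reduce by x^3 + x + 1 */
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 7] : 0; }

int main(void)
{
    int g[5] = {1, 0, 0, 0, 0};                /* g[k] = coefficient of x^k; G(x) = 1 */
    int deg = 0, i, k;

    build_tables();
    for (i = 0; i <= 3; i++) {                 /* multiply by (x + a^i), i = 0..3 */
        deg++;
        for (k = deg; k >= 1; k--)
            g[k] = g[k - 1] ^ gf_mul(g[k], expt[i]);
        g[0] = gf_mul(g[0], expt[i]);
    }
    for (k = deg; k >= 0; k--)                 /* every coefficient is nonzero here */
        printf("coefficient of x^%d = a^%d\n", k, logt[g[k]]);
    return 0;
}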
- 161 -
FINITE FIELD CONSTANT MULTIPLIERS

To construct a constant multiplier computing

   y = a^n·x

write the binary representations of a^(n+w-1), ..., a^(n+1), a^n as the rows of a
matrix, one row for each input bit x(w-1), ..., x1, x0; the columns correspond to the
output bits y(w-1), ..., y1, y0.

Then construct parity trees down columns. The parity tree for a given y bit
includes each x bit with a '1' at the intersection of the corresponding column and row.
Example: Using the field of Figure 2.5.1, construct a constant multiplier to compute:

   y = a^3·x

          y2  y1  y0
   x2:     1   1   1      (a^5)
   x1:     1   1   0      (a^4)
   x0:     0   1   1      (a^3)

   y2 = x1 + x2
   y1 = x0 + x1 + x2
   y0 = x0 + x2
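The parity equations above translate directly into a handful of XOR gates. The small
sketch below (function names are illustrative) checks the y = a^3·x network against
repeated multiplication by a in the same field:

#include <stdio.h>

/* Multiply by a^3 in GF(8) (field of Figure 2.5.1) using the parity trees
   derived above: y2 = x1+x2, y1 = x0+x1+x2, y0 = x0+x2. */
static unsigned mul_a3(unsigned x)
{
    unsigned x0 = x & 1, x1 = (x >> 1) & 1, x2 = (x >> 2) & 1;
    return ((x1 ^ x2) << 2) | ((x0 ^ x1 ^ x2) << 1) | (x0 ^ x2);
}

/* Reference: multiply by a once (shift and reduce by x^3 + x + 1). */
static unsigned mul_a(unsigned v) { v <<= 1; return (v & 8) ? (v ^ 0xB) : v; }

int main(void)
{
    unsigned x;
    for (x = 0; x < 8; x++) {
        unsigned ref = mul_a(mul_a(mul_a(x)));     /* x * a^3 */
        printf("x=%u  parity-tree=%u  reference=%u  %s\n",
               x, mul_a3(x), ref, mul_a3(x) == ref ? "ok" : "MISMATCH");
    }
    return 0;
}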
- 162 -
ENCODING OF REED-SOLOMON CODES
Encoding is typically, but not always, performed using an internal-XOR shift regis-
ter with symbol-wide data paths, implementing the form of generator polynomial shown
above. Other encoding alternatives will be discussed later in this section.
The following circuit computes C(x) for our example field and code in symbol-serial
fashion:

[Figure: symbol-serial internal-XOR encoder implementing G(x); D(x) enters through an
AND gate and the codeword C(x) leaves through a MUX.]
The circuit above multiplies the data polynomial D(x) by xm and divides by G(x).
All paths are symbol-wide (three bits for this example). The AND gate and the MUX
are fed by a signal which is low during data time and high during redundancy time.
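A register-level software model of this encoder is sketched below (GF(8) generated by
x^3 + x + 1; generator coefficients a^2, a^5, a^5, a^6 from the previous pages). For
the codeword example that appears later in this section (data symbols a^2, a^1, a^5),
it should leave the redundancy 0, a^3, a^1, 0 in registers x3..x0. Names and the
register-update ordering are modeling assumptions.

#include <stdio.h>

static int expt[7], logt[8];                   /* GF(8), x^3 + x + 1 */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 7; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 8) e ^= 0xB;
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 7] : 0; }

int main(void)
{
    int gcoef[4];                 /* constant multipliers feeding registers x3..x0 */
    int reg[4] = {0, 0, 0, 0};    /* reg[3] = x3 (high) ... reg[0] = x0 (low) */
    int data[3], i, k, fb;

    build_tables();
    gcoef[3] = expt[2]; gcoef[2] = expt[5]; gcoef[1] = expt[5]; gcoef[0] = expt[6];
    data[0]  = expt[2]; data[1]  = expt[1]; data[2]  = expt[5];   /* a^2, a^1, a^5 */

    for (i = 0; i < 3; i++) {                  /* one clock per data symbol */
        fb = data[i] ^ reg[3];                 /* feedback = input + x3 */
        for (k = 3; k >= 1; k--)
            reg[k] = reg[k - 1] ^ gf_mul(fb, gcoef[k]);
        reg[0] = gf_mul(fb, gcoef[0]);
    }
    for (k = 3; k >= 0; k--)
        printf("x%d = %s%d\n", k, reg[k] ? "a^" : "", reg[k] ? logt[reg[k]] : 0);
    return 0;
}

Stepping this model one input symbol at a time reproduces the register trace shown
with the codeword example below.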
- 163 -
DECODING OF REED-SOLOMON CODES
1. Compute syndromes.
2. Calculate the coefficients of the error locator polynomial.
3. Find the roots of the error locator polynomial. The logs of the roots are the
error locations.
4. Calculate the error values.
The following circuit computes the syndromes for our example field and code in
symbol-serial fashion:
[Figure: four symbol-serial divider circuits, one per factor (x + a^i), i = 0..3; each
register is multiplied by its constant a^i and the received symbol C'(x) is added once
per clock, leaving S0, S1, S2, and S3 in the registers.]
This circuit computes the syndromes by dividing the received codeword C'(x) by
the factors of G(x). All paths are symbol-wide (three bits for this example). After all
data and redundancy symbols have been clocked, the registers contain the syndromes Si.
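The same style of software model applies to the syndrome circuit: each register is
multiplied by its a^i constant and the incoming symbol is added, once per clock. The
sketch below (illustrative names) should reproduce the final syndromes S0 = a^4,
S1 = a^1, S2 = 0, S3 = a^1 obtained in the two-error example later in this section.

#include <stdio.h>

static int expt[7], logt[8];                   /* GF(8), x^3 + x + 1 */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 7; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 8) e ^= 0xB;
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 7] : 0; }

int main(void)
{
    /* Received codeword C'(x) of the two-error example, highest order first:
       a^2, a^4, a^5, 0, a^0, a^1, 0 */
    int c[7], S[4] = {0, 0, 0, 0};
    int i, j;

    build_tables();
    c[0] = expt[2]; c[1] = expt[4]; c[2] = expt[5]; c[3] = 0;
    c[4] = expt[0]; c[5] = expt[1]; c[6] = 0;

    for (j = 0; j < 7; j++)                    /* one clock per received symbol */
        for (i = 0; i < 4; i++)
            S[i] = gf_mul(S[i], expt[i]) ^ c[j];   /* divide by (x + a^i) */

    for (i = 0; i < 4; i++)
        printf("S%d = %s%d\n", i, S[i] ? "a^" : "", S[i] ? logt[S[i]] : 0);
    return 0;
}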
- 164 -
COMPUTING COEFFICIENTS OF ERROR LOCATOR POLYNOMIALS
Recall the syndrome equations derived above:
      e
    PROD (x + a^Li) = x^e + σ1·x^(e-1) + ··· + σ(e-1)·x + σe = 0
     i=1

where e is the number of errors. The coefficients of the error locator polynomial are
related to the syndromes by the following system of linear equations, called Newton's
generalized identities:

   S(e+i) = σ1·S(e+i-1) + σ2·S(e+i-2) + ··· + σe·S(i)      for i = 0, 1, ..., m-e-1

For error locator polynomials of low degree, the coefficients σi are computed by
directly solving the above system of equations using determinants. Examples are worked
out below.
For error locator polynomials of high degree, the coefficients σi are computed by
solving the system of equations above using Berlekamp's iterative algorithm. One ver-
sion of the iterative algorithm is outlined, and an example is worked out, below.
- 165 -
ITERATIVE ALGORITHM

(0) Initialize a table as shown below; the parenthesized superscript on σ(x) is a
    counter and not an exponent.

(2) Find a row k where k<n and dk ≠ 0, such that k-Lk (the last column of row k
    in the table) has the largest value. Compute:

       L(n+1) = MAX[Ln, Lk+(n-k)]

       σ^(n+1)(x) = x^(L(n+1)-Ln)·σ^(n)(x) + (dn/dk)·σ^(k)(x)

(3) If n+1=m (or n+1=t+L(n+1))* then exit; σ^(n+1)(x) is the desired error locator
    polynomial.

    * t+Ln iterations are required to satisfy the basic guarantees of the code;
      terminating on this second criterion is sufficient for generating the proper
      error locator polynomial for correctable error cases, but may sacrifice some
      protection against miscorrection of uncorrectable error cases.

(4) Compute:

               L(n+1)
      d(n+1) =   Σ    σi^(n+1)·S(n+1-i)
                i=0

    then set n = n + 1 and go to Step (1).
- 166 -
FINDING ERROR LOCATIONS USING THE LOCATOR POLYNOMIAL
Methods for solving the error locator polynomial for cases of one and two errors
are developed below. A method for solving the three-error case is given in Section 5.4.
Methods for directly solving the four-error case are also known, but we shall not
discuss them.
For cases of more than four errors, the Chien search is typically used to find the
roots of the error locator polynomial. The Chien search could be used to find the roots
of the error locator polynomial for cases of fewer errors, but it is slower and in most
cases requires more logic.

      e
    PROD [1 + (a^Lj / a^Li)]
     j=1
     j≠i

where,

   e     = number of errors
   i     = 1, 2, ..., e
   Li    = error locations
   a^Li  = error location vectors
   σi    = coefficients of the error locator polynomial.
e=3
+ • • •
- 167 -
THE REED-SOLOMON SINGLE ERROR CASE
ERROR LOCATOR POLYNOMIAL

           1
   σ(x) = PROD (x + a^Li) = x + σ                                     (1)
          i=1

   σ = S1/S0

σ may then be substituted into equations (7) and (8) for verification. The location L is
given as the log of σ from equation (1) and the value E is given as S0 from equation
(2).

THE REED-SOLOMON TWO-ERROR CASE

           2
   σ(x) = PROD (x + a^Li) = x^2 + σ1·x + σ2                           (1)
          i=1

The syndrome equations are a set of simultaneous non-linear equations which are
difficult to solve. Newton's identities are a set of simultaneous linear equations which
can be solved by determinants for σ1 and σ2 in terms of the syndromes. Once we have
computed σ1 and σ2, we must solve (1) for L1 and L2. From (1) we have:
- 168 -
FINDING ROOTS OF THE TWO-ERROR LOCATOR POLYNOMIAL
One method for finding the roots of the two-error locator polynomial

   x^2 + σ1·x + σ2 = 0                                                (1)

is to employ the substitution x = σ1·y (equation (8)) to transform it into:

   y^2 + y + C = 0,      C = σ2/(σ1)^2                                (9)

   C      Y1    Y2
   0      0     a^0
   a^0    --    --
   a^1    a^2   a^6
   a^2    a^4   a^5
   a^3    --    --
   a^4    a^1   a^3
   a^5    --    --
   a^6    --    --

Once roots Y1 and Y2 of (9) have been found, roots X1 and X2 of (1) can be com-
puted by reverse substitution of equation (8). Then L1 and L2 may be computed as the
logs of X1 and X2.
   S0 = E1 + E2                                                       (2)
- 169 -
ONE- AND TWO-ERROR CORRECTION ALGORITHM

1) Compute:

         S0·S3 + S1·S2                    (S2)^2 + S1·S3
   σ1 = ----------------           σ2 = ----------------
         S0·S2 + (S1)^2                   S0·S2 + (S1)^2

   If the denominator or either numerator is zero, post an uncorrectable error
   flag and exit.

2) Compute C = σ2/(σ1)^2 and fetch Y1 and Y2 from the root table. If C does not
   correspond to a valid pair of roots, post an uncorrectable error flag and exit.
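Pulling the pieces together, the sketch below carries out the two-error branch of the
algorithm in software for the GF(8) example code (field x^3 + x + 1, m0 = 0). Starting
from the syndromes of the two-error example worked out below (S0 = a^4, S1 = a^1,
S2 = 0, S3 = a^1), it should arrive at locations L1 = 5 and L2 = 2. The root table is
modeled by a brute-force search, the error values are recovered from S0 = E1 + E2 and
S1 = E1·X1 + E2·X2 (which may differ in form from the book's expressions), and all
names are illustrative.

#include <stdio.h>

static int expt[7], logt[8];                   /* GF(8), x^3 + x + 1 */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 7; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 8) e ^= 0xB;
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 7] : 0; }
static int gf_div(int a, int b)                /* b != 0 assumed */
{ return a ? expt[(logt[a] - logt[b] + 7) % 7] : 0; }

int main(void)
{
    int S0, S1, S2, S3, den, s1, s2, C, Y1, Y2, X1, X2, y, E1, E2;

    build_tables();
    S0 = expt[4]; S1 = expt[1]; S2 = 0; S3 = expt[1];        /* example syndromes */

    den = gf_mul(S0, S2) ^ gf_mul(S1, S1);                   /* S0*S2 + S1^2 */
    s1  = gf_div(gf_mul(S0, S3) ^ gf_mul(S1, S2), den);      /* sigma1 */
    s2  = gf_div(gf_mul(S2, S2) ^ gf_mul(S1, S3), den);      /* sigma2 */

    C = gf_div(s2, gf_mul(s1, s1));                          /* C = sigma2/sigma1^2 */
    for (y = Y1 = 0; y < 8; y++)                             /* root-table stand-in */
        if ((gf_mul(y, y) ^ y) == C) { Y1 = y; break; }
    Y2 = Y1 ^ 1;                                             /* Y2 = Y1 + a^0 */

    X1 = gf_mul(s1, Y1);  X2 = gf_mul(s1, Y2);               /* reverse substitution */

    E1 = gf_div(gf_mul(S0, X2) ^ S1, X1 ^ X2);               /* error values */
    E2 = S0 ^ E1;

    printf("L1 = %d, pattern a^%d\n", logt[X1], logt[E1]);
    printf("L2 = %d, pattern a^%d\n", logt[X2], logt[E2]);
    return 0;
}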
- 170 -
CODEWORD EXAMPLE
   | Data Symbols | Redundant Symbols |

Assume the data symbols are (in order of transmission) a^2, a^1, and a^5. Then the
data polynomial is:

   D(x) = a^2·x^2 + a^1·x + a^5
The redundant symbols can be computed using one of the encoder circuits shown
above. A trace of the contents of the registers is shown below:
   data     x3    x2    x1    x0
   init      0     0     0     0
   a^2      a^4   a^0   a^0   a^1
   a^1      a^5    0    a^3   a^1
   a^5       0    a^3   a^1    0
- 171 -
SINGLE ERROR EXAMPLE

COMPUTE σ

   σ = S1/S0 = a^6/a^2 = a^4

VERIFY NEWTON'S IDENTITIES

               ?                            ?
   S1·σ        =  S2            S2·σ        =  S3

               ?                            ?
   a^6·a^4     =  a^3           a^3·a^4     =  a^0

                    ?                            ?
   a^(6+4 MOD 7)    =  a^3      a^(3+4 MOD 7)    =  a^0

   a^3 = a^3                    a^0 = a^0

   E = S0 = a^2
- 172 -
TWO ERROR EXAMPLE

   C(x)  = a^2·x^6 + a^1·x^5 + a^5·x^4 + 0·x^3 + a^3·x^2 + a^1·x + 0
   E(x)  = a^2·x^5 + a^1·x^2
   C'(x) = a^2·x^6 + a^4·x^5 + a^5·x^4 + 0·x^3 + a^0·x^2 + a^1·x + 0

COMPUTE SYNDROMES

   C'(x)     S0    S1    S2    S3
   INIT       0     0     0     0
   a^2       a^2   a^2   a^2   a^2
   a^4       a^1   a^6    0    a^0
   a^5       a^6   a^4   a^5   a^2
   0         a^6   a^5   a^0   a^5
   a^0       a^2   a^2   a^6   a^3
   a^1       a^4   a^0    0    a^5
   0         a^4   a^1    0    a^1
COMPUTE σ

   σ = S1/S0 = a^1/a^4 = a^4

VERIFY NEWTON'S IDENTITIES

               ?
   S1·σ        =  S2
               ?
   a^1·a^4     =  0

   S1·σ ≠ S2   =>   TWO ERRORS
         S0·S3 + S1·S2                    (S2)^2 + S1·S3
   σ1 = ---------------- = a^3      σ2 = ---------------- = a^0
         S0·S2 + (S1)^2                   S0·S2 + (S1)^2

   C = σ2/(σ1)^2 = a^1

   Y1 = a^2                         Y2 = a^6
   X1 = σ1·Y1 = a^3·a^2 = a^5       X2 = σ1·Y2 = a^3·a^6 = a^2
   L1 = LOG(X1) = 5                 L2 = LOG(X2) = 2
COMPUTE ERROR VALUES
- 173 -
ITERATIVE ALGORITHM EXAMPLE

Use the iterative algorithm to generate σ(x) for the case above.
- 174 -
UNCORRECTABLE ERROR EXAMPLE

COMPUTE SYNDROMES

   C'(x)     S0    S1    S2    S3
   INIT       0     0     0     0
   a^2       a^2   a^2   a^2   a^2
   a^4       a^1   a^6    0    a^0
   a^3       a^0   a^1   a^3    0
   0         a^0   a^2   a^5    0
   a^0        0    a^1    0    a^0
   a^1       a^1   a^4   a^1   a^0
   0         a^1   a^5   a^3   a^3

COMPUTE σ

   σ = S1/S0 = a^5/a^1 = a^4

VERIFY NEWTON'S IDENTITIES

                         ?
   S1·σ = a^(5+4 MOD 7)  =  a^3

   a^2 ≠ a^3   =>   TWO ERRORS
- 175 -
MISCORRECTION EXAMPLE

COMPUTE SYNDROMES

[Table: symbol-by-symbol syndrome trace for the received codeword C'(x), in the same
form as the preceding examples; the final register contents are the syndromes S0
through S3 used below.]
COMPUTE σ1 AND σ2

         S0·S3 + S1·S2                    (S2)^2 + S1·S3
   σ1 = ---------------- = a^2      σ2 = ---------------- = a^5
         S0·S2 + (S1)^2                   S0·S2 + (S1)^2

   C = σ2/(σ1)^2 = a^1

   Y1 = a^2                         Y2 = a^6
   X1 = σ1·Y1 = a^2·a^2 = a^4       X2 = σ1·Y2 = a^2·a^6 = a^1
   L1 = LOG(X1) = 4                 L2 = LOG(X2) = 1
- 176 -
REFERENCE TABLES
ADDITION TABLE

                     0    a^0  a^1  a^2  a^3  a^4  a^5  a^6
   0 0 0   0         0    a^0  a^1  a^2  a^3  a^4  a^5  a^6
   0 0 1   a^0      a^0    0   a^3  a^6  a^1  a^5  a^4  a^2
   0 1 0   a^1      a^1   a^3   0   a^4  a^0  a^2  a^6  a^5
   1 0 0   a^2      a^2   a^6  a^4   0   a^5  a^1  a^3  a^0
   0 1 1   a^3      a^3   a^1  a^0  a^5   0   a^6  a^2  a^4
   1 1 0   a^4      a^4   a^5  a^2  a^1  a^6   0   a^0  a^3
   1 1 1   a^5      a^5   a^4  a^6  a^3  a^2  a^0   0   a^1
   1 0 1   a^6      a^6   a^2  a^5  a^0  a^4  a^3  a^1   0

MULTIPLICATION TABLE                                ROOTS OF y^2 + y + C = 0

          0    a^0  a^1  a^2  a^3  a^4  a^5  a^6       C     Y1    Y2
   0      0     0    0    0    0    0    0    0        0     0    a^0
   a^0    0    a^0  a^1  a^2  a^3  a^4  a^5  a^6       a^0   --   --
   a^1    0    a^1  a^2  a^3  a^4  a^5  a^6  a^0       a^1   a^2  a^6
   a^2    0    a^2  a^3  a^4  a^5  a^6  a^0  a^1       a^2   a^4  a^5
   a^3    0    a^3  a^4  a^5  a^6  a^0  a^1  a^2       a^3   --   --
   a^4    0    a^4  a^5  a^6  a^0  a^1  a^2  a^3       a^4   a^1  a^3
   a^5    0    a^5  a^6  a^0  a^1  a^2  a^3  a^4       a^5   --   --
   a^6    0    a^6  a^0  a^1  a^2  a^3  a^4  a^5       a^6   --   --
- 177 -
AN INTUITIVE DISCUSSION OF THE SINGLE-ERROR CASE
   P0 = x^8 + 1
   P1 = x^8 + x^6 + x^5 + x^4 + 1

The correction algorithm requires residues of a function of the data, f(data),
modulo P0 and P1 where:

                      m-1
   for P1, f(DATA) =   Σ   x^i·Di(x)
                      i=0

and m is the number of data bytes. Di represents the individual data byte polynomials.
D0 is the lowest order data byte (last byte to be transmitted or received).
The residues are computed by hardware implementing the logical circuits shown
below. These logical circuits are clocked once per byte.
- 178 -
The shift register for PO computes an XOR sum of all data bytes including the
check bytes. Since PI is primitive, its shift register generates a maximum length se-
quence (255 states). When the PI shift register is non-zero, but its input is zero, each
clock sets it to the next state of its sequence.
CORRECJION ALGORITHM
Consider what happens when the data record is all zeros and a byte in error is
received.
Both shift registers will remain zero until the byte in error arrives. The error
byte is XOR'd into the PO and PI shift registers. Since the PO shift register preserves
its current value as long as zeros are received, the error pattern remains in it until the
end of record. XOR'ing the error byte into the PI shift register places the shift reg-
ister at a particular state in its sequence. As each new byte of zeros is received the
PI shift register is clocked along its sequence, one state per byte.
The terminal states of the PO and PI shift registers are sufficient for determining
displacement. To find displacement, it is necessary to determine the number of shifts
of the PI shift register that occurred between the occurrence of the error byte and the
end of record.
Refer again to the diagram on the following page. What we need to know is the
number of states between S0 and S1. We construct a table. The table is addressed by
S0 and S1, and contains the distance along the P1 sequence between the reference state
and any arbitrary state Sx.
First, S0 is used to address the table to fetch distance d1. Next, S1 is used to
address the table to fetch distance d2. The desired distance (d), the distance between
S0 and S1, is computed as follows:

   d = d2 - d1;   if d < 0 then d = d + 255
The distance d is the reverse displacement from the end of the record. The
forward displacement can be computed by subtracting the reverse displacement from the
record length minus one. The error pattern is simply the terminal state of PO, which is
SO.
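The distance table and the displacement computation can be modeled in a few lines of
software. The sketch below builds the table by stepping the P1 register
(x^8 + x^6 + x^5 + x^4 + 1) through its 255-state sequence from an arbitrary reference
state; the bit ordering of the register model, the choice of reference state, and all
names are assumptions for illustration.

#include <stdio.h>

/* Advance the P1 shift register (x^8 + x^6 + x^5 + x^4 + 1) one state. */
static unsigned p1_clock(unsigned s)
{
    s <<= 1;
    if (s & 0x100) s ^= 0x171;       /* 0x171 = x^8 + x^6 + x^5 + x^4 + 1 */
    return s & 0xFF;
}

static int dist_table[256];          /* distance from the reference state 0x01 */

static void build_dist_table(void)
{
    unsigned s = 0x01;
    int i;
    for (i = 0; i < 255; i++) {      /* visit all 255 nonzero states */
        dist_table[s] = i;
        s = p1_clock(s);
    }
}

/* Reverse displacement: number of P1 clocks between states S0 and S1. */
static int reverse_displacement(unsigned S0, unsigned S1)
{
    int d = dist_table[S1] - dist_table[S0];   /* d2 - d1 */
    return (d < 0) ? d + 255 : d;
}

int main(void)
{
    /* Demonstration: inject an error pattern E, clock r more byte times,
       then recover r from the terminal states S0 = E and S1. */
    unsigned E = 0x5A, S1 = E;
    int i, r = 37;

    build_dist_table();
    for (i = 0; i < r; i++) S1 = p1_clock(S1);
    printf("recovered reverse displacement = %d (expected %d)\n",
           reverse_displacement(E, S1), r);
    return 0;
}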
- 179 -
Consider the case when the data is not all zeros. The check bytes would have
been selected on write, such that on read, when the entire record (including check
bytes) is processed by the P0 and P1 shift registers, residues of zero result.
When an error occurs, the operation differs from the all-zeros data case only while
residues are being generated. A given error condition results in the same residues,
regardless of data content. Once residues have been generated, the operation is the
same as previously described for the all-zeros data case.
THE PI SEQUENCE
- 180 -
ALTERNATIVE ENCODING AND DECODING METHODS
FOR THE REED-SOLOMON CODE
There are many encoding and decoding alternatives for the Reed-Solomon code.
The best alternative for a given application depends on such factors as:
- Cost requirements
- Speed requirements
- Space requirements
- Sharing of circuits and resources
ENCODING ALTERNATIVES
Encoding can be accomplished with the external-XOR form of shift register as well
as the internal-XOR form. An encoder circuit example using the external-XOR form of
shift register is shown below:
[Figure: external-XOR shift-register encoder; D(x) in, C(x) out through a MUX
controlled by CHECK_SYMBOL_TIME.]
- 181 -
Another encoding alternative is illustrated by the following example.

STANDARD ENCODER

[Figure: standard encoder with GATE-controlled feedback; the constant multipliers
shown are a^179 and a^184.]
- 182 -
BIT-SERIAL ENCODING
Encoding can be accomplished with bit-serial techniques. We illustrate using the
encoder implementing the external-XOR form of shift register introduced above. Rear-
ranging to place low-order to the right, we have:

[Figure: external-XOR encoder redrawn with low order to the right; D(x) in, C(x) out
through a MUX under GATE control.]
All paths are symbol-wide (three bits for this example) and the GATE and MUX
are controlled by a signal which is asserted during clocks for data symbols. The field
GF(8) is generated by the polynomial:
   x^3 + x + 1

over GF(2). The code generator polynomial over GF(2^3) is:
- 183 -
[Figure: bit-serial multiplier encoder; the A, B, C, and D inputs of the bit-serial
multiplier are fed from the shift register, D(x) enters through a MUX, and C(x) is
produced at the output.]
The bit-serial multiplier circuit accomplishes in three clocks what the four multi-
pliers of the symbol-serial encoder accomplish in one clock. The Z register is initially
cleared, then on every third clock it is again cleared and what would have been shifted
in is clocked into the x register.
- 184 -
DECODING ALTERNATIVES
The standard form of syndrome circuit is:
[Figure: standard form of syndrome circuit operating on the received codeword C'(x).]
- 185 -
SHARING CIRCUITRY BETWEEN ENCODER AND DECODER

It is possible to share circuitry between the encoder and the decoder in several
different ways. Recall the general case of the generator polynomial, write redundancy
polynomial, and syndromes of a Reed-Solomon code of degree m:

          m-1          m-1
   G(x) = PROD gj(x) = PROD (x + a^(m0+j))
          j=0          j=0

   W(x) = [x^m·D(x)] MOD G(x)

   Sj   = C'(x) MOD gj(x)
As we have seen, the processes of generating W(x) and generating Sj each require
a different circuit configuration and a different set of finite field multipliers. Cost
motivates us to find some means for reducing hardware by sharing circuitry in perform-
ing these two functions.
One method for sharing circuitry is to use the encoder on read to assist with syn-
drome generation by feeding it the received codeword to generate the composite read
remainder:
R(x) = C ' (x) MOD G(x)
The individual remainders (syndromes) can then be generated by dividing the composite
remainder by each factor of the generator polynomial. This second step can be ac-
complished with sequential logic, combinatorial logic, or software. In many cases, more
time can be allotted for the processing of each symbol during the second step than
during the first step due to the difference in degrees between the composite remainder
and the full received codeword.
Another method for sharing circuitry is to use the syndrome circuits for encoding.
The validity of the following approach is guaranteed by the Chinese Remainder Theorem
for polynomials.
Consider the set of parameters:
   Pj = D(x) MOD gj(x) = D(x) MOD (x + a^(m0+j))

which are the contents of the registers of a set of circuits for j = 0 to m-1 like the one
shown below, after clocking in a data polynomial D(x). We use Pj here to distinguish
from the syndromes Sj, which are produced by similar circuits but have a received
codeword C'(x) polynomial as input.

[Figure: divider circuit computing Pj = D(x) MOD gj(x).]
- 186-
Now observe that:

   Pj = a^(-m·(m0+j)) · [x^m·D(x) MOD gj(x)]

   x^m·D(x) MOD gj(x) = [x^m·D(x) MOD G(x)] MOD gj(x)

and so by definition of W(x), we have:

          m-1
   Pj =    Σ   a^((i-m)·(m0+j)) · Wi
          i=0

These equations give the parameters Pj in terms of the write redundancy coeffi-
cients Wi and a matrix of constants which are powers of a that depend only on i, j, m
and m0. Inverting this matrix gives the write redundancy coefficients Wi in terms of
the parameters Pj and a set of transform constants Kij which also depend only on i, j,
m, and m0:

          m-1
   Wi =    Σ   Ki,j · Pj
          j=0
- 187 -
To aid in understanding the implementation of the above equations, we first discuss
the following circuit, which is equivalent to the conventional form of encoder circuit
discussed previously.
[Figure: conventional encoder redrawn with an OUTPUT bus and MUX; during
CHECK_SYMBOL_TIME the OUTPUT bus is fed back into the dashed portion of the circuit.]
When the CHECK_SYMBOL_TIME signal is asserted, the OUTPUT bus is fed back
into the dashed portion of the circuit by the MUX. The input to the multipliers is then
zero, so the contents of the registers, the write redundancy polynomial W(x), are not
altered as they are shifted out and appended to the data polynomial D(x) to form the
codeword C(x).
- 188 -
The circuit below illustrates the method for sharing circuitry.
[Figure: shared encoder/decoder circuit; D(x) enters the syndrome dividers, and the
Ki,j multipliers combine the register contents onto the OUTPUT bus to form C(x).]
- 189 -
loaded from the registers after the last symbol of the received codeword has been
clocked in.
It is possible to take the input to the K(m-1),j multipliers from the input to the Sj
registers instead of from their output. If this is done, a register must be inserted in
the OUTPUT path before the MUX to preserve clocking. It is also possible to take the
inputs to these multipliers from the outputs of the a^(m0+j) multipliers. If this is
done, the values of the K(m-1),j multipliers are changed to:

                  K(m-1),j
   K'(m-1),j = -------------
                 a^(m0+j)

to remove the extra factor of a^(m0+j).
Finally, it is also possible to implement the shared circuitry method using the mod-
ified form of the syndrome circuit introduced above:
[Figure: modified syndrome circuit used for encoding; D(x) is the input to the Sj
divider stage.]
- 190 -
EXTENDED REED-SOLOMON CODES
          2
   G(x) = PROD (x + a^i)
          i=1
        = (x + a^1)·(x + a^2)
        = x^2 + (a^1 + a^2)·x + a^1·a^2
        = x^2 + a^4·x + a^3
ENCODING OF EXTENDED REED-SOLOMON CODE
- 191 -
ENCODING CIRCUITRY FOR EXTENDED REED-SOLOMON CODE
[Figure: encoding circuitry and timing for the extended code. Symbol sequence:
D4 D3 D2 D1 D0 W1 W0 X3 X0; the control signals REDUN_TIME, X3_TIME, and X0_TIME
gate the redundancy and extension symbols.]
- 192 -
DECODING CIRCUITRY FOR EXTENDED REED-SOLOMON CODE
[Figure: decoding circuitry for the extended Reed-Solomon code, gated by X3_TIME and
X0_TIME.]
- 193 -
DECODING OF EXTENDED REED-SOLOMON CODE
Compute the syndromes for the basic code over the received codeword C' (x) in the
usual fashion:
When at least one error falls into an extension symbol, we have two cases: those
in which one or two errors occur and all errors fall into the extension symbols, and
those in which two errors occur and one of the errors falls into a data symbol or one
of the redundant symbols of the basic code and one of the errors falls into an exten-
sion symbol.
When errors occur only in the extension symbols, S1 and S2 will both be zero.
This cannot occur for any correctable error pattern, so we know within the power of
the code that no error in the basic codeword exists.
When one error falls in a data symbol or one of the redundant symbols of the
basic code and one error falls into an extension symbol, both S1 and S2 and either S0
or S3 will be the same as for the degree-four code shown above. We may solve for the
location and value of the first error using S1 and S2 by a process similar to that used
above, and the fact that either S0 or S3 satisfies Newton's identities is sufficient to
confirm within the power of the code that the computed location and value of the
single error in the basic codeword are valid.
- 194 -
EXTENDED DECODING OF REED-SOLOMON CODES
Extended decoding refers to techniques that allow successful correction of many
error situations which exceed the basic guaranteed correction capability of a Reed-
Solomon code. This is distinct from and not to be confused with the concept of an
extended Reed-Solomon code introduced above. Several extended decoding techniques
are discussed below.
- 195 -
Described below is a well-known algorithm for erasure correction. More efficient
algorithms do exist, but this one was chosen for inclusion for its instructional value.
1) Generate an erasure-locator polynomial from the known (or suspected) erasure
   locations:

           n
   Γ(x) = PROD (x + a^Pi) = Γ0·x^n + Γ1·x^(n-1) + ··· + Γn
          i=1

   where
      n  = the number of available erasure pointers
      Pi = the location specified by erasure pointer number i

2) Generate (m-n) modified syndromes Ti from the m raw syndromes Si and the
   coefficients of the erasure-locator polynomial:

              n
   T(i-n) =   Σ   Γj·S(i-j)
             j=0

   for i=n to m-1, where m is the degree of the code's generator polynomial.

3) Generate the coefficients of the error locator σ(x) from the modified syn-
   dromes Ti.

4) Find the roots of σ(x); the logs of the roots are the error locations.

5) Compute error values using the raw syndromes Si and the erasure pointers
   and error locations.
The error value for a false erasure pointer will be zero, so a false erasure pointer
will not necessarily cause miscorrection, but each false erasure pointer decreases the
remaining correction capability, and increases the chance of miscorrection, by decreasing
by one the number of available modified syndromes.
- 196 -
ERASURE CORRECTION EXAMPLE

[This example uses the same field and polynomials
as the uncorrectable error example shown above.]

   C(x)  = a^2·x^6 + a^1·x^5 + a^5·x^4 + 0·x^3 + a^3·x^2 + a^1·x + 0
   E(x)  =
   C'(x) =
COMPUTE SYNDROMES

   C'(x)     S0    S1    S2    S3
   INIT       0     0     0     0
   a^2       a^2   a^2   a^2   a^2
   a^4       a^1   a^6    0    a^0
   a^3       a^0   a^1   a^3    0
   0         a^0   a^2   a^5    0
   a^0        0    a^1    0    a^0
   a^1       a^1   a^4   a^1   a^0
   0         a^1   a^5   a^3   a^3

POINTERS

   n  = 2
   P1 = 4
   P2 = 5

           n
   Γ(x) = PROD (x + a^Pi)
          i=1
        = (x + a^4)·(x + a^5)
        = x^2 + (a^4 + a^5)·x + a^4·a^5
        = a^0·x^2 + a^0·x + a^2
        = Γ0·x^2 + Γ1·x + Γ2
- 197 -
GENERATE MODIFIED SYNDROMES

              n
   T(i-n) =   Σ   Γj·S(i-j)        for i = n to m-1
             j=0

   T0 = Γ0·S2 + Γ1·S1 + Γ2·S0
      = a^0·a^3 + a^0·a^5 + a^2·a^1 = a^5

   T1 = Γ0·S3 + Γ1·S2 + Γ2·S1
      = a^0·a^3 + a^0·a^3 + a^2·a^5 = a^0

   σ = T1/T0 = a^0/a^5 = a^2
   L = LOG(σ) = 2
COMPUTE ERRATA VALUES

   X1 = a^L  = a^2
   X2 = a^P1 = a^4
   X3 = a^P2 = a^5

         S2 + S1·(X2+X3) + S0·X2·X3
   E1 = -----------------------------
             (X1+X2) · (X1+X3)

         S0·X3 + S1 + E1·(X1+X3)
   E2 = --------------------------
                X2 + X3
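The erasure example above can be reproduced with a short program over the GF(8) field:
build Γ(x), form the modified syndromes, locate the one remaining error, then solve for
the errata values. The sketch below assumes m0 = 0 and uses the syndromes and pointers
of this example; E3 is taken as S0 + E1 + E2 (from the i = 0 syndrome equation), all
names are illustrative, and the computed values can be checked by substituting them
back into the raw syndrome equations.

#include <stdio.h>

static int expt[7], logt[8];                   /* GF(8), x^3 + x + 1 */

static void build_tables(void)
{
    int i, e = 1;
    for (i = 0; i < 7; i++) {
        expt[i] = e; logt[e] = i;
        e <<= 1;
        if (e & 8) e ^= 0xB;
    }
}
static int gf_mul(int a, int b)
{ return (a && b) ? expt[(logt[a] + logt[b]) % 7] : 0; }
static int gf_div(int a, int b)
{ return a ? expt[(logt[a] - logt[b] + 7) % 7] : 0; }

int main(void)
{
    int S[4], gam[3], T0, T1, sig, L;
    int X1, X2, X3, E1, E2, E3;

    build_tables();
    S[0] = expt[1]; S[1] = expt[5]; S[2] = expt[3]; S[3] = expt[3];

    /* Erasure-locator (x + a^4)(x + a^5) = gam0*x^2 + gam1*x + gam2. */
    gam[0] = expt[0];
    gam[1] = expt[4] ^ expt[5];
    gam[2] = gf_mul(expt[4], expt[5]);

    /* Modified syndromes, n = 2, m = 4. */
    T0 = gf_mul(gam[0], S[2]) ^ gf_mul(gam[1], S[1]) ^ gf_mul(gam[2], S[0]);
    T1 = gf_mul(gam[0], S[3]) ^ gf_mul(gam[1], S[2]) ^ gf_mul(gam[2], S[1]);

    sig = gf_div(T1, T0);                      /* single remaining error */
    L   = logt[sig];
    X1  = sig; X2 = expt[4]; X3 = expt[5];     /* error + two erasures */

    /* Errata values from the relations developed above (m0 = 0). */
    E1 = gf_div(S[2] ^ gf_mul(S[1], X2 ^ X3) ^ gf_mul(S[0], gf_mul(X2, X3)),
                gf_mul(X1 ^ X2, X1 ^ X3));
    E2 = gf_div(gf_mul(S[0], X3) ^ S[1] ^ gf_mul(E1, X1 ^ X3), X2 ^ X3);
    E3 = S[0] ^ E1 ^ E2;

    printf("T0 = a^%d, T1 = a^%d, error location L = %d\n", logt[T0], logt[T1], L);
    printf("E1 = a^%d, E2 = a^%d, E3 = a^%d\n", logt[E1], logt[E2], logt[E3]);
    return 0;
}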
- 198 -
EXTENDED CORRECTION ALGORITHMS
It is possible to extend the correction capability of a Reed-Solomon code by using
algorithms that decode beyond the basic code guarantees without using erasure correc-
tion. Examples of error situations which, though not guaranteed to be handled by
extended decoding techniques, have a certain probability of being handled include:
(a) A single long burst where the number of bytes in error in a codeword ex-
ceeds the basic guarantees of the code.
(b) Multiple long bursts, or a long burst in combination with random byte-errors,
where the total number of bytes in error in a codeword exceeds the basic
guarantees of the code.
We first illustrate decoding beyond code guarantees without erasure pointers with a
method for case (a) above. Consider a single error burst which is thirteen bytes in
length and affects the last thirteen bytes of the received codeword. The error poly-
nomial is:
- 199 -
Now consider a single thirteen-byte error burst that ends J bytes prior to the end
of the received codeword. The error polynomial is:
The equation above is the basis for the decoding method. We count and record
the number of consecutive high-order zero coefficients in the initial remainder, record-
ing the low-order coefficients if the number of consecutive high-order zero coefficients
is sufficiently high. Then we compute:

   R1(x) = [x^(-1)·R(x)] MOD G(x)

and repeat the counting/recording process. This process is performed n-1 times, where
n is the length of the codeword, to compute R1(x) through R(n-1)(x) and account for all
possible ending locations of the long burst. The pattern containing the highest number
of consecutive high-order zero coefficients will be that of the long burst itself, which
will have been segregated at the low-order end of the remainder.
The detection of some minimum number of consecutive high-order zero bytes (three
for the given code operating on a full-length codeword, as shown below) can be used to
flag the existence of a single long burst. The necessary number of consecutive high-
order zero coefficients is established by the required miscorrection probability.
- 200 -
MISCORRECTION

For a codeword of length n, the miscorrection probability (units: miscorrected
codewords per uncorrectable codeword) for a conventional decoding method against all
combinations of random errors which exceed the capability of the code is:

             t    C(n,i) · 255^i
   Pmc1 =    Σ   ----------------
            i=0      256^(t+i)

where,

                  n!
   C(n,i) = ------------
             i!·(n-i)!

For n=255 and m=16:

             8    C(255,i) · 255^i
   Pmc1 =    Σ   ------------------  ≈  2.1E-5
            i=0       256^(8+i)

while for n=255, m=16, and L=13, we have:
- 201 -
Thus the extended decoding method outlined above could be used to correct a
single burst of up to thirteen bytes in a full-length codeword with a miscorrection
probability comparable to that of a conventional decoding method against all combina-
tions of random errors.
It is important to note that for high-performance ECC applications, an auxiliary
error detecting code is usually implemented to improve data accuracy. In some cases,
the dedicated error detection code may provide most of the protection against the
transfer of undetected erroneous data.
INTERLEAVING

When interleaving is used, the maximum length of a decodable single burst is
multiplied by the number of interleaves. Consider the same code described above but
implemented with ten-way interleaving in sectors of 1040 data bytes; each interleave
contains n = (1040/10 + 16) = 120 bytes. The conventional miscorrection probability
(units: miscorrected codewords per uncorrectable codeword) against all combinations of
random errors is:

             8    C(120,i) · 255^i
   Pmc1 =    Σ   ------------------  ≈  4.4E-8
            i=0       256^(8+i)

while for I=10, m=16, and L=12, the miscorrection probability for this extended decoding
method is:

   Pmc2 = 1 - [1 - 256^(-4)]^120  ≈  2.8E-8
Thus the method outlined above will allow successful decoding of a single burst of
up to I·L = 120 bytes in a ten-way interleaved sector of 1040 data bytes with a miscor-
rection probability comparable to that achieved using a conventional decoding method in
decoding all combinations of random errors.
- 202 -
A SECOND EXAMPLE
We next illustrate decoding beyond code guarantees without erasure pointers with a
method for case (b) above, also for a ten-way interleaved sector of 1040 data bytes.
Consider an error burst which is 100 bytes in length (ten consecutive bytes in error in
each of the ten interleaves) that ends J bytes prior to the end of an interleave, to-
gether with other error burst(s) or random byte error(s) which affect no more than one
byte in anyone interleave. The error polynomial for an interleave is:
   E(x) = EA·x^A + E(J+9)·x^(J+9) + ··· + EJ·x^J

where A is the location of the single byte in error, which may either precede or follow
the long burst. If we premultiply by x^(-J), then the sixteen-byte remainder is:
The decoding method for case (b) is similar to that for case (a) above. We at-
tempt to decode some restricted number of bytes in error (one for this particular ex-
ample) using the first few high-order coefficients of the initial remainder, count and
record the number of consecutive high-order coefficients which are consistent, and
record the low-order coefficients if the number of consecutive consistent high-order
coefficients is sufficiently high (six for this particular example). Then we compute:
and repeat the decoding/counting/recording process. This process is repeated n-1 times,
where n is the length of the codeword, to compute R1(x) through R(n-1)(x) and account
for all possible ending locations of the long burst. The low-order coefficients of the
remainder containing the highest number of consecutive consistent high-order coeffi-
cients can be adjusted to remove the contribution of the decoded and verified errors,
leaving the pattern of the long burst, which will have been segregated at the low-order
end of the remainder.
The detection of some minimum number of consecutive consistent high-order coef-
ficients can be used to flag the existence of a single long burst together with up to
some maximum number of other bytes in error in a codeword. The necessary number of
consecutive high-order zero coefficients is again established by the required miscorrec-
tion probability.
- 203 -
The miscorrection probability for each remainder RJ(x) for this method, when used
on n-byte codewords to decode a long burst contributing L consecutive bytes in error
together with up to K other bytes in error per codeword, where K < INT[(m-L)/2], is:

             K
   Pmc3 =    Σ   ···   ≈  1.09E-10
            i=0

and the total miscorrection probability (units: miscorrected codewords per uncorrectable
codeword) for all n values of J is:
Thus this method would allow successful decoding of a long burst of up to I·L = 100
bytes in combination with up to one other byte in error per interleave with a miscor-
rection probability comparable to that achieved using a conventional decoding method in
decoding all combinations of random errors.
Note that a logical extension of the decoding method for both cases (a) and (b)
for an interleaved code is to require consistency across interleaves in the decoded
location of the long burst.
CONCLUDING REMARKS
The techniques discussed above were selected for ease of understanding and are by
no means the best or only methods which exist for extending the correction power of
long-distance Reed-Solomon codes. It is possible, with or without erasure pointers, to
efficiently decode multiple long-burst errors and combinations of long-burst errors and
random byte-errors which exceed the basic guarantees of a code. Long-distance Reed-
Solomon codes possess much greater correction power against both long-burst and
random byte-errors than has traditionally been understood.
- 204 -
3.5 b-ADJACENT CODES
The b-Adjacent codes are parity check codes constructed with symbols from
GF(2^b), b > 1. A subset of these codes is similar to the Reed-Solomon codes, but in many
cases encoding for a b-adjacent code is less complex than encoding for a Reed-Solomon
code with an equivalent capability.
Check symbols are generated on write and appended to data. On read, check
symbols are generated and compared with the write check symbols. The XOR differen-
ces between the read check symbols and write check symbols determine the syndromes.
The syndromes are used to compute error pattern and displacement information. Errors
within the check bytes must be detected with special tests.
The IBM 3370, 3375, and 3380 magnetic disk drives employ b-Adjacent code techni-
ques. Several of these techniques are described below.
Consider a b-Adjacent code using two 16-bit shift registers, P0 and P1, defined by
the polynomials below:

   P0 = x^16 + 1
   P1 = x^16 + x^12 + x^3 + x + 1     [Primitive]

The properties of these polynomials enable the code to correct a single word (16
bits) in error in a 65,535-word record.
The write and read check words (C0 and C1) are generated by taking residues of a
function of the data, f(data), modulo P0 and P1, where:

                      m-1
   for P1, f(DATA) =   Σ   x^i·Di(x)
                      i=0

and m is the number of data words. Di(x) are the individual data word polynomials. D0
is the lowest order data word (last data word to be transmitted and received).
- 205 -
The residues are computed by hardware implementing the logical circuits shown in
Figure 3.5.1 below. These logical circuits are clocked once per word. The Po shift
register computes an XOR sum of all data words. The PI shift register computes a
cyclic XOR sum of all data words. Since PI is primitive, its shift register generates a
maximum length sequence (65,535 states). When the PI shift register is nonzero, but its
input is zero, each word clock sets it to the next state of its sequence.
On read, the check words read from media are XOR-ed with the computed check
words to obtain syndromes SO and S 1.
Figure 3.5.1
- 206 -
65,535-d
Figure 3.5.2
Figure 3.5.3
- 207 -
CORRECI'ION ALGORITHM
Consider what happens when the data record is all zeros and a word in error is
received.
Both shift registers will remain zero until the word in error arrives. The error
word is XOR-ed into the Po and PI shift registers. Since Po preserves its current value
as long as zeros are received, the error pattern remains until the end of record. XOR-
ing the error word into PI, places it to a particular state in its sequence. This state
will be referred to as the initial state. As each new word of zeros is received the PI
shift register is clocked along its sequence, one state per word.
The terminal state of Po is the error pattern. The terminal states of Po and PI
together establish error displacement.
To find displacement, it is necessary to determine the number of shifts of the P1
shift register that occurs between the occurrence of the error word and the end of
record.
Let S1 be the terminal state of the P1 shift register and let S0 be the terminal
state of the P0 shift register. S0 is also the initial state of the P1 shift register.
The number of states between S0 and S1 must be determined. There are several
ways to do this. For this simple example an implementation is assumed that clocks S1
forward along the P1 sequence until a match is found with S0. The number of clocks
subtracted from 65,535 is the displacement from the end of data counting the last data
word as one.
Consider the case when the data is not all-zeros. The check words are selected
on write such that residues of zero result on read, when the entire record is processed
by the P0 and P1 shift registers. When an error occurs, the operation differs from the
all-zeros data case only while residues are being computed. A given error condition
results in the same residues, regardless of data values. Once residues have been com-
puted, the operation is the same as previously described for the all-zeros data case.
If there is a single word in error in the record and it is check word C0, then S1
will be zero and S0 will be nonzero. However, if check word C1 is the word in error,
S0 will be zero, and S1 will be nonzero.
- 208 ~
EXAMPLE #2 - SINGLE-WORD ERROR CORRECTION IN TWO INTERLEAVES

Consider a code with two interleaves. Assume four shift registers P0, P1, P2, and P3.
The P0 shift register computes an XOR sum of all even data words. P1 computes
an XOR sum of all odd data words. P2 and P3 compute cyclic XOR sums of even and
odd data words respectively.
P0 and P2 determine the pattern and displacement for the even interleave. P1 and
P3 determine the pattern and displacement for the odd interleave.
This code can be used to correct a single word error in an even interleave and a
single word error in an odd interleave. The error words need not be adjacent. How-
ever, correction can be restricted to double word adjacent errors by requiring a par-
ticular relationship between interleave displacements.
If the record length is even, then the odd interleave displacement (from the end of
the record) must be either equal to, or one greater than the even interleave displace-
ment.
A double adjacent word error starting on an even word will cause interleave
displacements to be equal. A double adjacent word error starting on an odd word will
cause the odd interleave displacement to be one greater than the even interleave dis-
placement.
- 209 -
EXAMPLE #3 - SINGLE-WORD ERROR CORRECTION
USING A NONPRIMITIVE POLYNOMIAL
The polynomial PI of example #1 is primitive. Therefore, it generates two sequen-
ces; a sequence of length one when initialized to zero; a sequence of length 65,535
when initialized to any nonzero state.
Consider another code where PI is degree 16 and irreducible but nonprimitive. As-
sume that PI has a period of 257. Then it would have 256 sequences, the zero sequence
of length one and 255 sequences of length 257. The operation of the code and dis-
placement computation would be identical to the code of example #1 except that the
record length, including check words would be limited to 257.
The operation of the code is unaffected by the fact that PI has multiple sequen-
ces. However, it is very important that all sequences of PI are of an equal length,
excepting the zero sequence. This condition is met by all irreducible polynomials. The
condition is also met by some composite polynomials, but not all.
EXAMPLE #4 - A CODE TO COMPUTE DISPLACEMENT MODULO SOME INTEGER
The code of Example #3 could be part of a larger code. For example, instead of
computing error displacement for a 257-word record, displacement modulo 257 could be
computed for a larger record.
In this case, if the data record is all-zeros and an error is received, the PI shift
register could traverse its sequence many times before the end of record is reached.
See Figure 3.5.3.
Another part of the overall code might compute displacement modulo some other
integer that is relatively prime to 257. The overall displacement then would be com-
puted using the Chinese Remainder Method.
EXAMPLE #5 - A CODE TO CORRECT DOUBLE-WORD ADJACENT ERRORS
The interleave code of Example #2 uses four shift registers. Its capability includes
the correction of double-word adjacent errors.
Consider a code using only three shift registers (P0, P1, P2) that corrects most
double-word adjacent errors.
The P0 shift register computes an XOR sum of all even data words. The P1 shift
register computes an XOR sum of all odd data words. The P2 shift register processes
all data words (odd and even). Its definition and operation are identical to that of the
P1 shift register in the previous examples.
- 210 -
Assume the data to be all zeros. Assume that a double word adjacent error oc-
curs. The two adjacent words in error will be XOR-ed into the Po and PI shift regis-
ters. Which shift register receives the first word in error depends on whether the
error starts on an odd or even word. When the first error word is received, it is XOR-
ed into the P2 shift register, after which P2 is advanced one state along its sequence.
Next, the second error word is XOR-ed into P2. P2 is again advanced one state along
its sequence.
P2 continues to be advanced along its sequence once per data word until the end
of record is reached.
The final states of shift registers P0, P1, P2 are syndromes S0, S1, S2.
S0 and S1 are the error pattern. Assume that it is known from another part of an
overall code, that the error started in an even word. Then, the error displacement can
be found by advancing S2 along the P2 sequence until a k'th state is found, such that
zero results when S0 is XOR-ed with the k'th state and the result is advanced one state
along the P2 sequence and XOR-ed with S1. The procedure for finding displacement
would be slightly different if the error started on an odd word.
This code would not allow correction of all double word adjacent errors. If the
second word in error is equal to the first word in error shifted once along the P2
sequence, the error is not detected at all and correction cannot be accomplished.
Using two codes of this type will overcome the problem, providing the P2 polyno-
mials of the two codes are different and satisfy particular criteria.
- 211 -
USING FINITE FIELD MATH WITH THE b-ADJACENT CODE

Let powers of a represent the elements of a field. Let reverse displacement mean
the displacement from the last data word to the first word in error, counting the last
data word as one.
In example #1, displacement is computed by shifting S1 forward along the P1
sequence until a match is found with S0. In terms of finite field math, j must be
determined, where:

   S1·a^j + S0 = 0

The reverse displacement is then (-j) MOD 65,535.
For example #5, j must be determined where if the double-word error starts in an
even word:

   (S2·a^j + S0)·a = S1

and if the double-word error starts in an odd word:

   (S2·a^j + S1)·a = S0

The reverse displacement is then (-j) MOD 65,535.
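A software model of the displacement computation for example #1 is sketched below:
the P1 register (x^16 + x^12 + x^3 + x + 1) is clocked forward from S1 until it matches
S0, which determines j in S1·a^j + S0 = 0, and the reverse displacement is then
(-j) MOD 65,535. The bit ordering of the register model and all names are illustrative
assumptions.

#include <stdio.h>

/* Advance the P1 register (x^16 + x^12 + x^3 + x + 1) one state (multiply by a). */
static unsigned p1_clock(unsigned s)
{
    s <<= 1;
    if (s & 0x10000) s ^= 0x1100B;     /* x^16 + x^12 + x^3 + x + 1 */
    return s & 0xFFFF;
}

/* Find j such that S1 * a^j = S0; return (-j) MOD 65,535, or -1 if no match. */
static long reverse_displacement(unsigned S0, unsigned S1)
{
    unsigned s = S1;
    long j;
    for (j = 0; j < 65535; j++) {
        if (s == S0) return (65535 - j) % 65535;
        s = p1_clock(s);
    }
    return -1;
}

int main(void)
{
    /* Demonstration: S0 is S1 advanced j_true states along the P1 sequence. */
    unsigned s1 = 0x5A5A, s0 = s1;
    long j_true = 12345, i;

    for (i = 0; i < j_true; i++) s0 = p1_clock(s0);
    printf("reverse displacement = %ld (expected %ld)\n",
           reverse_displacement(s0, s1), (65535 - j_true) % 65535);
    return 0;
}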
- 212 -
CHAPTER 4 - APPLICATION CONSIDERATIONS
Error rates and the nature of errors must be characterized before designing and test-
ing a real-world error-control system. The error characteristics should be determined
by a combination of measurement and estimation. The estimation should be based on
experiences with similar products and technologies. Data typically required is listed
below.
1. Defect distribution (number of defects per media of each defect length).
2. Soft-error distribution (number of soft errors versus total bits transferred
for each error burst length).
- 213-
12. Relationship between decoded bits in error and encoded bits in error for
the read/write modulation method used.
13. Available pointer information that can be used for erasure correction.
Some sources of pointer information on prior storage products are listed
below.
a. Excessive phase shift.
b. Excessive amplitude deviations.
c. Invalid code found by the modulation method.
d. Error locations from adjacent interleaves.
14. Information on usage. For example, expected bits read per day and ex-
pected accesses per day.
15. Record sizes.
- 214-
4.2 DECODED ERROR RATES
The equations and tables below and on the following pages can be used to deter-
mine the block error rate when raw error rate and the number of errors corrected per
block are known. A block error exists if, after performing error correction, the data
is erroneous. The block error rate is the ratio of block error occurrences to blocks trans-
ferred. Raw error rate for the equations is the ratio of raw error occurrences to a
unit of data transfer. The unit of data transfer is specified in each case. The raw
error rate for the tables is the ratio of raw error occurrences to bits transferred. An
error may be a bit, symbol, or burst error. Errors are assumed to be random; the
equations and tables give erroneous results if they are not.
In the equations, the following notation represents the number of ways to choose r
out of n without regard to order (the binomial coefficient):

                 n!          r-1  (n-j)
   C(n,r) = ------------  =  PROD -------
             r!·(n-r)!       j=0  (r-j)

Some of the probability equations given on the following pages can be reduced in
complexity by using the following relationship when applicable:

   C(n,r) ≈ n^r / r!      if n >> r
- 215-
BIT-ERROR PROBABILITIES
Let Pe be the raw-bit-error rate. Let the raw-bit-error rate be defined as the
ratio of bit error occurrences to total bits transferred; that is, bit errors per bit. The
equations below give probabilities for various numbers of bit errors occurring in a block
of n bits. Let Pr = C(n,r)·(Pe)^r·(1-Pe)^(n-r) be the probability of exactly r bit
errors in the block. Then:

     n
     Σ  Pr  =  1 - P0
    r>0

     n
     Σ  Pr  =  1 - P0 - P1
    r>1
- 216-
DECODED ERROR PROBABILITIES FOR A BIT-CORRECTING CODE

                                         n
   BLOCK ERRORS per BLOCK   ≈            Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BLOCK ERRORS per BIT     ≈  (1/n) ·   Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BIT ERRORS per BLOCK     ≈            Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BIT ERRORS per BIT       ≈  (1/n) ·   Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e
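For quick estimates, the block-error-rate sums above can be evaluated numerically. The
sketch below (illustrative names; uses lgamma from the standard math library, so link
with -lm) computes P(block error) = Σ(i>e) C(n,i)·Pe^i·(1-Pe)^(n-i) for a bit-correcting
code.

#include <stdio.h>
#include <math.h>

/* log of the binomial coefficient C(n,i), via lgamma to avoid overflow. */
static double log_choose(double n, double i)
{
    return lgamma(n + 1.0) - lgamma(i + 1.0) - lgamma(n - i + 1.0);
}

/* P(more than e bit errors in a block of n bits) for raw bit error rate pe. */
static double block_error_rate(int n, int e, double pe)
{
    double sum = 0.0;
    int i;
    for (i = e + 1; i <= n; i++)
        sum += exp(log_choose(n, i) + i * log(pe) + (n - i) * log(1.0 - pe));
    return sum;
}

int main(void)
{
    /* Hypothetical example: 4096-bit blocks, raw bit error rate 1e-6,
       e = 0, 1, 2 bit errors corrected per block. */
    int e;
    for (e = 0; e <= 2; e++)
        printf("e = %d  block error rate = %.3e\n",
               e, block_error_rate(4096, e, 1e-6));
    return 0;
}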
- 217-
BURST-ERROR PROBABILITIES

Let Pe be the raw burst-error rate, defined as the ratio of burst error occurrences
to total bits transferred, with units of burst errors per bit. The equations below give
the probabilities for various numbers of burst errors occurring in a block of n bits. It
is assumed that burst length is short compared to block length. With Pr defined as
above:

     n
     Σ  Pr  =  1 - P0
    r>0

     n
     Σ  Pr  =  1 - P0 - P1
    r>1
- 218 -
DECODED ERROR PROBABILITIES FOR A BURST-CORRECTING CODE
                                         n
   BLOCK ERRORS per BLOCK   ≈            Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BLOCK ERRORS per BIT     ≈  (1/n) ·   Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BURST ERRORS per BLOCK   ≈            Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e

                                         n
   BURST ERRORS per BIT     ≈  (1/n) ·   Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                        i>e
- 219-
SYMBOL-ERROR PROBABILITIES
Let Pe be the raw-symbol-error rate, defined as the ratio of symbol error occur-
rences to total symbols transferred, with units of symbol errors per symbol. The equa-
tions below give probabilities for various numbers of symbol errors occurring in a block
of n symbols. With Pr defined as above (Pe now being a symbol error rate):

     n
     Σ  Pr  =  1 - P0
    r>0

     n
     Σ  Pr  =  1 - P0 - P1
    r>1
- 220-
DECODED ERROR PROBABILITIES FOR A SYMBOL-CORRECTING CODE

                                           n
   BLOCK ERRORS per BLOCK    ≈             Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   BLOCK ERRORS per SYMBOL   ≈   (1/n) ·   Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   BLOCK ERRORS per BIT      ≈ (1/(w·n)) · Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per BLOCK   ≈             Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per SYMBOL  ≈   (1/n) ·   Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per BIT     ≈ (1/(w·n)) · Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
 * BIT ERRORS per BIT        ≈ (1/(2·n)) · Σ   (i+e)·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

 * Assuming a symbol error results in w/2 bit errors.
- 221 -
DECODED ERROR PROBABILITIES FOR A SYMBOL-CORRECTING CODE
WHEN ERASURE POINTERS ARE AVAILABLE FOR SYMBOL ERRORS

                                           n
   BLOCK ERRORS per BLOCK    ≈             Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   BLOCK ERRORS per SYMBOL   ≈   (1/n) ·   Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   BLOCK ERRORS per BIT      ≈ (1/(w·n)) · Σ   C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per BLOCK   ≈             Σ   i·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per SYMBOL  ≈   (1/n) ·   Σ   i·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
   SYMBOL ERRORS per BIT     ≈ (1/(w·n)) · Σ   i·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

                                           n
 * BIT ERRORS per BIT        ≈ (1/(2·n)) · Σ   i·C(n,i)·(Pe)^i·(1-Pe)^(n-i)
                                          i>e

 * Assuming a symbol error results in w/2 bit errors.
- 222-
4.3 DATA RECOVERABILITY
Error correction is used in storage device subsystems to improve data recover-
ability. There are other techniques that improve data recoverability as well. Some of
these techniques are discussed in this section. System manufacturers may want to
include data recovery techniques on their list of criteria for comparing subsystems.
DATA SEPARATOR
The design of the data separator will have a significant influence on data recover-
ability. Some devices have built-in data separators. Other devices require a data
separator in the controller.
Controller manufacturers should consult their device vendors for recommendations
when designing a controller for devices which require external data separators.
Circuit layout and parts selection are very important for data separators. Even if
one has a circuit recommended by a drive vendor, it may be advisable to use a highly
experienced read/write consultant for the detailed design and layout.
WRITE VERIFY
Another technique that can improve the probability of data recovery is write verify
(read back after write). Write verify can be very effective for devices using magnetic
media due to the nature of defects in this media. One may write/read over a defect
hundreds of times without an error. An error will result only when the write occurs
with the proper phasing across the defect. Once the error occurs, it may then have a
high incidence rate until the record is rewritten. Hundreds of writes may be required
before the error occurs again.
- 223-
When an error is detected by write verify, the record is rewritten or retired or
defect skipping is applied. This reserves error correction for errors that develop with
time or usage. Since it affects performance, write verify should be optional.
DEFECT SKIPPING
- 224-
Devices employing defect skipping within a data field must allocate extra media
area for each sector, track, or cylinder, depending on whether or not embedded servoing
is used and on other implementation choices. In devices using embedded servoing, the
space allotted for each sector must allow room for the maximum-length defect(s) which
may be skipped. In devices not using embedded servo techniques, the track format need
accommodate only some maximum number of skips per track, which may be much less
than one per sector.
When defect-skipping techniques are used and skip or alternate-sector information
is stored in headers, care must be taken to make sure that the storage of information
in headers other than track and sector number does not weaken the error tolerance of
the headers. A different method for alternate-sector assignment, which avoids this
complication, is sector slipping. Each track or cylinder contains enough extra area to
write one or more extra sectors. When a sector must be retired, it and each succeeding
sector are slipped one sector-length along the track or cylinder. This method has the
additional advantage that sectors remain consecutive and no additional seek time is
required to find an alternate sector at the end of the track or cylinder, or on a dif-
ferent track or cylinder. This method is discussed in more detail under A
HEADER STRATEGY EXAMPLE below.
SYNCHRONIZA TION
For high defect rate devices, it is essential that the device/controller architectures
include a high degree of tolerance to defects that fall within sync marks. There are
several synchronization strategies that achieve this. The selection will be influenced by
the nature of the device and the nature of defects (e.g., length distribution, growth
rate, etc.). Both false detection and detection failure probabilities must be considered.
Synchronization is discussed in detail in Section 4.8.1; some high points are briefly
covered below.
One method for achieving tolerance to defects that fall within sync marks is to
employ error-tolerant sync marks. Error-tolerant sync marks have been used in the
past that can be detected at the proper time even if several small error bursts or one
large error burst occurs within the mark. See Section 4.8.1 for a more in-depth discus-
sion of synchronization codes.
- 225-
Another strategy is to replicate sync marks with some number of bytes between.
The number of bytes between replications would be determined by the maximum defect
length to be accommodated. A different code is used for each replication so that the
detected code identifies the true start of data. The number of replications required is
selected to achieve a high probability of synchronization for the given rate and nature
of defects. Mark lengths, codes, and detection qualification criteria are selected to
achieve an acceptable rate of false sync mark detection.
If synchronization consists of several steps, each must be error-tolerant. If sector
marks (also called address marks) and preambles precede sync marks they must also be
error tolerant. Today, in some implementations correct synchronization will not be
achieved if an error occurs in the last bit or last few bits of a preamble. Such sen-
sitivities must be avoided. Section 4.8.1 discusses how error tolerance can be achieved
in the clock-phasing step of synchronization as well as in the byte-synchronization step.
- 226-
HEADERS
For high error-rate devices, header strategy is influenced by defect event rates,
growth rates, length distributions, performance requirements, and write prerequisites.
One header strategy requires replication. A number of contiguous headers with
CRC are written, then on read one copy must be read error-free. Another strategy is
to allow a data field to be recovered even if its header is in error. This requires that
headers consist solely of address information such as track and sector number. If a
header is in error, such information can be generated from known track orientation.
Some devices combine this strategy with header replication in order to minimize the
frequency at which address information is generated rather than read. In any case,
devices using high error-rate media must be insensitive to defects falling into the
headers of several consecutive sectors. When address information is generated rather
than read, the data field can be further qualified by subsequent headers.
Using error correction on the header field as well as the data field will increase
the probability of recovering data. However, one must either be able to store and
correct both a header and the associated data field, or provide a way to space over a
defective header in order to recover the associated data field on a succeeding revolu-
tion.
An alternative to correcting the header is to keep only address information in the
header and to provide a way to space over a defective header. When a defective header
is detected, record address is computed from track orientation. A disadvantage of this
method is that it does not allow flags to be part of the header field.
Some devices also include address information within the highly protected data
field to use as a final check that the proper data field was recovered. This check must
take place after error correction. The best time to perform it may be just before
releasing the sector for transfer to the host.
A HEADER STRATEGY EXAMPLE
If a header-in-error is encountered during a search then it is either the header of
a sector that had been previously retired or it is a header containing a temporary error
or a new hard defect. The sector number sequence encountered in continuing the
search can be used to determine which is the case. If the header-in-error was that of
an already-retired sector, the sector number sequence should be adjusted and the search
continued. Otherwise the search should still be continued unless the header-in-error
was that of the desired sector, in which case the search should be interrupted and a
re-read attempted. If the error is not present on re-read, assume it was a temporary
error and proceed to read the data field. If the error persists on re-read, assume a
new hard defect: orient on the preceding sector, skip the header-in-error, and read the
desired data field. A sector whose header contains a new hard defect should be retired
as soon as possible.
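The decision sequence just described can be summarized in a short sketch (the function
and its arguments are hypothetical; in particular, the identity of the bad header must be
inferred from the sector-number sequence of the surrounding good headers):

    def next_action(inferred_sector, desired_sector, retired_sectors, reread_ok):
        # Decide what to do when a header reads bad during a search.
        if inferred_sector in retired_sectors:
            return "adjust the expected sector sequence and continue the search"
        if inferred_sector != desired_sector:
            return "continue the search"
        if reread_ok:
            return "temporary error: read the data field normally"
        return ("new hard defect: orient on the preceding sector, skip the bad "
                "header, read the data field, then retire the sector")

    # Example: the desired sector's header fails on re-read as well.
    print(next_action(inferred_sector=7, desired_sector=7,
                      retired_sectors={3}, reread_ok=False))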
Note that the error-tolerant header strategy outlined above will not work if it is
necessary to store control data, such as location information for defect skipping, within
headers.
SERVO SYSTEMS
In many devices, the ability to handle large defects is limited by the servo sys-
tem(s). Engineers responsible for defect handling strategy must understand the limits of
the servo system(s) relative to defect tolerance. In particular, any testing of defect
handling capabilities should include the servo system(s).
MODULATION CODES
The modulation code selected will affect EDAC performance by influencing noise-
generated error rates, the extension of error bursts, the ability to acquire synchroniza-
tion, the ability to hold synchronization through defects, the ability to generate erasure
pointers, and the resolution of erasure pointers.
ERROR                    ERROR PROPAGATION LENGTH
TYPE            0      1      2      3      4      5    TOTAL
DROP-IN   #  2674   5009   1220    195    127      0     9225
          %    29     54     13      2      1      0
4.4 DATA ACCURACY
The transfer of undetected erroneous data can be one of the most catastrophic
failures of a data storage system; consider the consequences of an undetected error in
the money field of a financial instrument or the control status of a nuclear reactor.
Most users of disk subsystems consider data accuracy even more important than data
recoverability. Nevertheless, many disk subsystem designers are unaware of the factors
determining data accuracy.
Some causes of undetected erroneous data transfer are listed below.
- Miscorrection by an error-correcting code.
- Misdetection by an error-detecting or error-correcting code.
POLYNOMIAL SELECTION
In disk subsystems, the error-correction polynomial has a significant influence on
data accuracy. Fire code polynomials, for example, have been widely used on disk con-
trollers, yet they provide less accuracy than carefully selected computer-generated
codes.
Many disk controller manufacturers have employed one of the following Fire code
polynomials:
(x^21 + 1)·(x^11 + x^2 + 1)  or  (x^21 + 1)·(x^11 + x^9 + 1)
The natural period of each polynomial is 42,987. Burst correction and detection
spans are both eleven bits for record lengths, including check bits, no greater than the
natural period. These codes are frequently used to correct eleven-bit bursts on record
lengths of 512 bytes.
When used for correction of eleven-bit bursts on a 512-byte record, these codes
miscorrect ten percent of all possible double bursts where each burst is a single bit in
error. With the same correction span and record length, the miscorrection probability
for all possible error bursts is one in one thousand. The short double burst, with each
burst a single bit in error, has a miscorrection probability two orders of magnitude
greater.
Such codes have a high miscorrection probability on other short double bursts as
well. Double bursts are not as common as single bursts. However, due to error clus-
tering, they occur frequently enough to be a problem.
The data accuracy provided by the above Fire codes for all possible error bursts is
comparable to that provided by a ten-bit CRC code. The data accuracy for all possible
double-bit errors is comparable to that provided by a three-bit or four-bit CRC code.
Fire codes are defined by generator polynomials of the form:
g(x) = c(x)·p(x) = (x^c + 1)·p(x)
where p(x) is any irreducible polynomial of degree z and period e, and e does not divide
c.
The period of the generator polynomial g(x) is the least common multiple of c and
e. For record lengths (including check bits) not exceeding the period of g(x) , these
codes are guaranteed to correct single bursts of length b bits and detect single bursts
of length d bits where d ≥ b, provided z ≥ b and c ≥ (d+b-1).
The composite form of the generator polynomial (g(x)) is used for encoding.
Decoding can be performed with a shift register implementing the composite generator
polynomial (g(x)) or by two shift registers implementing the factors of the generator
polynomial (c(x) and p(x)). Code performance is the same in either case.
The p(x) factor of the Fire code generator polynomial carries error displacement
information. The c(x) factor carries error pattern information. It is this factor that is
responsible for the Fire code's pattern sensitivity. To understand the pattern sen-
sitivity, assume that decoding is performed with shift registers implementing the in-
dividual factors of the generator polynomial. For a particular error burst to result in
miscorrection, it must leave in the c(x) shift register a pattern that qualifies as a
correctable error pattern. A high percentage of short double bursts do exactly that.
For example, two bits in error, (c+1) bits apart, would leave the same pattern in the
c(x) shift register as an error burst of length two. The same would be true of two bits
in error separated by any multiple of (c+1) bits.
If p(x) has more redundancy than required by the Fire code formulas, the excess
redundancy reduces the miscorrection probability for short double bursts, as well as the
miscorrection probability for all possible error bursts.
The overall miscorrection probability (Pmc) for a Fire code is given by the follow-
ing equation, assuming all errors are possible and equally probable.
              n · 2^(b-1)
    Pmc  ≈  ---------------                                    (1)
                 2^m

where,
    n = record length in bits, including check bits.
    b = guaranteed single-burst correction span in bits.
    m = total number of check bits.
For many Fire codes, the miscorrection probability for double bursts where each
burst is a single bit in error is given by the following equation, assuming all such
errors are possible and equally probable.
               2 · n · (b-1)
    Pmcdb  ≈  ----------------                                 (2)
               c^2 · (2^z - 1)

where,
    n and b are as defined above.
    c = degree of the c(x) factor of the Fire code polynomial.
    z = degree of the p(x) factor of the Fire code polynomial.
This equation is unique to the Fire Code. It is applicable only when the product
of Pmcdb and the number of possible double-bit errors is much greater than one. When
this is not true, a computer search should be used to determine Pmcdb.
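A quick numeric check of Equations (1) and (2), using the 512-byte example above (this
is only an illustration of the formulas, not a code specification):

    n = 512 * 8 + 32        # record length in bits, including the 32 check bits
    b, m = 11, 32           # correction span and total number of check bits
    c, z = 21, 11           # degrees of the c(x) and p(x) factors

    pmc = n * 2**(b - 1) / 2**m                       # Equation (1)
    pmcdb = 2 * n * (b - 1) / (c**2 * (2**z - 1))     # Equation (2)

    print(f"Pmc   = {pmc:.1e}")        # about 1E-3:  'one in one thousand'
    print(f"Pmcdb = {pmcdb:.1e}")      # about 9E-2:  roughly 'ten percent'
    print(f"ratio = {pmcdb/pmc:.0f}")  # roughly two orders of magnitude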
The ratio of Pmcdb to Pmc provides a measure of pattern sensitivity for one par-
ticular double burst (each burst a single bit in error). Remember that the Fire code is
sensitive to other short double bursts as well.
Properly selected computer-generated codes do not exhibit the pattern sensitivity
of Fire codes. In fact, it is possible to select computer-generated codes that have a
guaranteed double-burst detection span. The miscorrecting patterns of these codes are
more random than those of Fire codes. They are selected by testing a large number of
random polynomials of a particular degree. Provided the specifications are within
certain bounds, some polynomials will satisfy them.
There are equations that predict the number of polynomials one must evaluate to
meet a particular specification.
In some cases, thousands of computer-generated polynomials must be evaluated to
find a polynomial with unique characteristics.
For a computer-generated code, correction and detection spans are determined by
computer evaluation. Overall miscorrection probability is given by Equation #1.
To increase data accuracy, many disk controller manufacturers are switching from
Fire codes to computer-generated codes.
(3)
where,
Pc = Catastrophic probability
Burst error rate (Pe) can be reduced by using reread. Most disk products exhibit
soft burst error rates several orders of magnitude higher than hard burst error rates.
Rereading before attempting correction makes Pe (in Equation #3) the hard burst error
rate instead of the soft burst error rate, reducing Pued by orders of magnitude.
Rereading before attempting correction provides additional improvement in Pued
due to the different distributions of long error bursts and multiple error bursts in hard
and soft errors.
Another strategy that reduces Pued is to reread until an error disappears, or until
there has been an identical syndrome for the last two reads. Correction is then at-
tempted only after a consistent syndrome has been received.
DESIGN PARAMETERS
For data accuracy, a low miscorrection probability is desirable. Miscorrection
probability can be reduced by decreasing the record length and/or correction span, or
by increasing the number of check bits.
For most Winchester media, a five-bit correction span has been considered ade-
quate. A longer correction span is needed if the drive uses a read/write modulation
method that maps a single encoded bit in error into several decoded bits in error, such
as group coded recording (GCR) and run-length limited (RLL) codes.
For several years, 32-bit codes were considered adequate for sectored Winchester
disks provided that the polynomial was selected carefully, record lengths were short,
correction span was low, correction was used only on hard errors, and the occurrence
rate for hard errors exceeding the guaranteed capability of the code was low.
More recently, most disk controller developers have been using 48-, 56- and 64-bit
codes in their new designs. Using more check bits increases data accuracy and provides
flexibility for increasing the correction span when the product is enhanced. Using more
check bits also allows other error-recovery strategies to be considered, such as
on-the-fly correction.
Disk controller developers are also implementing redundant sector techniques and
Reed-Solomon codes. Redundant sector techniques allow very long bursts to be cor-
rected. Reed-Solomon codes allow multiple bursts to be corrected.
DEFECT MANAGEMENT STRATEGY
All defects should have alternate sectors assigned, either by the drive manufacturer
or subsystem manufacturer, before the disk subsystem is shipped to the end user.
There are problems with a philosophy that leaves defects to be corrected by ECC
on each read, instead of assigning alternate sectors. First, if correction before reread
is used, a higher level of miscorrection results. This is because a soft error in a sector
with a defect results in a double burst. Once a double burst occurs that exceeds the
double-burst-detection span, miscorrection is possible. In the second case, if reread
before correction is used, revolutions will be lost each time a defective sector is read.
ERROR RATES
Clearly, disk drive error rates also significantly influence data accuracy. If errors
exceeding the guaranteed capability of the code never occurred, inaccurate data would
never be transferred.
When a data separator is part of the controller, its design affects error rate and
therefore data accuracy. While most drive manufacturers provide recommended data
separator designs, there are also well-qualified consultants who specialize in this area.
One example of undetected erroneous data which the media EDAC system is power-
less to detect is a single-bit soft error occurring in an unprotected data buffer after
the EDAC system has corrected the data but before the data are transferred to the
host. Another example is a subtle subsystem software error which causes a request for
the wrong sector to be executed. The actual sector fetched may contain no media-
related errors and so be accepted as correct by the media EDAC system, yet it is not
the data which the host requested.
Data Systems Technology, Corp. (DST) has proposed a method to combat errors not
covered by the media EDAC system. DST recommends that the host append a CRC
redundancy field to each logical sector as it is sent to the storage subsystem and
perform a CRC check on each logical sector as it is received from the storage subsys-
tem. DST further recommends that a logical identification number containing at least
the logical sector number, and perhaps the logical drive number as well, be placed
within each logical sector written to a storage subsystem and that this number be
required to match that requested when each logical sector is received from the storage
subsystem.
It is possible to combine these two functions so that only four extra bytes per
logical sector are needed to provide both thirty-two-bit CRC protection and positive
sector/drive identification. Three methods are outlined below; whatever method is
chosen for implementing the two functions, it must be selected with multiple-sector
transfers in mind.
(1) Append to each logical sector within the host's memory a four-byte logical
sector number field. Design the host adapter so that as each logical sector of a multi-
ple-sector write is fetched from the host's memory, four bytes of CRC redundancy are
computed across the data portion of the logical sector and then EXCLUSIVE-OR summed
with the logical sector number field and transferred to the storage subsystem im-
mediately behind the data. During a multiple-sector read, the host adapter would com-
pute CRC redundancy over the data portion of each received logical sector and EX-
CLUSIVE-OR sum it with the received sum of the logical identification number and CRC
redundancy generated on write, then store the result after the data portion of the
logical sector in the host's memory. The host processor would then have to verify that
the result for each logical sector of a multiple-sector transfer matches the identification
number of the respective requested logical sector. If an otherwise undetected error
occurs anywhere in a logical sector anywhere beyond the host interface which exceeds
the guarantees of the host CRC code, including the fetching of the wrong sector, the
logical identification number within the host's memory will be incorrect with probability
1-(2.33E-10).
(2) Keep data contiguous in the host's memory by instead recording the identifica-
tion numbers of all of the logical sectors in a multiple sector transfer within the host
adapter's memory, but process the data and identification numbers for the CRC code in
the same manner as in (1). The host adapter would have the responsibility for checking
that identification numbers match those requested. Equivalent error detection is achiev-
ed.
(3) Initialize the CRC shift register at the host interface with the identification
number of each logical sector before writing or reading each logical sector of a multi-
ple-sector transfer. The host adapter would require that on read the CRC residue for
each logical sector be zero. Again equivalent error detection is achieved.
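A minimal sketch of approach (3), assuming a conventional non-reflected 32-bit CRC
register with no final XOR; the polynomial (here 0x04C11DB7) and the helper names are
illustrative only and are not taken from any particular subsystem:

    CRC_POLY = 0x04C11DB7

    def crc32_register(data, init):
        # Run the CRC shift register over 'data' starting from the preset 'init'.
        reg = init & 0xFFFFFFFF
        for byte in data:
            reg ^= byte << 24
            for _ in range(8):
                reg = ((reg << 1) ^ CRC_POLY) if reg & 0x80000000 else (reg << 1)
                reg &= 0xFFFFFFFF
        return reg

    def write_sector(data, sector_id):
        # Write side: preset the register with the logical sector ID, append check bytes.
        check = crc32_register(data, init=sector_id)
        return data + check.to_bytes(4, "big")

    def read_ok(sector, expected_id):
        # Read side: preset with the expected ID; the residue must come out zero.
        return crc32_register(sector, init=expected_id) == 0

    stored = write_sector(bytes(range(64)), sector_id=1234)
    print(read_ok(stored, expected_id=1234))   # True  - the requested sector
    print(read_ok(stored, expected_id=1235))   # False - the wrong sector was fetched

Because the register mapping is invertible, presetting with the wrong identification
number always produces a non-zero residue, which is what provides the positive sector
identification.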
To implement the CRC/ID field approach toward achieving higher data integrity,
computer builders will have to support the generation and checking of the extra four
bytes of CRC redundancy. Storage subsystem suppliers accustomed to sector lengths
which are powers of two will have to accommodate sector lengths which are greater by
four bytes. If the storage subsystem architecture includes its own auxiliary CRC field
of thirty-two or fewer bits, an option to disable it should be provided in order to
minimize overhead when the storage subsystem is connected to a host which implements
the CRC/ID field. The scope of coverage of the host CRC/ID field is much greater
than that of an equivalent-length auxiliary CRC field which protects only against media
errors, so data integrity can be greatly improved at no increase in overhead if the
subsystem auxiliary CRC code is disabled and the host CRC/ID field is used instead.
Procedures like those outlined above can have a profound impact on data integrity
in computer systems. They allow the computer builder to be in control of the integrity
of data throughout the entire system without being concerned with the detailed designs
of the storage subsystems connected to the system.
SUMMARY
When designing error correction for a disk controller, keep data accuracy high by
using the techniques listed below:
4.5 PERFORMANCE REQUIREMENTS
Below are some of the parameters that should be specified for an error-control
system.
DATA RECOVERABILITY
Specify permissible decoded hard error rate. For storage devices this specification
is likely to be 1E-13 or less.
DATA ACCURACY
Specify allowable undetected erroneous data rate. For storage devices this spec-
ification is likely to be 1E-15 or less.
OPERATING SPEED
Specify data transfer rates that the error-control system must support.
DECODING SPEED
Specify allowable error-correction decoding times. These are times allowed for
computing patterns and displacements when errors occur.
SELF-CHECKING
Specify the form of self-checking to be used, such as:
- Duplicated circuits
- Parity predict
- Periodic microcode or software testing.
This determination may have to be made after a code has been selected and the
design is in progress. Use the reliability of circuit and packaging technologies along
with parts counts to determine the reliability of the error-correction circuits. If the
probability of error-correction circuit failure in a design contributes significantly to the
probability of transferring undetected erroneous data, self-checking should be added to
the design.
Once error rates and the nature of errors have been characterized and the perfor-
mance requirements established, code selection can begin.
4.6 PATTERN SENSITIVITY
Some error detecting and error correcting codes are more likely to misdetect or
miscorrect on certain classes of error patterns than others. This is called pattern
sensitivity. If these classes of errors are also the most likely to occur, then protection
provided by these codes may not be as good as expected. In this section several ex-
amples of pattern sensitivity are discussed.
[Figure: 16-bit shift register divide circuit]
The polynomial for this circuit is (x^16 + 1). Of all possible error bursts, this
circuit will fail to detect one out of 65,536. Any degree 16 polynomial would have the
same misdetection probability for all possible error bursts. However, this circuit has a
pattern sensitivity. It will fail to detect one out of every sixteen possible error pat-
terns, consisting of two bits in error, separated by more than sixteen bits.
To understand the pattern sensitivity, consider reading a data record that is zeros
except for two bits in error, sixteen bits apart. The shift register will be all zeros
until the first error bit arrives. After arrival of the first error bit, the shift register
will contain '0...01'. After receiving the fifteen zeros separating the error bits, the
shift register will contain '10...0'. After receiving the second error bit, the shift reg-
ister will again contain all zeros, due to the cancellation of the high-order bit by the
second error bit.
This circuit is 4000 times more likely to fail to detect an error pattern consisting
of two bits in error, separated by more than sixteen bits, than it is to fail to detect a
pattern consisting of many random bits in error.
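The sensitivity is easy to demonstrate by simulation (a minimal sketch; the function
name is hypothetical):

    def remainder_x16_plus_1(bits):
        # Bit-serial division by x^16 + 1; the x^16 term folds back into x^0.
        reg = 0
        for b in bits:
            out = (reg >> 15) & 1
            reg = ((reg << 1) & 0xFFFF) | (b ^ out)
        return reg

    record = [0] * 256                  # an all-zeros record for illustration
    for gap in (15, 16, 17, 32):
        bits = list(record)
        bits[100] = 1                   # first error bit
        bits[100 + gap] = 1             # second error bit, 'gap' bits later
        detected = remainder_x16_plus_1(bits) != 0
        print(gap, "detected" if detected else "UNDETECTED")

Separations of 16 and 32 bits (any multiple of sixteen) cancel in the register and go
undetected; the other separations leave a non-zero remainder.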
The pattern sensitivity of this circuit is obvious. Nevertheless, it was implemented
by a large computer manufacturer on the 2314 magnetic disk device in the mid 1960's.
After the product was in the field, additional checking was installed to correct the
problem.
PATTERN SENSITIVITY OF ERROR CORRECTION CODES
The Fire code is used for single burst correction. Many Fire codes have a high
pattern sensitivity for short double bursts. See Section 4.4 for a discussion of the Fire
code's pattern sensitivity.
Many interleaved error correcting codes have a pattern sensitivity for multiple
short bursts. The 3370 code (see Section 5.2) is such a code. It uses a single symbol
error correcting, double symbol error detecting Reed-Solomon code interleaved to depth
three. Symbols are one byte wide. Its miscorrection probability is 2.2E-16 for all
possible error bursts. However, the miscorrection probability is 2.6E-3 for all possible
errors exceeding code guarantees and affecting a single interleave.
4.7 K-BIT-SERIAL TECHNIQUES
Clocking error-correction circuits once per data bit limits operating speed. To
operate at higher speeds, it is necessary to clock these circuits once per symbol. A
symbol is some convenient cluster of bits, for example a byte or word.
There are at least two ways to do this. A code such as the Reed-Solomon code
can be selected that inherently operates on symbols; or the shift-register for a code
such as the Fire code can be transformed from bit-serial to k-bit-serial. The k-bit-
serial shift register operates on k input bits and accomplishes k bit shifts per clock. A
special case ofk-bit-serial is byte-serial (k=8).
The higher operating speed of k-bit-serial shift registers is attained at the expense
of added complexity.
There are two methods for implementing k-bit-serial shift register divide circuits.
The first method adds the necessary XOR gates to shift k bits per clock. The second
method uses lookup tables of 2^k entries to accomplish k bit shifts per clock.
For both k-bit serial methods discussed in this section, circuitry is shown for
computing the remainder only. If the quotient is required, additional circuitry must be
added.
Recognize that in k shifts of the bit-serial shift register, the bits influencing the
new shift register contents via the feedback network are the high-order k bits. To
determine the contribution of any one of these bits, bit j for example, do the following.
Clear the shift register, set bit j to 1, and shift k times. The resulting shift register
contents are the contribution of bit j.
The other contributor to the new state of each bit, when the shift register is
shifted k times, is the bit itself shifted k bits to the right.
The result of this intuitive development is the basis for the following procedure.
PROCEDURE
Let i represent the polynomial degree and k the desired number of shifts per
clock. The following steps transform a bit-serial shift register into a k-bit-serial shift
register.
Simulate the bit-serial shift register. Initialize the high-order bit of the simulated
shift register to 1 and clear the remaining bits. Shift k times. After each shift, record
the new state of the shift register.
The first state in the sequence recorded is the contribution of shift register stage
x^(i-k) to the feedback network. The second state is the contribution of stage x^(i-k+1), and
so on. The last state in the sequence is the contribution of shift register stage x^(i-1).
The next step is to add circuitry for the k information bits. It will be clear from
the examples how this is done.
EXAMPLE #1
000010001   contribution of x^6
000100010   contribution of x^7
001000100   contribution of x^8
EXAMPLE #2
Divide by x^8 + x^6 + x^5 + x + 1
01100011   contribution of x^5
11000110   contribution of x^6
11101111   contribution of x^7
[Figure: k-bit-serial (k = 3) shift register circuit for EXAMPLE #2]
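The transformation procedure is easily checked by simulation. The following sketch
(the function name is hypothetical) runs the bit-serial register k times from a single
1 in each of the high-order k stages; with g(x) = x^8 + x^6 + x^5 + x + 1 and k = 3 it
reproduces the three contribution patterns of EXAMPLE #2:

    def contributions(g_exponents, degree, k):
        # g_exponents: exponents of g(x) with coefficient 1, excluding x^degree.
        results = []
        for stage in range(degree - k, degree):      # stages x^(i-k) .. x^(i-1)
            reg = [0] * degree
            reg[stage] = 1
            for _ in range(k):                       # k bit-serial shifts, no input
                carry = reg[degree - 1]
                reg = [0] + reg[:-1]                 # multiply by x
                if carry:                            # fold x^degree back in via g(x)
                    for e in g_exponents:
                        reg[e] ^= 1
            results.append(reg)
        return results

    g = {0, 1, 5, 6}                                 # x^8 + x^6 + x^5 + x + 1
    for stage, reg in zip((5, 6, 7), contributions(g, degree=8, k=3)):
        pattern = ''.join(str(bit) for bit in reversed(reg))   # x^7 ... x^0
        print(f"contribution of x^{stage}: {pattern}")
    # prints 01100011, 11000110 and 11101111 for x^5, x^6 and x^7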
TABLE METHOD OF k-BIT-SERIAL IMPLEMENTATION
This method will be illustrated by example. The circuit of this example premultiplies
by x and divides by:
[Figure: 32-bit shift register divide circuit (stages x^0 through x^31) with INPUT DATA]
BYTE-SERIAL TABLE T1
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 14 28 3C 50 44 78 6C AO B4 88 ·9C FO E4 08 CC
10 54 40 7C 68 04 10 2C 38 F4 EO DC C8 A4 BO 8C 98
20 A9 BO 81 95 F9 ED 01 C5 09 10 21 35 59 40 71 65
30 FO E9 05 Cl AD B9 85 91 50 49 75 61 00 19 25 31
40 46 52 6E 7A 16 02 3E 2A E6 F2 CE OA B6 A2 9E 8A
50 12 06 3A 2E 42 56 6A 7E B2 A6 9A 8E E2 F6 CA DE
60 EF FB C7 03 BF AB 97 83 4F 5B 67 73 IF OB 37 23
70 BB AF 93 87 EB FF C3 07 IB OF 33 27 4B 5F 63 77
80 80 99 A5 Bl DO C9 F5 El 20 39 05 11 70 69 55 41
90 09 CD Fl E5 89 90 Al B5 79 60 51 45 29 3D 01 15
AO 24 30 OC 18 74 60 5C 48 84 90 AC B8 04 CO FC E8
BO 70 64 58 4C 20 34 08 lC DO C4 F8 EC 80 94 A8 BC
CO CB OF E3 F7 9B 8F B3 A7 6B 7F 43 57 3B 2F 13 07
DO 9F 8B B7 A3 CF DB E7 F3 3F 2B 17 03 6F 7B 47 53
EO 62 76 4A 5E 32 26 lA OE C2 06 EA FE 92 86 BA AE
FO 36 22 IE OA 66 72 4E 5A 96 82 BE AA C6 02 EE FA
BYTE-SERIAL TABLE T2
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 OA 14 IE 28 22 3C 36 50 5A 44 4E 78 72 6C 66
10 AA AO BE B4 82 88 96 9C FA FO EE E4 02 08 C6 CC
20 54 5E 40 4A 7C 76 68 62 04 OE 10 lA 2C 26 38 32
30 FE F4 EA EO 06 DC C2 C8 AE A4 BA BO 86 8C 92 98
40 A3 A9 B7 BO 8B 81 9F 95 F3 F9 E7 ED DB 01 CF C5
50 09 03 10 17 21 2B 35 3F 59 53 40 47 71 7B 65 6F
60 F7 FO E3 E9 OF 05 CB Cl A7 AD B3 B9 8F 85 9B 91
70 50 57 49 43 75 7F 61 6B 00 07 19 13 25 2F 31 3B
80 46 4C 52 58 6E 64 7A 70 16 lC 02 08 3E 34 2A 20
90 EC E6 F8 F2 C4 CE DO OA BC B6 A8 A2 94 9E 80 8A
AO 12 18 06 OC 3A 30 2E 24 42 48 56 5C 6A 60 7E 74
BO B8 B2 AC A6 90 9A 84 8E E8 E2 FC F6 CO CA 04 DE
CO E5 EF Fl FB CD C7 09 03 B5 BF Al AB 90 97 89 -83
DO 4F 45 5B 51 67 60 73 79 IF 15 OB 01 37 3D 23 29
EO Bl BB A5 AF 99 93 80 87 El EB F5 FF C9 C3 DO 07
FO IB 11 OF 05 33 39 27 20 4B 41 5F 55 63 69 77 70
BYTE-SERIAL TABLE T3
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 04 08 OC 11 15 19 10 22 26 2A 2E 33 37 3B 3F
10 40 44 48 4C 51 55 59 50 62 66 6A 6E 73 77 7B 7F
20 80 84 88 8C 91 95 99 90 A2 A6 AA AE B3 B7 BB BF
30 CO C4 C8 CC 01 05 09 DO E2 E6 EA EE F3 F7 FB FF
40 04 00 OC 08 15 11 10 19 26 22 2E 2A 37 33 3F 3B
50 44 40 4C 48 55 51 50 59 66 62 6E 6A 77 73 7F 7B
60 84 80 8C 88 95 91 90 99 A6 A2 AE AA B7 B3 BF BB
70 C4 CO CC C8 05 01 DO 09 E6 E2 EE EA F7 F3 FF FB
80 08 OC 00 04 19 10 11 15 2A 2E 22 26 3B 3F 33 37
90 48 4C 40 44 59 50 51 55 6A 6E 62 66 7B 7F 73 77
AO 88 8C 80 84 99 90 91 95 AA AE A2 A6 BB BF B3 B7
BO C8 CC CO C4 09 DO 01 05 EA EE E2 E6 FB FF F3 F7
CO OC 08 04 00 10 19 15 11 2E 2A 26 22 3F 3B 37 33
DO 4C 48 44 40 50 59 55 51 6E 6A 66 62 7F 7B 77 73
EO 8C 88 84 80 90 99 95 91 AE AA A6 A2 BF BB B7 B3
FO CC C8 C4 CO DO 09 05 01 EE EA E6 E2 FF FB F7 F3
BYTE-SERIAL TABLE T4
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 45 8A CF 14 51 9E DB 28 60 A2 E7 3C 79 B6 F3
10 15 50 9F OA 01 44 8B CE 3D 78 B7 F2 29 6C A3 E6
20 2A 6F AO E5 3E 7B B4 F1 02 47 88 CD 16 53 9C 09
30 3F 7A B5 FO 2B 6E A1 E4 17 52 90 08 03 46 89 CC
40 11 54 9B DE 05 40 8F CA 39 7C B3 F6 20 68 A7 E2
50 04 41 8E CB 10 55 9A OF 2C 69 A6 E3 38 70 B2 F7
60 3B 7E B1 F4 2F 6A A5 EO 13 56 99 DC 07 42 80 C8
70 2E 6B A4 E1 3A 7F BO F5 06 43 8C C9 12 57 98 DO
80 22 67 A8 EO 36 73 BC F9 OA 4F 80 C5 1E 5B 94 01
90 37 72 BO F8 23 66 A9 EC 1F 5A 95 DO OB 4E 81 C4
AO 08 40 82 C7 1C 59 96 03 20 65 AA EF 34 71 BE FB
BO 10 58 97 02 09 4C 83 C6 35 70 BF FA 21 64 AB EE
CO 33 76 B9 FC 27 62 AO E8 1B 5E 91 04 OF 4A 85 CO
00 26 63 AC E9 32 77 B8 FO OE 4B 84 C1 1A SF 90 05
EO 19 5C 93 06 00 48 87 C2 31 74 BB FE 25 60 AF EA
FO OC 49 86 C3 18 50 92 07 24 61 AE EB 30 75 BA FF
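The table method can be sketched in software as follows. This is a generic illustration
using an arbitrary example polynomial (0x04C11DB7), not the polynomial behind tables
T1-T4 above; one 256-entry table advances the register eight bit-times per step and
gives exactly the same result as the bit-serial circuit:

    def build_table(poly, degree=32):
        # 2^8-entry table: the register contribution of each possible high-order byte.
        mask = (1 << degree) - 1
        table = []
        for top_byte in range(256):
            reg = top_byte << (degree - 8)
            for _ in range(8):
                reg = ((reg << 1) ^ poly) if reg & (1 << (degree - 1)) else (reg << 1)
                reg &= mask
            table.append(reg)
        return table

    def byte_serial_remainder(data, poly, degree=32):
        # Byte-at-a-time division (premultiplied by x^degree) using the table.
        table = build_table(poly, degree)
        mask = (1 << degree) - 1
        reg = 0
        for byte in data:
            reg = ((reg << 8) & mask) ^ table[((reg >> (degree - 8)) & 0xFF) ^ byte]
        return reg

    def bit_serial_remainder(data, poly, degree=32):
        # Reference: the same division performed one bit-time per step.
        mask = (1 << degree) - 1
        reg = 0
        for byte in data:
            reg ^= byte << (degree - 8)
            for _ in range(8):
                reg = ((reg << 1) ^ poly) if reg & (1 << (degree - 1)) else (reg << 1)
                reg &= mask
        return reg

    sample = bytes(range(32))
    assert byte_serial_remainder(sample, 0x04C11DB7) == bit_serial_remainder(sample, 0x04C11DB7)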
4.8 SYNCHRONIZATION
It is common for data storage device track formats to include a sector mark and
one or more sync marks in front of each sector for achieving initial synchronization.
A sector mark is used to establish coarse synchronization to a sector. The sector
mark is unique and very different from data. It may be chosen so that it is impossible
for data to emulate it and very difficult for a defect to emulate it. Sector marks are
generally detected before data acquisition and therefore must be detected asynchronous-
ly. After coarse synchronization has been established, the general location of the sync
mark is known and the search for the sync mark can be restricted to a window spann-
ing the time around which it is expected to occur.
Ideally, the sync mark is unique and we are assured that no combination of valid
channel bits can emulate it. To achieve this, the sync mark might include a run-length
violation or an invalid decode. An invalid decode is a sequence which satisfies the run-
length constraints but which cannot be emulated by any valid combination of channel
words. When the sync mark is unique, the misdetection probability in the absence of
error is zero. In some cases, the sync mark is not unique and there is a valid data bit
sequence which can emulate it, but with sufficiently low misdetection probability. In
such a case there would generally be additional sync mark detection qualification criter-
ia.
In selecting a sync mark strategy, it is desirable to minimize overhead yet maxi-
mize the probability of successful decoding and minimize the probability of false decod-
ing. These conflicting goals require that trade-offs be made in selecting sync mark
parameters. Typical parameters include:
- Detection window width
- Error tolerance of the mark
- Mark length
- Mark pattern
Detection window width and error tolerance of the mark may be changed for retry
reads. A narrow detection window is desirable in order to minimize the probability of
false detection. However, if the detection window is established by a counter running
off a reference clock then spindle speed variations, eccentricity, and mechanical oscilla-
tions will influence timing accuracy and will therefore influence window width as well.
Increasing the error tolerance of the sync mark while keeping its length constant
increases the probability of successful decoding but also increases the possibility of
false decoding.
Increasing the sync mark length decreases the probability of false decoding but in-
creases overhead.
The sync mark pattern is selected to minimize the probability of false decoding
when defects exist within and/or preceding the sync mark. To accomplish this, the
pattern is selected to maximize the number of error bits and/or the error burst length
that are required to cause a sync mark to be falsely detected in front of the true mark.
This selection can be accomplished with a computer.
If we assume that the bit stream preceding and following a mark is random, we
are motivated to use a sequence which does not resemble itself when shifted one or
more bits, so that it is impossible for a small number of errors to cause false detection.
As an illustration, consider a sequence of all '1's. If the bit immediately preceding is
random, there is a 50% chance of falsely detecting this sequence one bit early.
The autocorrelation function of a sequence is used to measure the degree to which
a sequence resembles itself. Conceptually, one copy of the sequence is "slid past"
another. At each offset i, the autocorrelation R(i) is the number of corresponding bits
which are identical minus the number which differ. R(O) is of course equal to the
number of bits in the sequence, n; the maximum value of R(i) is n-|i|, with lower values
being preferred. The class of sequences called Barker codes has the so-called "perfect"
property |R(i)| ≤ 1 for i ≠ 0. Only eight Barker codes are known to exist, with lengths 2,
3, 4, 5, 7, 11, and 13.
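The autocorrelation measure is easy to compute; the short sketch below applies it to the
13-bit Barker code used in the example later in this section:

    def autocorrelation(bits, offset):
        # Matches minus mismatches over the overlap when one copy is slid by 'offset'.
        return sum(1 if a == b else -1 for a, b in zip(bits, bits[offset:]))

    barker13 = [1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1]
    for i in range(len(barker13)):
        print(i, autocorrelation(barker13, i))
    # R(0) = 13; every other |R(i)| is 0 or 1, the Barker 'perfect' property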
BARKER CODES
Barker codes can be combined to form longer codes which have good, though not
"perfect" autocorrelations. To construct such a combined Barker code, each bit of a
Barker code is replaced with the entire sequence of another (possibly the same) Barker
code, the sequence being inverted if the bit being replaced is zero. Longer sequences
with autocorrelations which are nearly as desirable (Barker-like codes) also exist.
In practice, a sync mark is detected by counting the number of matching bits,
without subtracting the number of mismatched bits. The sync mark is considered de-
tected when the count of matching bits meets or exceeds a threshold, which may be
variable so that it can be changed for read retries. In discrete designs, an efficient
implementation may include PROM circuits; in integrated designs, logic gates may be
preferable.
Window width can be increased and misdetection can be reduced by writing a
known bit pattern (preamble) preceding the mark. A mark pattern is then selected for
minimum correlation with the preamble and with itself. This preamble-sync mark com-
bination is equivalent to a sync mark which is detected by searching only for its last
half. An example is 16 zero-bits followed by the 16-bit mark '0001111100110101' (3 zero
bits followed by the 13-bit Barker code) and followed by random data. When detected
in a window from 16 bits before the position of the mark up to 5 bits after and requir-
ing 13 bits (out of 16) to match, this pattern is guaranteed to be detected and guaran-
teed not to be falsely detected when not more than 3 random bits (out of 16) are in
error or when a single error burst of length 3 bits or less exists. There are other
patterns besides this one which have the same error tolerance using the same detection
method.
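The bit-counting detector and the example above can be sketched as follows (a minimal
illustration; the threshold, window, and pattern are those of the example):

    def detect_sync_mark(stream, mark, threshold):
        # Return the first offset where at least 'threshold' of the mark bits match.
        n = len(mark)
        for offset in range(len(stream) - n + 1):
            matches = sum(1 for a, b in zip(stream[offset:offset + n], mark) if a == b)
            if matches >= threshold:
                return offset
        return None

    preamble = [0] * 16
    mark = [int(c) for c in "0001111100110101"]     # 3 zeros + 13-bit Barker code
    data = [1, 0, 1, 1, 0, 0, 1, 0] * 4             # arbitrary data following the mark
    stream = preamble + mark + data
    stream[18] ^= 1                                  # two bit errors inside the mark
    stream[21] ^= 1
    print(detect_sync_mark(stream, mark, threshold=13))   # prints 16, the true position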
Note that a preamble of all one-bits could be used as well, in which case each bit
of the mark would be inverted. The preamble need not be all zero-bits or all one-bits;
satisfactory codes can be selected for any given preamble pattern.
An extension of the above technique would be to write known patterns both pre-
ceding and following the sync mark. Selecting the pattern following the mark for
minimum correlation with the sync mark would increase the acceptable window width
after the position of the mark.
Sync marks can be decoded in either the data-bit domain or the channel-bit do-
main; the error propagation of the RLL decoding process motivates us to decode in the
channel bit domain when possible, particularly if the detection criteria have been re-
laxed to achieve error tolerance. The desire to have error tolerant clock phasing also
motivates us to decode in the channel-bit domain. In this case clock phasing and byte
synchronization are established simultaneously with the detection of the sync mark.
Some implementations do not detect sync marks using a bit-by-bit comparison, but
by comparing groups of bits. This reduces the circuitry required to implement majority-
vote detection. Such a code has been proposed for use in optical disk. The 48 chan-
nel-bit mark is made up of 12 groups of 4 bits, each group containing a single one-bit.
The whole mark obeys (2,7) run-length constraints and is preceded by the highest-
frequency (2,7) pattern. The mark is detected in the channel bit domain using 4-bit
groups. The correlation function for the sync mark sequence against the preamble-sync
mark-random data sequence on a 4-bit basis, counted as the number of matches (plus
the number of possible matches when correlating with random data at positive offsets),
is:

Offset:  -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1
4-bit:     2   3   3   4   4   0   4   4   3   2   2   3   4   0   0

Offset:    0   1   2   3   4   5   6   7   8   9  10  11
4-bit:    12   0   0   5   4   3   2   5   5   4   4   5
[Figure: channel bits feeding a 48-bit shift register. One possible decoding alternative for the X3B11 data field sync code.]
As another example consider a 32 data-bit sync mark that is composed of four 8-
bit groups A, B, C, and D, preceded by all zeros and followed by random data, to be
detected in the data-bit domain when anyone of the pairs A-B, C-D, or A-D is de-
tected. It is possible to construct a mark which will be detected in the presence of a
burst of not more than 9 data bits (out of 32 data bits) and will not be falsely detected
in the presence of a burst of not more than 10 data bits (out of 32 data bits) in length
when detected in a window from 16 bits before to 16 bits after the mark.
Using the same pair-wise detection method in the channel bit domain, it is possible
to construct a 32 channel-bit mark subject to a (1,7) run-length constraint and preceded
by 32 bits of the maximum-frequency (1,7) pattern which will be detected and will not
be falsely detected in the presence of a burst of not more than 9 channel bits (out of
32 channel bits) when detected in the channel-bit domain in this pair-wise fashion.
Similarly, 32-bit marks have been constructed using a (2,7) run-length constraint which
will be detected in the presence of a burst of not more than 9 channel bits (out of 32
channel bits) and will not be falsely detected in the presence of a burst of not more
than 8 channel bits (out of 32 channel bits) in length.
For a given detection method, it is possible to use a computer to select mark
patterns which satisfy the desired error tolerance requirements, if such patterns exist.
The most straightforward method is to successively generate random patterns (using
run-length constraints, if the mark is to be detected in the channel-bit domain), analyze
them, and record the best performers.
RESYNC MARKS
When the probability of loss of synchronization is high, due for example to long
defects, some applications require one or more sync marks preceding each sector and
resync marks interspersed at regular intervals within each sector. The sync marks are
used for achieving initial clock phasing and byte synchronization and the resync marks
are used for restoring clock phasing and byte synchronization after a loss of sync
(when the PLL has slipped cycles).
Many resync marks may be required per sector, so it is very important to minimize
resync mark length to minimize overhead. In clever implementations it is not necessary
for each resync mark to be detected, so the resync mark itself need not be error
tolerant. To minimize the false detection of resync marks, their detection window is
made very narrow. In addition they are typically assigned a channel bit pattern that
cannot be emulated by a channel-bit sequence encoded from data. This guarantees that
correct data will never emulate a resync mark.
4.8.2 SYNCHRONIZATION FRAMING ERRORS
In order to properly frame data, a read system must know where data begins. This
is normally accomplished by detecting a sync mark, a process called byte synchroniza-
tion. A defect can emulate a sync mark at an incorrect position on the media. It is
possible (depending on windowing, etc.) for this to result in an incorrect assumption
about the starting position of data. This is called a synchronization framing error. The
probability of a sync framing error increases as sync mark length decreases, as sync
mark error tolerance increases, and as the length of the sync mark detection window
increases. A sync framing error may be detected as an uncorrectable error or it may
incorrectly cause data to appear correctable or error-free. If the data appears correc-
table or error free, the transfer of undetected erroneous data may result which could
have disastrous consequences.
In order to keep the probability of transferring undetected erroneous data low it is
very important to detect sync framing errors with high probability. In some systems
the responsibility for detecting such errors is placed on the error detection and correc-
tion circuitry.
Most codes used for error detection and correction in data storage systems for
computers are shortened cyclic codes. Cyclic codes are linear codes with the property
that each cyclic (i.e. wraparound) shift of each codeword is also a codeword. Shortened
cyclic codes are not truly cyclic. However, the codewords of a shortened cyclic code
when shifted (left or right) a few symbol positions will either form another shortened
codeword or form a sequence that differs from another shortened codeword in only a
few symbol positions. This property of shortened cyclic codes causes them to have poor
detection capability for sync framing errors.
Shortened cyclic codes are often modified by some method in order to increase
their capability to detect sync framing errors. The degree to which capability of the
modified code to detect sync framing errors is increased depends highly on the specific
method of modification selected.
Ideally, a code modification method will assure that all sync framing errors result
in an error pattern that exceeds correction guarantees but not detection guarantees of
the code. If this cannot be achieved, then as a very minimum it is desirable that the
probability of transferring undetected erroneous data be no greater for sync framing
errors than for all other types of errors that exceed detection guarantees.
Some frequently used codes for performing error detection and/or correction are
listed below. In some cases these codes are cyclic codes but most often they are
shortened cyclic codes. Problem analysis and the selection of a method for code mod-
ification is similar between the different types of codes.
• Error detection codes using a polynomial with binary coefficients.
• Single-burst error correcting codes using a polynomial with binary coeffi-
cients.
• Single- and multiple-burst error correcting Reed-Solomon codes.
• Interleaved Reed-Solomon codes.
Binary error detection/correction codes operate on single-bit symbols while Reed-
Solomon codes operate on multiple-bit symbols, typically byte-wide (eight-bit) symbols.
Reed-Solomon codes are cyclic but only on a symbol basis: cyclic rotation of a Reed-
Solomon codeword by a number of bits which is not a multiple of the symbol width does
not generally produce another codeword; an obvious counter-example is the all-zeros
Reed-Solomon codeword. This property allows us to discuss binary codes and non-
interleaved Reed-Solomon codes together. We shall then apply similar methods to
interleaved Reed-Solomon codes.
Let us use the following notation to represent a non-interleaved codeword of a
binary code or a Reed-Solomon code:
···pppddd···dddrrr···rrrggg···
where 'p' is a preamble/sync symbol, 'd' is a data symbol, 'r' is a redundancy symbol,
and 'g' is a gap symbol. '0' will represent a symbol whose bits are all zeros, '1' will
represent a symbol whose bits are all ones, and 'X' will represent a symbol whose bits
are neither. In the case of a Reed-Solomon code, each symbol is a group of w bits. In
the case of a binary error correction code, each symbol is one bit (w= 1).
Consider the case of late synchronization by one symbol. There are four combinations
for the values of the first data symbol of the codeword written (which is missed on read)
and the first gap symbol (which is read as the last symbol of the codeword). In the first
case both are zero: the pattern read is then a multiple of the codeword written. This is
also a codeword, so the pattern read appears to be error free and the sync framing error
is not detected.
Codeword read
I I I
2 ) ••• pppXdd· •• dddrrr· •• rrrOgg· ••
The pattern read is a multiple of the codeword written with a symbol in error at
symbol -1 of the codeword (i.e. the symbol before the first data symbol of the code-
word). When shortened codewords are used, the error appears to be outside the bounds
of the codeword and the correction algorithm will post it as uncorrectable.
In random data the probability that the first symbol of a codeword is zero is 2^-w,
so from analyses 1) and 2) above we conclude that this is also the probability that
the read pattern will appear to be error free when synchronization occurs late by one
symbol and a zero symbol is read following the codeword. By similar reasoning, if
synchronization occurs late by k symbols and the first k gap symbols are all zeros then
the codeword read will appear to be error free if the first k symbols of the codeword
written were all zeros. This should occur in random data with probability 2^-(k*w).
If synchronization occurs late by k symbols, the first k gap symbols are all zeros,
and the first k symbols of the codeword written were not all zeros, then there will
appear to be an error burst of length k or fewer symbols preceding the codeword read.
If the guaranteed detection capability of the code is equal to or greater than the ap-
parent error created by the pattern of non-zero symbols missed, then there will appear
to be an error burst of length k or fewer symbols preceding the codeword read and the
correction algorithm will post the error as uncorrectable, since the error burst appears
to be beyond the bounds of the shortened codeword. If in the same situation the
apparent error created by the pattern of non-zero symbols missed exceeds the guaran-
teed detection and correction capabilities of the code, then the error will appear to be
correctable with probability Pmc, where Pmc is the miscorrection probability of the
code. Equivalently, the error will appear to be uncorrectable with probability 1-Pmc.
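The four combinations can be checked numerically for a binary code (w = 1). The sketch
below uses an arbitrary 16-bit polynomial and hypothetical helper names; a codeword is
formed by appending the 16 check bits, and the "read" word is the written codeword
slipped one symbol late with one gap bit picked up at the end:

    def remainder(bits, poly=0x1021, degree=16):
        # Remainder of bits(x) * x^degree divided by the generator polynomial.
        reg, mask = 0, (1 << degree) - 1
        for b in bits:
            feedback = ((reg >> (degree - 1)) & 1) ^ b
            reg = ((reg << 1) & mask) ^ (poly & mask if feedback else 0)
        return reg

    def make_codeword(data_bits):
        # Append the check bits so that remainder(codeword) == 0.
        check = remainder(data_bits)
        return data_bits + [(check >> i) & 1 for i in range(15, -1, -1)]

    for first_bit, gap_bit in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        codeword = make_codeword([first_bit] + [1, 0, 1, 1, 0, 0, 1] * 8)
        late_read = codeword[1:] + [gap_bit]        # synchronization one bit late
        status = "zero syndrome" if remainder(late_read) == 0 else "non-zero syndrome"
        print(first_bit, gap_bit, status)

Only the (0, 0) combination reads back with a zero syndrome, matching case 1) above;
the other three produce non-zero syndromes, corresponding to the apparent error patterns
described in cases 2) through 4).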
Codeword read
I I I
3) ••• pppOdd· •• dddrrr· •• rrrXgg· ••
The pattern read is that of a multiple of the codeword written with a symbol in
error in the last symbol position. A code which performs only error detection will
therefore detect the sync framing error, but an error correction code will not.
Codeword read
I I I
4 ) ••• pppXdd· •• dddrrr· •• rrrXgg· ••
The read remainder will be that of two symbols in error, one at the symbol before
the first symbol of the codeword and one at the last symbol of the codeword. If this
double-burst error is within the detection guarantees of the code, then an uncorrectable
error will be posted by the error correction algorithm. If this error pattern exceeds
the detection guarantees of the code, then the error will appear to be correctable with
probability Pmc and will appear to be uncorrectable with probability 1-Pmc.
By similar reasoning, if synchronization occurs late by k symbols, the first k gap
symbols are not all zeros, and the guaranteed correction capability of the code is equal
to or greater than the number of non-zero gap symbols, then the codeword read will
appear to be correctable if the first k symbols of the codeword written were all zeros.
This should occur in random data with probability 2^-(k*w).
If synchronization occurs late by k symbols, the first k gap symbols are not all
zeros, and the first k symbols of the codeword written were not all zeros, then there
will appear to be an error burst preceding the codeword read and an error burst at the
end of the codeword. If this double-burst error is within the detection guarantees of
the code, then an uncorrectable error will be posted by the error correction algorithm.
If this error pattern exceeds the detection guarantees of the code, then the error will
appear to be correctable with probability Pmc and will appear to be uncorrectable with
probability 1-Pmc.
Consider the case of early synchronization by one symbol. Again there are four
combinations for the values of the preamble/sync symbol read and the redundancy
symbol missed.
Codeword read
I I I
1) ••• ppOddd· •• dddrrr· •• rrOggg· ••
The pattern read is a multiple of the codeword written. The sync framing error
will not be detected.
Codeword read
I I I
2 ) ••• ppOddd· •• dddrrr· •• rrXggg· ••
The read remainder will be that of a single symbol in error at a location cor-
responding to the first symbol of the full-length codeword. Since this is beyond the
bounds of the shortened codeword, the error correction algorithm will post an uncorrec-
table error. Given random data, the probability that the last redundancy symbol is zero
is 2^-w, so this is also the probability that the read pattern will appear to be error free
when synchronization occurs early by one symbol and a zero symbol is read preceding
the codeword read. By similar reasoning, if synchronization occurs early by k symbols
and the last k preamble/sync symbols are all zeros then the codeword read will appear
to be error free if the last k symbols of the codeword written were all zeros. This
should occur in random data with probability 2^-(k*w).
If synchronization occurs early by k symbols, the last k preamble/sync symbols are
all zeros, and the last k symbols of the codeword written were not all zeros, then on
read there will appear to be an error burst of length k or fewer symbols near the
beginning of the full-length codeword. If the guaranteed detection capability of the
code is equal to or greater than the apparent error created by the pattern of non-zero
symbols missed, then the correction algorithm will post an uncorrectable error since the
errors appear to be outside the bounds of the shortened codeword. If the apparent
error created by the pattern of non-zero symbols missed exceeds the guaranteed detec-
tion capability of the code, then the error will appear to be correctable with probability
Pmc and will appear to be uncorrectable with probability 1-Pmc.
If synchronization occurs early by k symbols, the last k preamble/sync symbols are
not all zeros and the correction capability of the code is equal to or greater than the
number of non-zero preamble/sync symbols read, then the codeword read will appear to
be correctable if the last k symbols of the codeword written were all zeros. This
should occur in random data with probability 2^-(k*w).
The pattern read is that of a multiple of the codeword written with a symbol in
error at the first symbol of the codeword read. A code used only for error detection
would therefore detect the sync framing error while an error correction code would not.
Codeword read
I I I
4) ••• ppXddd· •• dddrrr' •• rrXggg· ••
The read remainder will be that for two symbols in error, one at the first symbol
of the full-length codeword and one at the first symbol of the shortened codeword. If
this double-burst error is within the detection guarantees of the code, then an uncor-
rectable error will be posted by the error correction algorithm. If this error pattern
exceeds the detection guarantees of the code, then the error will appear to be correc-
table with probability Pmc and will appear to be uncorrectable with probability 1-Pmc.
By similar reasoning, if synchronization occurs early by k symbols, the last k pre-
amble/sync symbols are not all zeros, and the guaranteed correction capability of the
code equals or exceeds the number of non-zero preamble/sync symbols read, then the
codeword read will appear to be correctable if the last k symbols of the codeword
written were all zeros. This should occur in random data with probability 2^-(k*w). If
the last k symbols of the codeword written were not all zeros, then on read there will
appear to be an error burst at the beginning of the full-length codeword and an error
burst at the beginning of the shortened codeword. If this double-burst error is within
the detection guarantees of the code, then an uncorrectable error will be posted by the
error correction algorithm. If this error pattern exceeds the detection guarantees of
the code, then the error will appear to be correctable with probability Pmc and will
appear to be uncorrectable with probability 1-Pmc.
In the case of late synchronization by one symbol, after read inversion the pattern
read is:
Codeword read
I I I
···pppDdd···dddDdd···dddrrr···rrrggg···
I I I
m re-inverted symbols
The read remainder will reflect one error at symbol m-1 of the codeword read, and
may reflect errors at the symbol before the first data symbol and at the last symbol of
the codeword read, depending on the value of the first symbol of the codeword written
and the value of the gap symbol read as part of the codeword read, respectively.
Let us examine the four combinations for late synchronization by one symbol when
the ECC shift register is initialized to all ones.
Codeword read
1) ···pppOdd···dddDdd···dddrrr···rrrOgg···
(m re-inverted symbols; 1 non-re-inverted symbol)
When the bits of the skipped data symbol were all ones, the inversion caused it to
appear as a zero on write, so the read remainder reflects only the symbol in error at
symbol m-1 of the codeword read. The sync framing error will go undetected by an
error correction code.
Codeword read
2) ···pppXdd···dddDdd···dddrrr···rrrOgg···
(m re-inverted symbols)
When the bits of the skipped data symbol were not all ones then the inversion
causes the read remainder to appear to be that of two symbols in error, one at symbol
-1 and one at symbol m-l of the codeword read. If this double-burst error pattern is
within the detection guarantees of the code, then an uncorrectable error will be posted
by the error correction algorithm. If this error pattern exceeds the detection guaran-
tees of the code, then the error will appear to be correctable with probability Pmc and
will appear to be uncorrectable with probability 1-Pmc. For random data all bits of the
first data symbol will be ones with probability 2^-w and therefore this is the probability
that late synchronization by one symbol will be undetected by a code guaranteed to
detect a double-burst error when a zero gap symbol is read. Under similar assumptions
the probability is 2^-(k*w) that late synchronization by k symbols will be undetected.
Codeword read
3) ···pppOdd···dddDdd···dddrrr···rrrXgg···
(m re-inverted symbols; 1 non-re-inverted symbol)
When the bits of the skipped data symbol were all ones and the first gap symbol is
non-zero, the inversion causes the read remainder to appear to be that of two symbols
in error, one at symbol m-l and one at the last symbol of the codeword. If this
double-burst error pattern is within the detection guarantees of the code, then an
uncorrectable error will be posted by the error correction algorithm. If this error
pattern exceeds the detection guarantees of the code, then the error will appear to be
correctable with probability Pmc and will appear to be uncorrectable with probability
1-Pmc.
Codeword read
4) ···pppXdd···dddDdd···dddrrr···rrrXgg···
(m re-inverted symbols)
When the bits of the first data symbol were not all ones and the first gap symbol
is non-zero, the inversion causes the read remainder to appear to be that of three
symbols in error, one at symbol -1, one at symbol m-I, and one at the last symbol of
the codeword. If this triple-burst error pattern is within the detection guarantees of
the code, then an uncorrectable error will be posted by the error correction algorithm.
If this error pattern exceeds the detection guarantees of the code, then the error will
appear to be correctable with probability Pmc and will appear to be uncorrectable with
probability 1-Pmc. Late synchronization by k symbols can be analyzed in a similar man-
ner.
In the case of early synchronization by one symbol, after read inversion the pat-
tern read is:
Codeword read
I I I
···ppPddd···ddDddd···dddrrr···rrrggg···
I I
I
m re-inverted symbols
The read remainder will reflect one error at symbol m of the codeword read, and
may reflect errors at symbol 0 and at the first symbol of the full-length codeword,
depending on the value of the preamble/sync symbol .read as part of the codeword read
and the value of the written redundancy symbol, respectively.
Analysis of early synchronization when the ECC shift register is initialized to all
ones is affected in much the same way as that of late synchronization. It is left as an
exercise for the reader.
We are motivated to use for the initialization pattern a sequence which does not
resemble itself when shifted one or more symbols, so that many errors result when read
inversions are not perfectly aligned with write inversions as a result of a sync framing
error. Pattern selection is influenced by the symbol patterns written immediately before
(preamble/sync symbols) and after (gap symbols) the codeword symbols. Assuming no
errors other than those causing the sync framing error it is possible to use simulation
to determine for each candidate initialization pattern and given conditions an integer k
such that all sync framing errors caused by synchronizing up to k symbols early or late
will be detected. k will be a function of the candidate initialization pattern, the poly-
nomial, the symbol patterns written immediately before a codeword (preamble/sync
symbols) and after a codeword (gap symbols), and the record lengths. It is also possible
to find integers k and b for each candidate initialization pattern and given conditions
such that all sync framing errors caused by synchronizing up to k symbols early or late
will be detected even if there is a burst of length b or fewer symbols anywhere within
the codeword, or to find integers k and e such that all sync framing errors caused by
synchronizing up to k symbols early or late will be detected even if there are e random
symbols in error.
where 'p' is a preamble/sync symbol, 'd', 'e', and 'f' are data symbols of the three
codewords, 'r', 's', and 't' are redundancy symbols of the three codewords, and 'g' is a
gap symbol.
Consider the case of late synchronization by one symbol:
   Codewords read:
   ···pppdef···defrst···rstggg···
The second and third codewords written are read as the first and second codewords and
contain no errors caused by the sync framing error. The first codeword written is read
as the third codeword. The same analysis performed above for late synchronization by
one symbol of a single codeword applies to the apparent third codeword.
Consider the case of early synchronization by one symbol.
   Codewords read:
   ···pppdef···defrst···rstggg···
The first and second codewords written are read as the second and third codewords and
contain no errors caused by the sync framing error. The third codeword written is read
as the first codeword. The same analysis performed above for early synchronization by
one symbol of a single codeword applies to the apparent first codeword.
Analysis of late or early synchronization by any number of bits can be performed
in a similar fashion; there is no qualitative difference in the effect on individual code-
words between the interleaved and non-interleaved cases given the same amount of sync
slippage per codeword.
The effect of initializing the ECC shift register to all ones can be extrapolated
from the non-interleaved to the interleaved case in the same way:
Codewords written
I I I
···pppDEF···DEFdef···defrst···rstggg···
I I I
3*m inverted symbols
If there is no sync framing error on read then the inversions cancel:
Codewords read
I I I
···pppdef···defdef···defrst···rstggg···
I I I
3*m re-inverted symbols
If a sync framing error occurs then read inversions will cancel write inversions
except at the end points of the inversion.
In the case of late synchronization by one symbol, after read inversion the pattern
read is:
Codewords read
I
I I
···pppDef···defDef···defrst···rstggg···
I I I
3*m re-inverted symbols
Aside from misidentification, two of the codewords are not affected by the sync
framing error. The read remainder for the other will reflect one error at symbol m-1
of the codeword read, and may reflect errors at the symbol before the first data symbol
and at the last symbol of the codeword read, depending on the value of the first symbol
of the codeword written and the value of the gap symbol read as part of the codeword
read, respectively. The rest of the analysis is identical.
In the case of early synchronization by one symbol, after read inversion the pat-
tern read is:
Codewords read
I I I
···ppPdef···deFdef···defrst···rstggg···
I I I
Again aside from misidentification, two of the codewords are not affected by the
sync framing error. The read remainder for the other will reflect one error at symbol
m of the codeword read, and may reflect errors at symbol 0 and at the first symbol of
the full-length codeword, depending on the value of the preamble/sync symbol read as
part of the codeword read and the value of the written redundancy symbol, respectively.
The rest of the analysis is identical.
Analysis of late or early synchronization by any number of bits can be performed
in a similar fashion; there is no qualitative difference in the effect on individual code-
words between the interleaved and non-interleaved cases given the same amount of sync
slippage per codeword.
Initializing the ECC shift register to all ones (or inverting all redundancy symbols)
behaves no differently in the interleaved case than in the non-interleaved case, so use
of a specially selected pattern is again called for. When a high degree of interleaving or a
code of high degree is used, it might be permissible to initialize a selected set of
symbol-wide registers to all ones (or to invert a selected set of redundancy symbols).
However, best results would be achieved if each bit of the ECC shift register could be
independently initialized to one (or a selected set of redundancy bits could be inverted).
RANDOMIZING DATA
More complete protection against sync framing errors can be achieved by im-
plementing a shift register which generates a pseudo-random sequence, which is initial-
ized to a known state before writing or reading each data record. The EXCLUSIVE-OR
sum of the data-bit stream and the pseudo-random-bit sequence is fed to the ECC shift
register instead of the data bit stream itself. Again an all-zeros data record produces
non-zero redundancy, and if no sync framing error occurs the effects of the pseudo-
random bit sequence on write and read cancel out. A sync framing error of any number
of bits except the period of the pseudo-random sequence can be guaranteed to produce
errors throughout the data record in excess of the correction capability of the EDAC
code, so a sync framing error is no more subject to misdetection than any other uncor-
rectable error.
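A minimal software sketch of this idea follows. It is not taken from any particular
implementation: the 16-bit LFSR taps and the seed value are illustrative choices, and a
real design would pick a sequence whose period comfortably exceeds the record length.
The generator is reset to the same state before each write and read, and its output is
XORed with the data-bit stream ahead of the ECC shift register.

    #include <stdint.h>

    /* Illustrative data randomizer: a 16-bit Fibonacci LFSR (taps 16,14,13,11)
     * produces one pseudo-random bit per data bit.  XORing that bit with the
     * data before it reaches the ECC shift register (and again on read) leaves
     * the data unchanged when framing is correct, but turns any sync framing
     * error into gross, easily detected corruption. */
    static uint16_t lfsr;

    void randomizer_reset(void)            /* call before each write or read  */
    {
        lfsr = 0xACE1u;                    /* any agreed-upon non-zero seed   */
    }

    int randomize_bit(int data_bit)        /* returns data_bit XOR PRBS bit   */
    {
        unsigned prbs = lfsr & 1u;
        unsigned fb = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (uint16_t)((lfsr >> 1) | (fb << 15));
        return data_bit ^ (int)prbs;
    }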
the correction algorithm will still raise an uncorrectable error flag. If the error pat-
tern exceeds all correction and detection guarantees of the code, the sync framing error
will appear to be correctable with probability Pmc and will appear to be uncorrectable
with probability 1-Pmc. As the amount of synchronization slippage increases, the length
of the apparent error burst(s) also increases.
Consider the case where the sync mark is protected by the error detection/correc-
tion code and synchronization occurs early by one or more symbols. The pattern read
will appear to be that of a multiple of the codeword written plus some error pattern of
about the same length as the sync mark at a location which includes the symbols of the
assumed sync mark plus one or more symbols following the assumed sync mark. If the
error pattern does not exceed the correction guarantees of the code, the correction
algorithm will detect the presence of error in the assumed sync mark and raise an
uncorrectable error flag, since an error at the location of the sync mark implies that
the original detection of the sync mark was mistaken and a sync framing error must
have occurred. If the error pattern exceeds the correction guarantees but not the
detection guarantees of the code, the correction algorithm will still raise an uncorrec-
table error flag. If the error pattern exceeds all correction and detection guarantees of
the code, the sync framing error will appear to be correctable with probability Pmc and
will appear to be uncorrectable with probability 1-Pmc. As the amount of synchroniza-
tion slippage increases, the length of the apparent error burst also increases.
CONCLUSIONS
Based on the material presented above DST recommends that all cyclic and shor-
tened cyclic error detection and correction codes be modified by either:
(1) Initializing the ECC shift register to a specially selected pattern prior to each
write and read, or
Including the sync mark in the bits covered by the error detection/correction code
and insuring that codewords are preceded and followed by non-zero symbols could
provide additional protection.
The measures (a), (b), and (c) below have been used in the past to provide in-
creased sync framing error protection. If economic reasons dictate the use in new
designs of existing IC's or other hardware for which it is not feasible to implement (1),
(2) or (3) above, DST recommends the use of all of the provisions for sync framing error
protection (a)-(c) below whose implementation is possible:
(a) Initializing the ECC shift register to all ones prior to each write or read.
(b) Including the sync mark within the ECC check on each write and read.
(c) Insuring that codewords are preceded and followed by non-zero symbols.
4.9 INTERLEAVED, PRODUCT, AND REDUNDANT-SECTOR CODES
4.9.1 INTERLEAVED CODES
Interleaving is a technique used to geographically disperse data for each codeword
over a larger area of media in order to spread error bursts over multiple codewords. In
this way, the error contribution to any one codeword from a long defect is minimized.
As an example, consider a two-dimensional array with C bytes per row and N bytes
per column, in which each column is a codeword of a Reed-Solomon code. As bytes are
written to the media, they are also processed by the redundancy-generating circuitry.
Bytes 0, C, 2C, etc. are processed by the circuitry for interleave O. Bytes 1, C + 1, 2C + 1,
etc. are processed by the circuitry for interleave 1, and so on. As bytes are read,
operation is identical except that syndromes are generated rather than redundancy. If
necessary, the correction algorithm is performed, after which the data is released to the
host. In this example, any error burst must span more than C bytes before affecting
more than one byte from any one codeword (interleave).
                      D0      D1      D2      ···     DC-1
                      DC      DC+1    ···     ···     ···
   K data bytes       ···     ···     ···     ···     ···
   per codeword       ···     ···     ···     ···     ···
                      R0      R1      R2      ···     RC-1
   N-K redundant      RC      RC+1    ···     ···     ···
   bytes/codeword     ···     ···     ···     ···     ···

                      |·············· C codewords ·············|
There are many interleaving techniques. Selection of a technique for a particular
application involves tradeoffs between cost, code performance, transfer rate, block size,
and correction time.
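The byte-to-interleave mapping implied by the array above is simple enough to show
directly. The fragment below is only a sketch with illustrative dimensions (C
interleaves of N symbols each); it places byte i of the serial stream into interleave
i mod C as symbol i/C, which is the routing the redundancy and syndrome circuitry
performs on the fly.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative de-interleave: byte i of the stream is symbol i/C of
     * codeword (interleave) i%C.  A burst must span more than C consecutive
     * stream bytes before it can touch any one codeword twice. */
    enum { C = 3, N = 8 };                     /* C codewords, N symbols each */

    void deinterleave(const uint8_t stream[C * N], uint8_t codeword[C][N])
    {
        for (size_t i = 0; i < (size_t)(C * N); i++)
            codeword[i % C][i / C] = stream[i];
    }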
4.9.2 PRODUCT CODES
Product codes perform error correction on a block of data in more than one
dimension. Consider an array of symbols organized into rows and columns, with each
row treated as a codeword of some code CI and each column as a codeword of another
(possibly the same) code C2. The resulting overall code is called a product code. It is
common to see Reed-Solomon codes used as the component codes of product codes.
There are many techniques for loading and unloading the array of a product code.
As an example, consider an array which on write is loaded one row at a time from the
source. After all redundancy in both dimensions has been calculated, the array is
unloaded diagonally to the device. On read, the data from the device is loaded diagon-
ally, then after correction, the array is unloaded one row at a time to the destination.
The diagonal unloading and loading accomplishes geographical dispersion of data in a
manner that minimizes the number of error bytes that a long burst can contribute to
any codeword in either dimension.
        +------------------+------------------+
        |                  |       ROW        |
        |       DATA       |    REDUNDANCY    |
        +------------------+------------------+
        |      COLUMN      |    CHECKS ON     |
        |    REDUNDANCY    |    REDUNDANCY    |
        +------------------+------------------+
There are many decoding techniques for product codes, one of which is to correct
rows first, then correct columns. Another technique is to iterate row and column
correction; errors in an uncorrectable codeword from one dimension may belong to
correctable codewords in the other dimension, and after they are corrected, the uncor-
rectable codeword may become correctable. Another technique is to combine
row/column iteration with erasure correction; the row [column] numbers of codewords in
error are used as erasure pointers for column [row] correction. There are other decod-
ing techniques for product codes as well. The correction capability of product codes is
very dependent on the precise decoding techniques used.
Product codes have been popular with the Japanese companies and have been
implemented on a number of digital audio products, including both optical disk and
magnetic tape products for consumer and commercial use.
Redundant-sector codes can handle very long error bursts. As an example, con-
sider an implementation with one redundant sector for each K data sectors. Each
sector has its own sync field, and uses CRC for error detection. Each byte of the
redundant sector is generated by EXCLUSIVE-OR-ing together the corresponding bytes
of the K data sectors, i.e., computing a parity sector. If, on reading a data sector, a
CRC error is detected, its contents can be regenerated by EXCLUSIVE-OR-ing the
remaining data sectors with the redundant sector. This technique can correct even a
long burst which wipes out a sync mark.
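The parity-sector operations are just byte-wise EXCLUSIVE-OR. The sketch below uses
illustrative sizes (K data sectors of SECTOR bytes each) and assumes the per-sector CRC
has already identified which sector, if any, is bad.

    #include <stdint.h>
    #include <stddef.h>

    enum { K = 8, SECTOR = 512 };              /* illustrative geometry */

    /* Redundant (parity) sector = XOR of the K data sectors. */
    void build_parity(const uint8_t data[K][SECTOR], uint8_t parity[SECTOR])
    {
        for (size_t j = 0; j < SECTOR; j++) {
            uint8_t p = 0;
            for (size_t i = 0; i < K; i++)
                p ^= data[i][j];
            parity[j] = p;
        }
    }

    /* Rebuild the sector flagged bad by its CRC from the others plus parity. */
    void rebuild_sector(uint8_t data[K][SECTOR], const uint8_t parity[SECTOR],
                        size_t bad)
    {
        for (size_t j = 0; j < SECTOR; j++) {
            uint8_t p = parity[j];
            for (size_t i = 0; i < K; i++)
                if (i != bad)
                    p ^= data[i][j];
            data[bad][j] = p;
        }
    }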
[Figure: K data sectors (#1 through #K) followed by one redundant (parity) sector.]
An extension of this technique is to use interleaving, e.g., one redundant sector for
even sectors and another for odd sectors. This will allow correction of a long burst
spanning any two adjacent sectors, or correction of any two random sectors in error
provided that one is even and the other is odd.
[Figure: even- and odd-interleave redundant sectors covering alternating data sectors.]
Redundant-sector techniques can be combined with other ECC techniques to form a
more powerful EDAC scheme. For example, the CRC shown for each sector can be re-
placed with an ECC code which can correct single (or multiple) small bursts at the
sector level, and redundant-sector techniques can be used to correct the much lower
rate of long bursts. This is in effect a product code, with the individual sectors com-
prising the row codewords and corresponding bytes from the individual sectors compris-
ing the column codewords.
Redundant-sector techniques have been used on Bernoulli disks, read-only optical
disks, and numerous tape devices.
CHAPTER 5. SPECIFIC APPLICATIONS
In the early 1970's, 32-bit Fire codes were widely used for error correction in
OEM magnetic disk devices. These codes were easy to define and required a moderate
amount of hardware to implement. However, their sensitivity to multiple short bursts
posed a serious data accuracy problem. Some error recovery procedures in use at the
time performed correction on soft as well as hard errors; this worsened the problem.
By the late 1970's, many companies had dropped 32-bit Fire codes in favor of 32-bit
computer-generated codes that were selected to be insensitive to multiple short bursts.
They also changed their error recovery procedures to correct hard errors only, and had
taken other steps to achieve better data accuracy. As the 5 1/4 inch hard disk industry
developed, form factor pressure on controller builders pushed implementation efficiency
to the point where 32-bit computer-generated codes were implemented using just five
and one-half standard TTL IC's.
Over the last four or five years, many hard disk developers have implemented the
(2,7) RLL modulation code. The error-propagation properties of this code necessitate a
larger correction span, which has prompted many companies to switch to more powerful
48-bit, 56-bit and 64-bit computer-generated ECC codes to maintain good data accuracy.
Several hard disk controller IC's developed during this period implement programmable
polynomial generators that support 48-bit codes, and at least one supports codes up to
64 bits in length.
Hard disk controller IC developers (including Cirrus Logic) are now incorporating
two symbol error correcting Reed-Solomon codes in their new designs in order to handle
higher raw-error-rate media by correcting two independent error bursts within a sector.
In 1970, IBM introduced the 3330 magnetic disk drive, which uses a 56-bit Fire
code to correct single 11-bit bursts in variable length records of up to approximately
13,000 bytes. This code's generator polynomial was selected to allow fast computation
of error location using the Chinese Remainder Theorem. However, the structure that
permitted fast correction also introduced a pattern sensitivity to multiple short bursts.
The IBM 3340 (the first drive using Winchester technology) and 3350 magnetic disk
drives were introduced in 1973 and 1975, respectively. They use the same 48-bit Fire
code to correct single bursts (up to 4 bits for the 3340, 5 bits for the 3350) in variable
record lengths (up to 19,000 bytes for the 3350).
In 1979, IBM introduced the 3370 magnetic disk drive, which employs a three-way
interleaved, Reed-Solomon code on fixed-length sectors of 512 bytes. Three redundancy
bytes are used in each of the three interleaves, giving single symbol (byte) error cor-
rection and double symbol error detection in each interleave. IBM uses this code to
guarantee the correction of any single burst of 9 bits or less, the detection of any
single burst of 41 bits or less, and the detection of any two bursts, each of 17 bits or
less.
This code has a high miscorrection probability for cases in which multiple short
bursts cause a single interleave to have more than two symbols in error. The existence
of this pattern sensitivity is clear when one considers that for all possible errors, a
sector has nine bytes of redundancy protecting it from miscorrection, versus only three
bytes of redundancy for single-interleave errors.
In 1980, IBM introduced the 3380 magnetic disk drive, which employs a
Reed-Solomon-like, two-way interleaved code to correct single bursts in variable length
records of up to approximately 48,000 bytes. Operating on 16-bit symbols, this code
will correct any single burst contained within two contiguous symbols and detect any
single burst contained within three contiguous symbols. Twelve bytes of redundancy are
used; four bytes are associated with each interleave, and an additional four bytes are
shared between the two interleaves. This sharing of redundancy between interleaves
reduces the miscorrection probability for single-interleave errors.
In 1987, IBM announced the 3380K magnetic disk drive, which employs a novel
multiple-burst error correcting code that dedicates more than six percent redundancy to
error detection and correction and accommodates a raw error rate much higher than for
earlier versions of the 3380. Other features of the code include minimum data delay
and a unique supplementary error detection method. The higher track densities achieved
by the 3380K may have motivated IBM to use multiple-burst correction. DST expects to
see even more powerful codes of the same class implemented on future high-end mag-
netic (and possibly optical) devices.
less expensive of the two methods. We expect IBM to continue to use this code as new
versions of the 3480 are offered. Other companies developing eighteen-track magnetic
cartridge tape products are likely to use it as well.
DST expects to continue to be at the forefront of EDAC technology for optical
storage. We support the use of long distance Reed-Solomon codes for 90 mm and 130
mm WORM and rewritable optical media and developed our NG-8510, NG-8520, and CL-SH8530
IC's especially for this application. The NG-8510/8520 approach splits the error correc-
tion task between logic within the IC and logic within support software. The
CL-SH8530 performs correction real-time in hardware.
5.2 APPLICATION TO LARGE-SYSTEMS MAGNETIC DISK
   ECC Bits                              56    48    48    32    72
   Correction Span                       11     3     4    11     9
   Detection Span Before Correction      56    48    48    32    65
   Published Det. Span After Corr.       22    11    10    32    16

   3340/3350
   (x^21 + 1)·(x^11 + x^2 + 1)
   Field Generator:  x^8 + x^6 + x^5 + x^4 + 1
   Code Generator:  (x + a^0)·(x + a^1)·(x + a^-1)
5.2.2 THE 3330 MAGNETIC DISK CODE
CODE DEFINITION
The 3330 code is a generalized Fire code. It has a single-burst correction span of
11 bits and a single-burst detection span of 22 bits. Decoding uses the Chinese Re-
mainder Method for displacement calculation. This method requires only a fraction of
the shifts required by clocking around a sequence. The code is defined by the following
polynomial:
g(x) = x^56 + x^55 + x^49 + x^45 + x^41 + x^39 + x^38 + x^37
     + x^36 + x^31 + x^22 + x^19 + x^17 + x^16 + x^15
     + x^14 + x^12 + x^11 + x^9 + x^5 + x + 1
CORRECTION PROCEDURE
(1) Shift P0 until the 11 high order bits of the shift register are zeros and the
    lowest order bits of the shift register are nonzero. Save the shift count
    (n0). If the above alignment is not achieved in less than 22 shifts, the error
    is uncorrectable.
(2) Shift P1 (with feedback) until a match with P0 is achieved. Save the shift
    count (n1). If a match is not found in less than 89 shifts, the error is
    uncorrectable.
(3) Shift P2 (with feedback) until a match with P0 is achieved. Save the shift
    count (n2). If a match is not found in less than 13 shifts, the error is
    uncorrectable.
(4) Shift P3 (with feedback) until a match with P0 is achieved. Save the shift
    count (n3). If a match is not found in less than 23 shifts, the error is
    uncorrectable.
The error pattern is in P0. The error displacement, measured from the last
check bit to the last error bit (low order of P0), is given by:
    d = [-(k0*n0 + k1*n1 + k2*n2 + k3*n3)] MOD e
where,
k0 = 452,387
k1 = 72,358
k2 = 315,238
k3 = 330,902
e = 585,442 = LCM(22,89,13,23)
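Since all four constants and the modulus are given above, the displacement computation
itself is a short modular sum. The sketch below simply restates the formula in C; the
shift counts n0 through n3 are those found in steps (1) through (4).

    /* Chinese Remainder displacement for the 3330 code:
     * d = [-(k0*n0 + k1*n1 + k2*n2 + k3*n3)] MOD e                       */
    long crt_displacement(long n0, long n1, long n2, long n3)
    {
        const long k0 = 452387L, k1 = 72358L, k2 = 315238L, k3 = 330902L;
        const long e  = 585442L;               /* LCM(22, 89, 13, 23)     */
        long sum = (k0*n0 + k1*n1 + k2*n2 + k3*n3) % e;
        long d   = (-sum) % e;
        if (d < 0)
            d += e;                            /* keep the residue in 0..e-1 */
        return d;
    }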
HARDWARE SELF-CHECKING
Self-checking of the shift registers is performed with parity predict circuits. See
Section 6.5 for information on parity predict.
5.2.3 THE 3350 MAGNETIC DISK CODE
CODE DEFINITION
The 3350 code is a shortened Fire code. It is defined by a generator polynomial of
the form g(x) = c(x)·p(x), where:
    c(x) = x^13 + 1
The c(x) factor is composite and has a period of 13. The p(x) factor is irreducible
and has a period of 34,359,738,367. The period of g(x) is the least common multiple of
the periods of c(x) and p(x), which is 446,676,598,771. Fire codes are discussed in
Section 3.1. Decoding of shortened codes is discussed in Section 2.4.
CODE CAPABILITY
When the 3350 code is used for detection only, any single burst not exceeding 48
bits in length is guaranteed to be detected. In addition, any combination of double
bursts is guaranteed to be detected provided the sum of the burst lengths is no greater
than 14. This number comes from the Fire code theory and is a lower bound only. It
is very conservative since record length is very short compared to the period of g(x)
(see Section 3.1). Misdetection probability for bursts exceeding the code guarantees is
3.55E-15.
In the 3350 implementation, the code is used to correct bursts through four bits in
length on records up to 19,069 bytes in length. With this correction span and record
length, the code is guaranteed to detect any single burst not exceeding 26 bits in
length. This number was determined by a computer search. The Fire code theory gives
the detection span as only ten bits. For 19,069 byte records, the miscorrection prob-
ability is 4.3E-9 for error bursts exceeding code guarantees, assuming all errors are
possible and equally probable.
CODE DESCRIPTION
The 3350 code is shortened by the premultiplication of the data polynomial. This
requires a shift-register circuit that multiplies and divides simultaneously. These cir-
cuits are discussed in Section 1.3.2.
The multiplier polynomial is:
x^47 + x^39 + x^35 + x^32 + x^30 + x^25 + x^21 + x^20
+ x^17 + x^15 + x^13 + x^9 + x^7 + x^6 + x^2
The multiplier polynomial is used only during read, since shortening of the code
applies only to the read case.
The logical shift-register configurations used for write and read are shown in
Figures 5.2.3.1 and 5.2.3.2 respectively. Although the write and read configurations are
shown separately, the physical implementation is a single 48-bit shift register. As seen
in Figure 5.2.3.2, there are three separate groups of bits feeding the XOR gates of the
shift register in the read configuration:
Figure 5.2.3.3 shows a circuit equivalent to that shown in Figure 5.2.3.2. This
circuit is easier to understand. It is shown in the same form as circuits performing
simultaneous multiplication and division in Section 1.3.2. A close comparison of the
circuits of Figures 5.2.3.2 and 5.2.3.3 reveals that splitting the read configuration feed-
back logic into three parts is a way to save logic. The feedback logic for the write
configuration can be obtained from the feedback logic for the read configuration by
OR'ing the BOTH and FEEDBACK lines and adding gating functions.
WRITE OPERATION
There are two write modes:
During the WRITE DATA BITS mode, serial data bits are written to the disk.
Simultaneously, the ECC shift register, with write feedbacks enabled, receives the serial
data bits and calculates write check bits. During the WRITE CHECK BITS mode, the
feedbacks are disabled and check bits are shifted out of the register, complemented, and
written to the disk.
READ OPERATION
There are three read modes:
During the READ DATA BITS mode, the ECC shift register, with read feedbacks
enabled, receives serial read data bits. A syndrome is partially computed. During the
READ CHECK BITS mode, read feedbacks remain enabled. The complements of check
bits are received by the ECC shift register and the computation of the syndrome is
completed. After processing the read check bits, the ECC shift register should be all
zeros if no error occurred and nonzero if a detectable error occurred.
The CORRECTION mode is entered at the end of a read when the ECC shift
register contents (syndrome) is found to be nonzero. The shift register is shifted with
read feedbacks enabled, until bits 4-47 are zero. When this occurs, the error pattern is
in bits 0-3. Shifting continues to the next byte boundary to place the error pattern in
byte alignment. The shift count is used to calculate an error displacement. The error
is uncorrectable if all zeros are not found in bits 4-47 of the shift register within
156,352 (19,544*8) shifts.
HARDWARE SELF-CHECKING
The 3350 employs parity predict for self-checking of error correcting circuits.
These techniques are discussed in Section 6.5.
[Figures 5.2.3.1 through 5.2.3.3: write and read shift-register configurations for the
3350 code. The register is tapped at positions 47, 39, 36, 35, 32, 30, 25, 23, 21, 20,
17, 15, 13, 9, 8, 7, 6, 2, and 0; the legend marks each tap as INPUT, BOTH, or
FEEDBACK.]
5.2.4 THE 3370 MAGNETIC DISK CODE
INTRODUCTION
For the 512-byte data fields, there are nine check bytes, three for each interleave.
Two interleaves contain 171 data bytes each and the remaining interleave contains 170
data bytes.
ID fields are protected by a three-byte detection-only code that is a subset of the
data field code.
The check bytes are stored in a memory organized logically into three areas re-
ferred to as RAMI, RAM2 and RAM3.
                 Interleave 0    Interleave 1    Interleave 2
   RAM1          G1(x)           G1(x)           G1(x)
                 MEM LOC 0       MEM LOC 1       MEM LOC 2
OVERALL CODE CAPABILITY
The three syndromes of the 3370 implementation give the code a minimum distance
of four, which is sufficient to correct single errors and detect double errors.
NOTE: The structure of the 3370 code provides the capability to correct any
single burst up to 17 bits in length. However, the correction span as imple-
mented is limited to nine bits.
1. Guaranteed correction span: 9 bits
2. Guaranteed detection span without correction: 65 bits
3. Guaranteed detection span with correction:
   Single-burst: 41 bits
   Double-burst: 17 bits
ENCODING
Normally, the encoding for a Reed-Solomon code with the capability of the 3370
code would be accomplished with circuits implementing the following encode equation:
    G(x) = (x + a^-1)·(x + a^0)·(x + a^+1)
However, in the 3370 implementation, each write check byte is generated separately
by dividing the data polynomial by each factor of G(x).
In the 3370 implementation, a is defined by the primitive polynomial
    x^8 + x^6 + x^5 + x^4 + 1.
SYNDROME GENERATION
On read, the encoding process is repeated. Syndromes are the XOR sum of check
bytes generated on write and check bytes generated on read. The single-error syndrome
equations are shown below:
S0 = E1
S1 = E1·a^L1
S-1 = E1·a^-L1
where,
El = Error value
L1 = Error location (displacement from the end of an interleave; the
last data byte of an interleave is location zero).
HARDWARE IMPLEMENTATION
The figures on the following page show the 3370 hardware implementation. The
operation is as follows:
WRITE DATA:
Nine check bytes are written via the MUX in the following order:
READ DATA:
The nine check bytes read are added modulo-2 (XOR) to the nine check bytes
generated (the contents of the RAM's). The resulting syndromes are stored
in the RAM's.
3370 ECC HARDWARE
[Figure: three check-byte shift registers SR1, SR2, and SR3, each loading one of RAM 1,
RAM 2, and RAM 3; DATA feeds all three, and the nine check bytes are written through a
MUX. Note: the bus is flipped end-for-end at one point in the data path.]
CORRECTION ALGORITHM
    S0 = E1
    S1 = E1·a^L1
    S-1 = E1·a^-L1
The error value is S0. The error location can be determined by substituting S0 for
E1 in the equation for S1:
    E1 = S0
    L1 = LOGa(S1) - LOGa(S0)
In the 3370 implementation the LOGa function is accomplished with a ROM table.
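Rather than transcribing the ROM contents, the logarithm table can be regenerated from
the field polynomial given above, since a is primitive. The sketch below builds such a
table and solves the syndrome equations for E1 and L1; the cross-check against S-1 is
an added consistency test, not a description of the 3370's internal logic.

    #include <stdint.h>

    static uint8_t log_a[256];                 /* log base a of each field element */

    void build_log_table(void)
    {
        uint8_t v = 1;                         /* a^0 */
        for (int i = 0; i < 255; i++) {
            log_a[v] = (uint8_t)i;
            /* multiply by a = x, reducing x^8 to x^6 + x^5 + x^4 + 1 (0x71) */
            v = (uint8_t)((v & 0x80) ? (uint8_t)((v << 1) ^ 0x71) : (uint8_t)(v << 1));
        }
    }

    /* Returns L1 (displacement from the end of the interleave) and stores E1,
     * or -1 if the syndromes are not consistent with one symbol in error. */
    int locate_single_error(uint8_t s0, uint8_t s1, uint8_t sm1, uint8_t *e1)
    {
        if (s0 == 0 || s1 == 0 || sm1 == 0)
            return -1;
        int l1  = (log_a[s1] - log_a[s0] + 255) % 255;   /* L1 = LOGa(S1) - LOGa(S0) */
        int chk = (log_a[s0] - log_a[sm1] + 255) % 255;  /* from S-1 = E1*a^-L1      */
        if (chk != l1)
            return -1;
        *e1 = s0;                                        /* E1 = S0 */
        return l1;
    }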
OVERALL CORRECTION ALGORITHM
Functions of the overall correction routine include:
0 1 2 3 4 5 6 7 8 9 A B C D E: F
00 00 01 E7 02 CF E8 3B 03 23 DO 9A E9 14 3C B7
19 04 9F 24 42 D1 76 9B FB EA F5 15. OB 3D 82 B8 92
20 05 7A AO 4F 25 71 43 6A D2 EO 77 DD 9C F~ FC 20
30 EB D5. F6 87 16 2A OC ~C 3E E3 83 45 B9 SF 93 5E
40 06 46 7B C3 A1 35 50 A7 26 6D 72 CB 44 33 6B 31
50 D3 28 E1 BD 78 6F DE FO 9D 74 F3 80 FD Cp 21 1,;2
60 EC A3 D6 62 F7 37 88 66 17 5.2 2B B1 on A9 8D 59.
70 3F 08 E4 97 84 48 4G DA BA 7D GO C8 94 (:5 5F AF,!
80 07 96 47 D9 7C C7 C4 AD A2 61 36 65 5~ BO 1\8 58
90 27 BC 6E EF 73 7F CC 11 45 C~ 34 A6 6C CA ~2 30
AO D4 86 29 8B E2 4A BE 5.D 79 4F,! 70 69 DF DC F1 1F
BO 9E 41 75 FA F4 QA 81 91 FE E6 CF,! ~A 22 99 13 B6
CO ED OF A4 2E D7 AB 63 56 F8 8F :38 B4 89 5B 67 1D
DQ 18 19 53 1A 2C 54 B2 1B OE 2D AA 55 8E B3 5A 1C
EO 40 F9 09 90 E5 39 98 B5 85 8A 49 5C 4D 68 pB IE
FO BB EE 7E 10 C1 A5 C9 2F 95 D8 C6 AC 60 64 AF 57
SIMULATION OF 3370 ECC IMPLEMENTATION
ERROR CASE
BEGIN READ DATA PART OF SIMULATION
(DATA PART OF RECORD IS ALL ZEROS EXCEPT FOR ERROR)
(CHECK BYTES ARE '00 FF FF 00 08 08 00 A3 A3 ')
INTERLEAVE 2,1,0
BYTE MOD ERROR ---RAM 1--- ---RAM 2--- ---RAM 3---
CNT 3
512 2 FF FF 0F 08 08 3C A3 A3 E7
513 0 00 FF 0F 08 08 3C A3 A3 E7
514 1 00 00 0F 08 08 3C A3 A3 E7
515 2 00 00 0F 08 08 3C A3 A3 E7
516 0 00 00 0F 00 08 3C A3 A3 E7
517 1 00 00 0F 00 00 3C A3 A3 E7
518 2 00 00 0F 00 00 3C A3 A3 E7
519 0 00 00 0F 00 00 3C 00 A3 E7
520 1 00 00 0F 00 00 3C 00 00 E7
END OF SIMULATION
SIMULATION OF 3370 ECC IMPLEMENTATION
NO ERROR CASE
BEGIN READ DATA PART OF SIMULATION
(DATA PART OF RECORD IS ALL ZEROS)
(CHECK BYTES ARE '00 FF FF 00 08 08 00 A3 A3 ')
512 2 FF FF 00 08 08 00 A3 A3 00
513 0 00 FF 00 08 08 00 A3 A3 00
514 1 00 00 00 08 08 00 A3 A3 00
515 2 00 00 00 08 08 00 A3 A3 00
516 0 00 00 00 00 08 00 A3 A3 00
517 1 00 00 00 00 00 00 A3 A3 00
518 2 00 00 00 00 00 00 A3 A3 00
519 0 00 00 00 00 00 00 00 A3 00
520 1 00 00 00 00 00 00 00 00 00
END OF SIMULATION
5.3 APPLICATION TO SMALL-SYSTEMS MAGNETIC DISK
5.3.1 POLYNOMIAL CAPABILITIES
The capabilities specified below represent the extremes for which the polynomial
has been tested. Further testing is required if the polynomial is to be used beyond
these extremes.
If you plan to use this polynomial, read Section 4.4 DATA ACCURACY to under-
stand miscorrection probability (number 8 below) before selecting the correction span.
4. Single-burst detection span when the code is used for error detection only =
32 bits.
5. Single-burst detection span (d) when the code is used for error correction:
   = 20 bits for b=5  and r=8*270
   = 14 bits for b=8  and r=8*270
   = 13 bits for b=11 and r=8*270
   = 19 bits for b=5  and r=8*526
   = 14 bits for b=8  and r=8*526
   = 12 bits for b=11 and r=8*526
   = 11 bits for b=11 and r=8*1038
6. Double-burst detection span when the code is used for error correction:
5.3.2 HARDWARE IMPLEMENTATION
Several examples of encoder and decoder circuits are described below. Although they
are shown separately, circuitry can obviously be shared between encoder and decoder.
BIT-SERIAL ENCODER USING THE INTERNAL-XOR FORM OF SHIFT REGISTER
[Figure: 32-stage shift register (x^31 ... x^0) with an XOR at the input of each tapped
stage, a REDUN_GATE-controlled AND gate in the feedback path, and a MUX selecting
between WRITE DATA and CHECK BITS.]
The shift register has an XOR gate feeding the input of each stage which has a
non-zero coefficient in the generator polynomial (except stage 0). For initialization
considerations, see Section 4.8.2.
After all DATA bits have been clocked into the shift register, REDUN_GATE is as-
serted. The AND gate then disables feedback, allowing the check bits to be shifted out
of the shift register, and the MUX passes the check bits to the device.
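A software model of this encoder is often useful for generating test records and
checking hardware. The fragment below is a sketch under common conventions (stage x^31
at the register's high end, data premultiplied by x^32, register initialized per Section
4.8.2 before use); it uses the 32-bit polynomial given for this section's software
algorithms, octal 42402402105, i.e. x^32 + x^28 + x^26 + x^19 + x^17 + x^10 + x^6 + x^2
+ 1, whose low 32 coefficient bits are 0x140A0445.

    #include <stdint.h>
    #include <stddef.h>

    #define POLY 0x140A0445u        /* low 32 bits of octal 42402402105 */

    /* One internal-XOR shift per data bit: feedback is the x^31 stage XORed
     * with the incoming bit; when the feedback is 1 the polynomial taps are
     * applied.  After all data bits, the register holds the check bits
     * (before any inversion applied for sync-framing protection). */
    uint32_t encode_bits(uint32_t shift_reg, const uint8_t *bits, size_t nbits)
    {
        for (size_t i = 0; i < nbits; i++) {
            uint32_t feedback = ((shift_reg >> 31) ^ (bits[i] & 1u)) & 1u;
            shift_reg <<= 1;
            if (feedback)
                shift_reg ^= POLY;
        }
        return shift_reg;
    }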
BIT-SERIAL ENCODER USING THE EXTERNAL-XOR FORM OF SHIFT REGISTER
[Figure: 32-stage shift register (x^0 ... x^31) tapped at the output of each non-zero
coefficient stage into an ODD parity tree; REDUN_GATE controls the feedback and data
AND gates, and a MUX selects between WRITE DATA and CHECK BITS.]
The shift register is tapped at the output of each stage which has a non-zero
coefficient in the generator polynomial. For initialization considerations, see Section
4.8.2.
After all DATA bits have been clocked into the shift register, REDUN_GATE is as-
serted. The upper AND gate then disables feedback and the lower AND gate blocks
extraneous DATA input to the ODD parity tree, whose output the MUX passes as check
bits to the device.
CIRCUITS TO GENERATE SYNDROMES TO BE USED IN SOFTWARE CORRECTION
CASE 1: SYNDROME IS OUTPUT BEHIND DATA
BIT-SERIAL DECODER USING THE INTERNAL-XOR FORM OF SHIFT REGISTER
[Figure: 32-stage internal-XOR shift register with DATA input, REDUN_GATE-controlled
feedback, a MUX passing syndrome bits to the buffer, and a JK flip-flop latching
ECC_ERROR.]
The shift register has an XOR gate feeding the input of each stage which has a
non-zero coefficient in the generator polynomial (except stage 0). The shift register
must be initialized to the same state used before write.
After all DATA bits have been clocked into the shift register, REDUN_GATE is as-
serted. The upper AND gate then disables feedback, allowing the check bits to be
shifted out of the shift register, and the MUX passes the syndrome bits to the buffer.
The lower AND gate allows any non-zero syndrome bit to latch the JK flip-flop, assert-
ing the ECC_ERROR signal.
BIT-SERIAL DECODER USING THE EXTERNAL-XOR FORM OF SHIFT REGISTER
[Figure: 32-stage external-XOR shift register tapped into an ODD parity tree;
REDUN_GATE-controlled AND gates, a MUX routing syndrome bits to the buffer, and a JK
flip-flop latching ECC_ERROR.]
The shift register is tapped at the output of each stage which has a non-zero
coefficient in the generator polynomial. The shift register must be initialized to the
same state used before write.
After all DATA bits have been clocked into the shift register, REDUN_GATE is as-
serted. The upper AND gate then disables feedback and the lower AND gate blocks
extraneous DATA input to the ODD parity tree, whose output the MUX passes as syn-
drome bits to the buffer. The bottom AND gate allows any non-zero syndrome bit to
latch the JK flip-flop, asserting the ECC_ERROR signal.
CASE 2: SYNDROME IS FETCHED FROM SHIFT REGISTER
BIT-SERIAL DECODER USING THE INTERNAL-XOR FORM OF SHIFT REGISTER
[Figure: 32-stage internal-XOR shift register with a separate path from the input XOR
to stage 0 so that the syndrome collects in the register; REDUN_GATE-controlled
feedback and a JK flip-flop latching ECC_ERROR.]
The shift register has an XOR gate feeding the input of each stage which has a
non-zero coefficient in the generator polynomial (except stage 0). The shift register
must be initialized to the same state used before write.
After all DATA bits have been clocked into the shift register, REDUN_GATE is as-
serted. The upper AND gate then disables feedback. The upper-most path, leading from
the XOR gate to stage 0, allows the shift register to collect the syndrome bits for later
retrieval. The lower AND gate allows any non-zero syndrome bit to latch the JK flip-
flop, asserting the ECC_ERROR signal.
DETAILED IMPLEMENTATION EXAMPLE #1
The hardware of Figure 5.3.2.1 is used on write to generate check bits and on read
to generate an error syndrome. The error syndrome is stored in memory via the de-
serializer during check bit time. It has the following format, where x^31 is the high
order bit of the first byte stored:
This format assumes that the high-order bit of a byte is serialized and deserial-
ized first. The bits are numbered here for the software flow chart. Bits numbered 0-7
above are bits 31-24 of the syndrome from hardware.
As the data is written, data bits are directed to pin 10 via the 2:1 circuit. At the
same time, check bits are generated in the shift register in a transformed format.
During write check-bit time, the transformed check bits in the shift register are con-
verted to true check bits (some inverted) by the odd circuit and are directed to pin 10
via the 2:1 circuit.
As the data is read, data bits are directed to pin 9 via the 2:1 circuit. At the
same time, syndrome bits are generated in the shift register in a transformed format.
During read check-bit time, the transformed syndrome bits in the shift register are
converted to true syndrome bits by the odd circuit and are directed to pin 9 via the 2:1
circuit.
During read check-bit time, the flip-flop (LS74) will be clocked to its error state
if any of the syndrome bits are nonzero. At the end of any read, pin 11 will indicate
if an error occurred.
[Figure 5.3.2.1: 32-bit left-shifting shift register (74LS164's) with stages x^0 through
x^31, an EVEN/ODD parity tree generating the feedback, and the signals SHIFT INPUT
ENABLE, CLEAR, CHECK/SYNDROME BITS, and READ/WRITE DATA.]
NOTES: * There is one feedback line for each non-zero coefficient term of the forward
         polynomial except the x^32 term.
       ** The '1' state of the shift register outputs is the low voltage state.
DETAILED IMPLEMENTATION EXAMPLE #2
The hardware of Figure 5.3.2.2 is similar to that for method 1 except that ECC
error is detected by software. After a record and syndrome have been read and stored,
software fetches the four byte syndrome and checks for zero. If the syndrome is
nonzero, an error has occurred and the software ECC algorithm must be performed.
In summary, if method 2 is used, software performs the function that the flip-flop
(LS74) performed in method 1.
IMPLEMENTATION SUBTLETIES
Listed below are some points that have been misunderstood by engineers imple-
menting the hardware.
1. There is a shift register stage for x^0 through x^31. There is no shift register
   stage for x^32.
2. The input end of the shift register is labeled x^31 and the direction of shift is
   towards x^0. This is not an arbitrary assignment. It is required for this par-
   ticular form of the polynomial shift register.
3. There is a feedback path from the shift register to the parity tree for each
   nonzero coefficient term of the forward polynomial except for the x^32 term.
6. After activating the clear line of Figures 5.3.2.1 and 5.3.2.2, the circuit should
not be clocked until the first bit is ready to be processed.
[Figure 5.3.2.2: the same 32-bit left-shifting shift register (74LS164's) and EVEN/ODD
parity tree as Figure 5.3.2.1, with a 74LS157 multiplexer routing READ DATA/SYNDROME
BITS in place of the error flip-flop.]
5.3.3 SOFTWARE IMPLEMENTATION
ERROR DETECTION
At the completion of a read, the existence of any non-zero bit in the syndrome
indicates the existence of an error or errors. A non-zero syndrome is typically, and
most efficiently, detected with sequential logic as shown above. However, if a shift
register of the internal-XOR form with a separate path to stage 0 (see the circuit for
Case 2 above) is used, it is possible (and has been done in the past) to use com-
binatorial logic (e.g., a 48-input OR gate) rather than the AND gate/JK latch shown.
ERROR CORRECTION
When a non-zero syndrome indicates the presence of an error, correction is ac-
complished by shifting the syndrome until the error pattern is found. This shifting may
be done bit-serially or byte-serially, in hardware or by software. Bit-serial and byte-
serial software algorithms are given below. For a discussion of byte-serial hardware
implementations, see Section 4.7.
The required shifting may be performed either forward along the code's shift-
register sequence using the code's generator polynomial, or reverse along the code's
shift register sequence using the reciprocal of the code's generator polynomial.
When forward shifting is implemented, pre-multiply must be used to shorten the
code. For a discussion of code-shortening, see Section 2.4. Use the following expres-
sion for the pre-multiply polynomial:
    Pmult(x) = x^(m-1) · F(1/x)
where
    F(x) = x^(n-1) MOD g'(x)
Forward shifting requires either the use of a different Pmult(x) for each sector length,
or that Pmult(x) be selected for the largest sector-length to be used. In the
latter case extra shifts are required for the shorter sector-lengths.
When reverse shifting using the reciprocal polynomial is implemented, then if the
shift register shifts left [right] during read, then either
a) The shift register must shift right [left] during correction or
b) The syndrome must be flipped end-for-end before correction, and the shift
register must continue to shift left [right] during correction.
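The end-for-end flip in option (b) above is a pure bit-reversal of the syndrome. A
minimal sketch for a 32-bit syndrome:

    #include <stdint.h>

    /* Flip a 32-bit syndrome end-for-end (bit 0 <-> bit 31, etc.). */
    uint32_t flip_end_for_end(uint32_t s)
    {
        uint32_t r = 0;
        for (int i = 0; i < 32; i++) {
            r = (r << 1) | (s & 1u);
            s >>= 1;
        }
        return r;
    }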
DETERMINING ERROR PATTERN AND LOCATION
The error pattern is found by shifting until a given number of consecutive zeros
appears in one end of the shift register. When this occurs, the error pattern is aligned
with the other end of the shift register. Which end of the shift register is aligned
with the error pattern is a matter of implementation choice. See Sections 2.3 and 2.4
for examples of pattern alignment.
Error displacement is calculated by counting the number of shifts executed while
locating the error pattern. The details of displacement calculation depend on which end
of the shift register is used to align the error pattern.
The detection of consecutive zeros to indicate that a valid error pattern has been
found can be accomplished using either combinatorial or sequential logic. Combinatorial
logic would consist of a many-input OR gate.
Sequential logic circuitry for an internal-XOR shift register implementation would
include a counter that is incremented by each '0' that appears at the output of the
high-order stage and is reset by any '1' that appears. When the counter reaches the
given threshold, the error pattern has been found.
It is also possible to simulate such a counter in software; the software would
control an output line to initiate each shift of the hardware shift register and receive
the output of the high order stage. The software can simultaneously simulate the dis-
placement counter.
OTHER CONSIDERATIONS
1) The detection of consecutive zeros that surround the error pattern is more
complex when error correction is performed using byte-serial hardware or
software. See Figure 3.1.1 for an example of byte-serial hardware. A byte-
serial software algorithm is given below.
2) When error correction is performed in hardware, the internal-XOR form of
   shift register is typically used. However, it is also possible to perform error
   correction in hardware when the external-XOR form of shift register is used.
3) Feedback could be left enabled during redundancy time. If the reverse-shift-
ing correction method is used, the error location process would then require
48 additional shifts. If the forward-shifting correction process is used, a
different Pmult(x) would be used.
SOFTWARE ALGORITHMS
The software algorithms use the syndrome to generate the correction pattern and
displacement for correctable errors, or to detect uncorrectable errors.
The maximum record length for this polynomial is 1038 bytes (including check
bytes). The flow charts and software listings have been designed so that the record
length can be varied by changing a single constant (Kl).
The flow charts cover the algorithms through determination of pattern and dis-
placement. Both forward (FWD) and reverse (REV) displacements are computed. FWD
displacement starts at the beginning of the record counting the first byte as zero.
REV displacement begins with the end of the record counting the last byte as zero.
The pattern is in R2, R3, and R4. R2 is XOR'd with the record byte indicated by
byte displacement. R3 is XOR'd with the byte one address higher than the byte dis-
placement. R4 is XOR'd with the byte two addresses higher than the byte displacement.
If the correction span selected is nine bits or less, the pattern is in R2 and R3.
No action is required for R4.
Once an error pattern and displacement have been computed, there are several
special displacement cases that must be handled. For example, the error may be in
check bytes or it may span data and check bytes. The error may be in a header field or
a data field. Some formats combine header information with the data field. The data
field in this case has several overhead bytes, containing header information, preceding
the data. This adds additional special displacement cases.
The software routines defined in this section contain logic for separate and
combined header and data fields.
The procedures below handle the special displacement cases for four overhead bytes.
In a particular implementation there may be more, fewer, or even no overhead bytes.
2 Error burst spans overhead bytes and data. XOR R2 with next to last over-
head byte. XOR R3 with last overhead byte. XOR R4 with first data byte.
3 Error burst spans overhead bytes and data. XOR R2 with last overhead byte,
XOR R3 with first data byte. XOR R4 with second data byte.
4 Error burst spans data and check bytes. XOR R2 with last data byte. No
action required for R3 or R4.
5 Error burst spans data and check bytes. XOR R2 with next to last data byte.
XOR R3 with last data byte. No action required for R4.
BIT-SERIAL SOFTWARE ALGORITHM
FOR POLYNOMIAL '42402402105'
LOAD SYNDROME
    R1 = x^0-x^7
    R2 = x^8-x^15
    R3 = x^16-x^23
    R4 = x^24-x^31
INITIALIZE
    J = K1 *
    ALGN FLAG = 0
[Flow chart: while R1 = 0, byte-justify (J = J+8; R1=R2, R2=R3, R3=R4, R4=0); then shift
the register one bit at a time, applying the polynomial feedback **, until a correctable
pattern is found (exit CORRECTABLE ****) or the shift count is exhausted (exit
UNCORRECTABLE).]
NOTES FOR BIT-SERIAL CORRECTION ALGORITHM
* K1 = Record length in bits minus 25. Record length includes all bits covered
  by ECC including the check bits.
** When shifting, the low-order bit of a register is shifted into the high-order
   bit of the next higher-numbered register. '+' here means EXCLUSIVE-OR; the
   constants are a form of the reciprocal polynomial in decimal.
**** On correctable exit, J is the forward bit displacement and J/8 is the forward
     byte displacement. The reverse byte displacement is (K1+25-J)/8-1. The error
     pattern is in R2:R3:R4.
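For reference, one shift of the pseudo shift register described by the flow chart and
the Z80 listing can be written in C as follows. The register mapping (R1 = x^0-x^7
through R4 = x^24-x^31), the shift direction, and the decimal feedback constants 138, 5,
2, 34 are taken from the notes above; everything else (byte alignment, span masks, shift
counting) is handled as in the listing and omitted here.

    #include <stdint.h>

    typedef struct { uint8_t r1, r2, r3, r4; } sr_t;   /* R1:R2:R3:R4 */

    void shift_once(sr_t *s)
    {
        unsigned out = s->r4 & 1u;          /* bit shifted out of the R4 end   */

        s->r4 = (uint8_t)((s->r4 >> 1) | ((s->r3 & 1u) << 7));
        s->r3 = (uint8_t)((s->r3 >> 1) | ((s->r2 & 1u) << 7));
        s->r2 = (uint8_t)((s->r2 >> 1) | ((s->r1 & 1u) << 7));
        s->r1 = (uint8_t)(s->r1 >> 1);

        if (out) {                          /* reciprocal-polynomial feedback  */
            s->r1 ^= 138;
            s->r2 ^= 5;
            s->r3 ^= 2;
            s->r4 ^= 34;
        }
    }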
BYTE-SERIAL SOFTWARE ALGORITHM
FOR POLYNOMIAL '42402402105'
LOAD SYNDROME
    R1 = x^0-x^7
    R2 = x^8-x^15
    R3 = x^16-x^23
    R4 = x^24-x^31
INITIALIZE
    J = K1 *
[Flow chart: while R1 = 0, justify (J = J+1; R1=R2, R2=R3, R3=R4, R4=0). For each byte
shift, set A = R4 and do the following XORs:
    R4 = R3 + T4(A),  R3 = R2 + T3(A)
    R2 = R1 + T2(A),  R1 = T1(A)
When R1 = 0, copy R2:R3:R4 into RA:RB:RC and shift RA:RB:RC left one bit at a time **
until the pattern is left-justified, then test it against the correction span; exit
CORRECTABLE (****) when a correctable pattern is found, or UNCORRECTABLE when the shift
count is exhausted.]
NOTES FOR BYTE-SERIAL CORRECTION ALGORITHM
* K1 = Record length in bytes minus 4. Record length includes all bytes
  covered by ECC including the check bytes.
** When shifting, the high-order bit of a register is shifted into the low-order
   bit of the next higher-lettered register.
**** On correctable exit, J is the forward byte displacement. The reverse byte
     displacement is (K1+3-J). The error pattern is in R2:R3:R4.
POLYNOMIAL - '42402402105' (OCTAL)
Z80 CODE FOR BIT-SERIAL ALGORITHM
;----------------------------------------------------------------
IMPLEMENTATION CONSTANTS
DEFINE POLYNOMIAL - DECIMAL CONSTANTS, SEE FLOW CHART
P1 EQU 138
P2 EQU 5
P3 EQU 2
P4 EQU 34
DEFINE CONSTANTS K1 AND K2 (SEE FLOW CHART)
K1      EQU     ---     ;INSERT DATA FIELD CONSTANT K1
K2      EQU     ---     ;INSERT HEADER FIELD CONSTANT K2
        DEFINE NUMBER OF OVERHEAD BYTES
OV      EQU     ---     ;INSERT # OF OVERHEAD BYTES
        DEFINE CORRECTION SPAN MASK
CSM1    EQU     ---     ;INSERT APPROPRIATE MASK BELOW
CORR SPAN 1 MASK '01111111'
2 '00111111'
3 '00011111'
4 '00001111'
5 '00000111'
6 '00000011'
7 '00000001 '
8 '00000000'
9 '00000000'
10 '00000000'
11 '00000000'
CSM2    EQU     ---     ;INSERT APPROPRIATE MASK BELOW
CORR SPAN 1 MASK '11111111'
2 '11111111'
3 '11111111'
4 '11111111'
5 '11111111'
6 '11111111'
7 '11111111'
8 '11111111'
9 '01111111'
10 '00111111'
11 '00011111'
;----------------------------------------------------------------
;        INITIALIZE PSEUDO SHIFT REGS AND SHIFT COUNT (J)
;
SHIFT    SRL     B
         RR      C               ;SHIFT RIGHT
         RR      D
         RR      E
         JP      NC,SHIFT10      ;BRANCH IF NO BIT SHIFTED OUT
         LD      A,E
         XOR     P4
         LD      E,A
         LD      A,D
         XOR     P3              ;XOR DECIMAL CONSTANTS
         LD      D,A             ; (SHIFT REG FEED-BACK)
SHIFT05  LD      A,C
         XOR     P2
         LD      C,A
         LD      A,B
         XOR     P1
         LD      B,A
SHIFT10  LD      A,B
         OR      A
         JP      Z,PTRNTST
SHIFT20  XOR     A
         OR      L
         JP      NZ,SHIFT30
         OR      H
         JP      NZ,SHIFT30
         JP      UNCORR          ;UNCORRECTABLE
SHIFT30  DEC     HL              ;DECREMENT SHIFT COUNT ('J')
         JP      SHIFT
;        TEST FOR CORRECTABLE PATTERN
PTRNTST  LD      A,(ALGNFLG)     ;LOAD ALGN-FLAG
         OR      A
         JP      NZ,PTRNTST5     ;BRANCH IF ALGN-FLAG NONZERO
         OR      E
         JP      NZ,SHIFT20      ;BRANCH IF CORR PTRN NOT YET FOUND
         LD      A,C
         AND     CSM1            ;SEE DEFINITION OF CSM1 ABOVE
         JP      NZ,SHIFT20      ;BRANCH IF CORR PTRN NOT YET FOUND
         LD      A,D
         AND     CSM2            ;SEE DEFINITION OF CSM2 ABOVE
         JP      NZ,SHIFT20      ;BRANCH IF CORR PTRN NOT YET FOUND
;        GET HERE TO START BYTE ALIGNMENT
         LD      A,1
         LD      (ALGNFLG),A     ;SET ALGN-FLAG TO NONZERO
PTRNTST5 LD      A,L
         AND     7               ;TEST 'J' MODULO 8
         JP      NZ,SHIFT30      ;JP IF BYTE ALIGN NOT COMPLETE
;
;        CORRECT BYTES IN ERROR
CORRECT  LD      B,C             ;MOVE
         LD      C,D             ;
         LD      D,E             ; PATTERN
;
         LD      A,3
CORR10   SRL     H               ;
         RR      L               ;DIVIDE BIT DISPLACEMENT BY 8
         DEC     A               ; TO GET FWD BYTE DISPLACEMENT
         JP      NZ,CORR10
COMPUTE REV BYTE DISPLACEMENT
SCF
CCF
PUSH DE
EX DE,HL
LD HL,(RLBMO)
SBC HL,DE
POP DE
TEST REVERSE DISPLACEMENT CASES
LD A,H
OR A
JP NZ,CORR40 ;BR IF HI BYTE OF REV DISP NONZERO
LD A,L
CP 6
JP NC,CORR40 ;BR IF REV DISP EQ OR GTH THAN 6
CP 5
JP Z,CORR25 ;BR IF REV DISP EQ 5
CP 4
JP Z,CORR30 ;BR IF REV DISP EQ 4
GET HERE IF ERROR IN CHECK BYTES
CORR20 JP EXIT ;IGNORE CORR ERR IN CHECK BYTES
; GET HERE IF ERROR STARTS IN NEXT TO LAST DATA BYTE
CORR25   LD      A,(nn)
         XOR     B               ;CORRECT NEXT TO LAST DATA BYTE
         LD      (nn),A
LD B,C
GET HERE IF ERROR STARTS IN LAST DATA BYTE
CORR30 LD A, (nn)
XOR B ;CORRECT LAST DATA BYTE
LD (nn),A
JP EXIT ; DONE
; RECOMPUTE FWD BYTE DISPLACEMENT
CORR40 SCF
CCF
PUSH DE
EX DE,HL
LD HL, (RLBMO)
SBC HL,DE
POP DE
TEST FWD DISPLACEMENT CASES
LD A,H
OR A
JP NZ,CORR45 ;BR IF HI BYTE OF FWD DISP NONZERO
LD A,L
CP 4
JP NC,CORR45 ;BR IF FWD DISP EQ OR GTH THAN 4
CP 3
JP Z,CORR60 ;BR IF FWD DISP EQ 3
CP 2
JP Z,CORR55 ;BR IF FWD DISP EQ 2
JP CORR70
GET HERE IF ERROR IN DATA BYTES
CORR45 PUSH DE
LD DE, (BUFFADR) ;LOAD BUFFER ADDRESS
CORR50 ADD HL,DE ;ADD DATA BUFFER
; ADDR TO DISPLACEMENT
POP DE
LD A, (HL)
XOR B ;CORRECT 1ST DATA BYTE IN ERROR
LD (HL) ,A
INC HL
LD A, (HL)
XOR C ;CORRECT 2ND DATA BYTE IN ERROR
LD (HL),A
INC HL
LD A, (HL)
XOR D ;CORRECT 3RD DATA BYTE IN ERROR
LD (HL) ,A
         JP      EXIT            ;DONE
; ERROR STARTS IN NEXT TO LAST OVHD BYTE
CORR55 LD A, (nn)
XOR B ;CORRECT NEXT TO LAST OVHD BYTE
LD (nn),A
LD B,C
LD C,D
         LD      D,0
ERROR STARTS IN LAST OVHD BYTE
CORR60 LD A, (nn)
XOR B ;CORRECT LAST OVERHEAD BYTE
LD (nn),A
LD A, (nn)
XOR C ;CORRECT FIRST DATA BYTE
LD (nn),A
LD A, (nn)
XOR D ; CORRECT 2ND DATA BYTE
LD (nn),A
         JP      EXIT            ;DONE
GET HERE IF ERROR IN OVERHEAD BYTES
CORR70 PUSH DE
LD DE,nn ;OVERHEAD BYTES BUFFER ADDRESS
JP CORR50 ;JOIN COMMON PATH
WORK STORAGE
POLYNOMIAL - '42402402105' (OCTAL)
Z80 CODE FOR BYTE-SERIAL ALGORITHM
;        INITIALIZE PSEUDO SHIFT REGS AND SHIFT COUNT (J)
;
INIT     LD      A,(FLDFLG)      ;LOAD FIELD FLAG
         OR      A
         JP      NZ,INIT20       ;JP TO INIT20 IF CORRECTING HEADER
;        INITIALIZE FOR DATA FIELD
INIT10   LD      HL,nn-OV        ;SAVE DATA BUFFER ADDRESS
         LD      (BUFFADR),HL    ; -NUMBER OF OVERHEAD BYTES
         LD      HL,K1+3         ;SAVE
         LD      (RLBMO),HL      ; DATA FIELD LENGTH IN BYTES
                                 ; MINUS 1
         LD      HL,K1           ;LOAD J WITH K1 (CONST FOR DATA)
         JP      INIT30
;        INITIALIZE FOR HEADER FIELD
INIT20   LD      HL,nn           ;SAVE
         LD      (BUFFADR),HL    ; HEADER BUFFER ADDRESS
         LD      HL,K2+3         ;SAVE
         LD      (RLBMO),HL      ; HEADER LENGTH IN BYTES-1
         LD      HL,K2           ;LOAD J WITH K2 (CONST FOR HEADER)
INIT30   LD      BC,65535        ;CONSTANT FOR DECREMENTING SHIFT
                                 ; COUNT
         EXX
         LD      HL,(nn)         ;FETCH 1ST 2 SYNDROME BYTES
         LD      D,L             ;SYNDROME BITS X0-X7
         LD      E,H             ;SYNDROME BITS X8-X15
;
         LD      HL,(nn)         ;FETCH 2ND 2 SYND BYTES (X16-X31)
;        LEFT JUSTIFY FIRST NON-ZERO SYNDROME BYTE IN 'B'
;
JUST     XOR     A
         OR      D               ;TEST 'R1' FOR ZERO
         JP      NZ,SHIFT05      ;BRANCH ON NONZERO
         EXX
         LD      A,L
         ADD     1               ;J=J+1
         LD      L,A
         JP      NC,JUST9
         INC     H
JUST9    EXX
JUST10   LD      D,E
         LD      E,H
         LD      H,L
         LD      L,0
         JP      JUST
;        SHIFT PSEUDO SHIFT REG UNTIL CORRECTABLE PATTERN FOUND
SHIFT    EXX
SHIFT05  LD      B,0             ;INIT TO POINT TO TABLE (T4)
         LD      C,L             ;LOAD 'A' INDEX (SEE FLOW CHART)
;        R4=R3 'XOR' T4(A)       (SEE FLOW CHART)
         LD      A,(BC)
         XOR     H
         LD      L,A
;
         INC     B
;        R3=R2 'XOR' T3(A)       (SEE FLOW CHART)
         LD      A,(BC)
         XOR     E
         LD      H,A
;
         INC     B
;        R2=R1 'XOR' T2(A)       (SEE FLOW CHART)
         LD      A,(BC)
         XOR     D
         LD      E,A
         INC     B
;        R1=T1(A)                (SEE FLOW CHART)
         LD      A,(BC)
         LD      D,A
;        TEST LOW ORDER 8 BITS OF SHIFT REG FOR ZERO
         OR      A
         JP      Z,PTRNTST
;        DECREMENT SHIFT COUNT AND TEST FOR ZERO
SHIFT10  EXX
         ADD     HL,BC           ;BC='FFFF' FOR DECREMENTING HL BY 1
         JP      C,SHIFT         ;NO CARRY IF HL WAS 0 BEFORE ADD
         EXX
         JP      UNCORR
;        TEST FOR CORRECTABLE PATTERN
PTRNTST  LD      A,L
         OR      A               ;TEST 'R4' FOR ZERO
         JP      NZ,SHIFT10      ;BRANCH IF CORR PTRN NOT YET FOUND
;        SAVE SHIFT REG CONTENTS
PTRNTST2 LD      (nn),HL         ;SAVE HL
         EX      DE,HL           ;SAVE DE
PTRNTST3 LD      (nn),HL         ;
;        DETERMINE IF PTRN IN E,H AND L IS CORRECTABLE
PTRNTST4 BIT     7,E
         JP      NZ,PTRNTST5
         SLA     L
         RL      H
         RL      E
         JP      PTRNTST4
PTRNTST5 LD      A,H
         AND     CSM2            ;SEE DEFINITION OF CSM2 ABOVE
         JP      NZ,PTRNTST7     ;BRANCH IF CORR PTRN NOT YET FOUND
         LD      A,E
         AND     CSM1            ;SEE DEFINITION OF CSM1 ABOVE
         JP      Z,PTRNTST8      ;BRANCH IF CORR PTRN FOUND
;        CORR PTRN NOT YET FOUND, RESTORE S/R, RETURN TO SHIFTING
PTRNTST7 LD      HL,(nn)         ;
         EX      DE,HL           ;RESTORE DE (SAVED AT PTRNTST3)
         LD      HL,(nn)         ;RESTORE HL (SAVED AT PTRNTST2)
         JP      SHIFT10
;        GET HERE IF CORR PTRN FOUND
PTRNTST8 LD      HL,(nn)
         EX      DE,HL           ;RESTORE DE (SAVED AT PTRNTST3)
         LD      HL,(nn)         ;RESTORE HL (SAVED AT PTRNTST2)
         LD      C,E             ;PLACE PTRN IN REGS
         LD      D,H             ; EXPECTED BY
         LD      E,L             ; NEXT ROUTINE
         EXX
         LD      (nn),HL         ;SAVE HL
         EXX
         LD      HL,(nn)         ;RESTORE HL SAVED 2 STEPS UP
;        CORRECT BYTES IN ERROR
CORRECT  LD      B,C             ;MOVE
         LD      C,D             ; PATTERN
         LD      D,E             ;
COMPUTE REV BYTE DISPLACEMENT
SCF
CCF
PUSH DE
EX DE,HL
LD HL,(RLBMO)
SBC HL,DE
POP DE
TEST REVERSE DISPLACEMENT CASES
LD A,H
OR A
JP NZ,CORR40 ;BR IF HI BYTE OF REV DISP NONZERO
LD A,L
CP 6
JP NC,CORR40 ;BR IF REV DISP EQ OR GTH THAN 6
CP 5
JP Z,CORR25 ;BR IF REV DISP EQ 5
CP 4
JP Z,CORR30 ;BR IF REV DISP EQ 4
;        GET HERE IF ERROR IN CHECK BYTES
CORR20   JP      EXIT            ;IGNORE CORR ERR IN CHECK BYTES
;        GET HERE IF ERROR STARTS IN NEXT TO LAST DATA BYTE
CORR25   LD      A,(nn)
         XOR     B               ;CORRECT NEXT TO LAST DATA BYTE
         LD      (nn),A
LD B,C
GET HERE IF ERROR STARTS IN LAST DATA BYTE
CORR30 LD A, (nn)
XOR B ;CORRECT LAST DATA BYTE
LD (nn) ,A
JP EXIT ; DONE
RECOMPUTE FWD BYTE DISPLACEMENT
CORR40 SCF
CCF
PUSH DE
EX DE,HL
LD HL, (RLBMO) .
SBC HL,DE
POP DE
TEST FWD DISPLACEMENT CASES
LD A,H
OR A
JP NZ,CORR45 ;BR IF HI BYTE OF FWD DISP NONZERO
LD A,L
CP 4
JP NC,CORR45 ;BR IF FWD DISP EQ OR GTH THAN 4
CP 3
JP Z,CORR60 ;BR IF FWD DISP EQ 3
CP 2
JP Z,CORR55 ;BR IF FWD DISP EQ 2
JP CORR70
;        GET HERE IF ERROR IN DATA BYTES
CORR45   PUSH    DE
         LD      DE,(BUFFADR)    ;LOAD BUFFER ADDRESS
CORR50   ADD     HL,DE           ;ADD DATA BUFFER ADDR
                                 ; TO DISPLACEMENT
         POP     DE
         LD      A,(HL)
         XOR     B               ;CORRECT 1ST DATA BYTE IN ERROR
         LD      (HL),A
         INC     HL
         LD      A,(HL)
         XOR     C               ;CORRECT 2ND DATA BYTE IN ERROR
         LD      (HL),A
         INC     HL
         LD      A,(HL)
         XOR     D               ;CORRECT 3RD DATA BYTE IN ERROR
         LD      (HL),A
         JP      EXIT            ;DONE
;        ERROR STARTS IN NEXT TO LAST OVHD BYTE
CORR55   LD      A,(nn)
         XOR     B               ;CORRECT NEXT TO LAST OVHD BYTE
         LD      (nn),A
         LD      B,C
         LD      C,D
         LD      D,0
GET HERE IF ERROR IN OVERHEAD BYTES
CORR70 PUSH DE
LD DE,nn ;OVERHEAD BYTES BUFFER ADDRESS
JP CORR50 ;JOIN COMMON PATH
WORK STORAGE
5.3.4 DIAGNOSTICS AND TESTING
The diagnostic routines for the small-systems magnetic-disk code should be devel-
oped using the techniques of Chapter 6 TESTING OF ERROR-CONTROL SYSTEMS.
One of the diagnostic approaches described in Chapter 6 requires a test record
that causes check bytes of zero to be generated. For the code described in this section
such a record can be constructed as follows. Set the first four bytes to hex 'OC 06 03
C3'. Set the last four bytes to hex 'F3 F9 FC 3C'. Clear the remaining bytes to zero.
For design debug, write the test record defined above. Debug the write path until
the write check bytes written for this record are zero. Next, debug the read path until
this record can be read without error. Finally, run diagnostics as defined in Chapter 6.
Protection against sync framing errors is built into the circuits of Figures 5.3.2.1 and
5.3.2.2. First, the '1' state of each shift register stage is the low-voltage state.
Therefore, the clear function sets the shift register to all ones. Secondly, degating the
shift register input during ECC time forces '1's into the high order stage. This is
equivalent to inverting certain groups of bits of the check bytes. Today's data integrity
requirements dictate greater protection against sync framing errors than provided by the
method discussed here. See Section 4.8.2 for a detailed discussion of sync framing
errors.
The following pages contain simulations of the hardware and software algorithms
for several correctable errors. Each step of the algorithm, hardware, and software, is
included in the simulation.
The test record for each simulation is the test record defined in Section 5.3.4.
Simulation run 1 is a dummy run that illustrates the first 40 shifts for each of the
remaining simulation runs. Runs 2 through 4 simulate the bit-serial software algorithm.
Runs 2 and 3 simulate error bursts in the data field, while run 4 simulates a single bit
error in a check byte. Runs 5 through 7 simulate the byte-serial software algorithm
and are similar to runs 2 through 4.
- 325 -
READ SIMULATION RUN # 1
- 326 -
READ SIMULATION RUN # 2
- 327 -
SIMULATION RUN NO. 2 CONTINUED
[Simulation listing: shift register contents for shifts R-53 through R-33, one 32-bit pattern per shift, with the serial input bit at the left and a byte count in the right-hand column.]
FINISHED READING DATA BYTES. NOW READ CHECK BYTES.
INPUT TO SHIFT REGISTER NOW DEGATED. PIN 9 OUTPUT
IS GATED TO DESERIALIZER TO BE STORED AS SYNDROME.
[Simulation listing: shift register contents for shifts R-32 through R-1, with the PIN 9 output bit recorded at the right of each line.]
HDWR PART NOW COMPLETE - SYNDROME HAS BEEN STORED.
- 328 -
SIMULATION RUN NO. 2 CONTINUED
SIMULATION OF CORRECTION PROCEDURE
BEGIN SHIFTING SYNDROME
THIS PART SIMULATES INTERNAL XOR FORM OF SHIFT REG
(SHIFTING RIGHT WITH SOFTWARE)
o n
X X
R-25 00000011 01001100 00100111 10001001 -3
R-26 10001011 10100011 00010001 11100110 -3
R-2i 01000101 11010001 10001000 11110011 -3
R-28 10101000 11101101 11000110 01011011 -3
R-29 11011110 01110011 11100001 00001111 -3
R-30 11100101 00111100 11110010 10100101 -3
R-:U 11111000 10011011 01111011 01110000 -3
R-32 01111100 01001101 10111101 10111000 -3
R-33: 00111110 00100110 11011110 11011100 -4
R-34 00011111 00010011 01101111 01101110 -4
R-35 00001111 10001001 10110111 10110111 -4
R-36 10001101 11000001 11011001 11111001 -4
R-37 11001100 11100101 11101110 11011110 -4
R-:$8 01100110 01110010 11110111 01101111 -4
R-39 10111001 00111100 01111001 10010101 -4
R-40 11010110 10011011 00111110 11101000 -4
R-41 01101011 01001101 10011111 01110100 -5
R-42 00110101 10100110 11001111 10111010 -5
R-43 00011010 11010011 01100111 11011101 -5
R-44 10000111 01101100 10110001 11001100 -5
R-45 01000011 10110110 01011000 11100110 -5
R-46 00100001 11011011 00101100 01110011 -5
R-47 10011010 11101000 10010100 00011011 -5
R-48 11000111 01110001 01001000 00101111 -5
R-4:9 11101001 10111101 10100110 00110101 -6
R-50 11111110 11011011 11010001 00111000 -6
R-51 01111111 01101101 11101000 10011100 -6
R-S2 00111111 10110110 11110100 01001110 -6
R-53 00011111 11011011 01111010 00100111 -6
R-54 10000101 11101000 10111111 00110001 -6
R-55 11001000 11110001 01011101 10111010 -6
R-56 01100100 01111000 10101110 11011101 -6
R-57 10111000 00111001 01010101 01001100 -7
R-S8 01011100 00011100 10101010 10100110 -7
R-59 00101110 00001110 01010101 01010011 -7
R-60 10011101 00000010 00101000 10001011 -7
R-61 11000100 10000100 00010110 01100111 -7
R-62 11101000 01000111 00001001 00010001 -7
R-63 11111110 00100110 10000110 10101010 -7
R-64 01111111 00010011 01000011 01010101 -7
R-6S 10110101 10001100 10100011 10001000 -8
R-66 01011010 11000110 01010001 11000100 -8
R-67 00101101 01100011 00101000 11100010 -8
R-6S 00010110 10110001 10010100 01110001 -8
R-69 10000001 01011101 11001000 00011010 -8
R-7e) 01000000 10101110 11100100 00001101 -8
R-71 10101010 01010010 01110000 00100100 -8
R-72 01010101 00101001 00111000 00010010 -8
R-73 00101010 10010100 10011100 00001001 -9
R-74 10011111 01001111 01001100 00100110 -9
R-75 01001111 10100111 10100110 00010011 -9
R-76 10101101 11010110 11010001 00101011 -9
R-77 11011100 11101110 01101010 10110111 -9
- 329-
SIMULATION RUN NO. 2 CONTINUED
R-78 11100100 01110010 00110111 01111001 -9
R-79 11111000 00111100 00011001 10011110 -9
R-80 01111100 00011110 00001100 11001111 -9
R-81 10110100 00001010 00000100 01000101 -10
R-82 11010000 00000000 00000000 00000000 -10
R-83 01101000 00000000 00000000 00000000 -10
R-84 00110100 00000000 00000000 00000000 -10
R-85 00011010 00000000 00000000 00000000 -10
R-86 00001101 00000000 00000000 00000000 -10
R-87 00000110 10000000 00000000 00000000 -10
R-88 00000011 01000000 00000000 00000000 -10
R-89 00000001 10100000 00000000 00000000 -11
R-90 00000000 11010000 00000000 00000000 -11
CORRECTABLE PATTERN FOUND, -90 IS BIT DISPLACEMENT.
NOW BEGIN BYTE ALIGNMENT.
R-91 00000000 01101000 00000000 00000000 -11
R-92 00000000 00110100 00000000 00000000 -11
R-93 00000000 00011010 00000000 00000000 -11
R-94 00000000 00001101 00000000 00000000 -11
R-95 00000000 00000110 10000000 00000000 -11
R-96 00000000 00000011 01000000 00000000 -11
BYTE ALIGNMENT COMPLETE - SIMULATION COMPLETE
BYTE DISPLACEMENT IS 11.
COUNTING FROM END OF RECORD. LAST BYTE IS ZERO.
- 330-
READ SIMULATION RUN # 3
- 331 -
SIMULATION RUN NO. 3 CONTINUED
[Simulation listing: shift register contents for shifts R-53 through R-33, one 32-bit pattern per shift, with the serial input bit at the left and a byte count in the right-hand column.]
FINISHED READING DATA BYTES. NOW READ CHECK BYTES.
INPUT TO SHIFT REGISTER NOW DEGATED. PIN 9 OUTPUT
IS GATED TO DESERIALIZER TO BE STORED AS SYNDROME.
[Simulation listing: shift register contents for shifts R-32 through R-1, with the PIN 9 output bit recorded at the right of each line.]
HDWR PART NOW COMPLETE - SYNDROME HAS BEEN STORED.
- 332 -
SIMULATION RUN NO. 3 CONTINUED
SIMULATION OF CORRECTION PROCEDURE
BEGIN SHIFTING SYNDROME
THIS PART SIMULATES INTERNAL XOR FORM OF SHIFT REG
(SHIFTING RIGHT WITH SOFTWARE)
8 ~
X X
[Simulation listing: correction-procedure shift register contents for shifts R-25 through R-77, one 32-bit pattern per shift, with the byte-displacement count in the right-hand column.]
- 333 -
SIMULATION RUN NO. 3 CONTINUED
- 334-
READ SIMULATION RUN # 4
- 337 -
READ SIMULATION RUN # 5
- 338 -
SIMULATION RUN NO. 5 CONTINUED
- 339 -
SIMULATION RUN NO. 5 CONTINUED
- 340 -
READ SIMULATION RUN # 6
- 341 -
SIMULATION RUN NO. 6 CONTINUED
- 342 -
SIMULATION RUN NO. 6 CONTINUED
- 343 -
READ SIMULATION RUN # 7
(SEE SIMULATION RUN # 1 FOR FIRST 40 SHIFTS)
(R IS RECORD LEN IN BITS INCLUDING CHK AND OVERHD)
0                                 31
X X
[Simulation listing: shift register contents for shifts R-96 through R-53, one 32-bit pattern per shift, with the remaining-byte count in the right-hand column.]
- 344 -
SIMULATION RUN NO. 7 CONTINUED
BYTE DISPLACEMENT IS ~.
COUNTING FROM END OF RECORD. LAST BYTE IS ZERO.
SIMULATION COMPLETE.
- 346 -
5.3.7 RECIPROCAL POLYNOMIAL TABLES
The byte-serial software algorithm requires four 256-byte tables.  These tables are
listed on the following pages.  Since data entry is error prone, the tables should be
regenerated by computer.
To regenerate the tables, implement a right-shifting internal-XOR serial shift
register in software, using the reciprocal polynomial.  For each address of the tables
(0-255), place the address in the eight most significant (right-most) bits of the shift
register and clear the remaining bits.  Shift eight times, then store the four bytes of
shift register contents in tables T1 through T4 at the location indexed by the current
address.  The coefficient of x^31 is stored as the high-order bit of T1; the coefficient of
x^0 is stored as the low-order bit of T4.  Check the resulting tables against those on
the following pages.
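A minimal C sketch of this regeneration procedure follows.  The 32-bit feedback mask used here is an inference from the correction-procedure simulations in Section 5.3.4 (the printed shift sequences are consistent with hex 8A 05 02 22), as is the bit orientation (address loaded into the right-most byte, T1 taken from the high-order byte); both assumptions must be verified against the code's defining polynomial and the printed tables.

    /* Sketch: regenerate tables T1..T4 for the byte-serial algorithm.
     * RECIP_POLY is the internal-XOR feedback mask for the reciprocal
     * polynomial; the value below is inferred from the simulation
     * listings in Section 5.3.4 - verify it before use. */
    #define RECIP_POLY 0x8A050222UL   /* assumed/inferred feedback mask */

    static unsigned char T1[256], T2[256], T3[256], T4[256];

    void build_tables(void)
    {
        for (int addr = 0; addr < 256; addr++) {
            unsigned long reg = (unsigned long)addr;  /* address in right-most bits */

            for (int bit = 0; bit < 8; bit++) {       /* shift eight times          */
                unsigned long out = reg & 1UL;        /* bit shifted out            */
                reg >>= 1;
                if (out)
                    reg ^= RECIP_POLY;                /* internal-XOR feedback      */
            }
            T1[addr] = (unsigned char)((reg >> 24) & 0xFF);  /* high-order byte     */
            T2[addr] = (unsigned char)((reg >> 16) & 0xFF);
            T3[addr] = (unsigned char)((reg >>  8) & 0xFF);
            T4[addr] = (unsigned char)( reg        & 0xFF);  /* low bit = coeff of x^0 */
        }
    }

Compare the generated tables byte for byte against the printed tables before relying on them.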
- 347-
RECIPROCAL POLYNOMIAL TABLE T1
0 1 2 3 4 5 6 7 8 9 A B C 0 E F
00 00 14 28 3C 50 44 78 6C AO B4 88 9C FO E4 08 cC
10 54 40 7C 68 .04 10 2C 38 F4 EO DC C8 A4 BO 8C 98
20 A8 BC 80 94 F8 EC DO C4 08 1C 20 34 58 4C 70 64
30 FC E8 D4 CO AC B8 84 90 5C 48 74 60 ·OC 18 24 30
40 45 51 6D 79 15 ·01 3D 29 E5 Fi CD D9 BS Ai 9D 89
50 11 05 39 2D 41 55 69 7D B1 A5 99 8D Ei FS c9 DD
60 ED F9 C5 D1 BD A9 95 81 4D 59 65 71 1D 09 35 21
70 B9 AD 91 85 E9 FD C1 D5 19 00 31 25 49 5D 61 75
80 8A 9E A2 B6 DA CE F2 E6 2A 3E 02 16 7A 6E 52 46
90 DE CA F6 E2 8E 9A A6 B2 7E 6A 56 42 2E 3A 06 12
AO 22 36 OA 1E 72 66 5A 4E 82 96 AA BE D2 C6 FA EE
BO 76 62 5E 4A 26 32 OE 1A D6 C2 FE EA 86 92 AE BA
CO CF DB E7 F3 9F 8B B7 A3 6F 7B 47 53 3F 2B 17 03
DO 9B 8F B3 A7 CB DF E3 F7 3B 2F 13 07 6B 7F 43 57
EO 67 73 4F 5B 37- 23 1F OB C7 D3 EF FB 97 83 BF AB
FO 33 27 1B OF 63 77 4B 5F 93 87 BB AF C3 D7 EB FF
00 00 82 04 86 09 8B OD 8F 12 90 16 94 IB 99 IF 9D
10 21 A3 25 A7 28 AA 2C AE 33 Bl 37 B5 3A B8 3E BC
20 42 CO 46 C4 4B C9 4F CD 50 D2 54 D6 59 DB 5D DF
30 63 El 67 E5 6A E8 6E EC 71 F3 75 F7 78 FA 7C FE
40 81 03 85 07 88 OA 8C OE 93 11 97 15 9A 18 9E lC
50 AO 22 A4 26 A9 2B AD 2F B2 30 B6 34 BB 39 BF 3D
60 C3 41 C7 45 CA 48 CE 4C Dl 53 D5 57 D8 5A DC 5E
70 E2 60 E6 64 EB 69 EF 6D FO 72 F4 76 F9 7B FD 7F
80 02 80 06 84 OB 89 OF 8D 10 92 14 96 19 9B ID 9F
90 23 Al 27 A5 2A A8 2E AC 31 B3 35 B7 38 BA 3C BE
AO 40 C2 44 C6 49 CB 4D CF 52 DO 56 D4 5B D9 5F DD
BO 61 E3 65 E7 68 EA 6C EE 73 Fl 77 F5 7A F8 7E FC
CO 83 01 87 05 8A 08 8E OC 91 13 95 17 98 lA 9C IE
DO A2 20 A6 24 AB 29 AF 2D BO 32 B4 36 B9 3B BD 3F
EO Cl 43 C5 47 C8 4A CC 4E D3 51 D7 55 DA 58 DE 5C
FO EO 62 E4 66 E9 6B ED 6F F2 70 F6 74 FB 79 FF 7D
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 51 A2 F3 44 15 E6 B7 88 D9 2A 7B CC 9D 6E 3F
10 55 04 F7 A6 11 40 B3 E2 DD 8C 7F 2E 99 C8 3B 6A
20 AA FB 08 59 EE BF 4C ID 22 73 80 Dl 66 37 C4 95
30 FF AE 5D OC BB EA 19 48 77 26 D5 84 33 62 91 CO
40 11 40 B3 E2 55 04 F7 A6 99 C8 3B 6A DD 8C 7F 2E
50 44 15 E6 B7 00 51 A2 F3 CC 9D 6E 3F 88 D9 2A 7B
60 BB EA 19 48 FF AE 5D OC 33 62 91 CO 77 26 D5 84
70 EE BF 4C ID AA FB 08 59 66 37 C4 95 22 73 80 Dl
80 22 73 80 Dl 66 37 C4 95 AA FB 08 59 EE BF 4C ID
90 77 26 D5 84 33 62 91 CO FF AE 5D DC BB EA 19 48
AD 88 D9 2A 7B CC 9D 6E 3F 00 51 A2 F3 44 15 E6 B7
BO DD 8C 7F 2E 99 C8 3B 6A 55 04 F7 A6 11 40 B3 E2
CO 33 62 91 CO 77 26 D5 84 BB EA 19 48 FF AE 5D DC·
DO 66 37 C4 95 22 73 80 Dl EE BF 4C ID AA FB 08 59
EO 99 C8 3B 6A DD 8C 7F 2E 11 40 B3 E2 55 04 F7 A6
FO CC 9D 6E 3F 88 D9 2A 7B 44 15 E6 B7 00 51 A2 F3
- 349-
5.4 APPLICATION TO MASS STORAGE DEVICES
- The data format includes a resync field after every 32 data bytes. This limits
the length of an error burst resulting from synchronization loss.
The media data format is shown below. Data is transferred to and from the media
one row at a time. Checking is performed in the column dimension.
[Figure: media data format - 32 interleaves (0 1 ... 30 31) separated by resync fields; each interleave contains 65 data symbols followed by 6 check symbols.]
- 350-
The following pages show, for the implementation:
- Algorithms for finding the roots of the error locator polynomial in the
single-, double-, and triple-error cases.
- Algorithms for determining error values for the single-, double-, and triple-
error cases.
- ROM tables for taking logarithms and antilogarithms, for finding the roots of
the equation y^2 + y + c = 0, and for taking the cube root of a finite field element.
- 351 -
ENCODE POLYNOMIAL
(x + 1)·(x + α)·(x + α^2)·(x + α^3)·(x + α^4)·(x + α^5)
  = x^6 + α^94·x^5 + α^10·x^4 + α^136·x^3 + α^15·x^2 + α^104·x + α^15
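As a cross-check, the product form can be expanded numerically.  The sketch below assumes GF(256) helpers gf_mul() and gf_alog() consistent with the log/antilog tables printed later in this section; these helpers are not part of the original text.

    /* Sketch: expand (x + a^0)(x + a^1)...(x + a^5) to recover the encode
     * polynomial coefficients for comparison with the expression above. */
    unsigned char gf_mul(unsigned char a, unsigned char b);   /* assumed helper      */
    unsigned char gf_alog(int n);                              /* assumed: returns a^n */

    void encode_poly(unsigned char g[7])        /* g[k] = coefficient of x^k */
    {
        for (int k = 0; k < 7; k++)
            g[k] = 0;
        g[0] = 1;                               /* start with g(x) = 1 */

        for (int i = 0; i <= 5; i++) {          /* multiply by (x + a^i) */
            unsigned char root = gf_alog(i);
            for (int k = 6; k >= 1; k--)
                g[k] = g[k - 1] ^ gf_mul(root, g[k]);
            g[0] = gf_mul(root, g[0]);
        }
        /* g[6] should come out as 1; the remaining coefficients can be converted
           to powers of a with the log table and compared with the line above. */
    }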
WRITE ENCODER
[Figure: write encoder shift register; inputs are WRITE GATE and WRITE DATA.]
SYNDROME CIRCUITS
There are six circuits (i =0 to 5) and each circuit is interleaved to depth 32.
- 352 -
FINITE FIELD PROCESSOR
[Figure: finite field processor block diagram, including a LOG ROM.]
- 353-
DETERMINE NUMBER OF ERRORS AND GENERATE ERROR LOCATOR POLYNOMIAL

[Flow chart.  The recoverable expressions are, for the two-error branch:

    D1 = S1·S1 + S0·S2
    σ1 = (S1·S2 + S0·S3)/D1
    σ2 = (S1·S3 + S2·S2)/D1
    D2 = S4 + σ1·S3 + σ2·S2

and for the three-error branch:

    D  = S0·S3 + S1·S2
    σ1 = (S1·S3 + S0·S4)/D
    σ2 = (S2·(S3 + σ1·S2) + S0·(S5 + σ1·S4))/D
    σ3 = (S3 + σ1·S2 + σ2·S1)/S0

If neither branch yields a consistent locator, the error is flagged UNCORRECTABLE.]
- 354-
COMPUTE ERROR LOCATIONS AND ERROR VALUES

[Flow chart.  The recoverable expressions are:

One error:
    X1 = α^L1 = σ1
    L1 = LOGα(X1)
    E1 = S0

Two errors:
    C  = σ2/(σ1)^2
    Y1 = TBLA[C],  Y2 = Y1 + α^0
    X1 = α^L1 = σ1·Y1,   X2 = α^L2 = σ1·Y2
    L1 = LOGα(X1),  L2 = LOGα(X2)
    E1 = (X2·S0 + S1)/(X1 + X2)
    E2 = E1 + S0

Three errors:
    K  = (σ1)^2 + σ2
    C  = K^3/(σ1·σ2 + σ3)^2
    T1 = TBLB[...] (cube-root table),  T2 = T1·α^85,  T3 = T2·α^85
    X1 = α^L1 = σ1 + T1 + K/T1
    X2 = α^L2 = σ1 + T2 + K/T2
    X3 = α^L3 = σ1 + T3 + K/T3
    L1 = LOGα(X1),  L2 = LOGα(X2),  L3 = LOGα(X3)
    E1 = (S2 + S1·(X2 + X3) + S0·X2·X3)/((X1 + X2)·(X1 + X3))
    E2 = (S0·X3 + S1 + E1·(X1 + X3))/(X2 + X3)
    E3 = S0 + E1 + E2 ]
- 355 -
SOLVING THE THREE-ERROR LOCATOR POLYNOMIAL IN GF(2^8)

The three-error locator polynomial is

    x^3 + σ1·x^2 + σ2·x + σ3 = 0

First, substitute w = x + σ1 to obtain

    w^3 + ((σ1)^2 + σ2)·w + (σ1·σ2 + σ3) = 0
Second, apply the substitution

    w = t + ((σ1)^2 + σ2)/t

to obtain

    t^3 + ((σ1)^2 + σ2)^3/t^3 + (σ1·σ2 + σ3) = 0

and thus

    t^6 + (σ1·σ2 + σ3)·t^3 + ((σ1)^2 + σ2)^3 = 0

Finally, substitute v = t^3/(σ1·σ2 + σ3) to obtain

    v^2 + v + ((σ1)^2 + σ2)^3/(σ1·σ2 + σ3)^2 = 0
Now fetch a root V1 from the table developed for the two-error case:

    V1 = TBLA[ ((σ1)^2 + σ2)^3 / (σ1·σ2 + σ3)^2 ]
Next, take T1 as a cube root of V1·(σ1·σ2 + σ3) (this is what the cube-root table
supplies); the other two cube roots are

    T2 = T1·α^85
    T3 = T2·α^85

Then apply the reverse substitution

    w = t + ((σ1)^2 + σ2)/t

together with x = w + σ1 to obtain

    X1 = α^L1 = T1 + ((σ1)^2 + σ2)/T1 + σ1
    X2 = α^L2 = T2 + ((σ1)^2 + σ2)/T2 + σ1
    X3 = α^L3 = T3 + ((σ1)^2 + σ2)/T3 + σ1
The error locations L1, L2, and L3 are the logs base α of X1, X2, and X3, respectively.
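The whole procedure condenses into a short routine.  The sketch below is only an illustration of the steps above: gf_mul(), gf_div(), gf_alog(), quad_tbl() (standing in for TBLA), and cube_root() (standing in for TBLB) are assumed helpers replacing the ROM tables of this section, and sig1..sig3 denote the locator coefficients σ1, σ2, σ3.

    /* Sketch of the three-error root-finding procedure described above.
     * All helpers are assumptions standing in for the section's ROM tables. */
    unsigned char gf_mul(unsigned char a, unsigned char b);   /* assumed */
    unsigned char gf_div(unsigned char a, unsigned char b);   /* assumed */
    unsigned char gf_alog(int n);                              /* assumed: a^n */
    unsigned char quad_tbl(unsigned char c);   /* root of y^2 + y + c = 0; 0 => none */
    unsigned char cube_root(unsigned char v);  /* cube-root table               */

    /* returns 0 on success, -1 if the locator has no roots (uncorrectable) */
    int solve_cubic(unsigned char sig1, unsigned char sig2, unsigned char sig3,
                    unsigned char x[3])
    {
        unsigned char k = gf_mul(sig1, sig1) ^ sig2;       /* (s1)^2 + s2   */
        unsigned char d = gf_mul(sig1, sig2) ^ sig3;       /* s1*s2 + s3    */
        if (d == 0)
            return -1;                        /* degenerate case handled elsewhere */

        unsigned char c  = gf_div(gf_mul(gf_mul(k, k), k),
                                  gf_mul(d, d));           /* k^3 / d^2     */
        unsigned char v1 = quad_tbl(c);
        if (v1 == 0)
            return -1;                                     /* uncorrectable */

        unsigned char t1  = cube_root(gf_mul(v1, d));      /* t^3 = v1*d    */
        unsigned char a85 = gf_alog(85);
        unsigned char t[3] = { t1, gf_mul(t1, a85), gf_mul(gf_mul(t1, a85), a85) };

        for (int i = 0; i < 3; i++)                        /* x = t + k/t + s1 */
            x[i] = t[i] ^ gf_div(k, t[i]) ^ sig1;
        return 0;
    }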
- 357-
ANTILOG TABLE
(INPUT IS n, OUTPUT IS α^n)
0 1 2 3 4 5 6 7 8 9 A B C 0 E F
00 01 02 04 08 10 20 40 80 71 E2 B5 IB 36 6C 08 Cl
10 F3 97 5F BE 00 lA 34 68 00 01 03 07 OF CF EF AF
20 2F 5E BC 09 12 24 48 90 51 A2 35 6A 04 09 C3 F7
30 9F 4F 9E 40 9A 45 8A 65 CA E5 BB 07 OE lC 38 70
40 EO Bl 13 26 4C 98 41 82 75 EA A5 3B 76 EC A9 23
50 46 8C 69 02 05 OB C7 FF 8F 6F OE CO EB A7 3F 7E
60 FC 89 63 C6 FO 8B 67 CE EO AB 27 4E 9C 49 92 55
70 AA 25 4A 94 59 B2 15 2A 54 A8 21 42 84 79 F2 95
80 5B B6 10 3A 74 E8 Al 33 66 CC E9 A3 37 6E OC C9
90 E3 B7 IF 3E 7C F8 81 73 E6 BO OB 16 2C 58 BO 11
AO 22 44 88 61 C2 F5 9B 47 8E 60 OA C5 FB 87 7F FE
BO 80 6B 06 00 CB E7 BF OF IE 3C 78 FO 91 53 A6 30
CO 7A F4 99 43 86 70 FA 85 7B F6 90 4B 96 50 BA 05
00 OA 14 28 50 AO 31 62 C4 F9 83 77 EE AO 2B 56 AC
EO 29 52 A4 39 72 E4 B9 03 06 OC 18 30 60 CO Fl 93
FO 57 AE 20 5A B4 19 32 64 C8 El B3 17 2E 5C B8 01
LOG TABLE
(INPUT IS α^n, OUTPUT IS n)
0 1 2 3 4 5 6 7 8 9 A B C 0 E F
00 00 01 E7 02 CF E8 3B 03 23 00 9A E9 14 3C B7
10 04 9F 24 42 01 76 9B FB EA F5 15 OB 30 82 B8 92
20 05 7A AO 4F 25 71 43 6A 02 EO 77 00 9C F2 FC 20
30 EB 05 F6 87 16 2A OC 8C 3E E3 83 4B B9 BF 93 5E
40 06 46 7B C3 Al 35 50 A7 26 60 72 CB 44 33 6B 31
50 03 28 El BO 78 6F OE FO 90 74 F3 80 FO CO 21 12
60 EC A3 06 62 F7 37 88 66 17 52 2B Bl 00 A9 80 59
70 3F 08 E4 97 84 48 4C OA BA 70 CO C8 94 C5 5F AE
80 07 96 47 09 7C C7 C4 AO A2 61 36 65 51 BO A8 58
90 27 BC 6E EF 73 7F CC 11 45 C2 34 A6 6C CA 32 30
AO 04 86 29 8B E2 4A BE 50 79 4E 70 69 OF OC Fl IF
BO 9E 41 75 FA F4 OA 81 91 FE E6 CE 3A 22 99 13 B6
CO EO OF A4 2E 07 AB 63 56 F8 8F 38 B4 89 5B 67 10
00 18 19 53 lA 2C 54 B2 IB OE 20 AA 55 8E B3 5A lC
EO 40 F9 09 90 E5 39 98 B5 85 8A 49 5C 40 68 OB IE
FO BB EE 7E 10 Cl A5 C9 2F 95 08 C6 AC 60 64 AF 57
- 358-
QUADRATIC SOLUTION TABLE
FOR FINDING SOLUTION TO Y^2 + Y + C = 0
(INPUT IS C, OUTPUT IS Y1; Y1 = 0 => NO SOLUTION, ELSE Y2 = Y1 + α^0)
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 01 DB 8F 55 8D 57 03 D9 00 00 00 00 00 00 00 00
10 89 53 07 DD 05 DF 8B 51 00 00 00 00 00 00 00 00
20 00 00 00 00 00 00 00 00 C3 19 4D 97 4F 95 Cl IB
30 00 00 00 00 00 00 00 00 4B 91 C5 IF C7 ID 49 93
40 00 00 00 00 00 00 00 00 09 D3 87 5D 85 5F DB Dl
50 00 00 00 00 00 00 00 00 81 5B OF D5 OD D7 83 59
60 CB 11 45 9F 47 9D C9 13 00 00 00 00 00 00 00 00
70 43 99 CD 17 CF 15 41 9B 00 00 00 00 00 00 00 00
80 FF 25 71 AB 73 A9 FD 27 00 00 00 00 00 00 00 00
90 77 AD F9 23 FB 21 75 AF 00 00 00 00 00 00 00 00
AO 00 00 00 00 00 00 00 00 3D E7 B3 69 Bl 6B 3F E5
BO 00 00 00 00 00 00 00 00 B5 6F 3B El 39 E3 B7 6D
CO 00 00 00 00 00 00 00 00 F7 2D 79 A3 7B Al F5 2F
DO 00 00 00 00 00 00 00 00 7F A5 Fl 2B F3 29 7D A7
EO 35 EF BB 61 B9 63 37 ED 00 00 00 00 00 00 00 00
FO BD 67 33 E9 31 EB BF 65 00 00 00 00 00 00 00 00
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 DB 00 EC 00 98 00 00 02 00 00 00 00 00 OD lC
10 00 45 36 34 00 00 00 00 A9 00 80 00 00 00 00 00
20 00 00 00 00 00 00 00 00 41 00 00 00 9A 00 D5 00
30 00 82 69 D9 00 D8 10 00 00 00 00 Dl 00 00 4F 00
40 04 00 A2 Bl 00 00 00 00 00 00 48 00 00 97 00 00
50 00 00 3B 70 51 24 A5 46 00 00 8C 00 00 00 IB 40
60 00 00 00 00 00 00 00 BC 00 00 00 07 00 00 F7 00
70 IA 00 76 00 D4 DO 00 00 38 00 EO 00 00 00 00 BB
80 00 9E 00 00 00 00 00 00 8A 00 5F 00 D7 00 CA 00
90 6C 00 00 00 00 00 4C 00 68 00 00 00 12 00 00 F3
AD 00 00 00 00 00 00 00 AF 00 D3 00 09 00 00 00 00
BO 00 00 90 00 00 00 6A 00 00 00 00 00 00 4D 00 00
CO 23 20 00 00 00 E5 5E 00 00 00 00 DE 00 00 00 00
DO 71 00 00 00 00 OF 00 E2 00 Cl 00 00 00 00 EF 00
EO 00 D2 08 9F 00 BE 00 00 00 C3 00 00 00 00 EA B5
FO 00 00 35 00 00 65 26 00 00 75 13 00 2F 00 00 CF
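Such a table can be regenerated by stepping through the field, as sketched below.  The GF(256) field polynomial used here (x^8 + x^6 + x^5 + x^4 + 1, i.e., α^8 = hex 71) is inferred from the antilog table earlier in this section, and the choice of which root of each pair to store follows the printed entries, all of which have their low-order bit set; verify both assumptions against the printed table.

    /* Sketch: regenerate the quadratic-solution table. */
    #define FEEDBACK 0x71   /* inferred from the antilog table; an assumption */

    static unsigned char gf_mul(unsigned char a, unsigned char b)
    {
        unsigned char p = 0;
        for (int i = 0; i < 8; i++) {
            if (b & 1) p ^= a;
            b >>= 1;
            a = (unsigned char)((a << 1) ^ ((a & 0x80) ? FEEDBACK : 0));
        }
        return p;
    }

    void build_quad_table(unsigned char tbl[256])
    {
        for (int c = 0; c < 256; c++)
            tbl[c] = 0;                               /* 0 => no solution        */
        for (int y = 1; y < 256; y += 2) {            /* odd root of each pair   */
            unsigned char c = gf_mul((unsigned char)y, (unsigned char)y)
                              ^ (unsigned char)y;     /* c = y^2 + y             */
            tbl[c] = (unsigned char)y;                /* other root is y + 1     */
        }
    }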
- 359-
AN ALTERNATIVE FINITE FIELD PROCESSOR DESIGN
The finite field processor shown below could be used instead of the one shown
earlier in this section.  It uses subfield multiplication; see Section 2.7 for more informa-
tion.  The timing for finite field multiplication includes only one ROM delay.  This path
for the other processor included two ROM delays and a binary adder delay.  Inversion is
accomplished with a ROM table.
[Figure: GF(256) subfield multiplier using 4 ROMs; see Section 2.7.]
The following pages show, for this alternative finite field processor:
- A ROM table for the four multipliers comprising the GF(256) subfield multi-
plier.
- 360·
SUBFIELD MULTIPLICATION TABLE
(INPUT IS TWO 4-BIT NIBBLES, OUTPUT IS ONE 4-BIT NIBBLE)
0 1 2 3 4 5 6 7 8 9 A B C D E F
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 2 3 4 5 6 7 8 9 A B C 0 E F
2 0 2 4 6 8 A C E 9 B 0 F 1 3 5 7
3 0 3 6 5 C F A 9 1 2 7 4 0 E B 8
4 0 4 8 C 9 0 1 5 B F 3 7 2 6 A E
5 0 5 A F 0 8 7 2 3 6 9 C E B 4 1
6 0 6 C A 1 7 0 B 2 4 E 8 3 5 F 9
7 0 7 E 9 5 2 B C A 0 4 3 F 8 1 6
8 0 8 9 1 B 3 2 A F 7 6 E 4 C 0 5
9 0 9 B 2 F 6 4 0 7 E C 5 8 1 3 A
A 0 A 0 7 3 9 E 4 6 C B 1 5 F 8 2
B 0 B F 4 7 C 8 3 E 5 1 A 9 2 6 0
C 0 C 1 0 2 E 3 F 4 8 5 9 6 A 7 B
0 0 0 3 E 6 B 5 8 C 1 F 2 A 7 9 4
E 0 E 5 B A 4 F 1 0 3 8 6 7 9 2 C
F 0 F 7 8 E 1 9 6 5 A 2 0 B 4 C 3
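The nibble table can be regenerated in a few lines.  The rows above are consistent with GF(16) generated by x^4 + x^3 + 1; treat that polynomial as an inference to be verified against the printed table.

    /* Sketch: 4-bit subfield multiply, assuming GF(16) defined by x^4 + x^3 + 1. */
    unsigned char gf16_mul(unsigned char a, unsigned char b)
    {
        unsigned char p = 0;
        for (int i = 0; i < 4; i++) {
            if (b & 1) p ^= a;
            b >>= 1;
            a = (unsigned char)(((a << 1) & 0x0F) ^ ((a & 0x08) ? 0x09 : 0));
        }
        return p;
    }

For example, gf16_mul(0x2, 0x8) returns 0x9, matching row 2, column 8 of the table above.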
00 01 OC 08 06 OF 04 OE 03 00 OB OA 02 09 07 05
10 CC CO 6A 6C 58 50 08 05 FA F5 8E 86 3E 3D 76 71
20 66 07 60 OA 35 F3 36 FC E4 B5 EA BE A4 47 AE 43
30 44 56 53 40 F2 24 26 FO A8 C3 CF A2 3F 10 lC 3C
40 33 52 AF 2F 30 57 A5 20 DE 72 BO 9E 03 75 B6 97
50 BB 6F 41 32 69 BO 31 45 14 5C 92 84 59 15 8C 9B
60 22 DB E3 CA ED C6 20 06 Bl 54 12 60 13 6B BA 51
70 77 IF 49 OF 02 40 IE 70 B9 F7 C8 90 94 C4 F8 B2
80 DO AD El FE 5B 93 IB 8F DO A7 EF Fl 5E 9A lA 87
90 AA E7 5A 85 7C C5 B7 4F E9 AO 80 5F C9 7B 4B BC
AO 99 E8 3B CE 2C 46 01 89 38 C2 90 E6 DC 81 2E 42
BO 55 68 7F F9 E5 29 4E 96 F6 78 6E 50 9F 4A 2B EB
CO 11 CD A9 39 70 95 65 EC 7A 9C 63 E2 10 Cl A3 3A
DO 88 A6 74 4C 09 17 67 21 16 04 23 61 AC 80 48 73
EO FF 82 CB 62 28 B4 AB 91 Al 98 2A BF C7 64 FO SA
FO EE 8B 34 25 FB 19 B8 79 7E B3 18 F4 27 37 83 EO
- 361 -
ANTILOG TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
(INPUT IS n, OUTPUT IS α^n)
0 1 2 3 4 5 6 7 8 9 A B C 0 E F
00 01 10 12 32 16 72 5E BA IF E2 C5 91 8B 39 A6 CO
10 11 02 20 24 64 2C E4 A5 FO 27 54 lA B2 9F 6B DC
20 13 22 04 40 48 C8 41 58 OA 73 4E A8 20 F4 B7 CF
30 31 26 44 08 80 89 19 82 A9 3D E6 85 09 43 78 FE
40 17 62 4C 88 09 90 9B 2B 94 DB 63 5C 9A 3B 86 E9
50 75 2E C4 81 99 OB BO BF 4F B8 3F C6 Al BO 6F 9C
60 5B EA 45 18 92 BB OF FO F7 87 F9 67 lC 02 F3 C7
70 Bl AF 50 8A 29 B4 FF 07 70 7E 9E 7B CE 21 34 76
80 IE F2 07 A3 90 4B F8 77 DE EO E5 B5 EF 15 42 68
90 EC 25 74 3E 06 B3 8F 79 EE 05 50 5A FA 57 2A 84
AO C9 51 4A E8 65 3C F6 97 EB 55 OA AD AD 70 AE 4D
BO 98 IB A2 8D 59 CA 61 7C BE 5F AA OD DO 03 E3 D5
CO 83 B9 2F D4 93 AB ID C2 El F5 A7 DD 03 30 36 56
DO 3A 96 FB 47 38 B6 DF 23 14 52 7A DE 33 06 60 6C
EO AC 6D BC 7F 8E 69 FC 37 46 28 A4 ED 35 66 OC CO
FO Cl Dl C3 Fl E7 95 CB 71 6E 8C 49 D8 53 6A CC 01
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 11 CC 22 99 DD 77 33 44 AA 55 EE BB 88 66
10 01 10 02 20 08 80 04 40 63 36 IB Bl 6C C6 80 08
20 12 70 21 D7 13 91 31 19 E9 74 9E 47 15 2C 51 C2
30 CD 30 03 DC 7E EC CE E7 D4 OD DO 4D A5 39 93 5A
40 23 26 8E 3D 32 62 E8 03 24 FA A2 85 42 AF 2A 58
50 9A Al D9 FC lA A9 CF 90 27 B4 9B 60 4B 72 06 B9
60 DE B6 41 4A 14 A4 ED 6B 8F E5 FD IE DF El F8 5E
70 78 F7 05 29 92 50 7F 87 3E 97 OA 7B B7 AD 79 E3
80 34 53 37 CO 9F 3B 4E 69 43 35 73 OC F9 B3 E4 96
90 45 OB 64 C4 48 F5 01 A7 BO 54 4C 46 5F 84 7A ID
AO AB 5C B2 83 EA 17 OE CA 2B 38 BA C5 EO AC AE 71
BO 56 70 lC 95 75 8B 05 2E 59 Cl 07 65 E2 50 B8 57
CO EF FO C7 F2 52 OA 5B 6F 25 AD B5 F6 FE OF 7C 2F
DO BC Fl 60 BO C3 BF 94 82 FB 3C 28 49 IF CB DB D6
EO 89 CB 09 BE 16 8A 3A F4 A3 4F 61 A8 90 EB 98 8C
FO 67 F3 81 6E 2D C9 A6 68 86 6A 9C D2 E6 18 3F 76
- 362-
QUADRATIC SOLUTION TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
FOR FINDING SOLUTION TO Y^2 + Y + C = 0
(INPUT IS C, OUTPUT IS Y1; Y1 = 0 => NO SOLUTION, ELSE Y2 = Y1 + α^0)
0 1 2 3 4 5 6 7 8 9 A B C D E F
00 01 OB 11 1B 13 19 03 09 1D 17 OD 07 OF 05 1F 15
10 B5 BF A5 AF A7 AD B7 BD A9 A3 B9 B3 BB B1 AB A1
20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ' 00 00
60 3D 37 2D 27 2F 25 3F 35 21 2B 31 3B 33 39 23 29
70 89 83 99 93 9B 91 8B 81 95 9F 85 8F 87 8D 97 9D
80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
AO CF C5 DF D5 DD D7 CD C7 D3 D9 C3 C9 C1 CB D1 DB
BO 7B 71 6B 61 69 63 79 73 67 6D 77 7D 75 7F 65 6F
CO F3 F9 E3 E9 E1 EB F1 FB EF E5 FF F5 FD F7 ED E7
DO 47 4D 57 5D 55 5F 45 4F 5B 51 4B 41 49 43 59 53
EO 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
FO 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a 1 2 3 4 5 6 7 8 9 A B C D E F
00 00 OB 00 09 00 08 00 00 02 00 00 00 00 00 00 04
10 00 00 00 00 94 CF 00 00 22 20 E2 85 48 4C 00 00
20 5E 00 91 00 00 00 00 00 00 00 00 00 BA 00 1A 00
30 00 11 10 00 4E 00 00 3B 00 00 00 00 82 24 26 6B
40 00 00 00 00 00 00 00 00 8B 00 19 00 E4 00 A6 00
50 00 00 00 99 00 00 90 00 39 D9 00 13 27 41 12 00
60 63 00 00 00 00 00 E9 00 00 00 00 C5 00 5C 00 00
70 DA 00 00 00 00 00 00 F4 00 00 00 73 43 00 00 00
80 00 00 00 17 89 00 54 40 00 00 00 16 81 00 9A 44
90 A5 00 00 00 FD 00 00 00 00 B2 00 00 00 2D 00 00
AO 3D 00 00 00 86 00 00 00 00 00 78 00 00 00 E6 00
BO 00 00 00 00 58 00 2B 00 00 00 00 00 00 DC 00 9F
CO 00 75 00 00 00 00 00 C8 00 00 00 C4 00 72 00 00
DO 00 00 00 FE 62 00 00 00 00 64 00 00 00 00 DB 00
EO 00 00 32 00 00 B7 00 00 00 00 00 A9 31 00 00 00
FO 00 2E A8 00 CD 88 00 00 00 00 80 9B 00 1F 2C 00
- 363-
CHAPTER 6 - TESTING OF ERROR-CONTROL SYSTEMS
This chapter is concerned primarily with diagnostic capability for storage device
applications. However, the techniques described are adaptable to semiconductor memory,
communications, and other applications.
6.1 MICRODIAGNOSTICS
There are several approaches for implementing diagnostics for storage device er-
ror-correction circuits.  Two approaches are discussed here.  The first approach requires
the implementation of "read long" and "write long" commands in the controller.
The "read long" command is identical to the normal read command except that
check bytes are read as if they were data bytes. The "write long" command is identical
to the normal write command except that check bytes to be written are supplied, not
generated. They are supplied immediately behind the data bytes.
Use the "read long" command to read a known defect-free data record and its
check bytes. XOR into the record a simulated error condition. Write the modified data
record plus check bytes back to the storage device using the "write long" command. On
read back, using the normal read command, an ECC error should be detected and the
correction routines should generate the correct response for the error condition simu-
lated. Repeat the test for several simulated error conditions, correctable and uncorrec-
table.
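This procedure can be driven from host diagnostic software along the following lines.  The sketch is only an illustration: read_long(), write_long(), read_normal(), and make_error() are hypothetical wrappers for the controller commands and the error generator, and RECLEN/CHKLEN are placeholders for the actual field lengths.

    /* Sketch of the "read long"/"write long" diagnostic loop described above. */
    #include <string.h>

    #define RECLEN  512    /* assumption */
    #define CHKLEN  4      /* assumption */
    #define NTRIALS 100

    int  read_long(unsigned lba, unsigned char *buf);           /* assumed */
    int  write_long(unsigned lba, const unsigned char *buf);    /* assumed */
    int  read_normal(unsigned lba, unsigned char *buf);         /* assumed */
    void make_error(unsigned char pattern[8], unsigned *offset); /* assumed; must
                       pick a correctable burst that stays inside the record */

    int diag_correctable(unsigned lba)
    {
        unsigned char good[RECLEN + CHKLEN], bad[RECLEN + CHKLEN], rd[RECLEN];

        if (read_long(lba, good) != 0)            /* known defect-free record  */
            return -1;

        for (int t = 0; t < NTRIALS; t++) {
            unsigned char pattern[8];
            unsigned offset;

            memcpy(bad, good, sizeof bad);
            make_error(pattern, &offset);         /* simulated error condition */
            for (int i = 0; i < 8; i++)
                bad[offset + i] ^= pattern[i];    /* XOR burst into the record */

            if (write_long(lba, bad) != 0)        /* write data + check bytes  */
                return -1;
            if (read_normal(lba, rd) != 0)        /* controller should correct */
                return -1;
            if (memcmp(rd, good, RECLEN) != 0)    /* entire record must match  */
                return -1;
        }
        return 0;
    }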
It is often desirable to reserve one or more diagnostic records for the testing of
error-correction functions. It is important for any diagnostic routines testing these
functions to first verify that the diagnostic record is error free.
In some cases, hardware computes syndromes but is not involved in the correction
algorithm. The correction algorithm is totally contained in software. In this case, it is
easy to get a breakdown between hardware and software failures by testing the software
first. Supply syndromes to the software, for which proper responses have been record-
ed.
Using the second diagnostic approach, the hardware is designed so that, under
diagnostic control, data records can be written with the check bytes forced to zero. A
data record is selected that would normally cause all check bytes to be zero. Simulated
error conditions are XOR'd into this record. The record is then written to the storage
device under diagnostic control and check bytes are forced zero. On normal read back
of this record, an error should be detected and the proper responses generated.
- 364-
These techniques apply to error-control systems employing very complex codes as
well as those employing simple codes. They apply to the interleaved Reed-Solomon code
as well as the Fire code.
If the controller corrects data before it is transferred to the host, the host diag-
nostic software must check that the simulated error condition is corrected in the test
record. The entire test record must be checked to verify that the error is corrected
and that correct data is not altered. Alternatively, the controller could have a diagnos-
tic status or sense command that transfers error pattern(s) and displacement(s) to the
host for checking. However, this is not as protective as checking corrected data.
Read back the record just written using the normal read command. Verify that
the controller corrected the simulated error condition. Repeat, using many random
guaranteed-correctable error conditions.
Some nonrandom error conditions should be forced as well. Select a set of error
conditions that is known to test all paths of the error-correction implementation.
- 365-
FORCING DETECTABLE ERROR CONDITIONS
Repeat the test defined under FORCING CORRECTABLE ERROR CONDITIONS,
except use simulated error conditions that exceed guaranteed correction capability but
not guaranteed detection capability. An uncorrectable error should be detected for each
simulated error condition.
For implementations where the data is actually corrected by the controller, it may
be desirable to include an error-logging capability within the controller. A minimum er-
ror-logging capability would count the errors recovered by reread and the errors recov-
ered by error correction. Logging requires the controller to have a method of signaling
the host when the counters overflow and a command for offloading counts to the host.
A more sophisticated error log would also store information useful for:
- Reassigning areas of media for repeated errors.
- 366-
6.5 SELF-CHECKING
HARDWARE SELF-CHECKING
Hardware self-checking can limit the amount of undetected erroneous data trans-
ferred when error-correction circuits fail.
Self-checking should be added to the design if the probability of error-correction
circuit failure contributes significantly to the probability of transferring undetected
erroneous data. One self-checking method duplicates the error-correction circuits and,
on read, verifies that the error latches for both circuits agree. No circuits from the
two sets of error-correction hardware share the same IC package. This concept can be
extended by having separate sources and/or paths for clocks, power, and ground.
Another self-checking method is called parity predict. It is used for the
self-checking of shift registers that are part of an error-correction implementation. On
each clock, new parity for each shift register is predicted. The actual parity of each
shift register is continuously monitored and at each clock, is compared to the predicted
parity. If a difference is found, a hardware check flag is set.
The diagrams below define when parity is predicted to change for four shift-regis-
ter configurations.
DIVIDE BY g(x). ODD NUMBER OF FEEDBACKS
[Figure: shift register dividing by g(x); serial DATA input.]
The parity of the shift register will flip each time the data bit is '1'.
DIVIDE BY g(x). EVEN NUMBER OF FEEDBACKS
[Figure: shift register dividing by g(x); serial DATA input.]
The parity of the shift register will flip if a '1' is shifted out of the shift regis-
ter, or (exclusive) if the data bit is '1'.
- 367-
MULTIPLY BY x^m AND DIVIDE BY g(x). ODD # OF FEEDBACKS
[Figure: shift register premultiplying by x^m and dividing by g(x); serial DATA input.]
The parity of the shift register will flip if the data bit is '1'.
MULTIPLY BY x^m AND DIVIDE BY g(x). EVEN # OF FEEDBACKS
The parity of the shift register will flip if a '1' is shifted out of the shift regis-
ter.
An m-bit shift register circuit using parity predict for self-checking is shown on
the following page. An odd number of feedbacks and premultiplication by xm is as-
sumed. It is also assumed that the feedbacks are disabled during write check-bit time
but not during read check-bit time. While writing data bits, reading data bits, and
reading check bits, parity of the shift register is predicted to change for each data bit
that is '1'. While writing check bits, parity is predicted to change for each '1' that is
shifted out of the shift register.
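A behavioral sketch of this checking scheme follows.  GEN_POLY is only a placeholder feedback mask (for the rule above it must contain an odd number of taps, including the stage-0 tap), and the caller is assumed to initialize the predicted parity to the parity of the initial register contents.

    /* Minimal sketch of parity predict for a 32-bit shift register with an
     * odd number of feedbacks and premultiplication by x^m. */
    #include <stdint.h>

    #define GEN_POLY 0x80000005u    /* placeholder mask with an odd number of taps */

    static int parity32(uint32_t v)             /* even/odd parity of the register */
    {
        int p = 0;
        while (v) { p ^= (int)(v & 1u); v >>= 1; }
        return p;
    }

    /* One clock.  Returns nonzero if the hardware check flag would be set. */
    int clock_sr(uint32_t *reg, int data_bit, int feedback_enabled,
                 int *predicted_parity)
    {
        uint32_t hi = (*reg >> 31) & 1u;                 /* bit shifted out        */
        uint32_t fb = feedback_enabled ? (hi ^ (uint32_t)data_bit) : 0u;

        *reg = (*reg << 1) ^ (fb ? GEN_POLY : 0u);       /* premultiply-by-x^m form */

        if (feedback_enabled)
            *predicted_parity ^= data_bit;   /* parity flips for each '1' data bit  */
        else
            *predicted_parity ^= (int)hi;    /* flips for each '1' shifted out      */

        return parity32(*reg) != *predicted_parity;      /* parity-predict check    */
    }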
[Figure: parity-predict circuit - WRITE DATA and the shift register stages feed a parity tree; J-K flip-flops and a MUX develop the predicted parity and the PARITY PREDICT ERROR flag.]
- 368-
Another technique that aids the detection of error-correction hardware failures is
to design the circuits so that nonzero check bytes result when the data is all zeros.
- 369-
SUPPLEMENTARY PROBLEMS
2. Write out the error-locator polynomial for errors at locations 0, 3, and 5 for a
Reed-Solomon code operating over GF(2^3) defined by x^3 + x + 1.
3. Show a Chien search circuit to solve the error-locator polynomial from problem 2.
4. Once error locations for a Reed-Solomon code are known, the syndrome equations
become a system of simultaneous linear equations with the error values as un-
knowns. The error-location vectors are coefficients of the unknown error values.
Solve this set of simultaneous linear equations for the two error case.
6. Given a small field generated by the rule β^3 = β + 1 and a large field generated by
α^2 = α + β, develop the rule for accomplishing the square of any element in the
large field by performing computation in the small field.
8. Find a polynomial for a code of length 7 that has single-, double-, and triple-bit
error detection.
9. For detection of random bit errors on a 32-bit memory word, would it be better to
place parity on each byte or use a degree four error-detection polynomial across
the entire 32-bit word?
10. A device using a 2048 bit record, including 16 check bits, has a random bit error
rate of 1E-4.  The 16 check bits are defined by the polynomial below.  Can the
device meet a 1E-15 specification for Pued (probability of undetected erroneous
data)?
    x^16 + x^12 + x^5 + 1
    = (x + 1)·(x^15 + x^14 + x^13 + x^12 + x^4 + x^3 + x^2 + x + 1)
11. Compute the probability for three or more error bursts in a block of 256 bytes
when the raw burst error rate is 1E-7 .
- 370-
12. Compute the block error probability for a channel using a detection only code
when the raw burst error rate is 1E-10.
13. Design a circuit to solve the equation y^2 + y + C = 0 for Y when C is given. The
field is generated by x + x + 1.
14.  x^24 + x^17 + x^14 + x^10 + x^3 + 1
a) For a correction span of four, determine the detection span using the ine-
qualities for a Fire code.
b) Determine the miscorrection probability for correction span four and record
length 259 bytes, (data plus check bytes.)
15. For an error-detection code using the shift register below for encoding and decod-
ing of 2048 byte records:
[Figure: serial DATA input to a 32-bit shift register.]
a) Determine the misdetection probability for all possible error bursts.
15 , 45
9 , 31
7 , 11
14 , 127
18. Listed below are residues for several integers modulo 5 and 9. Compute the Ai
and mi of the Chinese Remainder Method. Then use the Chinese Remainder
Method to determine the integers.
a) a MOD 5 = 4, a MOD 9 = 6, a = ?
b) a MOD 5 = 3, a MOD 9 = 5, a = ?
c) a MOD 5 = 0, a MOD 9 = 4, a = ?
What is the total number of unique integers that can be represented by residues
modulo 5 and 9?
- 371 -
19. Define a fast division algorithm for dividing by 255 on an 8-bit processor that
does not have a divide instruction. The dividend must be less than 65,536.
20. What is the total number of unique integers that can be represented by residues
modulo 6 and 8?
21. Which of the finite field functions listed below are linear?
Log Square Antilog
Cube Square Root Cube Root
Sixth Power            Eighth Root            Inverse
Modulo
22. Determine the period of the following polynomials:
a) x43 + 12
b) x + x + x + 1
23. Compute the reciprocal polynomial of x^3 + x + 1.
24. How many primitive polynomials are of degree eight?
25. Compute the residue of x^7 MOD (x^3 + x + 1).
26. For a small-systems magnetic disk, list several factors influencing data accuracy.
27. Is it possible for a polynomial with an odd number of terms to have a factor of
the form (x^c + 1)?  Why?
28. Describe the difference between error locations and error-location vectors. Which
are roots of an error-locator polynomial?
a) aO
b) a1
c) a2
-372 -
33. For a symbol-error-correcting code (symbol size eight bits) used with a 128 symbol
(byte) record, what must the symbol-correcting capability be to have a block error
rate less than 1E-8 for a raw symbol error rate of 1E-4?  The block error rate is
the ratio of block errors to blocks transferred.
34. Show a circuit to implement the equation below in GF(2^8).
    R2 = R1 + α^0
35. In a Reed-Solomon code implementation, it may be necessary to test an equality
similar to the one below for true:
Suggest an equivalent test that would not require finite field multiplication or
division.
36. Write log and antilog tables for the field generated by x^3 + x + 1.
37. Consider a Reed-Solomon code implementation where data is read from a storage
device into a buffer.  The data is corrected in the buffer and then transferred to
a host.  Define a way of loading and unloading the buffer such that the finite
field processor does not have to take logs of error-location vectors before making
corrections to the buffer.
38. Define an algorithm for computing the square root in a field of 15 elements using
log and antilog tables.
39. Remember that miscorrection probability is the ratio of valid syndromes to all
possible syndromes. Generate a miscorrection formula for a two-symbol-correcting
Reed-Solomon code using GF(2^8).  The symbol size is eight bits.  The record
length is 255 bytes, including check bytes.
40. List the first ten entries in an antilog table for a large field.  The small field is
generated by the rule β^3 = β + 1 and the large field is generated by the rule
α^2 = α + β.
- 373 -
APPENDIX A. PRIME FACTORS OF 2n-1
n          Factors of 2^n - 1
3 7
4 3 5
5 31
6 3 3 7
7 127
8 3 5 17
9 7 73
10 3 11 31
11 23 89
12 3 3 5 7 13
13 8191
14 3 43 127
15 7 31 151
16 3 5 17 257
17 131071
18 3 3 3 7 19 73
19 524287
20 3 5 5 11 31 41
21 7 7 127 337
22 3 23 89 683
23 47 178481
24 3 3 5 7 13 17 241
25 31 601 1801
26 3 2731 8191
27 7 73 262657
28 3 5 29 43 113 127
29 233 1103 2089
30 3 3 7 11 31 151 331
31 2147483647
32   3 5 17 257 65537
- 374 -
APPENDIX B
The following paper is included in slightly modified form.
Neal Glover
DATA SYSTEMS TECHNOLOGY. CORP.
A Subsidiary of Cirrus Logic. Inc.
INTERLOCKEN BUSINESS PARK
100 Technology Drive, Suite 300
Broomfield, Colorado 80021
Phone (303) 466-5228 FAX (303) 466-5482
T.K. Truong
COMMUNICATIONS SYSTEM RESEARCH
Jet Propulsion Laboratory
Pasadena, California 91103
This work was supported in part by NASA contract No. NAS 7-100, in part by the U.S.
Air Force Office of Scientific Research, under grant AFOSR-80-0151.
- 375-
ABSTRACT
Let GF(q) be a finite field, where q = p^m and p is prime.  Multiplications are per-
formed often using log and antilog tables of p^m - 1 non-zero field elements.  It is shown
in this paper that for q = p^2n and p^n + 1 a prime, the log and the antilog of a field
element can be found with two substantially smaller tables of p^n + 1 and p^n - 1 elements,
respectively.  The method is based on a use of the Chinese Remainder theorem.  This
technique results in a significant reduction in the memory requirements of the problem.
It is shown more generally that for:

    q - 1 = m1·m2· ... ·mk

where each m_i is a power of a distinct prime p_i for 1 ≤ i ≤ k, tables of m1 elements,
m2 elements, ..., and mk elements also can be used to find logs and antilogs over GF(q).
In the latter method, further reductions in the memory requirements are achieved, however,
at the expense of a greater number of operations.
- 376-
I. INTRODUCTION
In order to efficiently encode and decode BCH and RS codes over a finite field
GF(q) , each symbol of GF(q) is representable as a power of a selected primitive element
in GF(q), i.e., a = γ^i for a, γ ∈ GF(q) where γ is primitive.
To multiply two field elements a, β ∈ GF(q), where a = γ^i and β = γ^j, one only needs
to add i and j modulo (q-1) to obtain the resulting exponent k.  That is, a·β = γ^((i+j) mod (q-1)) = γ^k.
In the actual implementation of this multiplication process, a log table can be used to
find the exponents.  If the field elements are represented in the binary representation,
binary addressing is used to locate a logarithm in the table.  After the addition of the
exponents modulo (q-1), an antilog table is used to find the binary representation of γ^k.
The exponent k serves as the address of the antilog table.  If q is large for many
applications, such log and antilog tables may be prohibitively large.
In the next section it is shown that for a q of the form p^2n, where p^n + 1 is a prime,
substantially smaller tables of sizes p^n + 1 and p^n - 1 can be used to find the log and
antilog of a field element.  Since q - 1 = p^2n - 1 = (p^n + 1)·(p^n - 1) and (p^n + 1, p^n - 1) = 1, the
Chinese Remainder theorem can be used to decompose the tables of p^2n - 1 elements into
smaller ones of p^n + 1 elements and p^n - 1 elements respectively.  The results obtained
from the tables of p^n + 1 elements and p^n - 1 elements can be recombined to yield the
desired log table of p^2n - 1 elements.  A similar reduction can be made for the antilog
table.  The memory requirements of this new method for finding the log and antilog are
reduced from 2(p^2n - 1) to 2[(p^n + 1) + (p^n - 1)] = 4p^n memory elements.
In Section III a more involved method is developed that yields the logarithm with
a minimum memory requirement but with a greater number of operations. Suppose:
    q - 1 = m1·m2· ... ·mk

where p_i is prime, each m_i is a power of p_i for 1 ≤ i ≤ k, and (m_i, m_j) = 1 for i ≠ j.  Then the Chinese
Remainder theorem can be used to decompose tables of p^2n - 1 elements into k smaller
ones of m1 elements, m2 elements, ... and mk elements, respectively.  The log and
antilog of a field element can be found by utilizing these k tables.
- 377-
II. A LOG AND ANTILOG ALGORITHM OVER GF(p^2n)

Let β be a primitive element in GF(p^n) and x ∈ GF(p^n).  Also let m be the least
integer such that x = β^m.
It follows that:
and:
(3)
Since p is primitive over GF(pD), pO-I is least integer such that p(pD-I)/r = 1. Hence:
- 378-
Thus, by (2), α^((p^2n - 1)/r) ≠ 1 unless r = 1.  Therefore, the order of α is p^2n - 1 and α
is primitive in GF(p^2n).
Q.E.D.
The above theorem guarantees the root α to be primitive in the extension field
only when p^n + 1 is a prime.  To show that the theorem is not generally true for p^n + 1
not a prime, consider the following counterexample: Let GF(11^2) be the extension field
of GF(11).  It is verified readily that β ∈ GF(11) is a primitive element in this
field.  Also:

    p(x) = x^2 - x + β
Definition 2.  For α ∈ GF(p^2n) and a + αb ∈ GF(p^2n), where a, b ∈ GF(p^n), the norm
of a + αb is:

Using the results of Theorem 1 and Definition 1 and Definition 2, the following
theorem is demonstrated.

Theorem 2.  Let β be a primitive element in GF(p^n) such that the quadratic poly-
nomial x^2 + x + β is irreducible over GF(p^n).  Suppose that p^n + 1 is prime.  Next let α be
the root of this polynomial in the extension field GF(p^2n) = {a + αb | a, b ∈ GF(p^n)} of
GF(p^n).  Suppose α^m = a + αb ∈ GF(p^2n).  The following holds:
- 379-
By Theorem 2 one can construct a log_β table of p^n - 1 elements by storing the
value m1 ≡ m mod (p^n - 1), where:

    1 ≤ m1 ≤ p^n - 1,

at location a^2 + ab + b^2·β such that α^m = a + αb.  Then with a and b known, one can find m1
using the log_β table.  A log_β table is given in Section VI for p^n - 1 = 15.  Similarly, the
antilog_β table is constructed by storing the binary representation of a^2 + ab + b^2·β at loca-
tion m1 such that α^m = a + αb and:

                                                                    (4)

An antilog_β table is also given in Section VI for p^n - 1 = 15.  Next, the construction of
tables of p^n + 1 elements is shown.

    log_τ[ (a + αb)/(a + ᾱb) ] = m mod (p^n + 1)
- 380-
To construct the log_τ table, notice that when a = 0:

and m = 1.  For m2 ≡ m mod (p^n + 1), one has m2 = 1 when a = 0.  When b = 0:

    f(a/b) = (a + 0)/(a + 0) = 1.

Thus, m = 0 and m2 = 0.  The remaining part of the log_τ table can then be constructed by
storing the value m2 ≡ m mod (p^n + 1) at location a/b for α^m = a + αb, where 2 ≤ m2 ≤ p^n.  A log_τ
table for p^n + 1 = 17 is given in Section VII.  Also given there is an antilog_τ table for
p^n + 1 = 17.  It is constructed by storing the binary representation of (a/b) ∈ {β^1, β^2, ..., β^15}
at the corresponding location i = m2 for 2 ≤ i ≤ 16.  Thus:

                                                                    (5)
From (4) and (5) the following two simultaneous equations need to be solved for a and
b in order to reconstruct α^m = a + αb:

    a^2 + ab + b^2·β = x
    a/b = y                                                         (6)
- 381 -
Relations (6) yield the following solution:

                                                                    (7)
    a = b·y                                                         (8)
    b = antilog_β[ log_β(z) / 2 ]                                   (9)

where:

    z = x / (y^2 + y + β)

Now, the logarithm of α^m = a + αb ∈ GF(p^2n), where a, b ∈ GF(p^n) and α ∈ GF(p^2n) is
primitive, can be found in terms of m1 and m2 by using the tables of p^n - 1 elements and
p^n + 1 elements, respectively.  Then the Chinese Remainder theorem warrants that:

                                                                    (10)

where:
- 382-
and (n1)^-1 and (n2)^-1 are the smallest numbers such that:
To recapitulate, the following algorithms for the log and antilog are given:

1. Compute x = a^2 + ab + b^2·β and y = a/b.
2. Use the log_β table to find m1 = log_β(x) and the log_τ table to find
   m2 = log_τ(y) for a ≠ 0, b ≠ 0.
- 383 -
3. By equation (10):
where:
Then:
    a = b·y
To illustrate the above procedures, the following examples are given over GF(2^8).
- 384-
Example 1: Given α^127 = (0,1,1,0) + α(1,1,1,0) ∈ GF(2^8).  Then, a = (0,1,1,0) and b = (1,1,1,0).
By the LOG algorithm:

    y = a/b = (0,1,1,0)/(1,1,1,0) = (1,1,1,1)

Now use Tables VI.1 and VI.3 to find m1 and m2, respectively.  The results
are m1 = 7 and m2 = 8.  For this example, n1 = 17, n2 = 15, (n1)^-1 = 8, (n2)^-1 = 8,
n1·(n1)^-1 = 136 and n2·(n2)^-1 = 120.  By equation (10):

    m = (m1·136 + m2·120) mod 255 = 127
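The recombination in this example can be checked with a few lines of arithmetic; the constants 136, 120, and 255 are those given above.

    /* Check of the recombination in Example 1: m = (m1*136 + m2*120) mod 255. */
    #include <stdio.h>

    int main(void)
    {
        int m1 = 7, m2 = 8;                     /* from the log tables       */
        int m  = (m1 * 136 + m2 * 120) % 255;   /* Chinese Remainder combine */
        printf("%d\n", m);                      /* prints 127                */
        return 0;
    }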
Example 2.  Given m = 127, find α^127 = a + αb ∈ GF(2^8).  Using the ANTILOG algorithm:

    m1 = m mod (p^n - 1) = 7
    m2 = m mod (p^n + 1) = 8

Then use Tables VI.2 and VI.4 to find x and y, respectively.  The results are:

Thus:

    z = x / (y^2 + y + β) = (0,0,1,1)
- 385-
By equation (9):

    b = antilog_β[ log_β(z) / 2 ]

and:

    b = (1,1,1,0)

Thus:

    a = b·y = (0,1,1,0)

Therefore:

    α^127 = (0,1,1,0) + α(1,1,1,0).
- 386-
III. A GENERAL ALGORITHM FOR FINDING LOG AND ANTILOG OVER GF(q)

Suppose q - 1 = n1·n2· ... ·nk, where p_i is prime and each n_i is a power of p_i for
1 ≤ i ≤ k.  Let α ∈ GF(q) be primitive.  Then any field element of GF(q) can be represented
by α^i for some i, where 1 ≤ i ≤ q-1.  By the Chinese Remainder theorem an exponent i is
mapped onto (i mod n1, i mod n2, ..., i mod nk).  Then a primitive element α is expressed
in the notation of the Chinese Remainder theorem as follows:

    α^(1,0,0,...,0), α^(0,1,0,...,0), ..., α^(0,0,...,0,1)                  (11)

Here:

    τ_j = α^(0,0,...,0,1,0,...,0)                                           (12)

where the integer 1 in the exponent is in location j.  Element τ_j is an n_j-th root
of unity.  It follows from (11) that:
where:
- 387 -
and:
    n_j·m_j = q - 1                                                         (15)

Now, suppose one computes (α^m)^(m_j·(m_j)^-1) for any j such that 1 ≤ j ≤ k.  Observe by (15)
that one has:
where C_j ≡ m mod n_j for 1 ≤ j ≤ k and m = C_j + a·n_j for some integer a.  Then, by (14) it
follows that:

    (α^m)^(m_j·(m_j)^-1) = α^([m_j·(m_j)^-1]·C_j mod (q-1)) = (τ_j)^C_j      (17)

Therefore, by the use of equation (17) one can compute (τ_j)^C_j from α^m for 1 ≤ j ≤ k.
Note that k small tables, each containing the value C_i for 1 ≤ C_i ≤ n_j, at location
(τ_j)^C_i for 1 ≤ j ≤ k, can be used to find the k exponents C1, C2, ..., Ck, respectively.  Once
the C_j are found, the Chinese Remainder theorem is used to compute the logarithm of
α^m as follows:
- 388 -
By (13), (14), (16), and (17), the antilog of m is computed as:
(19)
LOGARITHM
Using the tables in Section VII, one finds C1, C2, and C3 from the following
computations:
    = 20.
- 389-
ANTILOG
From m = 20, one computes:
    C1 = 20 mod 3 = 2
    C2 = 20 mod 5 = 0
    C3 = 20 mod 17 = 3
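A small sketch of the recombination step for this three-moduli example follows.  The idempotent weights are recomputed by brute force here rather than taken from the paper, so treat them as derived values.

    /* Sketch: Chinese Remainder recombination for moduli 3, 5, 17 (q - 1 = 255). */
    #include <stdio.h>

    int main(void)
    {
        const int n[3] = { 3, 5, 17 };
        const int c[3] = { 2, 0, 3 };    /* C1, C2, C3 from the example above */
        const int q1   = 255;            /* q - 1                             */
        long m = 0;

        for (int j = 0; j < 3; j++) {
            int mj  = q1 / n[j];         /* m_j = (q-1)/n_j                   */
            int inv = 1;
            while ((mj * inv) % n[j] != 1)
                inv++;                   /* (m_j)^-1 mod n_j by brute force   */
            m += (long)c[j] * mj * inv;
        }
        printf("%ld\n", m % q1);         /* prints 20                         */
        return 0;
    }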
- 390 -
IV. CONCLUSION

To find the log and antilog of an element in a finite field GF(q), if q = p^2n for
some prime p, the technique shown in Section II can be used to reduce the table mem-
ory requirement from 2(p^2n - 1) elements to 4p^n elements.  A further memory reduction
can be achieved, i.e. from (q - 1) elements to:

    n1 + n2 + ... + nk

elements, by using the general method shown in Section III, however, at the expense of
a greater number of operations.  A comparison of the number of operations needed in
these methods is given in Table IV.1.  It is evident from Table IV.1 that the number of
multiplications required in the general case can be prohibitively large in some situations.
Thus, the technique shown in Section II has a better potential than the general method
for many practical applications.
- 391 -
Table IV.1

                        Method of Section II         General Method
                        (q-1 = p^2n - 1,             (q-1 = n1·n2· ... ·nk)
                        p^n + 1 prime)
No. of                  LOG      ANTILOG             LOG                            ANTILOG
Multiplications          7          5                k + Σ(j=1..k) m_j·(m_j)^-1 *      k-1
Additions                4          2                k-1                               0
Table Look-Ups           2          4                k                                 k
Modulus operations       0          2                1                                 k

* m_j·(m_j)^-1 ≡ 1 mod n_j
- 392-
V. PROOF OF THEOREMS

Proof of Theorem 2.  Since x^2 + x + β is irreducible over GF(p^n), it has roots α and
ᾱ = α^(p^n) in the extension field GF(p^2n).  By Theorem 1, α is primitive in GF(p^2n).  By
Definition 2 and relations (1) and (2), one has the following:
    ||a + αb|| = (a + αb)·(a + ᾱb)                                          (V.1)

    ((a + αb)·(c + αd))^(p^n) = (a + αb)^(p^n) · (c + αd)^(p^n)             (V.2)

    ||α|| = α·ᾱ = β
so that the theorem is true for m = 1. For purposes of induction, assume that:
- 393-
for all k such that 1 ≤ k ≤ m.  Then by (V.3), for k = m + 1:

Represent α^m by a + αb for some a, b ∈ GF(p^n).  Then, by (V.1) and (V.4):

The theorem follows by the definition of the logarithm and the fact that β has order
p^n - 1.
Q.E.D.
(V.6)
(V. 7)
- 394-
Representing α^m by a + αb for some a, b ∈ GF(p^n), it follows from (V.6) that:
(V.8)
II a+ab II (a+ab) 2
I la+abl 12
(V.9)
The theorem follows by the definition of the logarithm and the fact that the order of
τ is p^n + 1.
Q.E.D.
- 395-
VI.  Let p(x) = x^4 + x^3 + 1 be irreducible over GF(2) and β ∈ GF(2^4) be a solution of p(x).
Then:
β^1  = β
β^2  = β^2
β^3  = β^3
β^4  = β^3 + 1
β^5  = β^3 + β + 1
β^6  = β^3 + β^2 + β + 1
β^7  = β^2 + β + 1
β^8  = β^3 + β^2 + β
β^9  = β^2 + 1
β^10 = β^3 + β
β^11 = β^3 + β^2 + 1
β^12 = β + 1
β^13 = β^2 + β
β^14 = β^3 + β^2
β^15 = 1
- 396 -
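The two tables which follow can be generated directly from the relation β^4 = β^3 + 1
implied by p(x). The short C sketch below is an added illustration of that generation;
it holds each field element as a 4-bit integer whose bit i is the coefficient of β^i.
Note that the printed tables list the coefficient of β^0 leftmost, so their bit patterns
are the reverse of the usual binary rendering of the values produced here.

    /* Sketch: generate log and antilog tables for GF(2^4) defined by
     * p(x) = x^4 + x^3 + 1.  Bit i of an element is the coefficient of beta^i.
     * Added illustrative example.
     */
    #include <stdio.h>

    int main(void)
    {
        int antilog[16];   /* antilog[i] = beta^i for i = 0..15 (beta^15 = 1)    */
        int logtab[16];    /* logtab[e]  = i such that beta^i = e, for e = 1..15 */
        int e = 1;         /* beta^0 */

        for (int i = 0; i < 15; i++) {
            antilog[i] = e;
            logtab[e]  = i;
            e <<= 1;                      /* multiply by beta                    */
            if (e & 0x10)                 /* reduce using beta^4 = beta^3 + 1    */
                e = (e & 0x0F) ^ 0x09;
        }
        antilog[15] = 1;

        for (int i = 0; i < 15; i++)
            printf("beta^%-2d = 0x%X\n", i, antilog[i]);
        for (int v = 1; v < 16; v++)
            printf("log(0x%X) = %d\n", v, logtab[v]);
        return 0;
    }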
Table VI.1
Log β

Location        Content
0 0 0 1            3
0 0 1 0            2
0 0 1 1           14
0 1 0 0            1
0 1 0 1           10
0 1 1 0           13
0 1 1 1            8
1 0 0 0           15
1 0 0 1            4
1 0 1 0            9
1 0 1 1           11
1 1 0 0           12
1 1 0 1            5
1 1 1 0            7
1 1 1 1            6
- 397 -
Table VI.2
Antilog β

Location        Content
0 0 0 0         0 0 0 0
0 0 0 1         0 1 0 0
0 0 1 0         0 0 1 0
0 0 1 1         0 0 0 1
0 1 0 0         1 0 0 1
0 1 0 1         1 1 0 1
0 1 1 0         1 1 1 1
0 1 1 1         1 1 1 0
1 0 0 0         0 1 1 1
1 0 0 1         1 0 1 0
1 0 1 0         0 1 0 1
1 0 1 1         1 0 1 1
1 1 0 0         1 1 0 0
1 1 0 1         0 1 1 0
1 1 1 0         0 0 1 1
1 1 1 1         1 0 0 0
- 398 -
For the γ table, note the following:
i)  If a = 0:
ii) If b = 0:
- 399-
Table VI.3
Antilog γ

Location        Content
0 0 0 1           14
0 0 1 0           12
0 0 1 1            6
0 1 0 0            2
0 1 0 1           10
0 1 1 0            4
0 1 1 1            9
1 0 0 0           16
1 0 0 1            3
1 0 1 0            5
1 0 1 1           11
1 1 0 0           15
1 1 0 1            7
1 1 1 0           13
1 1 1 1            8
- 400 -
Table VI.4
Antilog Γ

Location        Content
0 0 1 0         0 1 0 0
0 0 1 1         1 0 0 1
0 1 0 0         0 1 1 0
0 1 0 1         1 0 1 0
0 1 1 0         0 0 1 1
0 1 1 1         1 1 0 1
1 0 0 0         1 1 1 1
1 0 0 1         0 1 1 1
1 0 1 0         0 1 0 1
1 0 1 1         1 0 1 1
1 1 0 0         0 0 1 0
1 1 0 1         1 1 1 0
1 1 1 0         0 0 0 1
1 0 0 0         1 1 0 0
1 0 0 0 0       1 0 0 0 0
- 401 -
VII. Tables for n1 = 3, n2 = 5, and n3 = 17, where q-1 = 255 = n1·n2·n3, when
α^8 + α^4 + α^3 + α^2 + 1 = 0.

     (Γ1)^C1                      C1          (Γ2)^C2                      C2
     1      (0 0 0 0 0 0 0 1)      0          1      (0 0 0 0 0 0 0 1)      0
     α^85   (0 1 1 0 1 0 1 1)      1          α^51   (0 0 0 0 0 1 0 1)      1
     α^170  (1 1 1 0 0 1 0 1)      2          α^102  (0 0 1 0 0 0 1 0)      2
                                              α^153  (0 1 0 0 1 0 0 1)      3
                                              α^204  (1 1 1 0 0 0 0 0)      4

     (Γ3)^C3                      C3
     1      (0 0 0 0 0 0 0 1)      0
     α^120  (1 0 0 1 0 0 1 1)      1
     α^240  (0 0 0 1 0 1 1 0)      2
     α^105  (0 0 0 0 1 1 0 1)      3
     α^225  (0 1 1 1 0 1 1 0)      4
     α^90   (1 1 1 0 0 0 0 1)      5
     α^210  (1 0 1 0 0 0 1 0)      6
     α^75   (1 0 0 0 1 0 0 1)      7
     α^195  (0 0 1 1 0 0 1 0)      8
     α^60   (1 1 0 1 0 0 1 0)      9
     α^180  (0 1 0 0 1 0 1 1)     10
     α^45   (1 1 1 0 1 1 1 0)     11
     α^165  (1 1 0 0 0 1 1 0)     12
     α^30   (0 0 1 1 0 0 0 0)     13
     α^150  (1 0 1 0 0 1 0 0)     14
     α^15   (0 0 1 0 0 1 1 0)     15
     α^135  (1 1 0 1 1 0 1 0)     16
- 402 -
ABBREVIATIONS
CLK Clock
CNT Count
FWD Forward
GF Galois Field
RS Reed-Solomon (code)
REV Reverse
SR Shift Register
- 403-
GLOSSARY
ALARM
BLOCK CODE
A block code is a code in which the check bits cover only the immediately preceding
block of information bits.
BURST LENGTH
The number of bits between and including the first and last bits in error; not all of the
bits in between are necessarily in error.
The probability that a given defect event causes an error burst which exceeds the
correction capability of a code.
CHARACTERISTIC
-404-
CODE POLYNOMIAL
See Codeword.
CODE RATE
See Rate.
CODE VECTOR
See Codeword.
CODEWORD
A set of data symbols (i.e. information symbols or message symbols) together with its
associated redundancy symbols; also called a code vector or a code polynomial.
CONCATENATION
A method of combining an inner code and an outer code, to form a larger code. The
inner code is decoded first. An example would be a convolutional inner code and a
Reed-Solomon outer code.
CONVOLUTIONAL CODE
A code in which the check bits check information bits of prior blocks as well as the
immediately preceding block.
CORRECTABLE ERROR
- 405 -
CORRECTION SPAN
CYCLIC CODE
A linear code with the property that each cyclic (end-around) shift of each codeword is
also a codeword.
CYCLIC REDUNDANCY CHECK (CRC)
An error-detection method in which check bits are generated by taking the remainder
after dividing the data bits by a cyclic code polynomial.
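As an added illustration of this division, the C fragment below computes a 16-bit
remainder bit-serially; the generator polynomial x^16 + x^12 + x^5 + 1 (CRC-CCITT) is
only an assumed example and is not one specified here.

    /* Sketch: bit-serial CRC remainder computation.  The generator polynomial
     * x^16 + x^12 + x^5 + 1 (CRC-CCITT) is an assumed example.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0;                      /* shift register, initially zero   */

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;     /* bring in the next data byte      */
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x8000)              /* if the high bit is set, subtract */
                    crc = (crc << 1) ^ 0x1021; /* (XOR) the generator polynomial   */
                else
                    crc <<= 1;
            }
        }
        return crc;                            /* remainder = the check bits       */
    }

    int main(void)
    {
        uint8_t msg[] = { 'D', 'A', 'T', 'A' };
        printf("CRC remainder: 0x%04X\n", crc16_ccitt(msg, sizeof msg));
        return 0;
    }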
DEFECT
DEFECT EVENT
A single occurrence of a defect regardless of the number of bits in error caused by the
defect.
DEFECT EVENT RATE
The ratio of total defect events to total bits, having the units of defect events per bit.
DETECTION SPAN
For a single-burst detection code, the single-burst detection span is the maximum length
of an error burst which is guaranteed to be detected.
For a single-burst correction code, the single-burst detection span is the maximum
length of an error burst which is guaranteed to be detected without possibility of
miscorrection.
-406 -
If a correction code has a double-burst detection span, then each of two bursts is
guaranteed to be detected without possibility of miscorrection, provided neither burst
exceeds the double-burst detection span.
A channel for which noise affects each transmitted symbol independently, for example,
the binary symmetric channel (BSC).
DISTANCE
ELEMENTARY SYMMETRIC FUNCTIONS
Elementary symmetric functions are the coefficients of the error locator polynomial.
ERASURE
An erratum for which location information is known. An erasure has a known location,
but an unknown value.
ERASURE CORRECTION
The process of correcting errata when erasure pointers are available. A Reed-Solomon
code can correct more errata when erasure pointers are available. It is not necessary
for erasure pointers to be available for all errata when erasure correction is employed.
ERASURE POINTER
Information giving the location of an erasure. Internal erasure pointers might be de-
rived from adjacent interleave error locations. External erasure pointers might be
derived from run-length violations, amplitude sensing, timing sensing, etc.
-407 -
ERRATA LOCATOR POLYNOMIAL
ERRATUM
ERROR
An erratum for which location information is not known. In general, an error represents
two unknowns, error location and value. In the binary case, the only unknown is the
location.
ERROR BURST
ERROR LOCATION
The distance by some measure (e.g., bits or bytes) from a reference point (e.g., beginning
or end of sector or interleave) to the burst. For Reed-Solomon codes, the error
location is the log of the error-location vector and is the symbol displacement of the
error from the end of the codeword.
ERROR VALUE
The error value is the bit pattern which must be exclusive-or-ed (XOR-ed) against the
data at the burst location in order to correct the error.
-408 -
EXPONENT
See Period.
EXTENSION FIELD
FINITE FIELD
A field with a finite number of elements; also called a Galois field and denoted as GF(n)
where n is the number of elements in the field.
FORWARD-ACTING CODE
An error-control code that contains sufficient redundancy for correcting one or more
symbol errors at the receiver.
GROUND FIELD
A finite field with q elements, GF(q), exists if, and only if, q is a power of a prime.
Let q = p^n where p is a prime and n is an integer; then GF(p) is referred to as the
ground field and GF(p^n) as the extension field of GF(p).
-409-
GROUP CODE
HAMMING DISTANCE
The Hamming distance between two vectors is the number of corresponding symbol
positions in which the two vectors differ.
HAMMING WEIGHT
The Hamming weight of a vector is the number of nonzero symbols in the vector.
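As an added illustration, the C fragment below counts nonzero bits to obtain the
Hamming weight of a binary word, and obtains the Hamming distance of two words as
the weight of their exclusive-or.

    /* Sketch: Hamming weight and Hamming distance for binary vectors held in
     * machine words.  Added illustrative example.
     */
    #include <stdio.h>
    #include <stdint.h>

    static int hamming_weight(uint32_t v)
    {
        int count = 0;
        while (v) {            /* count the one (nonzero) bits */
            count += v & 1u;
            v >>= 1;
        }
        return count;
    }

    static int hamming_distance(uint32_t a, uint32_t b)
    {
        /* positions where a and b differ are exactly the one bits of a XOR b */
        return hamming_weight(a ^ b);
    }

    int main(void)
    {
        printf("weight(0x59)        = %d\n", hamming_weight(0x59));        /* 4 */
        printf("distance(0x59,0x4D) = %d\n", hamming_distance(0x59, 0x4D)); /* 2 */
        return 0;
    }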
HARD ERROR
IRREDUCIBLE
ISOMORPHIC
If two fields are isomorphic they have the same structure. That is, one can be obtained
from the other by some appropriate one-to-one mapping of elements and operations.
LINEAR CODE
A code wherein the EXCLUSIVE-OR sum of every pair of codewords is also a codeword.
LINEAR FUNCTION
-410-
LINEARLY DEPENDENT
A set of n vectors V1, V2, ..., Vn is linearly dependent if, and only if, there exists a set
of n scalars Ci, not all zero, such that:

     C1·V1 + C2·V2 + ... + Cn·Vn = 0
LINEARLY INDEPENDENT
A set of vectors is linearly independent if they are not linearly dependent. See Linear-
ly Dependent.
LONGITUDINAL REDUNDANCY CHECK (LRC)
A check byte or check word at the end of a block of data bytes or words, selected to
make the parity of each column of bits odd or even.
MAJORITY LOGIC
A majority logic gate has an output of one if, and only if, more than half its inputs are
ones.
MAJORITY LOGIC CODE
A code that can be decoded with majority logic gates. See Majority Logic.
MINIMUM DISTANCE
The minimum Hamming distance between all possible pairs of codewords. The minimum
distance of a linear code is equal to its minimum weight.
MINIMUM FUNCTION
- 411 -
MINIMUM POLYNOMIAL OF α
The monic polynomial m(x) of smallest degree with coefficients in a ground field such
that m(α) = 0, where α is any element of an extension field. The minimum polynomial of
α is also called the minimum function of α.
MISCORRECTION PROBABILITY (Pmc)
The probability that an error burst which exceeds the guaranteed capabilities of a code
will appear correctable to a decoder. In this case, the decoder actually increases the
number of errors by changing correct data. Miscorrection probability is determined by
record length, total redundancy, and correction capability of the code.
Pmc usually represents the miscorrection probability for all possible error bursts, assum-
ing all errors are possible and equally probable. Some codes, such as the Fire Code,
have a higher miscorrection probability for particular error bursts than for all possible
error bursts.
MISDETECTION PROBABILITY (Pmd)
The probability that an error burst which exceeds the correction and detection capabil-
ities of a code will cause all syndromes to be zero and thereby go undetected. Mis-
detection probability is determined by the total number of redundancy bits, assuming
that all errors are possible and equally probable.
MONIC POLYNOMIAL
A polynomial is said to be monic if the coefficient of the highest degree term is one.
(n,k) CODE
A block code with k information symbols, n-k check symbols, and n total symbols
(information plus check symbols).
- 412-
A convolutional code with constant length n, code rate R (efficiency), and information
symbols k=Rn.
     n!/(r!(n-r)!)
n-TUPLE
ORDER OF A FIELD
The order of a field is the number of elements in the field. The number of elements
may be infinite (infinite field) or finite (finite field).
ORDER OF A FIELD ELEMENT
The order e of a field element β is the least positive integer for which β^e = 1. Elements
of order 2^n - 1 in GF(2^n) are called primitive elements.
PARITY
The property of being odd or even. The parity of a binary vector is the parity of the
number of ones the vector contains. Parity may be computed by summing modulo-2 the
bits of the vector.
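As an added illustration, the C fragment below computes the parity of a 32-bit word by
repeatedly folding the word onto itself with exclusive-or, which is equivalent to
summing its bits modulo 2.

    /* Sketch: parity of a 32-bit word by XOR folding.  Added illustrative example. */
    #include <stdio.h>
    #include <stdint.h>

    static int parity32(uint32_t v)
    {
        v ^= v >> 16;          /* each fold XORs half the bits into the other half */
        v ^= v >> 8;
        v ^= v >> 4;
        v ^= v >> 2;
        v ^= v >> 1;
        return (int)(v & 1u);  /* 1 = odd number of ones, 0 = even */
    }

    int main(void)
    {
        printf("parity(0xF1) = %d\n", parity32(0xF1));   /* five ones -> 1 */
        return 0;
    }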
- 413-
PARITY CHECK CODE
A code in which the encoder accepts a block of information bits and computes for
transmission, a set of modulo-2 sums (XOR) across various of these information bits and
possibly information bits in prior blocks. A decoder at the receiving point reconstructs
the original information bits from the set of modulo-2 sums. Every binary parity-check
code is also a linear, or group code. See also Block Code and Convolutional Code.
PERFECT CODE
An e error correcting code over GF(q) is said to be perfect if every vector is distance
no greater than e from the nearest codeword. Examples are Hamming and Golay codes.
PERIOD
The period of a polynomial P(x) is the least positive integer e such that xe + 1 is divi-
sible by P(x).
POINTER
POLYNOMIAL CODE
A linear block code whose codewords can be expressed in polynomial form and are
divisible by a generator polynomial. This class of codes includes the cyclic and
shortened cyclic codes.
PRIME FIELD
A field is called prime if it possesses no subfields except that consisting of the whole
field.
- 414 -
PRIME SUBFIELD
The prime subfield of a field is the intersection of all subfields of the field.
PRIME POLYNOMIAL
See Irreducible.
PRIMITIVE POLYNOMIAL
A polynomial is said to be primitive if its period is 2^m - 1, where m is the degree of the
polynomial.
RANDOM ERRORS
For the purposes of this book, the term 'random errors' refers to an error distribution
in which error bursts (defect events) occur at random intervals and each burst affects
only a single symbol, usually one bit or one byte.
RATE
The code rate, or rate (R), of a code is the ratio of information bits (k) to total
bits (n); information bits plus redundancy. It is a measure of code efficiency.

     R = k/n
READABLE ERASURE
- 415-
RECIPROCAL POLYNOMIAL
RECURRENT CODE
REDUCIBLE
RELATIVELY PRIME
If the greatest common divisor of two polynomials is 1, they are said to be relatively
prime.
SELF-RECIPROCAL POLYNOMIAL
SHORTENED CYCLIC CODE
SOFT ERROR
SUBFIELD
A subset of a field which satisfies the definition of a field. See Section 2.8 for the
definition of a field.
SYNDROME
SYSTEMATIC CODE
A code in which the codewords are separated into two parts, with all information
symbols occurring first and all redundancy symbols following.
UNCORRECTABLE ERROR
UNCORRECTABLE SECTOR
The ratio of total uncorrectable sectors to total bits, having the units of uncorrectable
sector events per bit.
- 417-
UNDETECTED ERRONEOUS DATA PROBABILITY (Pued)
The probability that erroneous data will be transferred and not detected, having the
units of undetected erroneous data events per bit. Pued for a code that does not have
pattern sensitivity is the product of the miscorrection probability (Pmc) of the error cor-
recting code (if present), the misdetection probability (Pmd) of the error detecting code
(if present), and the probability of having an error that exceeds guaranteed capabilities
of the code (Pe*Pc).
A code with pattern sensitivity will have two undetected erroneous data rates: one for
all possible error bursts, and a higher one for the sensitive patterns.
UNREADABLE ERASURE
UNRECOVERABLE ERROR
VERTICAL REDUNDANCY CHECK (VRC)
Check bit(s) on a byte or word selected to make total byte or word parity odd or even.
WEIGHT
- 418-
BIBLIOGRAPHY
BOOKS
Abramson, N., Information Theory and Coding, McGraw-Hill, New York, 1963.
Aho, A., et. al., The Design and Analysis of Computer Algorithms, Addison-Wesley,
Massachusetts, 1974.
Albert, A., Fundamental Concepts of Higher Algebra, 1st ed., University of Chicago
Press, Chicago, 1956.
Artin, E., Galois Theory, 2nd ed., University of Notre Dame Press, Notre Dame, 1944.
Ash, R., Information Theory, Wiley-Interscience, New York, 1965.
Berlekamp, E. R., Algebraic Coding Theory, McGraw-Hill, New York, 1968.
Berlekamp, E. R., A Survey of Algebraic Coding Theory, Springer-Verlag,
New York, 1970.
Berlekamp, E. R., Key Papers in The Development of Coding Theory, IEEE Press,
New York, 1974.
Bhargava, V., et. aI., Digital Communications by Satellite, Wiley, New York, 1981.
Birkhoff, G. and T. C. Bartee, Modern Applied Algebra, McGraw-Hill, New York, 1970.
Birkhoff, G. and S. MacLane, A Survey of Modern Algebra, 4th ed., Macmillan,
New York, 1977.
Blake, I. F., Algebraic Coding Theory: History and Development, Dowden, Hutchinson &
Ross, Pennsylvania, 1973.
Blake, I. F. and R. C. Mullin, The Mathematical Theory of Coding, Academic Press,
New York, 1975.
Burton, D. M., Elementary Number Theory, Allyn & Bacon, Boston, 1980.
Cameron, P. and J. H. Van Lint, Graphs, Codes and Designs, Cambridge University Press,
Cambridge, 1980.
Campbell, H. G., Linear Algebra With Applications, 2nd ed., Prentice-Hall,
New Jersey, 1980.
Carlson, A. B., Communication Systems, 2nd ed., McGraw-Hill, New York, 1968.
Clark, Jr., G. C. and J. B. Cain, Error-Correction Coding for Digital Communications,
Plenum Press, New York, 1981.
Cohen, D. I. A., Basic Techniques of Combinatorial Theory, Wiley, New York, 1978.
- 419 -
Crouch, R. and E. Walker, Introduction to Modern Algebra and Analysis, Holt, Rinehart
& Winston, New York, 1962.
Doll, D. R., Data Communications: Facilities, Networks, and Systems Design, Wiley,
New York, 1978.
Feller, W., An Introduction to Probability Theory and Its Applications, 2nd ed., Wiley,
New York, 1971.
Fisher, J. L., Application-Oriented Algebra, Harper & Row, New York, 1977.
Folts, H. C., Data Communications Standards, 2nd ed., McGraw-Hill, New York, 1982.
Gallager, R. G., Information Theory and Reliable Communication, Wiley, New York, 1968.
Gere, J. M. and W. W. Williams, Jr., Matrix Algebra for Engineers, Van Nostrand,
New York, 1965.
Gill, A., Applied Algebra for the Computer Sciences, Prentice-Hall, New Jersey, 1976.
Golomb, S., et. aI., Digital Communications with Space Applications, Peninsula Publishing,
Los Altos, California, 1964.
Golomb, S., et. al., Shift Register Sequences, Aegean Park Press, Laguna Hills, Cali-
fornia, 1982.
Gregg, W. D., Analog and Digital Communication, Wiley, New York, 1977.
Hamming, R. W., Coding and Information Theory, Prentice-Hall, New Jersey, 1980.
Herstein, I. N., Topics in Algebra, 2nd ed., Wiley, New York, 1975.
Jayant, N. S., Waveform Quantization and Coding, IEEE Press, New York, 1976.
Kaplansky, I., Fields and Rings, 2nd ed., The University of Chicago Press, Chicago, 1965.
-420 -
Khinchin, A. I., Mathematical Foundations of Information Theory, Dover, New York,
1957.
Knuth, D. E., The Art of Computer Programming, Vol. I, 2nd ed., Addison-Wesley,
Massachusetts, 1973.
Lucky, R. W., et. al., Principles of Data Communication, McGraw-Hill, New York, 1968.
Martin, J., Telecommunications and the Computer, Prentice-Hall, New Jersey, 1969.
Niven, I., An Introduction to the Theory of Numbers, 4th ed., Wiley, New York, 1960.
Owen, F. E., PCM and Digital Transmission Systems, McGraw-Hill, New York, 1982.
Peterson, W. W., and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed., MIT Press,
Massachusetts, 1972.
Pless, V., Introduction to the Theory of Error-Correcting Codes, Wiley, New York, 1982.
Rao, T. R. N., Error Coding for Arithmetic Processors, Academic Press, New York, 1974.
Sawyer, W. W., A Concrete Approach to Abstract Algebra, Dover, New York, 1959.
Sellers, Jr., F. F., et. aI., Error Detecting Logic for Digital Computers, McGraw-Hill,
New York, 1968.
- 421 -
Shanmugam, K. S., Digital and Analog Communication Systems, Wiley, New York, 1979.
Slepian, D., Key Papers in The Development of Infonnation Theory, IEEE Press,
New York, 1974.
Spencer, D. D., Computers in Number Theory, Computer Science Press, Maryland, 1982.
Viterbi, A. J., Principles of Digital Communication and Coding, McGraw-Hill, New York,
1979.
Wakerly, J., Error Detecting Codes, Self-Checking Circuits and Applications, North-
Holland, New York, 1978.
Wiggert, D., Error-Control Coding and Applications, Artech House, Massachusetts, 1978.
- 422 -
IBM TECHNICAL DISCLOSURE BULLETIN
(Chronologically Ordered)
L. R. Bahl and D. T. Tang, "Shortened Cyclic Code With Burst Error Detection and
Synchronization Recovery Capability." 16 (6),2028-2030 (Nov. 1973).
P. Hodges, "Error Detecting Code With Enhanced Error Detecting Capability." 16 (11),
3749-3751 (Apr. 1974).
G. H. Thompson, "Error Detection and Correction Apparatus." 17 (1), 7-8 (June 1974).
R. A. Healey, "Error Checking and Correction of Microprogram Control Words With a
Late Branch Field." 17 (2), 374-381 (July 1974).
A. M. Patel, "Coding Scheme for Multiple Selections Error Correction." 17 (2), 473-475
(July 1974).
D. C. Bossen and M. Y. Hsiao, "Serial Processing of Interleaved Codes." 17 (3), 809-810
(Aug. 1974).
K. B. Day and H. C. Hinz, "Error Pointing in Digital Signal Recording." 17 (4), 977-978
(Sept. 1974).
- 423 -
R. C. Cocking, et. al., "Self-Checking Number Verification and Repair Techniques."
22 (10), 4673-4676 (Mar. 1980).
P. Hodges, "Error-Detecting Code for Buffered Disk." 22 (12), 5441- 5443 (May 1980).
F. G. Gustavson and D. Y. Y. Yun, "Fast Computation of Polynomial Remainder
Sequences. " 22 (12), 5580-5581 (May 1980).
V. Goetze, et. al., "Single Error Correction in CCD Memories." 23 (1), 215-216 (June
1980).
J. W. Barrs and J. C. Leininger, "Modified Gray Code Counters." 23 (2), 460-462 (July
1980).
J. L. Rivero, "Program for Calculating Error Correction Code." 23 (3), 986-988 (Aug.
1980).
N. N. Nguyen, "Error Correction Coding for Binary Data." 23 (4), 1525-1527 (Sept. 1980).
J. C. Mears, Jr., "High-Speed Error Correcting Encoder/Decoder." 23 (4), 2135-2136 (Oct.
1980).
G. W. Kurtz, et. al., "Odd-Weight Error Correcting Code for 32 Data Bits and 13 Check
Bits." 23 (6), 2338 (Nov.1980).
J. R. Calva, et. al., "Distributed Parity Check Function." 23 (6),2451-2456 (Nov. 1980).
J. R. Calva and B. J. Good, "Fail-Safe Error Detection With Improved Isolation of I/O
Faults." 23 (6), 2457-2460 (Nov. 1980).
S. G. Katsafouros and D. A. Kluga, "Memory With Selective Use of Error Detection and
Correction Circuits." 23 (7a), 2866-2867 (Dec. 1980).
R. A. Forsberg, et. al., "Error Detection for Memory With Partially Good Chips."
23 (7b), 3272-3273 (Dec. 1980).
R. H. Linton, "Detection of Single Bit Failures in Memories Using Longitudinal Parity."
23 (8), 3603-3604 (Jan. 1981).
C. L. Chen, "Error Correcting Code for Multiple Package Error Detection." 23 (8),
3808-3810 (Jan. 1981).
D. C. Bossen, et. al., "Separation of Error Correcting Code Errors and Addressing
Errors. " 23 (9), 4224 (Feb. 1981).
G. S. Sager and A. J. Sutton, "System Correction of Alpha-Particle- Induced Uncorrec-
table Error Conditions by a Service Processor. " 23 (9), 4225-4227 (Feb. 1981).
W. G. Bliss, et. al., "Error Correction Code." 23 (10),4629-4632 (Mar. 1981).
W. G. Bliss, "Circuitry for Performing Error Correction Calculations on Baseband
Encoded Data to Eliminate Error Propagation." 23 (10), 4633-4634 (Mar. 1981).
- 424 -
P. A. Franaszek, "Efficient Code for Digital Magnetic Recording." 23 (11), 5229-5232
(Apr. 1981).
C. L. Chen, "Error Checking of ECC Generation Circuitry." 23 (11), 5055-5057 (Apr.
1981).
C. L. Chen and B. L. Chu, "Extended Error Correction With an Error Correction Code."
23 (11), 5058-5060 (Apr. 1981).
G. G. Langdon, Jr., "Table-Driven Decoder Involving Prefix Codes." 23 (12), 5559-5562
(May 1981).
D. F. Kelleher, "Error Detection for All Errors in a 9-Bit Memory Chip." 23 (12), 5441
(May 1981).
S. W. Hinkel, "Utilization of CRC Bytes for Error Correction on Multiple Formatted Data
Strings. " 24 (lb), 639-643 (June 1981).
D. A. Goumeau and S. W. Hinkel, "Error Correction as an Extension of Error Recovery
on Information Strings." 24 (lb), 651-652 (June 1981).
J. D. Dixon, et. al., "Parity Mechanism for Detecting Both Address and Data Errors."
24 (lb), 794 (June 1981).
A. M. Patel, "Dual-Function ECC Employing Two Check Bytes Per Word." 24,(2),
1002-1004 (July 1981).
D. Meltzer, "CCD Error Correction System." 24 (3), 1392-1396 (Aug. 1981).
I. Jones, "Variable-Length Code-Word Encoder/Decoder." 24 (3), 1514-1515 (Aug. 1981).
D. B. Convis, et. al., "Sliding Window Cross-Hatch Match Algorithm for Spelling Error
Correction. " 24 (3), 1607-1609 (Aug. 1981).
N. N. Heise and W. G. Verdoorn, "Serial Implementation of b-Adjacent Codes." 24 (5),
2366-2370 (Oct. 1981).
S. R. McBean, "Error Correction at a Display Terminal During Data Verification." 24 (5),
2426-2427 (Oct. 1981).
D. T. Tang and P. S. Yu, "Error Detection With Imbedded Forward Error Correction."
24 (5), 2469-2472 (Oct. 1981).
R. W. Alexander and J. L. Mitchell, "Uncompressed Mode Trigger." 24 (5), 2476-2480
(Oct. 1981).
V. A. Albaugh, et. al., "Sequencer for Converting Any Shift Register Into a Shift
Register Having a Lesser Number of Bit Positions." (Oct. 1981).
A. R. Barsness, W. H. Cochran, W. A. Lopour and L. P. Segar, "Longitudinal Parity
Generation for Single- Bit Error Correction." 24 (6),2769-2770 (Nov. 1981).
S. Lin and P. S. Yu, "Preventive Error Control Scheme." 24 (6), 2886-2891 (Nov. 1981).
- 425 -
D. T. Tang and P. S. Yu, "Hybrid Go-Back-N ARQ With Extended Code Block." 24 (6),
2892-2896 (Nov. 1981). .
F. Neves and A. K. Uht, "Memory Error Correction Without ECC." 24 (7a) , 3471 (Dec.
1981).
E. S. Anolick, et. aI., "Alpha Particle Error Correcting Device. " 24 (8), 4386 (Jan. 1982).
W. H. McAnney, "Technique for Test and Diagnosis of Shift-Register Strings." 24 (8),
4387-4389 (Jan. 1982).
F. J. Aichelmann, Jr. and L. K. Lange, "Paging Error Correction for Intermittent Errors."
- 426 -
NATIONAL TECHNICAL INFORMATION SERVICE
Altman, F. J., et. aI., "Satellite Communications Reference Data Handbook." AD-746 165,
(July 1972).
Assmus, Jr., E. F. and H. F. Mattson, Jr., "Research to Develop the Algebraic Theory of
Codes." AD-678 108, (Sept. 1968).
Assmus, Jr., E. F., et. aI., "Error-Correcting Codes." AD-754234, (Aug. 1972).
Assmus, Jr., E. F., et. aI., "Cyclic Codes." AD-634989, (Apr. 1966).
Assmus, Jr., E. F., et. aI., "Research to Develop the Algebraic Theory of Codes."
AD-656 783, (June 1967).
Bahl, L. R., "Correction of Single and Multiple Bursts of Error." AD-679 877, (Oct.
1968).
Benelli, G., "Multiple-Burst-Error-Correcting-Codes." N78-28316, (Apr. 1977).
Benice, R. J., et. aI., "Adaptive Modulation and Error Control Techniques." AD-484 188,
(May 1966).
Brayer, K., "Error Patterns and Block Coding for the Digital High-Speed Autovon
Channel. " AD-A022 489, (Feb. 1976).
Bussgang, J. J. and H. Gish, "Analog Coding." AD-721 228, (Mar. 1971).
Cetinyilmaz, N., "Application of the Computer for Real Time Encoding and Decoding of
Cyclic Block Codes. " AD/A-021 818, (Dec. 1975).
Chase, D., et. aI., "Troposcatter Interleaver Study RepOrt." AD/A-008 523, (Feb. 1975).
Chase, D., et. aI., "Coding/MUX Overhead Study." AD/A-009 174, (Mar. 1975.
Chase, Dr., D., et. aI., "Multi-Sample Error Protection Modulation Study." AD/A-028 985,
(May 1976).
Chase, Dr. D., et. al., "Demod/Decoder Integration." AD/A-053 685, (Apr. 1978).
Chien, R. T. and S. W. Ng., "L-Step Majority Logic Decoding." AD-707 877, (June 1970).
Chien, R. T., et. aI., "Hardware and Software Error Correction Coding." AD/A-OI7 377,
(Aug. 1975).
Choy, D. M-H., "Application of Fourier Transform Over Finite Fields to Error-Correcting
Codes." AD-778 102, (Apr. 1974).
Covitt, A. L., "Performance Analysis of a Frequency Hopping Modem." AD-756 840, (Dec.
1972).
DonnaIly, W., "Error Probability in Binary Digital FM Transmission Systems."
AD/A-056 237, (Feb. 1978).
- 427 -
Ellison, J. T., "Universal Function Theory and Galois Logic Studies." AD-740 849, (Mar.
1972).
Ellison, J. T. and B. Kolman, "Galois Logic Design." AD-717 205, (Oct. 1970).
Forney, Jr., G., "Study of Correlation Coding." AD-822 106, (Sept. 1967).
Gilhousen, K. S., et. al., "Coding Systems Study for High Data Rate Telemetry Links."
N71-27786, (Jan. 1971).
Gish, H., "Digital Modulation Enhancement Study." AD-755 939, (Jan. 1973).
Massey, J. L., "Joint Source and Channel Coding." AD/A-045 938, (Sept. 1977).
Mitchell, M. E., "Coding for Turbulent Channels." AD-869-973, (Apr. 1970).
Mitchell, M. E. and Colley, L. E., "Coding for Turbulent Channels." AD-869 942, (Apr.
1970).
Mitchell, M. E., et. al., "Coding for Turbulent Channels. " AD-869 941, (Apr. 1970).
Morakis, J. C., "Shift Register Generators and Applications to Coding." X-520-68-133,
(Apr. 1963).
Muggia, A., "Effect of the Reduction of the Prandtl in the Stagnation Region Past an
Axisymmetric Blunt Body in Hypersonic Flow. " AD-676 388, (July 1968).
McEliece, R. J., et. al., "Synchronization Strategies for RFI Channels. " N77-21123.
- 428 -
Nesenbergs, M. "Study of Error Control Coding for the U. S. Postal Service Electronic
Message System." PB-252 689, (May 1975).
Oderwalder, J. P., et. al., "Hybrid Coding Systems Study Final Report." N72-32206, (Sept.
1972).
Paschburg, R. H., "Software Implementation of Error-Correcting Codes." AD-786 542,
(Aug. 1974).
Pierce, J. N., "Air Force Cambridge Research Laboratories." AD-744 069, (Mar. 1972).
Reed, I. S., "kth-Order Near-Orthogonal Codes." AD-725 901, (1971).
Reed, I. S. and T. K. Truong, "A Simplified Algorithm for Correcting Both Errors and
Erasures of R-S Codes." N79-16012, (Sept./Oct. 1978).
Roome, T. F., "Generalized Cyclic Codes Finite Field Arithmetic." AD/A-070 673, (May
1979).
Rudolph, L. D., "Decoding Complexity Study." AD/A-002 155, (Nov. 1974).
Rudolph, L. D., "Decoding Complexity Study II." AD/A-039 023, (Mar. 1977).
Sarwate, D. V., "A Semi-Fast Fourier Transform Algorithm Over GF(2^m)." AD/A-034 982,
(Sept. 1976).
Schmandt, F. D., "The Application of Sequential Code Reduction." AD-771 587, (Oct.
1973).
Sewards, A., et. al., "Forward Error-Correction for the Aeronautical Satellite Commun-
ications Channel." N79-19193, (Feb. 1979).
Skoog, E. N., "Error Correction Coding with NMOS Microprocessors: Concepts. "
AD/A-072 982, (May 1979).
Solomon, G., "Error Correcting Codes for the English Alphabet and Generalizations."
AD-774 850, (July 1972).
Solomon G. and D. J. Spencer, "Error Correction/Multiplex for Megabit Data Channels."
AD-731 567, (Sept. 1971).
Solomon, G., et. al., "Error Correction. Multiplex for Megabit Data Channels."
AD-731 568, (Sept. 1971).
Stutt, C. A., "Coding for Turbulent Channels." AD-869979, (Apr. 1970).
Tomlinson, M. and B. H. Davies, "Low Rate Error Correction Coding for Channels with
Phase Jitter." AD/A-044 658, (Feb. 1977).
Viterbi, A. J., et. aI., "Concatenation of Convolutional and Block Codes" N71-32505,
(June 1971).
- 429 -
Welch, L. R., et. aI., "The Fast Decoding of Reed-Solomon Codes Using Fermat Theoretic
Transforms and Continued Fractions." N77-14056.
Wong, J. S. L., et. aI., "Review of Finite Fields: Applications to Discrete Fourier Trans-
forms and Reed-Solomon Coding. " N77-33875, (July 1977).
- 430 -
AUDIO ENGINEERING SOCIETY PREPRINTS
Adams, R. W., "Filtering in the Log Domain." 1470 (B-5), (May 1979).
Doi, T. T., "Channel Codings for Digital Audio Recordings." 1856 (I-1), (Oct./Nov. 1981).
Doi, T. T., "A Design of Professional Digital Audio Recorder." 1885 (G-2), (Mar. 1982).
Doi, T. T., et. at., "Cross Interleave Code for Error Correction of Digital Audio
Systems." 1559 (H-4), (Nov. 1979).
Doi, Dr. T. T., et. at., "A Long Play Digital Audio Disc System." 1442 (G-4), (Mar. 1979).
Doi, T. T., et. at., "A Format of Stationary-Head Digital Audio Recorder Covering Wide
Range of Application." 1677 (H-6), (Oct.lNov. 1980).
Engberg, E. W., "A Digital Audio Recorder Format for Professional Applications."
1413 (F-1), (Nov. 1978).
Muraoka, T., et. aI., "A Group Delay Analysis of Magnetic Recording Systems."
1466 (A-5), (May 1979).
Nakajima, H., et. at., "A New PCM Audio System as an Adapter of Video Tape
Recorders." 1352 (B-l1), (May 1978).
Nakajima, H., et. al., "Satellite Broadcasting System for Digital Audio." 1855 (L-8) ,
(Oct.lNov. 1981).
- 431 -
Odaka, K., et. al., "LSIs for Digital Signal Processing to be Used in "Compact Disc
Digital Audio" Players." 1860 (G-5), (Mar. 1982).
Sadashige, K. and H. Matsushima, "Recent Advances in Digital Audio Recording Tech-
nique." 1652 (K-5), (May 1980).
Seno, K., et. al., "A Consideration of the Error Correcting Codes for PCM Recording
System." 1397 (H-4), (Nov. 1978).
Tanaka, K., et. al., "2-Channel PCM Tape Recorder for Professional Use." 1408 (F-3) ,
(Nov. 1978).
Tanaka, K., et. al., "Improved Two Channel PCM Tape Recorder for Professional Use."
1533 (G-3), (Nov. 1979).
Tanaka, K., et. al., "On a Tape Format for Reliable PCM Multi-Channel Tape Recorders. "
1669 (K-l), (May 1980).
Tanaka, K., et. aI., "On PCM Multi-Channel Tape Recorder Using Powerful Code
Format." 1690 (H-5), (Oct.lNov. 1980).
Tsuchiya, Y., et. al., "A 24-Channel Stationary-Head Digital Audio Recorder." 1412 (F-2) ,
(Nov. 1978).
Van Gestel, W. J, et. al., " A Multi-Track Digital Audio Recorder for Consumer Applica-
tions." 1832 (1-4), (Oct. 1981).
Vries, L. B., "The Error Control System of Philips Compact Disc." 1548 (G-8), (Nov.
1979).
Vries, L. B., et. al., "The Compact Disc Digital Audio System: Modulation and Error-
Correction." 1674 (H-8), (Oct. 1980).
White, L., et. al., "Refinements of the Threshold Error Correcting Algorithm."
1790 (B-5), (May 1981).
Yamada, Y., et. al., "Professional-Use PCM Audio Processor With a High Efficiency Error
Correction System." 1628 (G-7), (May 1980).
- 432 -
PATENTS
3,162,837, "Error Correcting Code Device With Modulo-2 Adder and Feedback Means,"
Meggitt, (1964).
3,163,848, "Double Error Correcting System," Abramson, (1964).
3,226,685, "Digital Recording Systems Utilizing Ternary, N Bit Binary and Other Self-
Clocking Forms, " Potter, et al., (1965).
3,227,999, "Continuous Digital Error-Correcting System," Hagelbarger, (1966).
3,264,623, "High Density Dual Track Redundant Recording System," Gabor, (1966).
3,398,400, "Method and Arrangement for Transmitting and Receiving Data Without
Errors," Rupp, et al., (1968).
3,402,390, "System for Encoding and Decoding Information Which Provides Correction of
Random Double-Bit and Triple-Bit Errors," Tsimbidis, et al., (1968).
3,411,135, "Error Control Decoding System," Watts, (1968).
- 433 -
3,416,132, "Group Parity Handling," MacSorley, (1968).
3,418,629, "Decoders For Cyclic Error-Correcting Codes," (1968).
3,421,147, "Buffer Arrangement," Burton, et al., (1969).
3,421,148, "Data Processing Equipment," (1969).
3,423,729, "Anti-Fading Error Correction System," Heller, (1969).
3,437,995, "Error Control Decoding System," Watts, (1969).
3,452,328, "Error Correction Device For Parallel Data Transmission System," Hsiao,
et al., (1969).
3,457,562, "Error Correcting Sequential Decoder," (1969).
3,458,860, "Error Detection By Redundancy Checks," Shimabukuro, (1969).
3,465,287, "Burst Error Detector," (1969).
3,475,723, "Error Control System," Burton, et al., (1969).
3,475,724, "Error Control System," Townsend, et al., (1969).
3,475,725, "Encoding Transmission System," Frey, Jr., (1969).
3,478,313, "System For Automatic Correction Of Burst-Errors, " Srinivasan, (1969).
3,504,340, "Triple Error Correction Circuit," Allen, (1970).
3,506,961, "Adaptively Coded Data Communications System," Abramson, et aI., (1970).
- 434 -
3,542,756, "Error Correcting," Gallager, (1970).
3,562,711, "Apparatus for Detecting Circuit Malfunctions, " Davis, et al., (1971).
3,568,148, "Decoder for Error Correcting Codes," Clark, Jr., (1971).
3,573,728, "Memory With Error Correction for Partial Store Operation," Kolankowsky,
et al., (1971).
3,576,952, "Forward Error Correcting Code Telecommunicating System," VanDuuren,
et at., (1971).
- 435 -
3,624,637, "Digital Code to Digital Code Conversions," Irwin, (1971).
3,629,824, "Apparatus for Multiple-Error Correcting Codes," Bossen, (1971).
3,631,428, "Quarter-Half Cycle Coding for Rotating Magnetic Memory System," King,
(1971).
3,634,821, "Error Correcting System," (1972).
3,638,182, "Random and Burst Error-Correcting Arrangement with Guard Space Error
Correction," Burton, et al., (1972).
3,639,900, "Enhanced Error Detection and Correction for Data Systems," Hinz, Jr.,
(1972).
3,668,631, "Error Detection and Correction System with Statistically Optimized Data
Recovery," Griffith, et aI., (1972).
3,668,632, "Fast Decode Character Error Detection and Correction System," Oldham III,
(1972).
3,675,200, "System for Expanded Detection and Correction of Errors in Parallel Binary
Data Produced by Data Tracks," (1972).
- 436 -
3,685,016, "Array Method and Apparatus for Encoding, Detecting, and/or Correcting
Data, " (1972).
3,697,947, "Character Correcting Coding System and Method for Deriving the Same,"
(1972).
3,697,948, "Apparatus for Correcting Two Groups of Multiple Errors," Bossen, (1972).
3,697,949, "Error Correction System for Use With a Rotational Single-Error Correction,
Double-Error Detection Hamming Code," (1972).
3,697,950, "Versatile Arithmetic Unit for High Speed Sequential Decoder," (1972).
3,718,903, "Circuit Arrangement for Checking Stored Information," Oiso, et al., (1973).
3,725,859, "Burst Error Detection and Correction System," Blair, et al., (1973).
3,742,449, "Burst and Single Error Detection and Correction System," Blair, (1973).
,3,745,528, "Error Correction for Two Tracks in a Multi-Track System," Patel, (1973).
3,753,227, "Parity Check Logic for a Code Reading System," Patel, (1973).
3,764,998, "Methods and Apparatus for Removing Parity Bits from Binary Words,"
Spencer, (1973).
- 437 -
3,766,521, "Multiple B-Adjacent Group Error Correction and Detection Codes and
Self-Checking Translators Therefor," (1973).
3,768,071, "Compensation for Defective Storage Positions," Knauft, et al., (1973).
3,771,126, "Error Correction for Self-Synchronized Scramblers," (1973).
3,771,143, "Method and Apparatus for Providing Alternate Storage Areas on a Magnetic
Disk Pack," Taylor, (1973).
3,774,154, "Error Control Circuits and Methods," Devore, et al., (1973).
3,775,746, "Method and Apparatus for Detecting Odd Numbers of Errors and Burst
Errors of Less Than a Predetermined Length in Scrambled Digital
Sequences," Boudreau, et aI., (1973).
3,777,066, "Method and System for Synchronizing the Transmission of Digital Data
While Providing Variable Length Filler Code," Nicholas, (1973).
3,780,271, "Error Checking Code and Apparatus for an Optical Reader," (1973).
3,780,278, "Binary Squaring Circuit," Way, (1973).
3,781,109, "Data Encoding and Decoding Apparatus and Method," Mayer, Jr., et aI.,
(1973).
3,781,791, "Method and Apparatus for Decoding BCH Codes," Sullivan, (1973).
3,786,201, "Audio-Digital Recording System," (1974).
3,786,439, "Error Detection Systems, " McDonald, et al., (1974).
3,794,819, "Error Correction Method and Apparatus," Berding, (1974).
3,798,597, "System and Method for Effecting Cyclic Redundancy Checking," Frambs,
et al., (1974).
3,800,281, "Error Detection and Correction Systems," Devore, et al., (1974).
3,801,955, "Cyclic Code Encoder/Decoder," Howell, (1974).
3,810,111, "Data Coding With Stable Base Line for Recording and Transmitting Binary
Data, " (1974).
3,820,083, "Coded Data Enhancer, Synchronizer, and Parity Remover System," Way,
(1974).
- 438 -
3,825,893, "Modular Distributed Error Detection and Correction Apparatus and Method,"
(1974).
3,831,142, "Method and Apparatus for Decoding Compatible Convolutional Codes," (1974).
3,831,143, "Concatenated Burst-Trapping Codes," Trafton, (1974).
3,832,684, "Apparatus for Detecting Data Bits and Error Bits In Phase Encoded Data,"
Besenfelder, (1974).
3,842,400, "Method and Circuit Arrangement for Decoding and Correcting Information
Transmitted in a Convolutional Code," (1974).
3,843,952, "Method and Device for Measuring the Relative Displacement Between Binary
Signals Corresponding to Information Recorded on the Different Tracks of a
Kinematic Magnetic Storage Device," Husson, (1974).
3,851,306, "Triple Track Error Correction," (1974).
3,858,119, "Error Detection Recording Technique," (1974).
3,872,431, "Apparatus for Detecting Data Bits and Error Bits in Phase Encoded Data,"
Besenfelder, et al., (1975).
3,876,978, "Archival Data Protection," (1975).
3,893,070, "Error Correction and Detection CirCuit With Modular Coding Unit," (1975).
3,893,071, "Multi Level Error Correction System for High Density Memory," (1975).
- 439 -
3,893,078, "Method and Apparatus for Calculating the Cyclic Code of a Binary
Message," Finet, (1975).
3,895,349, "Pseudo-Random Binary Sequence Error Counters," Robson, (1975).
3,896,416, "Digital Telecommunications Apparatus Having Error-Correcting Facilities,"
Barrett, et al., (1975).
3,903,474, "Periodic Pulse Check Circuit, " (1975).
3,909,784, "Information Coding With Error Tolerant Code, " Raymond, (1975).
3,913,068, "Error Correction of Serial Data Using a Subfield Code," (1975).
3,920,976, "Information Storage Security System," Christensen, et aI., (1975).
3,921,210, "High Density Data Processing System," Halpern, (1975).
3,925,760, "Method of and Apparatus for Optical Character Recognition, Reading and
Reproduction," Mason, et aI., (1975).
3,928,823, "Code Translation Arrangement," (1975).
3,930,239, "Integrated Memory," SaIters, et aI., (1975).
3,938,085, "Transmitting Station and Receiving Station for Operating With a Systematic
Recurrent Code," (1976).
3,944,973, "Error Syndrome and Correction Code Forming Devices," Masson, (1976).
3,949,380, "Peripheral Device Reassignment Control Technique," Barbour, et aI., (1976).
3,958,110, "Logic Array with Testing Circuitry," Hong, et al., (1976).
3,997,876, "Apparatus and Method for Avoiding Defects in the Recording Medium within
a Peripheral Storage System," Frush, (1976).
- 440 -
4,009,469, "Loop Communications System with Method and Apparatus for Switch to
Secondary Loop," Boudreau, et al., (1977).
4,020,461, "Method of and Apparatus for Transmitting and Receiving Coded Digital
Signals," Adams, et al., (1977).
4,024,498, "Apparatus for Dead Track Recovery," (1977).
4,030,067, "Table Lookup Direct Decoder for Double-Error Correcting (DEC)
BCH Codes Using a Pair of Syndromes," Howell, et al., (1977).
4,035,767, "Error Correction Code and Apparatus for the Correction of Differentially
Encoded Quadrature Phase Shift Keyed Data (DQPSK)," Chen, et al., (1977).
4,044,328, "Data Coding and Error Correcting Methods and Apparatus, " Herff, (1977).
4,047,151, "Adaptive Error Correcting Transmission System, " Rydbeck, et al., (1977).
4,052,698, "Multi-Parallel-Channel Error Checking," (1977).
4,064,483, "Error Correcting Circuit Arrangement Using Cube Circuits, " (1977).
4,072,853, "Apparatus and Method for Storing Parity Encoded Data from a Plurality of
Input/Output Sources," Barlow, et al., (1978).
4,074,228, "Error Correction of Digital Signals, " (1978).
- 441 -
4,081,789, "Switching Arrangement for Correcting the Polarity of a Data Signal
Transmitted With a Recurrent Code," (1978).
4,087,787, "Decoder for Implementing an Approximation of the Viterbi Algorithm Using
Analog Processing Techniques," (1978).
4,092,713, "Post-Write Address Word Correction in Cache Memory System," Scheuneman,
(1978).
4,099,160, "Error Location Apparatus and Methods," (1978).
4,105,999, "Parallel-Processing Error Correction System," Nakamura, (1978).
4,107,650, "Error Correction Encoder and Decoder," Luke, et al., (1978).
4,112,502, "Conditional Bypass of Error Correction for Dual Memory Access Time
Selection," Scheuneman, (1978).
4,115,768, "Sequential Encoding and Decoding of Variable Word Length, Fixed Rate Data
Codes," Eggenberger, et al., (1978).
4,117,458, "High Speed Double Error Correction Plus Triple Error Detection System,"
(1978).
4,119,945, "Error Detection and Correction," Lewis, Jr., et al., (1978).
4,129,355, "Light Beam Scanner with Parallelism Error Correction," Noguchi, (1978).
4,138,694, "Video Signal Recorder/Reproducer for Recording and Reproducing Pulse
Signals, " (1979).
4,139,148, "Double Bit Error Correction Using Single Bit Error Correction, Double Bit
Error Detection Logic and Syndrome Bit Memory," (1979).
4,141,039, "Recorder Memory With Variable Read and Write Rates," Yamamoto; (1979).
4,142,174, "High Speed Decoding of Reed-Solomon Codes," Chen, et al., (1979).
4,145,683, "Single Track Audio-Digital Recorder and Circuit for Use Therein Having
Error Correction," Brookhart, (1979).
4,151,510, "Method and Apparatus for an Efficient Error Detection and Correction
System," Howell, et al., (1979).
- 442 -
4,151,565, "Discrimination During Reading of Data Prerecorded in Different Codes,"
Mazzola, (1979).
4,156,867, "Data Communication System With Random and Burst Error Protection and
Correction, " Bench, et al., (1979).
4,157,573, "Digital Data Encoding and Reconstruction Circuit," (1979).
4,159,468, "Communications Line Authentication Device, " Barnes, et al., (1979).
4,159,469, "Method and Apparatus for the Coding and Decoding of Digital Information,"
(1979).
4,160,236, "Feedback Shift Register," Oka, et al., (1979).
4,162,480, "Galois Field Computer," (1979).
4,163,147, "Double Bit Error Correction Using Double Bit Complementing," (1979).
4,167,701, "Decoder for Error-Correcting Code Data," Kuki, et al., (1979).
4,193,062, "Triple Random Error Correcting Convolutional Code, " En, (1980).
4,196,445, "Time-Base Error Correction," (1980).
4,201,337, "Data Processing System Having Error Detection and Correction Circuits,"
Lewis, etal., (1980).
- 443 -
4,201,976, "Plural Channel Error Correcting Methods and Means Using Adaptive
Reallocation of Redundant Channels Among Groups of Channels, " (1980).
4,202,018, "Apparatus and Method for Providing Error Recognition and Correction of
Recorded Digital Information," Stockham, Jr., (1980).
4,204,199, "Method and Means for Encoding and Decoding Digital Data," Isailovic,
(1980).
4,204,634, "Storing Partial Words in Memory," (1980).
4,205,324, "Methods and Means for Simultaneously Correcting Several Channels in Error
in a Parallel Multi Channel Data System Using Continuously Modifiable
Syndromes and Selective Generation of Internal Channel Pointers," (1980).
4,205,352, "Device for Encoding and Recording Information with Peak Shift Compen-
sation," Tomada, (1980).
4,206,440, "Encoding for Error Correction of Recorded Digital Signals," Doi, et aI.,
(1980).
4,209,809, "Apparatus and Method for Record Reorientation Following Error Detection
in a Data Storage Subsystem," Chang, et al., (1980).
4,209,846, "Memory Error Logger Which Sorts Transient Errors From Solid Errors,"
Seppa, (1980).
4,211,996, "Error Correction System for Differential Phase-Shift-Keying," Nakamura,
(1980).
4,211,997, "Method and Apparatus Employing an Improved Format for Recording and
Reproducing Digital Audio," Rudnick, et al., (1980).
4,213,163, "Video-Tape Recording," Lemelson, (1980).
4,216,541, "Error Repairing Method and Apparatus for Bubble Memories," Clover, et aI.,
(1980).
4,234,804, "Signal Correction for Electrical Gain Control Systems," Bergstrom, (1980).
- 444 -
4,236,247, "Apparatus for Correcting Multiple Errors in Data Words Read From a
Memory," Sundberg, (1980).
4,254,500, "Single Track Digital Recorder and Circuit for Use Therein Having Error
Correction," Brookhart, (1981).
4,255,809, "Dual Redundant Error Detection System for Counters, " Hillman, (1981)
4,277,844, "Method of Detecting and Correcting Errors in Digital Data Storage
Systems," (1981).
4,281,355, "Digital Audio Signal Recorder," Wada, et al., (1981).
4,283,787, "Cyclic Redundancy Data Check Encoding Method and Apparatus," (1981).
4,291,406, "Error Correction on Burst Channels by Sequential Decoding," (1981).
- 445 -
4,300,231, "Digital System Error Correction Arrangement," Moffitt, (1981).
4,306,305, "PCM Signal Transmitting System With Error Detecting and Correcting
Capability," Doi, et al., (1981).
4,309,721, "Error Coding for Video Disc System," (1982).
4,312,068, "Parallel Generation of Serial Cyclic Redundancy Check," Goss, et al., (1982).
4,312,069, "Serial Encoding-Decoding for Cyclic Block Codes," Ahamed, (1982).
4,317,201, "Error Detecting and Correcting RAM Assembly," Sedalis, (1982).
4,319,357, "Double Error Correction Using Single Error Correcting Code," Bossen,
(1982).
4,336,612, "Error Correction Encoding and Decoding System," Inoue, et. aI, (1982).
4,337,458, "Data Encoding Method and System Employing Two-Thirds Code Rate with
Full Word Look-Ahead," Cohn, et al., (1982).
4,344,171, "Effective Error Control Scheme for Satellite Communications," Lin, et aI.,
(1982).
4,345,328 "ECC Check Bit Generation Using Through Checking Parity Bits," White,
(1982).
4,355,391, "Apparatus and Method of Error Detection and/or Correction in a Data Set,"
Alsop IV, (1982).
- 446 -
4,358,848, "Dual Function ECC System with Block Check Byte," Patel, (1982).
4,360,916, "Method and Apparatus for Providing for Two Bits-Error Detection and
Correction, " Kustedjo, et at., (1982).
4,375,100, "Method and Apparatus for Encoding Low Redundancy Check Words from
Source Data," Tsuji, et at., (1983).
4,380,071, "Method and Apparatus for Preventing Errors in PCM Signal Processing
Apparatus," Odaka, (1983).
4,380,812, "Refresh and Error Detection and Correction Technique for a Data
Processing System," Ziegler II, et al., (1983).
4,382,300, "Method and Apparatus for Decoding Cyclic Codes Via Syndrome Chains, "
Gupta, (1983).
4,384,353, "Method and Means for Internal Error Check in a Digital Memory, "
Varshney, (1983).
4,388,684, "Apparatus for Deferring Error Detection of Multibyte Parity Encoded Data
Received From a Plurality of Input/Output Data Sources," Nibby, Jr., et al.,
(1983).
4,395,768, "Error Correction Device for Data Transfer System," Goethats, et al., (1983).
4,397,022, "Weighted Erasure Codec for the (24,12) Extended Golay Code," Weng, et al.,
(1983).
- 447 -
4,398,292, "Method and Apparatus for Encoding Digital with Two Error-Correcting
Codes," Doi, et al., (1983).
4,402,045, "Multi-Processor Computer System," Krol, (1983).
4,402,080 "Synchronizing Device for a Time Division Multiplex System," Mueller, (1983).
4,404,673, "Error Correcting Network, " Yamanouchi, (1983).
4,404,674, "Method and Apparatus for Weighted Majority Decoding of FEC Codes Using
Soft Detection," Rhodes, (1983).
4,404,675, "Frame Detection and Synchronization System for High Speed Digital Trans-
mission Systems, " Karchevski, (1983).
4,425,645, "Digital Data Transmission with Parity Bit Word Lock-On," Weaver, et al.,
(1984).
- 448 -
4,441,184, "Method and Apparatus for Transmitting a Digital Signal," Sonoda, et al.,
(1984).
4,447,903, "Forward Error Correction Using Coding and Redundant Transmission,"
Sewerinson, (1984).
4,450,561, "Method and Device for Generating Check Bits Protecting a Data Word,"
Gotze, et al., (1984).
4,450,562, "Two Level Parity Error-Correction System," Wacyk, et al., (1984).
4,451,919, "Digital Signal Processor for Use in Recording and/or Reproducing
Equipment," Wada, et al., (1984).
4,451,921, "PCM Signal Processing Circuit," Odaka, (1984).
4,453,248, "Fault Alignment Exclusion Method to Prevent Realignment of Previously
Paired Memory Defects, " Ryan, (1984).
- 449 -
4,464,755, "Memory System with Error Detection and Correction," Stewart, et al.,
(1984).
4,468,769, "Error Correcting System for Correcting Two or Three Simultaneous Errors
in a Code," Koga, (1984).
4,468,770, "Data Receivers Incorporating Error Code Detection and Decoding," Metcalf,
et al., (1984).
4,472,805, "Memory System with Error Storage," Wacyk, et al., (1984).
4,473,902, "Error Correcting Code Processing System," Chen, (1984).
4,476,562, "Method of Error Correction," Sako, et al., (1984).
4,477,903, "Error Correction Method for the Transfer of Blocks of Data Bits, a Device
for Performing such a Method, A Decoder for Use with such a Method, and
a Device Comprising such a Decoder," Schouhamer Immink, et al., (1984).
4,494,234, "On-The-Fly Multibyte Error Correcting System," Patel, (1985).
4,495,623, "Digital Data Storage in Video Format," George, et aI., (1984).
4,497,058, "Method of Error Correction," Sako, et al., (1985).
4,498,174, "Parallel Cyclic Redundancy Checking Circuit," LeGresley, (1985).
4,498,175, "Error Correcting System," Nagumo, et al., (1985).
4,498,178, "Data Error Correction Circuit," Ohhashi, (1985).
4,502,141, "Circuits for Checking Bit Errors in a Received BCH Code Succession by the
Use of Primitive and Non-Primitive Polynomials," Kuki, (1985).
4,504,948, "Syndrome Processing Unit for Multibyte Error Correcting Systems," Patel,
(1985).
4,506,362, "Systematic Memory Error Detection and Correction Apparatus and Method,"
Morley, (1985).
4,509,172, "Double Error Correction - Triple Error Detection Code," Chen, (1985).
4,512,020, "Data Processing Device for Processing Multiple-Symbol Data-Words Based on
a Symbol-Correcting Code and Having Multiple Operating Modes," Krol,
et al., (1985).
4,519,058, "Optical Disc Player," Tsurushima, et al., (1985).
4,525,838, "Multibyte Error Correcting System Involving a Two-Level Code Structure,"
Patel, (1985).
- 450 -
4,525,840, "Method and Apparatus for Maintaining Word Synchronization After a
Synchronizing Word Dropout in Reproduction of Recorded Digitally Encoded
Signals," Heinz, et al., (1985).
4,527,269, "Encoder Verifier," Wood, etal., (1985).
4,538,270, "Method and Apparatus for Translating a Predetermined Hamming Code to an
Expanded Class of Hamming Codes," Goodrich, Jr., et al., (1985).
4,541,091, "Code Error Detection and Correction Method and Apparatus," Nishida, et al.,
(1985).
4,541,092, "Method for Error Correction," Sako, et al., (1985).
4,541,093, "Method and Apparatus for Error Correction," Furuya, et al., (1985).
4,544,968, "Sector Servo Seek Control," Anderson, et al., (1985).
4,546,474, "Method of Error Correction," Sako, et al., (1985).
4,549,298, "Detecting and Correcting Errors in Digital Audio Signals," Creed, et al.,
(1985).
4,554,540, "Signal Format Detection Circuit for Digital Radio Paging Receiver," Mori,
et al., (1985).
4,555,784, "Parity and Syndrome Generation for Error Detection and Correction in
Digital Communication Systems," Wood, (1985).
4,556,977, "Decoding of BCH Double Error Correction - Triple Error Detection
(DEC-TED) Codes," Olderdissen, et al., (1985).
4,559,625, "Interleavers for Digital Communications," Berlekamp, et al., (1985).
4,562,577, "Shared Encoder/Decoder Circuits for Use with Error Correction Codes of an
Optical Disk System," Glover, et al., (1985).
4,564,941, "Error Detection System," Woolley, et al., (1986).
4,564,944, "Error Correcting Scheme," Arnold, et al., (1986).
4,564,945, "Error-Correction Code for Digital Data on Video Disc," Glover, et al.,
(1986).
4,566,105, "Coding, Detecting or Correcting Transmission Error System," Oisel, et al.,
(1986).
4,567,594, "Reed-Solomon Error Detecting and Correcting System Employing Pipelined
Processors," Deodhar, (1986).
4,569,051, "Methods of Correcting Errors in Binary Data," Wilkinson, (1986).
4,573,171, "Sync Detect Circuit," McMahon, Jr., et al., (1986).
- 451 -
4,583,225, "Reed-Solomon Code Generator," Yamada, et al., (1986).
4,584,686, "Reed-Solomon Error Correction Apparatus," Fritze, (1986).
4,586,182, "Source Coded Modulation System," Gallager, (1986).
4,586,183, "Correcting Errors in Binary Data," Wilkinson, (1986).
4,589,112, "System for Multiple Error Detection with Single and Double Bit Error
Correction," Karim, (1986).
4,592,054, "Decoder with Code Error Correcting Function," Namekawa, et al., (1986).
4,593,392, "Error Correction Circuit for Digital Audio Signal," Kouyama, (1986).
4,593,393, "Quasi Parallel Cyclic Redundancy Checker," Mead, et al., (1986).
4,593,394, "Method Capable of Simultaneously Decoding Two Reproduced Sequences,"
Tomimitsu, (1986).
4,593,395, "Error Correction Method for the Transfer of Blocks of Data Bits, a Device
and Performing such a Method, A Decoder for Use with such a Method, and
a Device Comprising such a Decoder," Schouhamer Immink, et al., (1986).
4,597,081, "Encoder Interface with Error Detection and Method Therefor," Tassone,
(1986).
4,597,083, "Error Detection and Correction in Digital Communication Systems,"
Stenerson, (1986).
4,598,402, "System for Treatment of Single Bit Error in Buffer Storage Unit,"
Matsumoto, et al., (1986).
4,604,747, "Error Correcting and Controlling System," Onishi, et al., (1986).
4,604,750, "Pipeline Error Correction, " Manton, et al., (1986).
4,604,751, "Error Logging Memory System for Avoiding Miscorrection of Triple Errors,"
Aichelmann, Jr., et al., (1986).
4,606,026, "Error-Correcting Method and Apparatus for the Transmission of Word-Wise
Organized Data," Baggen, (1986).
4,607,367, "Correcting Errors in Binary Data," Ive, et al., (1986).
4,608,687, "Bit Steering Apparatus and Method for Correcting Errors in Stored Data,
Storing the Address of the Corrected Data and Using the Address to
Maintain a Correct Data Condition," Dutton, (1986).
4,608,692, "Error Correction Circuit," Nagumo, et al., (1986).
- 452 -
4,617,664, "Error Correction for Multiple Bit Output Chips," Aichelmann, Jr., et al.,
(1986).
4,623,999, "Look-up Table Encoder for Linear Block Codes," Patterson, (1986).
4,627,058, "Code Error Correction Method," Moriyama, (1986).
4,630,271, "Error Correction Method and Apparatus for Data Broadcasting System,"
Yamada, (1986).
4,630,272, "Encoding Method for Error Correction," Fukami, et at, (1986).
4,631,725, "Error Correcting and Detecting System," Takamura, et al., (1986).
4,633,471, "Error Detection and Correction in an Optical Storage System," Perera,
et al., (1986).
4,637,023, "Digital Data Error Correction Method and Apparatus," Lounsbury, et at,
(1987).
4,639,915, "High Speed Redundancy Processor," Bosse, (1987).
4,642,808, "Decoder for the Decoding of Code Words which are Blockwise Protected
Against the Occurrence of a Plurality of Symbol Errors within a Block by
Means of a Reed-Solomon Code, and Reading Device for Optically Readable
Record Carriers," Baggen, (1987).
4,646,301, "Decoding Method and System for Doubly-Encoded Reed-Solomon Codes,"
Okamoto, et at, (1987).
4,646,303, "Data Error Detection and Correction Circuit," Narusawa, et at, (1987).
- 453 -
PERIODICALS
Abramson, N., "Cascade Decoding of Cyclic Product Codes." IEEE Trans. on Comm.
Tech., Com-16 (3),398-402 (June 1968).
Alekar, S. V., "M6800 Program Performs Cyclic Redundancy Checks." Electronics, 167
(Dec. 1979).
Bahl, L. R. and R. T. Chien, "Single- and Multiple-Burst-Correcting Properties of a Class
of Cyclic Product Codes." IEEE Trans. on Info. Theory, IT-17 (5), 594-600 (Sept.
1971).
Bartee, T. C. and D. I. Schneider, "Computation with Finite Fields." Info. and Control,
6, 79-98 (1963).
Basham G. R., "New Error-Correcting Technique for Solid-State Memories Saves
Hardware. " Computer Design, 110-113 (Oct. 1976).
Baumert L. D. and R. J. McEliece, "Soft Decision Decoding of Block Codes." DSN
Progress Report 42-47,60-64 (July/Aug. 1978).
Beard, Jr., J., "Computing in GF(q)." Mathematics of Comp., 28 (128), 1159-1166 (Oct.
1974).
Berlekamp, E. R., "On Decoding Binary Bose-Chaudhuri-Hocquenghem Codes." IEEE Trans.
on Info. Theory, IT-11 (4), 577-579 (Oct. 1965).
Berlekamp, E. R., "The Enumeration of Information Symbols in BCH Codes." The Bell
Sys. Tech. J., 1861-1880 (Oct. 1967).
Berlekamp, E. R., "Factoring Polynomials Over Finite Fields." The Bell Sys. Tech. J.,
1853-1859 (Oct. 1967).
Berlekamp, E. R., "Factoring Polynomials Over Large Finite Fields." Mathematics of
Comp., 24 (111), 713-735 (July 1970).
Berlekamp, E. R., "Algebraic Codes for Improving the Reliability of Tape Storage."
National Computer Conference, 497-499 (1975).
Berlekamp, E. R., "The Technology of Error-Correcting Codes." Proceedings of the IEEE,
68 (5), 564-593 (May 1980).
Berlekamp, E. R. and J. L. Ramsey, "Readable Erasures Improve the Performance of
Reed-Solomon Codes." IEEE Trans. on Info. Theory, IT-24 (5), 632-633 (Sept.
1978).
Berlekamp, E. R., et al., "On the Solution of Algebraic Equations Over Finite Fields."
Info. and Control, 10, 553-564 (1967).
Blum, R., "More on Checksums." Dr. Dobb's J., (69), 44-45 (July 1982).
Bossen, D. C., "b-Adjacent Error Correction." IBM J. Res. Develop., 402-408 (July 1970).
Bossen, D. C. and M. Y. Hsiao, "A System Solution to the Memory Soft Error Problem."
IBM J. Res. Develop., 24 (3), 390-397 (May 1980).
Bossen, D. C. and S. S. Yau, "Redundant Residue Polynomial Codes." Info. and Control,
13, 597-618 (1968).
Brown, D. T. and F. F. Sellers, Jr., "Error Correction for IBM 800-Bit-Per-Inch Magnetic
Tape." IBM J. Res. Develop., 384-389 (July 1970).
Bulthuis, K., et al., "Ten Billion Bits on a Disk." IEEE Spectrum, 18-33 (Aug. 1979).
Burton, H. 0., "Some Asymptotically Optimal Burst-Correcting Codes and Their Relation
to Single-Error-Correcting Reed-Solomon Codes." IEEE Trans. on Info. Theory,
IT-17 (1), 92-95 (Jan. 1971).
Chien, R. T., "Block-Coding Techniques for Reliable Data Transmission." IEEE Trans. on
Comm. Tech., Com-19 (5), 743-751 (Oct. 1971).
Chien, R. T., "Memory Error Control: Beyond Parity." IEEE Spectrum, 18-23 (July 1973).
Chien, R. T., et al., "Correction of Two Erasure Bursts." IEEE Trans. on Info. Theory,
186-187 (Jan. 1969).
Comer, E., "Hamming's Error Corrections." Interface Age, 142-143 (Feb. 1978).
Davida, G. I. and J. W. Cowles, "A New Error-Locating Polynomial for Decoding of BCH
Codes." IEEE Trans. on Info. Theory, 235-236 (Mar. 1975).
Delsarte, P., "On Subfield Subcodes of Modified Reed-Solomon Codes." IEEE Trans. on
Info. Theory, 575-576 (Sept. 1975).
Doi, T. T., et al., "A Long-Play Digital Audio Disk System." Journal of the Audio
Eng. Soc., 27 (12), 975-981 (Dec. 1979).
Duc, N. Q., "On the Lin-Weldon Majority-Logic Decoding Algorithm for Product Codes."
IEEE Trans. on Info. Theory, 581-583 (July 1973).
Forney, Jr., G. D., "Burst-Correcting Codes for the Classic Bursty Channel." IEEE Trans.
on Comm. Tech., Com-19 (5), 772-781 (Oct. 1971).
Gorog, E., "Some New Classes of Cyclic Codes Used for Burst-Error Correction."
IBM J., 102-111 (Apr. 1963).
Greenberger, H., "An Iterative Algorithm for Decoding Block Codes Transmitted Over a
Memoryless Channel." DSN Progress Report 42-47,51-59 (July/Aug. 1978).
Greenberger, H. J., "An Efficient Soft Decision Decoding Algorithm for Block Codes."
DSN Progress Report 42-50, 106-109 (Jan./Feb. 1979).
Hartmann, C. R. P., "A Note on the Decoding of Double-Error-Correcting Binary BCH
Codes of Primitive Length." IEEE Trans. on Info. Theory, 765-766 (Nov. 1971).
Hellman, M. E., "Error Detection in the Presence of Synchronization Loss." IEEE Trans.
on Comm., 538-539 (May 1975).
Herff, A. P., "Error Detection and Correction for Mag Tape Recording." Digital Design,
16-18 (July 1978).
Hindin, H. J., "Error Detection and Correction Cleans Up Wide-Word Memory Act."
Electronics, 153-162 (June 1982).
Hodges, D. A., "A Review and Projection of Semiconductor Components for Digital
Storage." Proceedings of the IEEE, 63 (8), 1136-1147 (Aug. 1975).
Hong, S. J. and A. M. Patel, "A General Class of Maximal Codes for Computer
Applications." IEEE Trans. on Computers, C-21 (12), 1322-1331 (Dec. 1972).
Hsu, H. T., et al., "Error-Correcting Codes for a Compound Channel." IEEE Trans. on
Info. Theory, IT-14 (1), 135-139 (Jan. 1968).
Imamura, K., "A Method for Computing Addition Tables in GF(p^n)." IEEE Trans. on Info.
Theory, IT-26 (3), 367-368 (May 1980).
Iwadare, Y., "A Class of High-Speed Decodable Burst-Correcting Codes." IEEE Trans. on
Info. Theory, IT-18 (6), 817-821 (Nov. 1972).
Johnson, R. C., "Three Ways of Correcting Erroneous Data." Electronics, 121-134 (May
1981).
Justesen, J., "A Class of Constructive Asymptotically Good Algebraic Codes." IEEE Trans.
Info. Theory, IT-18, 652-656 (Sept. 1972).
Justesen, J., "On the Complexity of Decoding Reed-Solomon Codes." IEEE Trans. on Info.
Theory, 237-238 (Mar. 1976).
Kasami, T. and S. Lin, "On the Construction of a Class of Majority-Logic Decodable
Codes." IEEE Trans. on Info. Theory, IT-17 (5), 600-610 (Sept. 1971).
Kobayashi, H., "A Survey of Coding Schemes for Transmission or Recording of Digital
Data. " IEEE Trans. on Comm. Tech., Com-19 (6), 1087-1100 (Dec. 1971).
Koppel, R., "Ram Reliability in Large Memory Systems-Improving MTBF With ECC."
Computer Design, 196-200 (Mar. 1979).
Korodey, R. and D. Raaum, "Purge Your Memory Array of Pesky Error Bits." EDN,
153-158 (May 1980).
Laws, Jr., B. A. and C. K. Rushforth, "A Cellular-Array Multiplier for GF(2^m)." IEEE
Trans. on Computers, 1573-1578 (Dec. 1971).
Leung, K. S. and L. R. Welch, "Erasure Decoding in Burst-Error Channels." IEEE Trans.
on Info. Theory, IT-27 (2), 160-167 (Mar. 1981).
Levine, L. and W. Meyers, "Semiconductor Memory Reliability With Error Detecting and
Correcting Codes." Computer, 43-50 (Oct. 1976).
Levitt, K. N. and W. H. Kautz, "Cellular Arrays for the Parallel Implementation of
Binary Error-Correcting Codes." IEEE Trans. on Info. Theory, IT-15 (5), 597-607
(Sept. 1969).
Liccardo, M. A., "Polynomial Error Detecting Codes and Their Implementation." Computer
Design, 53-59 (Sept. 1971).
Lignos, D., "Error Detection and Correction in Mass Storage Equipment." Computer
Design, 71-75 (Oct. 1972).
Lim, R. S. and J. E. Korpi, "Unicon Laser Memory: Interlaced Codes for Multiple-
Burst-Error Correction." Wescon, 1-6 (1977).
Lim, R. S., "A (31,15) Reed-Solomon Code for Large Memory Systems." National
Computer Conf., 205-208 (1979).
Lin, S. and E. J. Weldon, "Further Results on Cyclic Product Codes." IEEE Trans. on
Info. Theory, IT-16 (4), 452-459 (July 1970).
Liu, K. Y., "Architecture for VLSI Design of Reed-Solomon Encoders." IEEE Transactions
on Computers, C-31 (2), 170-175 (Feb. 1982).
Locanthi, B., et al., "Digital Audio Technical Committee Report." J. Audio Eng. Soc.,
29 (1/2), 56-78 (Jan./Feb. 1981).
Lucy, D., "Choose the Right Level of Memory-Error Protection." Electronics Design,
ss37-ss39 (Feb. 1982).
Maholick, A. W. and R. B. Freeman, "A Universal Cyclic Division Circuit." Fall Joint
Computer Conf., 1-8 (1971).
Mandelbaum, D., "A Method of Coding for Multiple Errors." IEEE Trans. on Info. Theory,
518-521 (May 1968).
Mandelbaum, D., "On Decoding of Reed-Solomon Codes." IEEE Trans. on Info. Theory,
IT-17 (6), 707-712 (Nov. 1971).
Mandelbaum, D., "Construction of Error Correcting Codes by Interpolation." IEEE Trans.
on Info. Theory, IT-25 (1), 27-35 (Jan. 1979).
Miller, R. L., et al., "A Reed-Solomon Decoding Program for Correcting Both Errors and
Erasures." DSN Progress Report 42-53, 102-107 (July/Aug. 1979).
Miller, R. L., et al., "An Efficient Program for Decoding the (255,223) Reed-Solomon
Code Over GF(2^8) with Both Errors and Erasures, Using Transform Decoding." IEEE
Proc., 127 (4), 136-142 (July 1980).
Morris, D., "ECC Chip Reduces Error Rate in Dynamic Rams." Computer Design, 137-142
(Oct. 1980).
Naga, M. A. E., "An Error Detecting and Correcting System for Optical Memory."
Cal. St. Univ., Northridge, (Feb. 1982).
Oldham, I. B., et al., "Error Detection and Correction in a Photo-Digital Storage
System." IBM J. Res. Develop., 422-430 (Nov. 1968).
Patel, A. M., "A Multi-Channel CRC Register." Spring Joint Computer Conf., 11-14
(1971).
Patel, A. M., "Error Recovery Scheme for the IBM 3850 Mass Storage System." IBM J.
Res. Develop., 24 (1), 32-42 (Jan. 1980).
Patel, A. M. and S. J. Hong, "Optimal Rectangular Code for High Density Magnetic
Tapes." IBM J. Res. Develop., 579-588 (Nov. 1974).
Peterson, W. W., "Encoding and Error-Correction Procedures for the Bose-Chaudhuri
Codes. " IRE Trans. on Info. Theory, 459-470 (Sept. 1960).
Peterson, W. W. and D. T. Brown, "Cyclic Codes for Error Detection." Proceedings of the
IRE, 228-235 (Jan. 1961).
Plum, T., "Integrating Text and Data Processing on a Small System." Datamation, 165-175
(June 1978).
Pohlig, S. C. and M. E. Hellman, "An Improved Algorithm for Computing Logarithms
Over GF(p) and Its Cryptographic Significance." IEEE Trans. on Info. Theory, IT-24
(1), 106-110 (Jan. 1978).
Poland, Jr., W. B., et al., "Archival Performance of NASA GSFC Digital Magnetic Tape."
National Computer Conf., M68-M73 (1973).
Pollard, J. M., "The Fast Fourier Transform in a Finite Field." Mathematics of Compu-
tation, 23 (114), (Apr. 1971).
Promhouse, G. and S. E. Tavares, "The Minimum Distance of All Binary Cyclic Codes of
Odd Lengths from 69 to 99." IEEE Trans. on Info. Theory, IT-24 (4), 438-442 (July
1978).
Reddy, S. M., "On Decoding Iterated Codes." IEEE Trans. on Info. Theory, IT-16 (5),
624-627 (Sept. 1970).
Reddy, S. M. and J. P. Robinson, "Random Error and Burst Correction by Iterated
Codes." IEEE Trans. on Info. Theory, IT-18 (1), 182-185 (Jan. 1972).
Reed, I. S. and T. K. Truong, "The Use of Finite Fields to Compute Convolutions."
IEEE Trans. on Info. Theory, IT-21 (2), 208-213 (Mar. 1975).
Reed, I. S. and T. K. Truong, "Complex Integer Convolutions Over a Direct Sum of
Galois Fields." IEEE Trans. on Info. Theory, IT-21 (6), 657-661 (Nov. 1975).
Reed, I. S. and T. K. Truong, "Simple Proof of the Continued Fraction Algorithm for
Decoding Reed-Solomon Codes." Proc. IEEE, 125 (12), 1318-1320 (Dec. 1978).
Reed, I. S., et. aI., "Simplified Algorithm for Correcting Both Errors and Erasures of
Reed-Solomon Codes." Proc. IEEE, 126 (10), 961-963 (Oct. 1979).
Reed, I. S., et. aI., "The Fast Decoding of Reed-Solomon Codes Using Fermat Theoretic
Transforms and Continued Fractions." IEEE Trans. on Info. Theory, IT-24 (I),
100-106 (Jan. 1978).
Reed, I. S., et. aI., "Further Results on Fast Transforms for Decoding Reed-Solomon
Codes Over GF(2n) for n=4,5,6,8.· DSN Progress Report 42-50, 132-155 (Jan.lFeb.
1979).
Reno, C. W. and R. J. Tarzaiski, "Optical Disc Recording at 50 Megabits/Second."
SPIE, 177, 135-147 (1979).
Rickard, B., "Automatic Error Correction in Memory Systems." Computer Design, 179-182
(May 1976).
Ringkjob, E. T., "Achieving a Fast Data-Transfer Rate by Optimizing Existing Tech-
nology." Electronics, 86-91 (May 1975).
Sanyal, S. and K. N. Venkataraman, "Single Error Correcting Code Maximizes Memory
System Efficiency." Computer Design, 175-184 (May 1978).
Sloane, N. J. A., "A Survey of Constructive Coding Theory, and a Table of Binary Codes
of Highest Known Rate." Discrete Mathematics, 3, 265-294 (1972).
Sloane, N. J. A., "A Simple Description of an Error-Correcting Code for High-Density
Magnetic Tape." The Bell System Tech. J., 55 (2), 157-165) (Feb. 1976).
Steen, R. F., "Error Correction for Voice Grade Data Communication Using a
Communication Processor." IEEE Trans. on Comm., Com-22 (10), 1595-1606 (Oct.
1974).
Stiffler, J. J., "Comma-Free Error-Correcting Codes." IEEE Trans. on Info. Theory,
107-112 (Jan. 1965).
Stone, H. S., "Spectrum of Incorrectly Decoded Bursts for Cyclic Burst Error Codes."
IEEE Trans. on Info. Theory, IT-17 (6), 742-748 (Nov. 1971).
Stone, J. J., "Multiple Burst Error Correction." Info. and Control, 4, 324-331 (1961).
Stone, J. J., "Multiple-Burst Error Correction with the Chinese Remainder Theorem.·
J. Soc. Indust. Appl. Math., 11 (1), 74-81 (Mar. 1963).
Sundberg, C. E. W., "Erasure and Error Decoding for Semiconductor Memories." IEEE
Trans. on Computers, C-27 (8), 696-705 (Aug. 1978).
Swanson, R., "Understanding Cyclic Redundancy Codes." Computer Design, 93-99 (Nov.
1975).
Tang, D. T. and R. T. Chien, "Coding for Error Control." IBM Syst. J., (1), 48-83 (1969).
Truong, T. K. and R. L. Miller, "Fast Technique for Computing Syndromes of B.C.H. and
Reed-Solomon Codes." Electronics Letters, 15 (22), 720-721 (Oct. 1979).
Ullman, J. D., "On the Capabilities of Codes to Correct Synchronization Errors." IEEE
Trans. on Info. Theory, IT-13 (1), 95-105 (Jan. 1967).
Ungerboeck, G., "Channel Coding With Multilevel/Phase Signals." IEEE Trans. on Info.
Theory, IT-28 (1), 55-67 (Jan. 1982).
Van Der Horst, J. A., "Complete Decoding of Triple-Error-Correcting Binary BCH Codes."
IEEE Trans. on Info. Theory, IT-22 (2), 138-147 (Mar. 1976).
Wainberg, S., "Error-Erasure Decoding of Product Codes." IEEE Trans. on Info. Theory,
821-823 (Nov. 1972).
Wall, E. L., "Applying the Hamming Code to Microprocessor-Based Systems." Electronics,
103-110 (Nov. 1979).
Welch, L. R. and R. A. Scholtz, "Continued Fractions and Berlekamp's Algorithm." IEEE
Trans. on Info. Theory, IT-25 (1), 19-27 (Jan. 1979).
Weldon, Jr., E. J., "Decoding Binary Block Codes on Q-ary Output Channels." IEEE
Trans. on Info. Theory, IT-17 (6), 713-718 (Nov. 1971).
Weng, L. J., "Soft and Hard Decoding Performance Comparisons for BCH Codes." IEEE,
25.5.1-25.5.5 (1979).
White, G. M., "Software-Based Single-Bit I/O Error Detection and Correction Scheme."
Computer Design, 130-146 (Sept. 1978).
Whiting, J. S., "An Efficient Software Method for Implementing Polynomial Error
Detection Codes." Computer Design, 73-77 (Mar. 1975).
Willett, M., "The Minimum Polynomial for a Given Solution of a Linear Recursion."
Duke Math. I., 39 (1), 101-104 (Mar. 1972).
Willett, M., "The Index of an M-Sequence." Siam I. Appl. Math., 25 (1), 24-27 (July
1973).
Willett, M., "Matrix Fields Over GF(Q)." Duke Math. I., 40 (3), 701-704 (Sept. 1973).
Willett, M., "Cycle Representations for Minimal Cyclic Codes." IEEE Trans. on Info.
Theory, 716-718 (Nov. 1975).
Willett, M., "On a Theorem of Kronecker." The Fibonacci Quarterly, 14 (1), 27-30 (Feb.
1976).
Wimble, M., "Hamming Error Correcting Code." BYTE Pub. Inc., 180-182 (Feb. 1979).
Wolf, J., "Nonbinary Random Error-Correcting Codes." IEEE Trans. on Info. Theory,
236-237 (Mar. 1970).
Wolf, J. K., et al., "On the Probability of Undetected Error for Linear Block Codes."
IEEE Trans. on Comm., Com-30 (2), 317-324 (Feb. 1982).
Wong, J., et al., "Software Error Checking Procedures for Data Communication
Protocols." Computer Design, 122-125 (Feb. 1979).
Wu, W. W., "Applications of Error-Coding Techniques to Satellite Communications."
Comsat Tech. Review, 1 (1), 183-219 (Fall 1971).
Wyner, A. D., "A Note on a Class of Binary Cyclic Codes Which Correct Solid-Burst
Errors." IBM J., 68-69 (Jan. 1964).
Yencharis, L., "32-Bit Correction Code Reduces Errors on Winchester Disks." Electronic
Design, 46-47 (Mar. 1981).
Ziv, J., "Further Results on the Asymptotic Complexity of an Iterative Coding Scheme."
IEEE Trans. on Info. Theory, IT-12 (2), 168-171 (Apr. 1966).
INDEX
Feedback shift register, 18, 280, 282-284, 295-305, 367-368, 403
Field, 409
Finite field
   Circuits, 103-128, 134
   Computation in, 91-97, 129-133
   Definition of, 87, 409
   Extension, 88, 407, 409
   Ground, 88, 409
   Order, 88, 413
   Processor, 126-128, 152, 351, 353, 360
   Roots of equations, 121-125, 148, 167
Fire code, 64, 66, 135-149, 231-233, 242-243, 274, 279, 365, 371
Forward-acting code, 409
Forward error correction, 304-305, 403
Forward polynomial, 37-38, 293, 301, 409
Galois field, see finite field
Greatest common divisor
   Integers, 8
   Polynomials, 10
Ground field, 409
Hamming code, 59, 61
Hamming distance, 145, 159, 410
Hamming weight, 410
Hard error, 235, 240, 274, 293, 410
Integer function, 9
Interleaving, 202, 265-267, 270, 272, 285, 350
Inversion, 92, 103, 131, 261-264, 266, 280, 360
Isomorphic, 88, 410
k-bit serial, 136, 243-249
Least common multiple, 64
   Integers, 8
   Polynomials, 10, 281
Linear feedback shift register, 18, 403
Linear function, 3, 10, 410
Linear sequential circuit, 18, 403
Linear shift register, 18-19, 403
Linearly dependent, 411
Linearly independent, 411
Logarithm, 90-91, 103, 128, 132, 351, 358, 360, 362
Longitudinal Redundancy Check (LRC), 403, 411
Magnetic disk, 205, 224, 230-239, 241, 274-349, 372
Majority logic, 411
Majority logic decodable code, 411
Mass storage devices, 350-363
Minimum function, 411
Minimum polynomial of α^i, 412
Minimum weight of a code, 412
Miscorrection, 5, 6, 61, 64-67, 135-136, 166, 176, 196, 201, 202-204, 235-236
   Probability, 5, 65-67, 135-136, 140, 200-204, 231-235, 242, 258, 275, 281, 286, 294, 366, 371, 373, 412
Misdetection, 5, 6, 230, 252, 267
   Probability, 6, 54, 241, 250, 281, 286, 371, 412
Modulo function, 9, 372
Monic polynomial, 9-10, 15, 412
Multiplication circuits, 19-21, 281-284
Nibble, 1, 195, 361
On-the-fly correction, 68-73, 235, 370
Order
   of a finite field, 88, 413
   of a finite field element, 88, 413
Parity, 1-7, 32, 35, 52, 54, 275, 370, 413
Parity check code, 205, 414
Parity predict, 240, 280, 283, 367-369
Parity sector, 272
Parity tree, 146, 149, 151, 162
Pattern sensitivity, 64-66, 135-136, 140, 230-232, 239, 241-242, 274-275
Perfect code, 414
Period, 414
Pointer, 214, 222, 228, 271-272, 274-275, 414
Polynomials, 10, 16-17, 29-30, 35-48, 135, 140-141, 145, 186, 197, 205, 210-211, 231-232
   Binary, 16, 37, 178
   Code, 414
   Definitions, 10, 37, 101
   Division, 17
   Error locator, 147-148, 151-152, 157, 164-170, 196, 351, 354-357, 370
   Irreducible, 10, 16, 37-42, 62-64, 135, 210, 231, 410
   Monic, 9-10, 15
   Multiplication, 16, 281-282, 304
   Non-primitive, 136, 210
   Period, 37, 39-40, 62, 82, 136, 414
   Primitive, 37, 41-48, 62, 101, 286, 372, 384, 415
   Reciprocal, 37-38, 48, 82, 136, 247, 281, 293, 304, 306, 324, 347-349, 372, 416
   Self-reciprocal, 37, 136, 416
Power sum symmetric functions, 414
Prime fields, 414
Prime subfields, 414
Probability
   Miscorrection, 5-6, 16, 65-71, 135-136, 140, 200-204, 222, 231-235, 242, 258, 275, 281, 286, 294, 366, 371, 373
   Misdetection, 6, 54, 241, 250, 278, 281, 286, 371
   Undetected erroneous data, 230, 233, 236, 239, 240, 250, 256, 370, 418
Weight, 418
Neal Glover is widely recognized as
one of the world's leading experts
on the practical application of
error correcting codes and holds
several patents in the field.
ABOUT CIRRUS LOGIC - COLORADO
Cirrus Logic - Colorado was originally founded in 1979 as Data System Technology
(DST) and was sold to Cirrus Logic, Inc., of Milpitas, California, on January 18, 1990.
Cirrus Logic - Colorado provides error detection and correction (EDAC) products and
services to the electronics industries. We specialize in the practical implementation of
EDAC, recording and data compression codes to enhance the reliability and efficiency of
data storage and transmission in computer and communications systems, and all aspects
of error tolerance, including framing, synchronization, data formats, and error manage-
ment.
Cirrus Logic - Colorado also develops innovative VLSI products that perform
complex peripheral control functions in high-performance personal computers, worksta-
tions and other office automation products. The company develops advanced standard
and semi-standard VLSI controllers for data communications, graphics and mass storage.
ISBN 0-927239-00-0