
INFORMATION THEORY & CODING
Father of Digital Communication
 The roots of modern digital communication stem from the paper "A Mathematical Theory of Communication" by Claude Elwood Shannon in 1948.
Model of a Digital Communication System
[Block diagram: Information Source (message, e.g. English symbols) → Coding / Encoder (e.g. English to 0,1 sequence) → Communication Channel (can have noise or distortion) → Decoding / Decoder (e.g. 0,1 sequence to English) → Destination]
Shannon’s Definition of Communication
 "The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point."
 Shannon wanted to find a way of "reliably" transmitting data over the channel at the "maximal" possible rate.
INFORMATION THEORY
 Information theory provides a quantitative measure of the information contained in message signals and allows us to determine the capacity of a communication system to transfer this information from source to destination.
DIGITAL COMMUNICATIONS MODEL

 The source is an object that produces an event, the outcome of which is selected at random according to a probability distribution.
 A practical source in a communication system is a device that produces messages; it can be either analog or discrete.
 A discrete information source is a source that has only a finite set of symbols as possible outputs.
 The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.
DIGITAL COMMUNICATIONS MODEL
 The source encoder serves the purpose of removing as much redundancy as possible from the data. This is the data compression portion.
 The channel coder puts a modest amount of redundancy back in, in order to perform error detection or correction.
 The channel is what the data passes through, possibly becoming corrupted along the way.
DIGITAL COMMUNICATIONS MODEL
 The channel decoder performs error correction or detection.
 The source decoder undoes what is necessary to get the data back.
 Other blocks, such as a modulator/demodulator, could also be inserted into this model.
INFORMATION CONTENT OF A DISCRETE MEMORYLESS SOURCE

 The amount of information contained in an event is closely related to its uncertainty.
 Messages about outcomes with a high probability of occurrence convey relatively little information.
 If an event is certain (that is, the event occurs with probability 1), it conveys zero information.
INFORMATION CONTENT OF A DISCRETE MEMORYLESS SOURCE

 A mathematical measure of information should be a function of the probability of the outcome and should satisfy the following axioms:
1. Information should be proportional to the uncertainty of an outcome.
2. Information contained in independent outcomes should add.
Information Content – (Self information)

 The self information of a symbol sk with probability pk is I(sk) = log2(1/pk) bits.

Information Content – (Self information)

 The binary symbols '0' and '1' are transmitted with probabilities ¼ and ¾ respectively. Find the corresponding information.
Information Content – (Self information)
 Units of information: bits (base-2 logarithm), nats (natural logarithm), Hartleys (base-10 logarithm).
 1 Hartley = 2.303 nats
 1 Hartley = 3.32 bits
 1 nat = 1.443 bits
Information Content – (Self information)
 Why is a logarithmic expression chosen for measuring information?
 Self information of any message cannot be negative.
 Each message must contain a certain amount of information.
 The lowest possible self information is zero, which occurs for a sure event, since the probability of a sure event is one.
 More information is carried by a less likely message.
 When independent symbols are transmitted, the total self information equals the sum of the individual self informations.
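A minimal sketch (not part of the original slides; the function name self_information is illustrative) showing how the self information and the unit conversions above can be checked numerically:

```python
import math

def self_information(p, base=2):
    """Self information of an outcome with probability p.
    base=2 gives bits, base=math.e gives nats, base=10 gives Hartleys."""
    return math.log(1.0 / p, base)

# Example from the slides: symbols '0' and '1' with probabilities 1/4 and 3/4
for p in (0.25, 0.75):
    print(f"p={p}: {self_information(p):.3f} bits, "
          f"{self_information(p, math.e):.3f} nats, "
          f"{self_information(p, 10):.3f} Hartleys")
```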
Amount of information – I(sk)

 If the probability pk = 1 and pi = 0 for all i ≠ k, then there is no "surprise" and, therefore, no "information" when symbol sk is emitted. In other words, uncertainty, surprise, and information are related:
 Before the event S = sk occurs, there is an amount of uncertainty.
 When the event S = sk occurs, there is an amount of surprise.
 After the occurrence of the event S = sk, there is a gain in the amount of information.
Amount of information

 One bit is the amount of information that we gain when one of two possible and equally likely (i.e., equiprobable) events occurs.
AVERAGE INFORMATION OR ENTROPY

 In practice we transmit long sequences of symbols from an information source, so we are interested in the average information that the source produces.
 The quantity H(X) is called the entropy of source X.
 It is a measure of the average information content per source symbol.
 The source entropy H(X) can be considered as the average amount of uncertainty within source X.
Entropy

 The entropy of a discrete memoryless source X with symbol probabilities P(xi) is H(X) = Σi P(xi) log2(1/P(xi)) bits/symbol.

AVERAGE INFORMATION OR ENTROPY

 For a binary source X that generates independent symbols 0 and 1 with equal probability, the source entropy is H(X) = 1 bit/symbol.
 In general 0 ≤ H(X) ≤ log2 m, where m is the size (number of symbols) of the alphabet of source X.
 The lower bound corresponds to no uncertainty, which occurs when one symbol has probability P(xi) = 1 and P(xj) = 0 for all j ≠ i, i.e., X emits the same symbol xi all the time.
 The upper bound corresponds to maximum uncertainty, which occurs when P(xi) = 1/m for all i, that is, when all symbols are equally likely to be emitted by X.
Consider a source alphabet S={s1, s2} with probabilities p={1/256, 255/256}. Find the entropy.

Consider a source alphabet S={s1, s2} with probabilities p={7/16, 9/16}. Find the entropy.

Consider a source alphabet S={s1, s2} with probabilities p={1/2, 1/2}. Find the entropy.
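A short sketch (the helper name entropy is illustrative) that evaluates the three examples above numerically:

```python
import math

def entropy(probs):
    """Shannon entropy in bits/symbol of a discrete memoryless source."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

for probs in ([1/256, 255/256], [7/16, 9/16], [1/2, 1/2]):
    print(probs, "->", round(entropy(probs), 4), "bits/symbol")
# Roughly 0.0369, 0.9887 and 1.0 bits/symbol respectively.
```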
Source Entropy Rate (Information Rate)

 If the rate at which source X emits symbols is r (symbols/sec), the information rate R of the source is given by R = r H(X) bits/sec.
Consider a source alphabet S={s1, s2, s3} with probabilities p={1/2, ¼, ¼}. Find (a) the self information of each symbol and (b) the entropy.

The collector voltage of a certain circuit is to lie between -5 and -12 volts. The voltage readings occur with the following probabilities:

V(i):  V1   V2   V3    V4    V5   V6
P(i):  1/6  1/3  1/12  1/12  1/6  1/6

A discrete source emits one of six symbols once every msec. The symbol probabilities are ½, ¼, 1/8, 1/16, 1/32 and 1/32 respectively. Find the source entropy and information rate (see the sketch below).
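A small numerical sketch (assumed, not from the slides) for the six-symbol problem above:

```python
import math

probs = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]      # symbol probabilities
H = sum(p * math.log2(1 / p) for p in probs)   # entropy in bits/symbol
r = 1_000                                      # one symbol every msec -> 1000 symbols/sec
R = r * H                                      # information rate in bits/sec
print(f"H = {H:.4f} bits/symbol, R = {R:.1f} bits/sec")   # H = 1.9375, R = 1937.5
```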
A card is drawn from a deck.
a) How much information did you receive if you are told it is a spade?
b) Repeat (a) if it is an ace.
c) Repeat (a) if it is an ace of spades.
d) Verify that the information obtained in (c) is the sum of the information obtained in (a) and (b).

An analog signal is bandlimited to 500 Hz and is sampled at the Nyquist rate. The samples are quantized into 4 levels. The quantization levels are assumed to be independent and occur with probabilities p1 = p4 = 1/8, p2 = p3 = 3/8. Find the information rate.

A zero-memory source has a source alphabet S={s1, s2, s3} with p={1/2, ¼, 1/4}. Find the entropy of the second extension and verify H(S²) = 2H(S).
Entropy
z
Communication Channel
 A channel is defined as the medium through which the coded signals are transmitted.
 The maximum rate at which data can be transferred across the channel with an arbitrarily small probability of error is called the channel capacity.
Channel representation
 A channel may be represented by an input alphabet A={a1, a2, …, ar} consisting of r symbols,
 an output alphabet B={b1, b2, …, bs} consisting of s symbols, and
 the conditional probabilities P(bj/ai) with i=1,2,…,r and j=1,2,…,s.
CHANNEL MATRIX
 In total there are r×s conditional probabilities, which are represented in matrix form as the channel matrix or noise matrix P(B/A).
CHANNEL DIAGRAM
 p11 = p(b1/a1) = conditional probability of receiving b1 given a1 is transmitted, with no noise.
 p12 = p(b2/a1) = conditional probability of receiving b2 given a1 is transmitted, with noise affecting the second '0'.
 p13 = p(b3/a1) = conditional probability of receiving b3 given a1 is transmitted, with noise affecting the first '0'.
 p14 = p(b4/a1) = conditional probability of receiving b4 given a1 is transmitted, with noise affecting both symbols.
 p11 + p12 + p13 + p14 = 1
 The sum of all elements in any row of the channel matrix is equal to unity.
 The sum of the input probabilities is equal to unity.
JOINT PROBABILITY MATRIX
 By adding all the elements of the first column of the JPM, we get the probability of the first output symbol b1; similarly for all output symbols.
 By adding the elements of the JPM row-wise, we obtain the probabilities of the input symbols.
 The sum of all elements of the JPM is unity.


CONDITIONAL ENTROPY
 The entropy of the input symbols after the transmission and reception of a particular output symbol bj is defined as the conditional entropy H(A/bj).
 The average value of all such conditional entropies, H(A/B), is called the equivocation.
 H(A/B) is a measure of the uncertainty remaining about the transmitted symbols after observing the output, and hence represents the amount of information lost due to noise.
 Interchanging A and B gives H(B/A).
MUTUAL INFORMATION
 When an average information of H(A) is transmitted over the channel, an average amount of information H(A/B) is lost in the channel due to noise. The balance of information received at the receiver with respect to the observed output symbols is the mutual information:
 I(A,B) = H(A) – H(A/B)
 Interchanging A and B: I(B,A) = H(B) – H(B/A)
 Hence I(A,B) = I(B,A)

[Venn-diagram view: I(X;Y) is the overlap of H(X) and H(Y); I(X;Y) = I(Y;X), but in general H(X|Y) ≠ H(Y|X).]
MUTUAL INFORMATION PROPERTIES

 Mutual information is symmetric: I(X;Y) = I(Y;X)
 Mutual information is non-negative: I(X;Y) ≥ 0
 I(X;X) = H(X)
 I(X;Y) ≤ min{H(X), H(Y)}
 0 ≤ H(X) ≤ log2 m, where m is the alphabet size
 If Y = g(X), then H(Y) ≤ H(X)
 Mutual information is related to the joint entropy: I(A,B) = H(A) + H(B) – H(A,B)
JOINT ENTROPY
z

 H(A,B) = H(A) + H(B/A)


RATE OF INFORMATION
 The average rate at which information is passed into the channel is Rin = H(A) · r, where r is the symbol rate.
 At the receiver it is not always possible to reconstruct the input symbols exactly, due to errors introduced when the signal passes through the channel.
 Some amount of information, called the equivocation H(A/B), is lost.
 The mutual information is I(A,B) = H(A) – H(A/B).
 The rate of information transfer is Rt = I(A,B) · r.
A transmitter has an alphabet consisting of 5 letters {a1, a2, a3, a4, a5} and the receiver has an alphabet of four letters {b1, b2, b3, b4}, with joint probabilities as given. Compute the different entropies.
z
z
A transmitter transmits 5 symbols with probabilities 0.2, 0.3, 0.2, 0.1 and 0.2. Given the channel matrix P(B/A), calculate H(B) and H(A,B).
z
z
For the JPM given below, compute H(X), H(Y), H(X,Y), H(X/Y), H(Y/X) and I(X,Y). (A generic numerical sketch follows.)
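A minimal sketch (assuming an illustrative joint probability matrix, since the actual JPM was on a figure slide) showing how all the requested quantities follow from the JPM:

```python
import numpy as np

# Hypothetical joint probability matrix P(X=i, Y=j); rows sum to P(X), columns to P(Y).
P = np.array([[0.25, 0.05],
              [0.10, 0.60]])

def H(p):
    """Entropy in bits of a probability vector/matrix (zeros ignored)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

Hxy = H(P)                    # joint entropy H(X,Y)
Hx = H(P.sum(axis=1))         # H(X) from row sums
Hy = H(P.sum(axis=0))         # H(Y) from column sums
Hx_given_y = Hxy - Hy         # H(X/Y) = H(X,Y) - H(Y)
Hy_given_x = Hxy - Hx         # H(Y/X) = H(X,Y) - H(X)
Ixy = Hx + Hy - Hxy           # mutual information I(X,Y)
print(Hx, Hy, Hxy, Hx_given_y, Hy_given_x, Ixy)
```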
z
z
z
z
SOURCE CODING
 A DMS has an alphabet with K different symbols, where the kth symbol sk occurs with probability pk, k = 0, 1, …, K – 1.
SOURCE CODING
z

 The process of efficient representation of data generated by a discrete


source of information is called source encoding
 The device that performs the representation is called a source encoder

 Objective is to minimize the average bit rate by reducing the


redundancy
 Some source symbols are more probable than others

 Exploit this feature in the generation of a source code by assigning


short codewords to frequent source symbols, and long codewords to
rare source symbols.
 Such a source code is a variable-length code
SOURCE CODING
z

 Source encoder that satisfies two requirements:

1. The code words produced by the encoder are in binary form.

2. The source code is uniquely decodable, so that the original


source sequence can be reconstructed perfectly from the
encoded binary sequence.
 The second requirement is particularly important: it
constitutes the basis for a perfect source code.
SOURCE CODING – BLOCK CODES
z

 It maps each of the symbols of the source alphabet S into


some finite sequence of code symbols . These finite
sequences are called code word
Fixed length and variable length codes
z

 Fixed length codes:


 No of bits same for all codeword

 Efficient if all symbols have equal probability

 Variable length codes:


 No of bits not same

 Efficient for symbols with unequal probability

 Code length (L) – No of binary digits in the code


Non-singular codes
z

 In a block code if all the codewords are distinct and easily


distinguishable from one another, it is non-singular

 This is non-singular – second extension – s1s2 = s2s1 = 000


Uniquely decodable codes
z

 A non-singular code is said to be uniquely decodable if every


code word present in a long received sequence can be
uniquely identified.

 Let the received sequence be 001100.
 The code in Table 1 is uniquely decodable, but the code in Table 2 is not.


Instantaneous codes
z

 A uniquely decodable code is instantaneous if it is possible to


recognize the end of any code word in any received sequence
without reference to the succeeding symbols
Instantaneous codes
z

 Let the received sequence be 001100.
 Codes C and D are instantaneous codes.


Prefix Coding
z

 Prefix code, which not only is uniquely decodable, but also


offers the possibility of realizing an average codeword length
that can be made arbitrarily close to the source entropy
 Consider DMS of alphabet {s0, s1, …, sK – 1} and respective
probabilities {p0, p1, ..., pK – 1}
 Initial part of the codeword is called a prefix of the codeword

 A prefix code is defined as a code in which no codeword is


the prefix of any other codeword.
Prefix Coding
z

 A prefix code is defined as a code in which no codeword is


the prefix of any other codeword.
 Prefix codes are distinguished from other uniquely decodable
codes by the fact that the end of a codeword is always
recognizable.
 Decoding of a prefix can be accomplished as soon as the
binary sequence representing a source symbol is fully
received.
 For this reason, prefix codes are also referred to as
instantaneous codes.
Prefix Coding
z

 Code I & III – not prefix code


Prefix Coding
z

 To decode a sequence of codewords


generated from a prefix source code, the
source decoder simply starts at the
beginning of the sequence and decodes one
codeword at a time.
 It sets up what is equivalent to a decision
tree, which is a graphical portrayal of the
codewords in the particular source code.
 The tree has an initial state and four terminal
states corresponding to source symbols s0,
s1, s2, and s3
Kraft Inequality
 Consider a discrete memoryless source with source alphabet {s0, s1, …, sK–1} and source probabilities {p0, p1, …, pK–1}, with the codeword of symbol sk having length lk, k = 0, 1, …, K – 1.
 Then, according to the Kraft inequality, the codeword lengths always satisfy

   2^(-l0) + 2^(-l1) + … + 2^(-l(K-1)) ≤ 1

 where the factor 2 refers to the number of symbols in the binary alphabet.
 The Kraft inequality is a necessary but not sufficient condition for a source code to be a prefix code or instantaneous code.
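A small sketch (the helper name kraft_ok is illustrative) that checks the inequality for a set of codeword lengths:

```python
def kraft_ok(lengths, r=2):
    """Return True if the codeword lengths satisfy the Kraft inequality
    for an r-ary code alphabet: sum of r**(-l) <= 1."""
    return sum(r ** (-l) for l in lengths) <= 1

print(kraft_ok([1, 2, 3, 3]))   # True  -> lengths of the prefix code {0, 10, 110, 111}
print(kraft_ok([1, 1, 2]))      # False -> cannot be a prefix code
```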
Kraft Inequality
z
Kraft Inequality
z
Kraft Inequality
z
Kraft Inequality
z

 Code I violates the Kraft inequality; it cannot, therefore, be a


prefix code.
 The Kraft inequality is satisfied by both codes II and III, but
only code II is a prefix code.
Construction of instantaneous codes with the prefix property

 Construct an instantaneous binary code for a source producing 5 symbols s1 to s5.

S1: 0
S2: 10
S3: 110
S4: 1110
S5: 1111
Coding efficiency & Coding redundancy
z

 Coding redundancy = 1- coding efficiency


Kraft Inequality
z
Kraft Inequality
z
Source-coding Theorem
 How is Lmin determined?
 The answer to this fundamental question is Shannon's first theorem: the source-coding theorem.
 Source-coding theorem: for a discrete memoryless source whose output is denoted by the random variable S, the entropy H(S) imposes the following bound on the average codeword length L̄ for any source encoding scheme: L̄ ≥ H(S).
Source-coding Theorem
 According to this theorem, the entropy H(S) represents a fundamental limit on the average number of bits per source symbol necessary to represent a discrete memoryless source: the average codeword length can be made as small as, but no smaller than, the entropy H(S).
 Lmin = H(S)
 The efficiency of a source encoder in terms of the entropy H(S) is η = H(S)/L̄.
A source having an alphabet S={s1, s2, s3, s4, s5} produces its symbols with respective probabilities ½, 1/6, 1/6, 1/9, 1/18. When these symbols are coded as given below, find the efficiency and redundancy.
z
Noiseless coding theorem
 Given a code alphabet with r symbols and a source alphabet of q symbols, the average length of the codewords can be made as close to H(S) as desired by encoding successively larger (nth) extensions of the source.
Shannon-Fano encoding algorithm

 List the symbols from top to bottom in order of decreasing probability.
 Divide the whole set of source symbols into two subsets, each one containing only consecutive symbols of the list, in such a way that the total probabilities of the two subsets are as close as possible. Then assign "1" (respectively "0") to the symbols of the top (respectively bottom) subset.
 Apply the previous step to each subset containing at least two symbols.
 The algorithm ends when only subsets with one symbol are left.
 The successive binary digits assigned to the subsets are arranged from left to right to form the codewords.
 The Shannon-Fano encoding scheme does not always provide the best code.
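A compact sketch of the procedure above (recursive balanced split); the function name is illustrative, and ties may be split differently than in a hand-worked example:

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) pairs, sorted by decreasing probability.
    Returns a dict mapping symbol -> binary codeword."""
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        running, cut, best = 0.0, 1, float("inf")
        for i in range(1, len(group)):          # choose the split that balances the two subsets
            running += group[i - 1][1]
            if abs(total - 2 * running) < best:
                best, cut = abs(total - 2 * running), i
        for s, _ in group[:cut]:
            codes[s] += "1"                      # top subset gets '1', as in the slides
        for s, _ in group[cut:]:
            codes[s] += "0"
        split(group[:cut])
        split(group[cut:])

    split(list(symbols))
    return codes

print(shannon_fano([("x1", 0.4), ("x2", 0.2), ("x3", 0.2),
                    ("x4", 0.1), ("x5", 0.07), ("x6", 0.03)]))
```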
Given messages x1 to x6 with probabilities 0.4, 0.2, 0.2, 0.1, 0.07 and 0.03, construct binary codes by applying the Shannon-Fano encoding process.

You are given four messages with probabilities 0.4, 0.3, 0.2 and 0.1.
1. Devise a code with the prefix property using Shannon-Fano coding.
2. Calculate the efficiency and redundancy.
3. Calculate the probabilities of 0s and 1s in the code.
z
z
Consider a source S={s1, s2} with probabilities ¾ and ¼. Obtain the Shannon-Fano code and its 2nd and 3rd extensions. Calculate the efficiency in each case.
z
z
z
Construct the Shannon-Fano code for

Symbol:      S0    S1    S2     S3     S4     S5      S6
Probability: 0.25  0.25  0.125  0.125  0.125  0.0625  0.0625
z
Shannon – Fano ternary code

 Instead of two symbols 0 and 1, here we have three symbols 0, 1 and 2.
 The procedure remains the same, except that the list is split into three subsets at each step.

Construct the Shannon-Fano ternary code for

Symbol:      S1   S2   S3    S4    S5    S6    S7
Probability: 0.3  0.3  0.12  0.12  0.06  0.06  0.04
z
z
Huffman Coding
z

 The basic idea behind Huffman coding is the construction of


a simple algorithm that computes an optimal prefix code for a
given distribution
 Optimal in the sense that the code has the shortest expected
length
 The end result is a source code whose average codeword
length approaches the fundamental limit set by the entropy of
a discrete memoryless source, namely H(S).
Huffman encoding algorithm
z

Huffman encoding proceeds as follows:
 The source symbols are listed in order of decreasing probability. The two source symbols of lowest probability are assigned 0 and 1. This part of the step is referred to as the splitting stage.
 These two source symbols are then combined into a new source symbol with probability equal to the sum of the two original probabilities. (The list of source symbols, and therefore source statistics, is thereby reduced in size by one.) The probability of the new symbol is placed in the list in accordance with its value.
 The procedure is repeated until we are left with a final list of only two source statistics (symbols), to which the symbols 0 and 1 are assigned.
 The code for each (original) source symbol is found by working backward and tracing the sequence of 0s and 1s assigned to that symbol as well as its successors.
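A short sketch of the algorithm above using a priority queue; symbol names and the tie-breaking order are illustrative, so individual codewords may differ from a hand construction even though the average length is optimal:

```python
import heapq
from itertools import count

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> binary codeword."""
    tick = count()                                   # tie-breaker so heapq never compares dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)              # two least probable groups
        p1, _, c1 = heapq.heappop(heap)
        for s in c0: c0[s] = "0" + c0[s]             # prepend bits while working backward
        for s in c1: c1[s] = "1" + c1[s]
        heapq.heappush(heap, (p0 + p1, next(tick), {**c0, **c1}))
    return heap[0][2]

print(huffman({"x1": 0.4, "x2": 0.3, "x3": 0.2, "x4": 0.1}))
```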
Huffman encoding algorithm
z
Given messages x1 to x4 with probabilities 0.4, 0.3, 0.2 and 0.1, construct a binary code by applying the Huffman coding algorithm.
z
z
Given messages x1 to x6 with probabilities 0.4, 0.2, 0.2, 0.1, 0.07 and 0.03, construct a binary code by applying the Huffman coding algorithm.
z
z
Given messages x1 to x7 with probabilities 0.4, 0.2, 0.1, 0.1, 0.1, 0.05 and 0.05, construct a binary code by applying the Huffman coding algorithm. Compute the code by moving the combined symbol as high as possible.
z
z
Huffman ternary code
 For Huffman coding with an r-symbol code alphabet:
 q = r + (r-1)α
 q = number of source symbols
 r = number of different symbols used in the code alphabet
 For a ternary code, r = 3, so α = (q-3)/2
 α will be an integer when q = 5, 7, 9, …
 Otherwise we should add dummy symbols with zero probability to the message set.
Given messages x1 to x8 with probabilities 0.22, 0.2, 0.18, 0.15, 0.1, 0.08, 0.05, 0.02, construct a ternary code by applying the Huffman coding algorithm. Compute the code by moving the combined symbol as high as possible.
z
z
Huffman quaternary code

 For Huffman coding with an r-symbol code alphabet:
 q = r + (r-1)α
 q = number of source symbols
 r = number of different symbols used in the code alphabet
 For a quaternary code, r = 4, so α = (q-4)/3
 α will be an integer when q = 7, 10, 13, …
 Otherwise we should add dummy symbols with zero probability to the message set.
Given messages x1 to x8 with probabilities 0.22, 0.2, 0.18, 0.15, 0.1, 0.08, 0.05, 0.02, construct a quaternary code by applying the Huffman coding algorithm. Compute the code by moving the combined symbol as high as possible.
z
z
CHANNEL CAPACITY
 The capacity of a discrete memoryless noisy channel is defined as the maximum possible rate of information transmission over the channel:
 C = max{Rt} = max[H(A) – H(A/B)] × r
 Average rate of information transmission:
 Rt = I(A,B) × r = [H(A) – H(A/B)] × r = [H(B) – H(B/A)] × r
SHANNON'S THEOREM ON CHANNEL CAPACITY

 When the rate of information transmission Rt ≤ C, there exists a coding technique which enables transmission over the channel with as small a probability of error as desired, even in the presence of noise in the channel.
REDUNDANCY & EFFICIENCY OF CHANNEL
z
 Channel efficiency

 Where
SPECIAL CHANNELS
z

1. Symmetric uniform channel

2. Binary Symmetric Channel (BSC)

3. Binary Erasure Channel (BEC)

4. Noiseless channel

5. Deterministic channel

6. Cascade channel
BINARY SYMMETRIC CHANNEL
 A channel is said to be symmetric or uniform if the second and subsequent rows of the channel matrix contain the same elements as the first row, but in a different order.
 p is the probability of error.
BINARY SYMMETRIC CHANNEL
z

 Let p(x1) = w and p(x2) = 1-w

 Symbol x1 is encoded as 0 and x2 as 1

 Channel matrix
BINARY SYMMETRIC CHANNEL
z

 For a binary symmetric channel, s=2

 Entropy of output symbol


BINARY SYMMETRIC CHANNEL
z

 Output probability can be calculated as

 On substitution
BINARY SYMMETRIC CHANNEL
z

 Mutual information of BSC


BINARY SYMMETRIC CHANNEL
z

 Since BSC is a symmetric channel, channel capacity with r=1


message symbol per second is
BINARY SYMMETRIC CHANNEL
z

 When symbols become equiprobable


BINARY SYMMETRIC CHANNEL
 When the input symbols become equiprobable, the mutual information is maximized and becomes equal to the channel capacity C.
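A tiny sketch (assumed helper names) of the BSC capacity C = 1 − H(p), the value the mutual information reaches when the inputs are equiprobable:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p (bits/use)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.1))   # about 0.531 bits per channel use
```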
A binary symmetric channel has the following noise matrix, with source probabilities p(x1) = 2/3 and p(x2) = 1/3.

 Determine H(X), H(Y), H(X,Y), H(Y/X), H(X/Y) and I(X,Y).
 Determine the channel capacity.
 Find the channel efficiency and redundancy.
z
z
z
BINARY ERASURE CHANNEL
 In this channel, whenever an error occurs the symbol is received as an erasure 'y' and no decision is made about that symbol; instead, an immediate request is made through a feedback channel for retransmission of the transmitted signal, until a correct symbol is received at the output.
 Since the efficiency is 100% and the errors are totally erased, this channel is called the binary erasure channel.
BINARY ERASURE CHANNEL
z
BINARY ERASURE CHANNEL
z
BINARY ERASURE CHANNEL
z
BINARY ERASURE CHANNEL
z
BINARY ERASURE CHANNEL
z
SHANNON-HARTLEY THEOREM
 The capacity of a bandlimited Gaussian channel with Additive White Gaussian Noise (AWGN) is given by
 C = B log2(1 + S/N) bits/sec
 B – channel bandwidth in Hz
 S – signal power in watts
 N – noise power in watts


BANDWIDTH – SNR TRADEOFF
 Suppose S1/N1 = 7 and B1 = 4 kHz
 Channel capacity C1 = B1 log2(1 + S1/N1) = 12 × 10³ bits/sec
 Increase the SNR to 15 keeping C the same:
 C2 = C1 = 12 × 10³ = B2 log2(1 + S2/N2), so B2 = 3 kHz
 Noise power is N = ηB, so N1 = η·4 kHz and N2 = η·3 kHz
 S2/S1 = 15N2/7N1 = (15·η·3 kHz)/(7·η·4 kHz) ≈ 1.6
 This means a 60% increase in signal power is required to maintain the same channel capacity when the bandwidth is reduced from 4 to 3 kHz.
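A quick numerical sketch of the Shannon–Hartley formula and the bandwidth/SNR trade-off worked above (variable names are illustrative):

```python
import math

def capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity of an AWGN channel in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

C1 = capacity(4_000, 7)                      # 12,000 bits/sec
B2 = C1 / math.log2(1 + 15)                  # bandwidth needed at SNR = 15 -> 3,000 Hz
power_ratio = (15 * B2) / (7 * 4_000)        # S2/S1 with N = eta*B -> about 1.6
print(C1, B2, round(power_ratio, 2))
```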
BANDWIDTH – SNR TRADEOFF
z

 There exists a threshold point at around S/N=10 to which the


exchange rate of BW to SNR is advantageous.
CAPACITY OF A CHANNEL OF INFINITE BW
 According to the Shannon-Hartley law, C = B log2(1 + S/N) bits/sec.
 When B is increased, C also increases.
 Since Rmax = C, it seems the rate could be increased to any value, but C cannot become infinite.
 When B increases, the noise power N = ηB also increases, causing S/N to fall.
 Hence B log2(1 + S/N) can only increase up to a limiting value, C∞ = (S/η) log2 e ≈ 1.44 S/η.
CAPACITY OF A CHANNEL OF INFINITE BW
z
SHANNON’S LIMIT
 An ideal system is one that transmits data at a bit rate Rt equal to the channel capacity C.
 Average transmitted power S = EbC
 Eb – transmitted energy per bit (joules)
 The quantity C/B is called the bandwidth efficiency.
SHANNON’S LIMIT
z

 When Rt/B is plotted as a function of


Eb/η we get bandwidth efficiency
diagram
SHANNON’S LIMIT
 For infinite bandwidth, the signal energy to noise density ratio Eb/η approaches the limiting value ln 2 ≈ 0.693, i.e. -1.6 dB.

SHANNON'S LIMIT

 This value of -1.6 dB is called Shannon's limit. The corresponding value of channel capacity is C∞ = (S/η) log2 e.
Alphanumeric data is entered into a computer from a remote terminal through a voice-grade telephone channel. The channel has a bandwidth of 3.4 kHz and an output SNR of 20 dB. The terminal has a total of 128 symbols. Assume that the symbols are equiprobable and successive transmissions are statistically independent.
a) Calculate the channel capacity.
b) Find the average information content per character.
c) Calculate the maximum symbol rate for which error-free transmission over the channel is possible.
z
A black and white TV picture may be viewed as consisting of approximately 3 × 10⁵ elements, each of which may occupy one of 10 distinct brightness levels with equal probability. Assume
a. the rate of transmission is 30 picture frames/sec
b. the SNR is 30 dB.
Using the channel capacity theorem, calculate the minimum bandwidth required to support the transmission of the resultant video signal.
z
An analog signal has a 4 kHz bandwidth. The signal is sampled at 2.5 times the Nyquist rate and each sample quantized into 256 equally likely levels. Assume that the successive samples are statistically independent.
a. Find the information rate of the source.
b. Can the output of this source be transmitted without errors over a Gaussian channel of bandwidth 50 kHz and SNR of 20 dB?
c. If the output of this source is to be transmitted without errors over an analog channel having SNR 10 dB, compute the bandwidth requirement of the channel.
z
A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz.
a. Calculate the channel capacity of the telephone channel for an SNR of 30 dB.
b. Calculate the minimum SNR required to support information transmission through the telephone channel at the rate of 4800 bits/sec.
z
A Gaussian channel has a 10 MHz bandwidth. If the SNR is 100, calculate the channel capacity and the maximum information rate.
DIFFERENTIAL ENTROPY
 The entropy of a discrete source is H(X) = Σ P(xi) log2(1/P(xi)).
 Consider a continuous random variable X as the limiting form of a discrete random variable that assumes values xi = iΔx, with Δx → 0.
 The differential entropy is h(X) = -∫ f(x) log2 f(x) dx, where f(x) is the probability density function of X.
Consider a continuous random variable having the distribution given below. Find the differential entropy h(X).

A continuous random variable X is uniformly distributed in the interval [0,4]. Find the differential entropy h(X). Suppose that X is a voltage which is applied to an amplifier whose gain is 8. Find the differential entropy of the output of the amplifier.
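A sketch (assumed, using the closed form h(X) = log2(b − a) for a uniform density on [a, b]) checking the amplifier example above:

```python
import math

def h_uniform(a, b):
    """Differential entropy (bits) of a uniform density on [a, b]: log2(b - a)."""
    return math.log2(b - a)

print(h_uniform(0, 4))        # 2.0 bits for X uniform on [0, 4]
print(h_uniform(0, 4 * 8))    # 5.0 bits after a gain of 8 (output uniform on [0, 32])
```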
z
z
INTRODUCTION TO ALGEBRA
z
GROUP
z
GROUP
z
GROUP
z
GROUP
z
FIELDS
z
FIELDS
z
FIELDS
z
FIELDS
z
FIELDS
z
FIELDS
z
FIELDS
z
FIELDS
z
CODES FOR ERROR DETECTION AND CORRECTION

 Three approaches can be used to cope with data transmission errors:
1. Using codes to detect errors.

2. Using codes to correct errors – called Forward Error


Correction (FEC).
3. Mechanisms to automatically retransmit (ARQ) corrupted
packets.
Codes for error detection and correction (FEC)
z

 Error Control Coding (ECC)

 Extra bits are added to the data at the transmitter


(redundancy) to permit error detection or correction at the
receiver
 Done to prevent the output of erroneous bits despite noise
and other imperfections in the channel
 Two main types, namely block codes and convolutional
codes.
BLOCK CODES
z

 consider only binary data

 Data is grouped into blocks of length k bits (dataword)

 Each data word is coded into blocks of length n bits


(codeword), where in general n>k
 This is known as an (n,k) block code

 The n is called the block length of the code

 The channel encoder produces bits at the rate R0 = (n/k)Rs, where Rs is the bit rate of the information source.
BLOCK CODES
z

 A vector notation is used for the data words and codewords,

Dataword d = (d1,d2…,dk)

Codeword c = (c1, c2,...,cn)


 The redundancy introduced by the code is quantified by the
code rate,
Code rate = k/n
 i.e., the higher the redundancy, the lower the code rate
BLOCK CODES – EXAMPLE
z

 Dataword length k = 4

 Codeword length n = 7

 This is a (7,4) block code with code rate = 4/7

 For example, d = (1101), c = (1101001)


PARITY CODES
 Example of a simple block code – the single parity check code.
 In this case, n = k+1, i.e., the codeword is the dataword with one additional bit.
 For 'even' parity the additional bit is

   q = (d1 + d2 + … + dk) mod 2

 For 'odd' parity the additional bit is 1 - q.
 That is, the additional bit ensures that there is an 'even' or 'odd' number of '1's in the codeword.
PARITY CODES – EXAMPLE
z

 Even parity

 (i) d=(10110) so, c=(101101)

 (ii) d=(11011) so, c=(110110)


PARITY CODES – EXAMPLE
 Coding table for the (4,3) even parity code:

Dataword | Codeword
0 0 0    | 0 0 0 0
0 0 1    | 0 0 1 1
0 1 0    | 0 1 0 1
0 1 1    | 0 1 1 0
1 0 0    | 1 0 0 1
1 0 1    | 1 0 1 0
1 1 0    | 1 1 0 0
1 1 1    | 1 1 1 1
PARITY CODES
z

To decode
 Calculate sum of received bits in block (mod 2)

 If sum is 0 (1) for even (odd) parity then the dataword is the first k
bits of the received codeword Otherwise error
 Code can detect single errors

 But cannot correct error since the error could be in any bit

 For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively.
 Note the error could also lie in other positions, including the parity bit.
PARITY CODES
z

 Known as a single error detecting code (SED). Only useful if


probability of getting 2 errors is small since parity will become
correct again
 Used in serial communications

 Low overhead but not very powerful

 Decoder can be implemented efficiently using a tree of XOR


gates
VERTICAL AND HORIZONTAL PARITY CHECKS
z
z
In a single parity check code, the parity bit b1 is given by the rule b1 = m1 ⊕ m2 ⊕ … ⊕ mk. For k=3, determine all possible code words.
HAMMING DISTANCE
z

 Error control capability is determined by the Hamming


distance
 The Hamming distance between two codewords is equal to
the number of differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
 Alternatively, can compute by adding codewords (mod 2)

=01001001 (now count up the ones)
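A one-liner sketch of the distance computation just described:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("10011011", "11010010"))   # 3
```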


HAMMING DISTANCE
z

 The Hamming distance of a code is equal to the minimum


Hamming distance between two codewords
 If Hamming distance is:
 1 – no error control capability; i.e., a single error in a received
codeword yields another valid codeword
 If Hamming distance is:
 2 – can detect single errors (SED); i.e., a single error will yield
an invalid codeword
 2 errors will yield a valid (but incorrect) codeword
HAMMING DISTANCE
z

 If Hamming distance is:


 3 – can correct single errors (SEC) or can detect double errors
(DED)
 3 errors will yield a valid but incorrect codeword
HAMMING DISTANCE
 The maximum number of detectable errors is dmin - 1.
 The maximum number of correctable errors is given by t = ⌊(dmin - 1)/2⌋,
 where dmin is the minimum Hamming distance between any two codewords and ⌊·⌋ denotes the largest integer not exceeding the enclosed quantity.
LINEAR BLOCK CODES
z

 By definition: A code is said to be linear if any two codewords


in the code can be added in modulo-2 arithmetic to produce a
third codeword in the code.
 Consider an (n,k) linear block code- k bits of the n code bits
are always identical to the message sequence to be
transmitted.
 The (n – k) bits are computed from the message bits in
accordance with a prescribed encoding rule that determines
the mathematical structure of the code
 (n – k) bits are referred to as parity-check bits
LINEAR BLOCK CODES
z

 Block codes in which the message bits are transmitted in


unaltered form are called systematic codes
 For applications requiring both error detection and error
correction, the use of systematic block codes simplifies
implementation of the decoder.
 Let m0, m1, …., mk – 1 constitute a block of k arbitrary message
bits. ,2k distinct message blocks
 Encoder producing an n-bit codeword (c0, c1, …, cn – 1).

 (b0, b1, …., bn – k – 1 )denote the (n – k) parity-check bits in the


codeword.
LINEAR BLOCK CODES
z

 For the code to possess a systematic structure, a codeword is


divided into two parts, message bits and parity-check bits
 Message bits of a codeword before the parity-check bits, or vice
versa.
 (n – k) leftmost bits of a codeword are identical to the
corresponding parity-check bits and the k rightmost bits of the
codeword are identical to the corresponding message bits
LINEAR BLOCK CODES
z

 The coefficients pij are chosen in such a way that the rows of
the generator matrix are linearly independent and the parity-
check equations are unique
LINEAR BLOCK CODES
z

 Equations may be rewritten in a compact form using matrix


notation
 1-by-k message vector m

 1-by-(n – k) parity-check vector b

 1-by-n code vector c

 Parity check-bits in the compact matrix form

b = mP
LINEAR BLOCK CODES
z
LINEAR BLOCK CODES
z

 The generator matrix G is in the canonical form, in that its k rows are
linearly independent
 It is not possible to express any row of the matrix G as a linear
combination of the remaining rows.
 The full set of codewords, referred to simply as the code, is generated
as
C = mG
 by passing the message vector m range through the set of all 2k binary
k-tuples (1-by-k vectors)
 Sum of any two codewords in the code is another codeword

 This basic property of linear block codes is called closure.


LINEAR BLOCK CODES
z

 To prove its validity, consider a pair of code vectors ci and cj


corresponding to a pair of message vectors mi and mj

 The modulo-2 sum of mi and mj represents a new message


vector.
 Correspondingly, the modulo-2 sum of c i and cj represents a
new code vector.
LINEAR BLOCK CODES
z
 There is another way of expressing the relationship between
the message bits and parity-check bits of a linear block code.
 Let H denote an (n – k)-by-n matrix, defined as
LINEAR BLOCK CODES
z
 The matrix H is called the parity-check matrix of the code and the
equations specified by (10.16) are called parity-check equations.
 The generator equation (10.13) and the parity-check detector equation
(10.16) are basic to the description and operation of a linear block code.
 These two equations are depicted in the form of block diagrams in
Figure 10.5a and b, respectively.
The (7,4) linear code has the generator matrix given by

    [g0]   [1 1 0 1 0 0 0]
G = [g1] = [0 1 1 0 1 0 0]
    [g2]   [1 1 1 0 0 1 0]
    [g3]   [1 0 1 0 0 0 1]

Write all the combinations of messages and the corresponding code words.
The (7,4) linear code has the generator matrix given below. Find the equations for c0, c1, c2, c3, c4, c5 and c6. Also find the codewords corresponding to the messages m = (1011) and (1100).

    [g0]   [1 1 0 1 0 0 0]
G = [g1] = [0 1 1 0 1 0 0]
    [g2]   [1 1 1 0 0 1 0]
    [g3]   [1 0 1 0 0 0 1]
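A numerical sketch of c = mG (mod 2) for the generator matrix above; the specific messages are the ones asked for in the problem:

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

def encode(m):
    """Codeword c = mG over GF(2)."""
    return np.mod(np.array(m) @ G, 2)

print(encode([1, 0, 1, 1]))   # codeword for m = (1011)
print(encode([1, 1, 0, 0]))   # codeword for m = (1100)
```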
DECODING
z

 The generator matrix G is used in the encoding operation at


the transmitter.
 Parity-check matrix H is used in the decoding operation at the
receiver
 r denote the 1-by-n received vector that results from sending
the code vector c over a noisy binary channel
 vector r as the sum of the original code vector c and a new
vector e
r = c+e
ERROR VECTOR OR ERROR PATTERN
z

 The vector e is called the error vector or error pattern

 The ith element of e equals 0 if the corresponding element of r


is the same as that of c.
 ith element of e equals 1 if the corresponding element of r is
different from that of c, an error is occurred in the ith location
 for i = 1, 2,…, n, we have
DECODING
 Let c be transmitted and r be received, where r = c + e.
 e = error pattern = (e1, e2, ..., en)
 The weight of e determines the number of errors. If the error pattern can be determined, decoding can be achieved by c = r + e.
 Consider the (7,4) code.
 (1) Let 1101000 be transmitted and 1100000 be received. Then e = 0001000 (an error in the fourth location).
SYNDROME: DEFINITION
z

 The receiver has to decode the code vector c from the


received vector r.
 The algorithm to perform decoding operation starts with the
computation of a 1-by-(n – k) vector called the error-
syndrome vector or syndrome.
 syndrome depends only upon the error pattern.

 1-by-n received vector r,

Syndrome , s= rHT
SYNDROME: PROPERTIES
z

 Property3: For a linear block code, the syndrome s is equal to


the sum of those rows of the transposed parity-check matrix
HT where errors have occurred due to channel noise.
SYNDROME: PROPERTIES
z

 Property4: Each coset of the code is characterized by a


unique syndrome
 Binary block code has the following generator and parity matrix
z

 What is the rate of the code?

 How many redundant bits are there in each code word?


 Construct HT from G matrix for a (6,3) code
z
MINIMUM DISTANCE CONSIDERATIONS
z

 The Hamming weight w(c) of a code vector c is defined as


the number of nonzero elements in the code vector.

 Hamming weight of a code vector is the distance between the


code vector and the all-zero code vector.

 The minimum distance dmin of a linear block code is the


smallest Hamming distance between any pair of codewords.
MINIMUM DISTANCE CONSIDERATIONS
z

 Minimum distance is the same as the smallest Hamming


weight of the difference between any pair of code vectors

 From the closure property of linear block codes, the sum (or
difference) of two code vectors is another code vector.

 The minimum distance of a linear block code is the smallest


Hamming weight of the nonzero code vectors in the code.
MINIMUM DISTANCE CONSIDERATIONS
z

 Hamming distance satisfies triangular inequality


MINIMUM DISTANCE CONSIDERATIONS
z

 dmin is related to the structure of the parity-check matrix H of


the code
 cHT = 0, where HT is the transpose of the parity-check matrix
H
 Let the matrix H be expressed in terms of its columns as H =
[h1, h2,… hn]
 For a code vector c to satisfy the condition cH T = 0, the vector
c must have ones in such positions that the corresponding
rows of HT sum to the zero vector 0.
MINIMUM DISTANCE CONSIDERATIONS
z

 By definition, the number of ones in a code vector is the


Hamming weight of the code vector.
 Smallest Hamming weight of the nonzero code vectors in a
linear block code equals the minimum distance of the code
 The minimum distance of a linear block code is defined by
the minimum number of rows of the matrix H T whose sum is
equal to the zero vector.
MINIMUM DISTANCE CONSIDERATIONS
z

 dmin determines the error-correcting capability of the code

 (n,k) linear block code is required to detect and correct all


error patterns over a binary symmetric channel, and whose
Hamming weight is less than or equal to t.
 That is, if a code vector ci in the code is transmitted and the
received vector is r = ci + e, we require that the decoder
output whenever the error pattern e has a
 Hamming weight

w(e) ≤ t
MINIMUM DISTANCE CONSIDERATIONS
z

 Assume : 2k code vectors in the code are transmitted with


equal probability
 Best strategy for the decoder to pick the code vector closest
to the received vector r; that is, the one for which the
Hamming distance d(ci,r) is the smallest.
 Decoder will be able to detect and correct all error patterns of
Hamming weight w(e), provided that the minimum distance of
the code is equal to or greater than 2t + 1.
MINIMUM DISTANCE CONSIDERATIONS
z
 Demonstrate the validity of this requirement by adopting a
geometric interpretation :
 Transmitted 1-by-n code vector and the 1-by-n received
vector are represented as points in an n-dimensional space.
 Construct two spheres, each of radius t, around the points
that represent code vectors ci and cj under two different
conditions:
MINIMUM DISTANCE CONSIDERATIONS
z

1. Let these two spheres be disjoint, d(c i,cj) ≥2t + 1


 If, then, the code vector ci is transmitted and the Hamming
distance d(ci,r) ≤ t, it is clear that the decoder will pick ci, as it is
the code vector closest to the received vector r.
2. If, on the other hand, the Hamming distance d(c i,cj) ≤ 2t, the
two spheres around ci and cj intersect
MINIMUM DISTANCE CONSIDERATIONS
z

 if ci is transmitted, there exists a received vector r such that


the Hamming distance d(ci,r) ≤ t, yet r is as close to c j as it is
to ci.
 There is now the possibility of the decoder picking the vector
cj, which is wrong.
 An (n,k) linear block code has the power to correct all error
patterns of weight t or less if, and only if, d(c i,cj) ≥ 2t + 1, for
all ci and cj.
MINIMUM DISTANCE CONSIDERATIONS
z
DECODING
z

syndrome-based decoding scheme for linear block codes.


 2k code vectors of an (n, k) linear block code.

 r : received vector, which may have one of 2n possible values.

 The receiver has the task of partitioning the 2^n possible received vectors into 2^k disjoint subsets in such a way that the ith subset Di corresponds to code vector ci for 1 ≤ i ≤ 2^k.
 The received vector r is decoded into ci if it is in the ith subset.

 For the decoding to be correct, r must be in the subset that belongs to


the code vector ci that was actually sent.
 The 2k subsets described herein constitute a standard array of the linear
block code.
STANDARD ARRAY
z

 To construct

1. The 2k code vectors are placed in a row with the all-zero code
vector c1 as the leftmost element.

2. An error pattern e2 is picked and placed under c1, and a


second row is formed by adding e2 to each of the remaining
code vectors in the first row; it is important that the error
pattern chosen as the first element in a row has not previously
appeared in the standard array.
3. Step 2 is repeated until all the possible error patterns have
been accounted for.
STANDARD ARRAY
z
STANDARD ARRAY DECODING
 For an (n,k) linear code, standard array decoding is able to correct exactly 2^(n-k) error patterns, including the all-zero error pattern.
 Illustration: the (7,4) Hamming code
 Number of correctable error patterns = 2³ = 8
 Number of single-error patterns = 7
 Therefore, all single-error patterns, and only single-error patterns, can be corrected (plus the all-zero pattern).
SYNDROME DECODING
z

Decoding procedure for linear block codes:


1. For the received vector r, compute the
syndrome s = rHT.
2. Within the coset characterized by the
syndrome s, identify the coset leader (i.e.,
the error pattern with the largest
probability of occurrence); call it e0.

3. Compute the code vector c = r + e0 as the


decoded version of the received vector r.
 This procedure is called syndrome
decoding.
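A sketch of the three-step procedure above for the (7,4) code, using the generator matrix from the earlier example in the standard systematic form G = [P | I4], H = [I3 | Pᵀ] (the actual H matrix in the slides was on a figure, so this construction is an assumption), with a syndrome table limited to the single-error coset leaders:

```python
import numpy as np

# (7,4) code in systematic form: G = [P | I4], H = [I3 | P^T]
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# Syndrome table: coset leaders of weight <= 1 (all-zero and single-error patterns)
table = {tuple(np.zeros(3, dtype=int)): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int); e[i] = 1
    table[tuple(e @ H.T % 2)] = e

def decode(r):
    """Syndrome decoding: s = rH^T, look up the coset leader e, return c = r + e."""
    s = tuple(np.array(r) @ H.T % 2)
    return (np.array(r) + table[s]) % 2

c = np.array([1, 0, 1, 1]) @ G % 2        # encode m = (1011)
r = c.copy(); r[2] ^= 1                    # introduce a single error
print(c, r, decode(r))                     # decoded word equals the transmitted codeword
```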
SYNDROME DECODING
z
PERFECT CODES & HAMMING BOUND
z
PERFECT CODES & HAMMING BOUND
z
PERFECT CODES & HAMMING BOUND
z
PERFECT CODES & HAMMING BOUND
z
HAMMING CODES

 Hamming codes constitute a class of single-error-correcting linear block codes (n,k) defined by:
 n = 2^m - 1, k = n - m, m ≥ 3
 The minimum distance of the code is dmin = 3.
 Hamming codes are perfect codes.
 Example: m = 3 yields the (7,4) Hamming code with n = 7 and k = 4.
H A M M I N G CODE S
z
H A M M I N G CODE S
z
H A M M I N G CODE S
z
H A M M I N G CODE S
z
For a (6,3) systematic linear block code, the 3 parity check bits c4, c5 and c6 are formed from the following:
c4 = d1 ⊕ d3
c5 = d1 ⊕ d2 ⊕ d3
c6 = d1 ⊕ d3
1. Write down the G matrix.
2. Construct all possible code words.
3. Suppose r = [010111]; then find the syndrome.
z
z
A (7,4) linear code with n=7 and k=4 has the corresponding G matrix.
1. Obtain the code words and their weights.
2. What is the minimum distance between code vectors?
3. Find the error detection and correction capability.
4. Suppose [1110010] is sent and the received vector is [1100010]; find the syndrome and the corresponding codeword.
z
z
MOTIVATION & PROPERTIES OF CYCLIC CODE
z

 Cyclic code is a linear code that any cyclic shift of a


codeword is still a codeword.
 Makes encoding/decoding much simpler, no need of matrix
multiplication.
MOTIVATION & PROPERTIES OF CYCLIC CODE

 An (n,k) linear code C is cyclic if every cyclic shift of a codeword in C is also a codeword in C.
 If c0 c1 c2 … cn-2 cn-1 is a codeword, then
 cn-1 c0 c1 c2 … cn-3 cn-2
 cn-2 cn-1 c0 c1 … cn-4 cn-3
 …
 c1 c2 c3 … cn-1 c0
 are all codewords.
DEFINITION
z
z

 The (7,4) Hamming code discussed before is cyclic:

1010001 1110010
1101000 0111001
0110100 1011100
0011010 0101110
0001101 0010111
1000110 1001011
0100011 1100101
CYCLIC CODES
z
CYCLIC CODES
z

 Generator matrix of a non-systematic (n,k) cyclic codes

 The generator matrix will be in this form:

 notice that the row are merely cyclic shifts of the basis vector
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
CYCLIC CODES
z
z
z
 A (7,4) cyclic code is generated by g(X) = 1 + X + X³. Find the code polynomial and codeword.
SYSTEMATIC FORM OF GENERATOR MATRIX
z
SYSTEMATIC FORM OF GENERATOR MATRIX
z
SYSTEMATIC FORM OF GENERATOR MATRIX
z
 In a (7,4) code with g(X) = 1 + X + X³, if m = (1010), find c.

Converting to systematic form

 In a (7,4) code with g(X) = 1 + X + X³ and m(X) = 1 + X³, find c(X) in systematic form.
ENCODER FOR CYCLIC CODES
z
ENCODER FOR CYCLIC CODES
z
 Draw the encoder for the (7,4) code. Find c if m = (1011) and g(X) = 1 + X + X³.
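A sketch of systematic cyclic encoding (parity bits = remainder of X^(n-k)·m(X) divided by g(X)) for the (7,4) code with g(X) = 1 + X + X³; the polynomial-division helper is illustrative:

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are bit lists, lowest degree first."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):   # reduce from the highest degree down
        if r[i]:
            for j, gbit in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= gbit
    return r[:len(divisor) - 1]

def cyclic_encode(m, g, n):
    """Systematic codeword: n-k parity bits followed by the k message bits."""
    k = len(m)
    shifted = [0] * (n - k) + m                          # X^(n-k) * m(X)
    return poly_mod(shifted, g) + m

g = [1, 1, 0, 1]                                         # g(X) = 1 + X + X^3
print(cyclic_encode([1, 0, 1, 1], g, 7))                 # parity bits + message (1011)
```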
CONVOLUTIONAL CODES
 In block coding the encoder accepts k message bits and generates an n-bit code word; codewords are thus produced on a block-by-block basis.
 In some applications the message bits come in serially rather than in large blocks, and convolutional coding is preferred.
 It generates the redundant bits by using modulo-2 convolution.
 Here the encoder contains memory, and the 'n' encoder outputs at any given time unit depend not only on the 'k' current inputs but also on the 'm' previous input blocks.
 An (n,k,m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m.
ENCODING OF CONVOLUTIONAL CODES –
z TIME DOMAIN REPRESENTATION
 Two categories

 Feed forward

 Feedback
 Systematic

 Non-systematic
GENERAL ENCODER
 m k-stage shift registers and n modulo-2 adders.
 At each unit of time, k bits are shifted into the first k stages, all bits in the registers are shifted k stages to the right, and the outputs of the n modulo-2 adders are sequentially sampled to obtain the output.
GENERAL ENCODER
z

 Consider n=2, k=1 and m=3

 Consider input sequence d=( 1 0 1)


IMPULSE RESPONSE
z
 The output sequence of the encoder to a single ‘1’ bit input that moves
through it is called impulse response

 The impulse response of v1 is (1 0 1) and v2 is (1 1 1)

 For input d=(1 0 1)


POLYNOMIAL REPRESENATION
z

 Write generator g1(x) and g2(x)

 The output sequence is c(x) = d(x) g 1(x) interlaced with d(x)


g2(x)
 We have d(x)=1 + x2

 c = (1 1 0 1 0 0 0 1 1 1)
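A sketch of the rate-1/2 convolutional encoding described above, computed as two GF(2) polynomial products (impulse responses (101) and (111) as in the slides) whose outputs are interlaced:

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as bit lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def conv_encode(d, g1, g2):
    """Interlace the two output streams v1 = d*g1 and v2 = d*g2."""
    v1, v2 = poly_mul_gf2(d, g1), poly_mul_gf2(d, g2)
    return [bit for pair in zip(v1, v2) for bit in pair]

print(conv_encode([1, 0, 1], [1, 0, 1], [1, 1, 1]))   # -> [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
```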
STATE DIAGRAM REPRESENTATION
z

 State of an (n,1,m)
convolutional encoder is
defined as the contents of first
m-1 shift registers.
 Thus encoder can be
represented as (m-1) state
machine.
 The zero state is the state
when each of first m-1 shift
registers contain 0.
 Total 2m-1 possible states
BINARY NONSYSTEMATIC FEEDFORWARD ENCODER

 A (2,1,3) binary convolutional encoder.
 A R = 1/2 nonsystematic feedforward encoder.
 The encoder consists of m = 3 shift registers, n = 2 modulo-2 adders and a multiplexer.
 A modulo-2 adder can be implemented using an XOR gate.
BINARY NONSYSTEMATIC FEEDFORWARD ENCODER

 The information sequence u = (u0, u1, …) enters the encoder one bit at a time.
 The two encoder output sequences are
 c(1) = (c0(1), c1(1), c2(1), …)
 c(2) = (c0(2), c1(2), c2(2), …)
 They can be obtained as the convolution of the input sequence with the two encoder impulse responses, given by
 g(1) = (g0(1), g1(1), …, gm(1))
 g(2) = (g0(2), g1(2), …, gm(2))
IMPULSE RESPONSE
z
z
Let u = (10111) and the generator sequences be g(1) = (1011) and g(2) = (1111). Find the output code word.
z
z
GENERATOR MATRIX
z

 We have

 Arranging in matrix form


If u = (10111), find C using the matrix method, with g(1) = (1011) and g(2) = (1111).

For a (2,1,3) code, the generator polynomials are g(1)(D) = 1 + D² + D³ and g(2)(D) = 1 + D + D² + D³, and u(D) = 1 + D² + D³ + D⁴. Find C(D).

For a (3,2,1) code, the input sequences are u(1)(D) = 1 + D² and u(2)(D) = 1 + D. Find C(D).
z
z

THANK YOU