Elements of Digital Communication Systems

Model of Digital Communication Systems

* Fig. 1.1.1 shows the basic operations in a digital communication system. The source and the destination are two physically separate points.
* When the signal travels over the communication channel, noise interferes with it. Because of this interference, a smeared or disturbed version of the input signal is received at the receiver. Therefore the received signal may not be correct; that is, errors are introduced in the received signal.
* Thus the effects of noise in the communication channel limit the rate at which the signal can be transmitted. The probability of error in the received signal and the transmission rate are normally used as performance measures of a digital communication system.

Fig. 1.1.1 : Basic digital communication system (information source, source and channel coding blocks, electrical communication channel with noise, and the corresponding receiver blocks)

Information Source

* The information source generates the message signal to be transmitted. In analog communication the information source is analog. In digital communication the information source produces a message signal which is not continuously varying with time; rather, the message signal is intermittent with respect to time.
* Examples of discrete information sources are data from computers, teletype, etc. Even a message containing text is discrete.
* An analog signal can be converted to a discrete signal by sampling and quantization. In sampling, the analog signal is chopped off at regular time intervals; those samples form a discrete signal.

Important parameters

1) Source alphabet : These are the letters, digits or special characters available from the information source.
2) Symbol rate : The rate at which the information source generates source alphabets, normally expressed in symbols/sec.
3) Source alphabet probabilities : Each source alphabet occurs in the sequence with its own probability of occurrence. For example, the letters A, E, I, etc. occur frequently in English text. The probability of occurrence of each source alphabet is therefore an important property used in digital communication.
4) Probabilistic dependence of symbols in a sequence : The information carrying capacity of each source alphabet is different in a particular sequence. This parameter defines the average information content of the symbols. The entropy of a source is the average information content per symbol in long messages; it is expressed in bits per symbol ("bit" is the abbreviation for binary digit). The source information rate is the product of the symbol rate and the source entropy, i.e.

   Information rate = Symbol rate x Source entropy
     (bits/sec)        (symbols/sec)   (bits/symbol)

* The information rate represents the minimum average data rate required to transmit information from the source to the destination.
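As a small numerical illustration of the relation above, the following sketch computes the entropy of a hypothetical four-symbol source and the resulting information rate. The probabilities and the symbol rate are assumed values chosen only for this example; they are not taken from the text.

```python
import math

# Assumed source alphabet probabilities (illustrative values, not from the text)
probabilities = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Entropy H = sum over symbols of p * log2(1/p), in bits/symbol
entropy = sum(p * math.log2(1 / p) for p in probabilities.values())

symbol_rate = 100                            # assumed symbol rate in symbols/sec
information_rate = symbol_rate * entropy     # bits/sec

print(f"Source entropy   : {entropy:.3f} bits/symbol")        # 1.750
print(f"Information rate : {information_rate:.1f} bits/sec")  # 175.0
```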
Source Encoder and Decoder

* The symbols produced by the information source are given to the source encoder. These symbols cannot be transmitted directly; they are first converted into digital form (i.e., a binary sequence of 1's and 0's) by the source encoder. Every binary '1' and '0' is called a bit. A group of bits is called a codeword.
* The source encoder assigns codewords to the symbols. For every distinct symbol there is a unique codeword. The codeword can be 4, 8, 16 or 32 bits long. As the number of bits in each codeword is increased, the number of symbols that can be represented increases. For example, 8 bits give 2^8 = 256 distinct codewords; therefore 8 bits can be used to represent 256 symbols, 16 bits can represent 2^16 = 65536 symbols, and so on.
* In both of the above examples the number of bits in every codeword is the same throughout, i.e. 8 in the first case and 16 in the second. This is called fixed length coding. Fixed length coding is efficient only if all the symbols occur with equal probabilities in a statistically independent sequence.
* In practical situations, the symbols in the sequence are statistically dependent and have unequal probabilities of occurrence. For example, suppose the symbol sequence represents the percentage marks of students. Symbols such as 02 %, 08 %, 20 %, 98 % and 99 % will have a small probability of occurrence, whereas 60 %, 55 %, 70 % and 75 % will occur more often. For such symbols, variable length codewords are normally assigned.
* More bits (greater length) are assigned to rarely occurring symbols and fewer bits to frequently occurring symbols. Typical source encoders are pulse code modulators, delta modulators, vector quantizers, etc. We will come across these coders in detail in the subsequent chapters.

Important parameters

1) Block size : This gives the maximum number of distinct codewords that can be represented by the source encoder. It depends upon the maximum number of bits in the codeword. For example, an 8-bit source encoder has a block size of 2^8 = 256 codewords.
2) Codeword length : This is the number of bits used to represent each codeword. For example, if 8 bits are assigned to every codeword, then the codeword length is 8 bits.
3) Average data rate : This is the number of output bits per second from the source encoder. The source encoder assigns multiple bits to every input symbol, so the data rate is normally higher than the symbol rate. For example, if symbols are given to the source encoder at the rate of 10 symbols/sec and the codeword length is 8 bits, the output data rate of the source encoder will be

   Data rate = Symbol rate x Codeword length = 10 x 8 = 80 bits/sec

   As stated earlier, the information rate is the minimum number of bits per second needed to convey the information from source to destination. The optimum data rate is therefore equal to the information rate. Because of practical limitations, designing such a source encoder is difficult; hence the average data rate is higher than the information rate, and hence higher than the symbol rate as well.
4) Efficiency of the encoder : This is the ratio of the minimum source information rate to the actual output data rate of the source encoder (a small numerical sketch of these two parameters is given at the end of this sub-section).

* At the receiver, a source decoder performs the reverse operation to that of the source encoder. It converts the binary output of the channel decoder back into a symbol sequence. Both variable length and fixed length decoders are possible. Some decoders use memory to store codewords. The decoders and encoders can be synchronous or asynchronous.
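As referenced above, the sketch below puts numbers on the definitions of average data rate and encoder efficiency. The symbol rate and codeword length come from the example in the text, while the source entropy is an assumed figure carried over from the earlier entropy sketch.

```python
# Figures from the text's example (10 symbols/sec, 8-bit codewords); the source
# entropy is an assumed value carried over from the earlier entropy sketch.
symbol_rate = 10          # symbols/sec
codeword_length = 8       # bits per codeword (fixed length coding)
source_entropy = 1.75     # bits/symbol (assumed)

data_rate = symbol_rate * codeword_length        # average data rate: 80 bits/sec
information_rate = symbol_rate * source_entropy  # minimum required: 17.5 bits/sec

# Efficiency = minimum source information rate / actual output data rate
efficiency = information_rate / data_rate

print(f"Average data rate : {data_rate} bits/sec")
print(f"Information rate  : {information_rate} bits/sec")
print(f"Encoder efficiency: {efficiency:.3f}")
```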
Channel Encoder and Decoder

* At this stage we know that the message or information signal has been converted into a binary sequence (i.e., 1's and 0's). The communication channel adds noise and interference to the signal being transmitted.
* Therefore errors are introduced in the binary sequence received at the receiver, and hence errors are also introduced in the symbols generated from these binary codewords. To avoid these errors, channel coding is done.
* The channel encoder adds redundant binary bits to the input sequence. These redundant bits are added according to some properly defined logic. For example, consider that the codeword from the source encoder is three bits long and one redundant bit is added to make it 4 bits long. This 4th bit is added (either 1 or 0) such that the number of 1's in the encoded word remains even (also called even parity). Table 1.1.1 gives the output of the source encoder, the 4th bit depending upon the parity, and the output of the channel encoder (a small code sketch of this parity rule is given after the list of important parameters below).

   Output of source encoder | Bit added by channel encoder | Output of channel encoder
   000                      | 0                            | 0000
   001                      | 1                            | 0011
   010                      | 1                            | 0101
   011                      | 0                            | 0110
   100                      | 1                            | 1001
   101                      | 0                            | 1010
   110                      | 0                            | 1100
   111                      | 1                            | 1111

   Table 1.1.1 Even parity coding

* Observe in the above table that every codeword at the output of the channel encoder contains an even number of 1's. At the receiver, if an odd number of 1's is detected, the receiver comes to know that there is an error in the received signal.
* The channel decoder at the receiver is thus able to detect errors in the bit sequence and reduce the effects of channel noise and distortion. The channel encoder and decoder thus serve to increase the reliability of the received signal.
* The extra bits added by the channel encoder carry no information; rather, they are used by the channel decoder to detect and correct errors, if any. These error correcting bits may be added periodically after a block of a few symbols, or added to every symbol as shown in Table 1.1.1. The parity coding example given above is just illustrative; there are many advanced and efficient coding techniques available, and we will discuss them in this book.
* The coding and decoding operations at the encoder and decoder need memory (storage) and processing of binary data. Because of microcontrollers and computers, the complexity of encoders and decoders is nowadays greatly reduced.

Important parameters

1) The method of coding used.
2) Coding rate, which depends upon the redundant bits added by the channel encoder.
3) Coding efficiency, which is the ratio of the data rate at the input to the data rate at the output of the encoder.
4) Error control capabilities, i.e. detecting and correcting errors.
5) Feasibility or complexity of the encoder and decoder. The time delay involved in decoding is also an important parameter for the channel decoder.
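As referenced above, here is a minimal sketch of the even-parity rule of Table 1.1.1. The function names are illustrative only, not taken from any particular library.

```python
def add_even_parity(codeword: str) -> str:
    """Channel encoder side: append one bit so the total number of 1's is even."""
    parity_bit = "1" if codeword.count("1") % 2 else "0"
    return codeword + parity_bit

def parity_check_ok(received: str) -> bool:
    """Channel decoder side: an odd number of 1's signals an error."""
    return received.count("1") % 2 == 0

# Channel encoder output for the 3-bit source codewords of Table 1.1.1
for word in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(word, "->", add_even_parity(word))

# A single bit flipped in the channel is detected (but not located)
print(parity_check_ok("0011"))   # True  : valid even-parity word
print(parity_check_ok("0111"))   # False : error detected
```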
Digital Modulators and Demodulators

* Whenever the modulating signal is discrete (i.e., binary codewords), digital modulation techniques are used. The carrier signal used by digital modulators is always a continuous sinusoidal wave of high frequency.
* The digital modulator maps the input binary sequence of 1's and 0's to analog signal waveforms. If one bit at a time is to be transmitted, then the digital modulator output is s1(t) to transmit binary '0' and s2(t) to transmit binary '1'. For example, consider the output of the digital modulator shown in Fig. 1.1.2.

   Fig. 1.1.2 Frequency modulated output of a digital modulator

* The signal s1(t) has a low frequency compared to the signal s2(t). This is frequency modulation (FM) in two steps corresponding to the binary symbols '0' and '1'. Thus even though the modulated signal appears to be continuous, the modulation is discrete (or in steps). A single carrier is converted into the two waveforms s1(t) and s2(t) because of digital modulation (a short sketch that generates such a pair of waveforms is given after this sub-section).
* If the codeword contains two bits and they are to be transmitted at a time, then there will be M = 2^2 = 4 distinct symbols (or codewords). These four codewords require four distinct waveforms for transmission. Such modulators are called M-ary modulators.
* Frequency Shift Keying (FSK), Phase Shift Keying (PSK), Amplitude Shift Keying (ASK), Differential Phase Shift Keying (DPSK) and Minimum Shift Keying (MSK) are examples of digital modulation techniques. Since these modulators use a continuous carrier wave, they are also called digital CW modulators.
* In the receiver, the digital demodulator converts the input modulated signal into a sequence of binary bits. The most important parameter for the demodulator is the method of demodulation.

Important parameters

a) Probability of symbol or bit error.
b) Bandwidth needed to transmit the signal.
c) Synchronous or asynchronous method of detection.
d) Complexity of implementation.
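The following is a minimal sketch, in the spirit of Fig. 1.1.2, of how a binary sequence can be mapped to two frequency-shifted waveforms; it is not the textbook's own code, and the tone frequencies, bit duration and sample rate are assumed values.

```python
import math

# Two-tone FSK: binary '0' is sent as a low-frequency tone s1(t) and binary '1'
# as a higher-frequency tone s2(t). All numbers below are assumed.
f1, f2 = 1000.0, 2000.0       # tone frequencies for '0' and '1' (Hz)
Tb = 1e-3                     # bit duration (s)
fs = 16000                    # sample rate (Hz)
samples_per_bit = int(Tb * fs)

def bfsk_waveform(bits: str) -> list:
    """Map a bit string to a sampled FSK waveform, one tone per bit interval."""
    waveform = []
    for k, bit in enumerate(bits):
        f = f2 if bit == "1" else f1
        for n in range(samples_per_bit):
            t = (k * samples_per_bit + n) / fs
            waveform.append(math.cos(2 * math.pi * f * t))
    return waveform

signal = bfsk_waveform("1010")
print(len(signal), "samples for 4 bits")   # 64 samples
```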
Communication Channel

* As we have seen in the preceding sections, the connection between the transmitter and the receiver is established through the communication channel. The communication can take place through wireline, wireless or fiber optic channels.
* Other media, such as optical disks, magnetic tapes and disks, etc., can also be called communication channels, because they too carry data.

Problems associated with communication channels

1) Additive noise interference : This noise is generated by the internal solid state devices, resistors, etc. used to implement the communication system.
2) Signal attenuation : It occurs due to the internal resistance of the channel and fading of the signal.
3) Amplitude and phase distortion : The signal is distorted in amplitude and phase because of the non-linear characteristics of the channel.
4) Multipath distortion : This distortion occurs mostly in wireless communication channels. Signals arriving from different paths tend to interfere with each other.

Resources available with communication channels

1) Channel bandwidth : This is the maximum possible range of frequencies that can be used for transmission. For example, the bandwidth offered by wireline channels is less compared to fiber optic channels.
2) Power in the transmitted signal : This is the power that can be put into the signal being transmitted. The effect of noise can be minimized by increasing the power, but the power cannot be increased to a very high value because of equipment and other constraints. For example, the power in a wireline channel is limited because of the cables.

* The power and the bandwidth limit the data rate of the communication channel. As we know, a fiber optic channel transports light signals from one place to another just as a metallic wire carries an electric signal; there is no current or metallic conductor in an optical fiber.

Review Question

1. Explain the term quantization.

Digital Representation of Analog Signals

* For digital processing, analog signals need to be sampled, quantized and binary encoded. These three operations together represent the analog signal in digital form.
* Sampling : The analog signal is sampled at regular intervals. The sampling frequency is chosen such that there is no aliasing. In Fig. 1.2.1 observe that the signal is sampled at Ts, 2Ts, 3Ts, 4Ts, ... and so on.
* Quantization : The total amplitude range of the signal is divided into a fixed number of amplitude levels, called quantization levels. Fig. 1.2.1 shows the quantization of an analog signal; observe that the total analog signal range is divided into 8 quantization steps of 1 volt each.

   Fig. 1.2.1 : Sampling and quantization of an analog signal (8 quantization steps of 1 volt each)
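A minimal sketch of uniform sampling and quantization, assuming an 8-level quantizer with 1 V steps as described for Fig. 1.2.1; the test signal and sampling rate are chosen only for illustration.

```python
import math

# A minimal uniform quantizer: 8 levels of 1 volt each (assumed to span 0..7 V).
fs = 8            # sampling frequency in Hz (assumed)
Ts = 1 / fs       # sampling interval
step = 1.0        # quantization step size: 1 volt
levels = 8        # number of quantization levels

def quantize(x: float) -> float:
    """Round a sample to the nearest quantization level and clamp the range."""
    level = min(levels - 1, max(0, round(x / step)))
    return level * step

# Sample and quantize an assumed 1 Hz test signal lying in the 0.5..7.5 V range
for n in range(8):
    t = n * Ts
    x = 4.0 + 3.5 * math.sin(2 * math.pi * 1.0 * t)
    print(f"t = {t:5.3f} s  sample = {x:6.3f} V  quantized = {quantize(x):.1f} V")
```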
Example 1.4.2 : Assume a typical binary sequence and show that if the corresponding polar NRZ signal and unipolar NRZ signal have the same peak to peak amplitude, the polar signal has less power than the unipolar signal.
[Hint : Consider the sequence 1010 and a peak to peak amplitude A. For unipolar NRZ (levels 0 and A) the power is A^2/2, whereas for polar NRZ (levels +A/2 and -A/2) the power is A^2/4, i.e. half the unipolar power.]

Review Question

1. Compare the power spectra for various line codes.

Advantages of Digital Communication Systems

Presently most communication is digital. For example, cellular (mobile phone) communication, satellite communication, radar and sonar signals, facsimile, data transmission over the internet, etc. all use digital communication. Practically, after 20 years, analog communication will be totally replaced by digital communication.

Why is digital communication so popular ?

There are a few reasons why people prefer digital communication over analog communication.

1. Due to advancements in VLSI technology, it is possible to manufacture very high speed embedded circuits. Such circuits are used in digital communications.
2. High speed computers and powerful software design tools are available. They make the development of digital communication systems feasible.
3. The internet has spread to almost every city and town. The compatibility of digital communication systems with the internet has opened new areas of application.

Advantages

1. Because of the advances in digital IC technologies and high speed computers, digital communication systems are simpler and cheaper compared to analog systems.
2. Using data encryption, only permitted receivers can be allowed to detect the transmitted data. This is very useful in military applications.
3. A wide dynamic range is possible, since the data is converted to digital form.
4. Using multiplexing, speech, video and other data can be merged and transmitted over a common channel.
5. Since the transmission is digital and channel encoding is used, the noise does not accumulate from repeater to repeater in long distance communication.
6. Since the transmitted signal is digital, a large amount of noise interference can be tolerated.
7. Since channel coding is used, errors can be detected and corrected at the receiver.
8. Digital communication is adaptive to other advanced branches of data processing, such as digital signal processing, image processing, data compression, etc.

Disadvantages

Even though digital communication offers many advantages as given above, it has some drawbacks also. However, the advantages of digital communication outweigh the disadvantages. The drawbacks are as follows :

1. Because of analog to digital conversion, the data rate becomes high. Hence more transmission bandwidth is required for digital communication.
2. Digital communication needs synchronization in the case of synchronous modulation.

Review Question

1. List the advantages and disadvantages of digital communication over analog communication.

Shannon Hartley Law

* The information rate R and the channel capacity C are inter-related by Shannon's theorems on channel capacity.
* There are two theorems on channel capacity : 1) the channel coding theorem (Shannon's second theorem) and 2) the Shannon Hartley theorem for a continuous channel.

Channel Coding Theorem (Shannon's Second Theorem)

* Shannon's theorem states that it is possible to transmit information with an arbitrarily small probability of error provided that the information rate R is less than or equal to a rate C, called the channel capacity.
* Thus the channel capacity is the maximum information rate at which the error probability is within tolerable limits.
* Statement of the theorem : Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with channel capacity C, then if

   R ≤ C   ... (1.6.1)

   there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.
* Explanation : This theorem says that as long as the information rate R does not exceed the channel capacity C, a suitable coding technique can make the probability of error in the received message as small as desired.
* Negative statement of the theorem : Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, then if R > C, the probability of error is close to unity for every possible set of M transmitter signals. Thus the negative statement of Shannon's theorem says that if R > C, then every message will be in error.

Shannon Hartley Theorem for Gaussian Channels (Continuous Channel Capacity Theorem)

* When Shannon's theorem of channel capacity is applied specifically to a channel in which the noise is Gaussian, it is known as the Shannon-Hartley theorem. It is also called the information capacity theorem.
* Statement of the theorem : The channel capacity of a white bandlimited Gaussian channel is

   C = B log2(1 + S/N)   ... (1.6.2)

   Here B is the channel bandwidth, S is the signal power, and N is the total noise power within the channel bandwidth.
* We know that power is obtained by integrating the power spectral density over the bandwidth. The two-sided power spectral density of white noise is N0/2. Hence the noise power N becomes

   N = ∫ from -B to B of (N0/2) df = N0 B   ... (1.6.3)

Tradeoff between Bandwidth and Signal to Noise Ratio

* The channel capacity of the Gaussian channel is given as

   C = B log2(1 + S/N)   ... (1.6.4)

   The above equation shows that the channel capacity depends on two factors : i) the bandwidth B of the channel and ii) the signal to noise ratio S/N.
* Noiseless channel has infinite capacity : If there is no noise in the channel, then N = 0 and hence S/N → ∞. Such a channel is called a noiseless channel. The capacity of such a channel will be

   C = B log2(1 + ∞) = ∞

   Thus a noiseless channel has infinite capacity.
* Infinite bandwidth channel has limited capacity : If the bandwidth B is infinite, the channel capacity remains limited. This is because, as the bandwidth increases, the noise power N also increases; from equation (1.6.3), N = N0 B. Due to this increase in noise power, the signal to noise ratio S/N decreases. Hence even if B approaches infinity, the capacity does not approach infinity. As B → ∞, the capacity approaches an upper limit, given as

   C∞ = lim (B → ∞) C = 1.44 S/N0

   This equation is proved in later examples.
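To illustrate this limit numerically, the sketch below evaluates C = B log2(1 + S/(N0 B)) for increasing bandwidth and compares it with 1.44 S/N0. The values of S and N0 are assumed only for the example.

```python
import math

# Bandwidth / capacity tradeoff: with N = N0*B, the capacity grows with B but
# saturates near 1.44*S/N0. S and N0 below are assumed values.
S = 1e-3      # signal power in watts (assumed)
N0 = 1e-6     # noise power spectral density in W/Hz (assumed)

def capacity(B: float) -> float:
    """Shannon-Hartley capacity of a Gaussian channel of bandwidth B."""
    return B * math.log2(1 + S / (N0 * B))

for B in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"B = {B:9.0f} Hz   C = {capacity(B):8.1f} bits/sec")

print(f"Upper limit 1.44*S/N0 = {1.44 * S / N0:.1f} bits/sec")
```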
Example 1.6.1 : This example explains the tradeoff between B and S/N. Data is to be transmitted at the rate of 10000 bits/sec over a channel having bandwidth B = 3000 Hz. Determine the signal to noise ratio required. If the bandwidth is increased to 10000 Hz, then determine the signal to noise ratio.

Solution : The data is to be transmitted at the rate of 10,000 bits/sec. Hence the channel capacity must be at least 10,000 bits/sec for error-free transmission. From equation (1.6.2),

   C = B log2(1 + S/N)

Putting the values for B = 3000 Hz,

   10000 = 3000 log2(1 + S/N)

   S/N = 2^(10000/3000) - 1 ≈ 9

Now if the bandwidth is B = 10000 Hz, then

   10000 = 10000 log2(1 + S/N)

   S/N = 1

The above results show that as the bandwidth is increased to 10,000 Hz, the required signal to noise ratio is reduced by about nine times. This means the required signal power is reduced if the bandwidth is increased.
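A quick numerical check of Example 1.6.1, solving S/N = 2^(C/B) - 1 for the two bandwidths; the numbers are taken from the example itself.

```python
# Check of Example 1.6.1: solve C = B*log2(1 + S/N) for S/N at both bandwidths.
def snr_required(C: float, B: float) -> float:
    return 2 ** (C / B) - 1

C = 10000   # required capacity in bits/sec
for B in (3000, 10000):
    print(f"B = {B:5d} Hz  ->  required S/N = {snr_required(C, B):.2f}")
# B = 3000 Hz gives S/N close to 9, while B = 10000 Hz gives S/N = 1,
# i.e. increasing the bandwidth reduces the required S/N by about nine times.
```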
