Chapter 1
Data communications are the exchange of data between two devices via some form of
transmission medium such as a wire cable. The effectiveness of a data communications
system depends on four fundamental characteristics: delivery, accuracy, timeliness, and jitter.
1. Delivery. The system must deliver data to the correct destination. Data must be
received by the intended device or user and only by that device or user.
2. Accuracy. The system must deliver the data accurately. Data that have been
altered in transmission and left uncorrected are unusable.
3. Timeliness. The system must deliver data in a timely manner. Data delivered late are
useless. In the case of video and audio, timely delivery means delivering data as
they are produced, in the same order that they are produced, and without significant
delay. This kind of delivery is called real-time transmission.
4. Jitter. Jitter refers to the variation in the packet arrival time.
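As a sketch, jitter can be measured from packet arrival timestamps; here it is taken as the variation between consecutive inter-arrival gaps (the arrival times below are hypothetical):

```python
# Jitter sketch (assumption: packets are timestamped on arrival, in seconds).
arrivals = [0.000, 0.021, 0.039, 0.062, 0.080]  # hypothetical arrival times

# Inter-arrival gaps between successive packets.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
# Jitter: how much each gap differs from the previous one.
jitter = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]

print([round(g, 3) for g in gaps])    # → [0.021, 0.018, 0.023, 0.018]
print([round(j, 3) for j in jitter])  # → [0.003, 0.005, 0.005]
```

If all packets arrived with the same gap, the jitter values would be zero.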
Standards Organizations
o International Organization for Standardization (ISO). The ISO is a multinational
body whose membership is drawn mainly from the standards creation committees
of various governments throughout the world. The ISO is active in developing
cooperation in the realms of scientific, technological, and economic activity.
o International Telecommunication Union-Telecommunication Standards
Sector (ITU-T). This committee is devoted to the research and establishment of
standards for telecommunications in general, and for phone and data systems in
particular.
o American National Standards Institute (ANSI). Despite its name, the American
National Standards Institute is a completely private, nonprofit corporation not affiliated
with the U.S. federal government.
o Institute of Electrical and Electronics Engineers (IEEE). The Institute of
Electrical and Electronics Engineers is the largest professional engineering society in
the world. International in scope, it aims to advance theory, creativity, and product
quality in the fields of electrical engineering, electronics, and radio as well as in all
related branches of engineering.
o Electronic Industries Association (EIA). Aligned with ANSI, the Electronic
Industries Association is a nonprofit organization devoted to the promotion of
electronics manufacturing concerns.
For digital devices, the bandwidth is usually expressed in bits per second (bps) or bytes per
second. For analog devices, the bandwidth is expressed in cycles per second, or Hertz
(Hz).
Mode of Communication
Copyright © www.flexiprep.com
Transmission mode refers to the transfer of data between two devices. It is also called
communication mode. The mode determines the direction in which information flows.
There are three ways of transmitting data:
1. Simplex: In simplex mode, communication can take place in only one direction. The
receiver receives the signal from the transmitting device. Because the flow of
information is uni-directional, this mode is rarely used for data communication.
2. Half-duplex: In half-duplex mode the communication channel is used in both
directions, but only in one direction at a time. Thus a half-duplex line can alternately
send and receive data.
3. Full-duplex: In full-duplex mode, the communication channel is used in both
directions at the same time. Use of a full-duplex line improves efficiency, as the line
turnaround time required in the half-duplex arrangement is eliminated. The telephone
line is an example of this mode of transmission.
Digital Signal: A digital signal is a signal that represents data as a sequence of discrete
values; at any given time it can only take on one of a finite number of values.
Analog Signal: An analog signal is any continuous signal for which the time-varying
feature of the signal is a representation of some other time-varying quantity, i.e.,
analogous to another time-varying signal.
Both analog and digital signals can take one of two forms: periodic or nonperiodic.
Period refers to the amount of time, in seconds, a signal needs to complete 1 cycle.
Phase
The term phase describes the position of the waveform relative to time 0.
Amplitude
The amplitude of a signal is the absolute value of its highest intensity, proportional to
the energy it carries.
Throughput
The throughput is a measure of how fast we can actually send data through a network.
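For example, throughput can be estimated from measured traffic; the link capacity and traffic figures below are assumptions for illustration:

```python
# Throughput sketch: bandwidth is the link's rated capacity;
# throughput is what actually gets through (hypothetical numbers).
link_bandwidth_bps = 10_000_000   # 10 Mbps rated capacity (assumed)
frames_per_minute = 12_000        # measured traffic (assumed)
bits_per_frame = 10_000

throughput_bps = frames_per_minute * bits_per_frame / 60
print(f"throughput = {throughput_bps:.0f} bps")  # → throughput = 2000000 bps
```

Here the throughput (2 Mbps) is only one fifth of the bandwidth, which is the usual situation in practice.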
The most common technique to change an analog signal to digital data (digitization)
is called pulse code modulation (PCM). A PCM encoder has the following three processes:
a. Sampling
b. Quantization
c. Encoding
a. Sampling –
In sampling, the analog signal is measured every Ts seconds, where Ts is the sample
interval or period. According to the Nyquist theorem, the sampling rate must be at
least twice the highest frequency contained in the signal.
b. Quantization –
The result of sampling is a series of pulses with amplitude values between the
maximum and minimum amplitudes of the signal. The set of amplitudes can be
infinite, with non-integral values between the two limits.
1. We assume that the signal has amplitudes between Vmax and Vmin.
2. We divide this range into L zones, each of height d, where
d = (Vmax - Vmin) / L
3. The value at the top of each sample in the graph shows the actual
amplitude.
4. The normalized pulse amplitude modulation (PAM) value is calculated
using the formula amplitude/d.
5. We then calculate the quantized value, which the process selects from
the middle of each zone.
6. The quantization error is the difference between the quantized value
and the normalized PAM value.
7. The quantization code for each sample is assigned based on the
quantization levels at the left of the graph.
c. Encoding –
The digitization of the analog signal is done by the encoder. After each sample is
quantized and the number of bits per sample is decided, each sample can be
changed to an n-bit code. Encoding also minimizes the bandwidth used.
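The quantization and encoding steps above can be sketched as follows; the sample amplitudes, the signal range, and the number of levels L are assumptions:

```python
import math

# PCM quantization/encoding sketch following the steps above.
samples = [-6.1, -3.0, 1.2, 4.9, 7.8, 9.9]   # hypothetical PAM sample values
v_min, v_max = -10.0, 10.0                   # assumed signal range
L = 8                                        # assumed number of quantization levels
d = (v_max - v_min) / L                      # zone height: 2.5

n_bits = math.ceil(math.log2(L))             # bits per sample: 3

for amp in samples:
    normalized = amp / d                          # normalized PAM value
    zone = min(int((amp - v_min) // d), L - 1)    # which zone the sample falls in
    quantized = v_min + (zone + 0.5) * d          # middle of that zone
    error = quantized / d - normalized            # quantization error, in units of d
    code = format(zone, f"0{n_bits}b")            # the n-bit quantization code
    print(f"{amp:6.1f}  q={quantized:6.2f}  err={error:+.2f}d  code={code}")
```

With L = 8 levels, each sample is encoded in n = 3 bits, so the bit rate is 3 times the sampling rate.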
b. DELTA MODULATION:
Since PCM is a very complex technique, other techniques have been developed to reduce
its complexity. The simplest is delta modulation. Delta modulation finds the
change from the previous value.
Modulator – The modulator is used at the sender site to create a stream of bits from an
analog signal. The process records a small positive or negative change, called delta. If
the delta is positive, the process records a 1; otherwise it records a 0. The modulator
builds a second signal that resembles a staircase, and the input signal is then compared
against this gradually built staircase signal.
1. If the input analog signal is higher than the last value of the staircase signal, the
staircase is increased by delta, and the bit in the digital data is 1.
2. If the input analog signal is lower than the last value of the staircase signal, the
staircase is decreased by delta, and the bit in the digital data is 0.
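The two rules above can be sketched as a modulator; the step size delta and the input samples are assumptions:

```python
# Delta modulation sketch: produce the bit stream by comparing each
# input sample against the staircase approximation built so far.
def delta_modulate(samples, delta=1.0, start=0.0):
    staircase = start
    bits = []
    for s in samples:
        if s > staircase:           # input above the last staircase value
            bits.append(1)
            staircase += delta      # staircase climbs by one delta
        else:                       # input at or below the staircase
            bits.append(0)
            staircase -= delta      # staircase drops by one delta
    return bits

print(delta_modulate([0.5, 1.4, 2.6, 2.2, 1.1, 0.3]))  # → [1, 1, 1, 0, 0, 0]
```

The rising part of the input produces a run of 1s and the falling part a run of 0s, which is exactly the change-from-previous-value behaviour described above.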
c. ADAPTIVE DELTA MODULATION:
The performance of a delta modulator can be improved significantly by making the step
size of the modulator assume a time-varying form. A larger step size is needed where the
modulating signal has a steep slope, and a smaller step size where it has a small slope.
When the step size is adapted according to the level of the input signal, the method is
known as adaptive delta modulation (ADM).
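A sketch of the adaptive idea, assuming the step size doubles while the output bits repeat (steep slope) and halves when they alternate (flat slope); the exact adaptation rule varies between implementations:

```python
# Adaptive delta modulation sketch: same comparison as plain DM,
# but the step size is time-varying (grow/shrink factors are assumptions).
def adaptive_delta_modulate(samples, step=1.0, grow=2.0, shrink=0.5):
    staircase, bits = 0.0, []
    for s in samples:
        bit = 1 if s > staircase else 0
        if bits:  # adapt the step based on the previous output bit
            step = step * grow if bit == bits[-1] else step * shrink
        staircase += step if bit else -step
        bits.append(bit)
        
    return bits

print(adaptive_delta_modulate([1, 2, 4, 8, 16]))  # → [1, 1, 1, 1, 1]
```

On the steep exponential ramp above, the step grows each time so the staircase keeps up; plain DM with a fixed step of 1 would fall further behind at every sample.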
The following techniques can be used for Digital to Analog Conversion:
1. Amplitude Shift Keying – Amplitude Shift Keying is a technique in which the carrier
signal is analog and the data to be modulated are digital. The amplitude of the analog
carrier signal is modified to reflect the binary data.
The modulated signal gives a zero value when the binary data represent 0, and gives
the carrier output when the data are 1. The frequency and phase of the carrier signal
remain constant.
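A minimal ASK sketch, assuming a carrier of amplitude 1 for binary 1 and amplitude 0 for binary 0; the carrier frequency and sampling are arbitrary choices for illustration:

```python
import math

# ASK sketch: the carrier's amplitude carries the data; its frequency
# and phase stay constant throughout.
def ask_modulate(bits, fc=4, samples_per_bit=8):
    wave = []
    for i, bit in enumerate(bits):
        for k in range(samples_per_bit):
            t = i + k / samples_per_bit            # bit duration = 1 second
            carrier = math.cos(2 * math.pi * fc * t)
            wave.append(carrier if bit else 0.0)   # amplitude 1 for '1', 0 for '0'
    return wave

wave = ask_modulate([1, 0])
print(wave[0], wave[8])  # carrier present for bit 1, silence for bit 0
```

During a 1 the output is the unmodified carrier; during a 0 the output is zero, which is the on/off keying form of ASK.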
Phase Shift Keying, in which the phase of the carrier signal is changed to represent the
data, is regarded as the most robust digital modulation technique and is used for
long-distance wireless communication.
2. Quadrature Phase Shift Keying:
This technique is used to increase the bit rate, i.e., we can encode two bits in one
single signal element. It uses four phases to encode two bits per symbol, with
phase shifts in multiples of 90 degrees.
QPSK has double the data-carrying capacity of BPSK, as two bits are mapped onto
each constellation point.
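The two-bits-per-symbol mapping can be sketched as follows; this particular Gray-coded phase assignment is an assumption, as real systems fix their own mapping:

```python
# QPSK sketch: each pair of bits (a dibit) selects one of four carrier
# phases spaced 90 degrees apart (Gray-coded assignment, assumed).
PHASES = {"00": 45, "01": 135, "11": 225, "10": 315}   # degrees

def qpsk_phases(bits):
    assert len(bits) % 2 == 0, "QPSK carries two bits per symbol"
    return [PHASES[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(qpsk_phases("001011"))  # → [45, 315, 225]
```

Six bits become only three symbols, which is the doubling of data-carrying capacity over BPSK mentioned above.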
Advantages of Phase Shift Keying –
It is a more power-efficient modulation technique than ASK and FSK.
It has a lower chance of error.
It allows data to be carried along a communication signal much more efficiently
than FSK.
Disadvantages of Phase Shift Keying –
It offers low bandwidth efficiency.
The detection and recovery algorithms for the binary data are very complex.
It requires a coherent reference signal at the receiver, which is difficult to generate.
Advantages
(i) RESOURCE SHARING. The aim is to make all programs, data and
peripherals available to anyone on the network, irrespective of the physical
location of the resources and the user.
(ii) RELIABILITY. A file can have copies on two or three different machines,
so if one of them is unavailable (hardware crash), the other copies can be
used. For military, banking, air reservation and many other applications this is
of great importance.
(iii) COST FACTOR. Personal computers have a better price/performance ratio
than larger systems such as mainframes. So it is better to have PCs, one per user,
with data stored on one shared file server machine.
(iv) COMMUNICATION MEDIUM. Using a network, it is possible for
managers working far apart to prepare a financial report of the company. The
changes made at one end are immediately visible at the other, which speeds
up co-operation among them.
There are several different types of computer networks. Computer networks can be
characterized by their size as well as their purpose.
Local Area Network (LAN): A LAN is a computer network that spans a relatively small
area. Most LANs are confined to a single building or group of buildings. There are many
different types of LANs, with token-ring networks, Ethernets, and ARCnets being the
most common.
Categories of Networks
Local Area Networks:
Local area networks, generally called LANs, are privately-owned networks within a
single building or campus of up to a few kilometres in size. They are widely used to
connect personal computers and workstations in company offices and factories to share
resources (e.g., printers) and exchange information. LANs are distinguished from other
kinds of networks by three characteristics:
(1) Their size,
(2) Their transmission technology, and
(3) Their topology.
Network Architecture
There are several ways in which a computer network can be designed. Network
architecture refers to how computers are organized in a system and how tasks are
allocated between these computers. Two of the most widely used types of network
architecture are peer-to-peer and client/server. Client/server architecture is also
called 'tiered' because it uses multiple levels.
Client/Server Network
In this network, a dedicated computer known as the server provides the shared
resources. All other computers, known as clients, are used to access those shared
resources. This type of network is commonly used in company environments. It provides
strong security features but requires special hardware and software to set up.
1) Two tier architectures In two tier client/server architectures, the user interface is
placed at the user's desktop environment, and the database management system services
usually reside on a server, a more powerful machine that serves many clients.
Information processing is split between the user system interface environment and the
database management server environment.
2) Three tier architectures The three tier architecture was introduced to overcome the
drawbacks of the two tier architecture. In the three tier architecture, middleware is
used between the user system interface client environment and the database
management server environment.
1) Combination of a client or front-end portion that interacts with the user, and a
server or back-end portion that interacts with the shared resource. The client process
contains solution-specific logic and provides the interface between the user and the
rest of the application system. The server process acts as a software engine that manages
shared resources such as databases, printers, modems, or high-powered processors.
2) The front-end task and back-end task have fundamentally different requirements for
computing resources such as processor speeds, memory, disk speeds and capacities, and
input/output devices.
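The front-end/back-end split described above can be sketched with sockets; the counter resource and the "incr" request protocol are invented for illustration:

```python
import socket
import threading

# Client/server sketch: the server (back end) owns a shared resource
# (a counter); the client (front end) only sends requests and displays
# replies. Port 0 lets the OS pick any free port.

def serve(sock):
    count = 0                              # the shared resource
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(1024):
            if data == b"incr":
                count += 1                 # back-end logic
                conn.sendall(str(count).encode())

def run_client():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=serve, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    cli.sendall(b"incr")                   # front-end request
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

print(run_client())  # → 1
```

Note that the client never touches the counter directly; it only exchanges messages with the server process that manages it, which is the essence of the client/server split.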
Peer-to-Peer Network
In this network, all computers are equal. Any computer can provide and access shared
resources. This type of network is usually used in a small office or home network. It is
easy to set up and does not require any special hardware or software. The downside of
this network is that it provides much less security.
In a peer-to-peer or P2P network, the tasks are allocated among all the members of
the network. There is no real hierarchy among the computers, and all of them are
considered equal. This is also referred to as a distributed architecture or workgroup
without hierarchy. A peer-to-peer network does not use a central computer server that
controls network activity. Instead, every computer on the network runs special software
that allows for communication between all the computers.