Computer Network-Merged
1. Communication system:
A communication system is a model that describes the exchange of information between two stations, a transmitter and a
receiver. Signals or information pass from the source to the destination through what is called the channel, which is the
path the signal uses to move from the source toward the destination.
• Source: The source is the originator of the information or message to be communicated. It could be a person, a
computer, a microphone, a sensor, or any device that generates the initial data.
• Sender and Receiver Nodes: Devices such as computers, servers, routers, and switches play the roles of
senders and receivers within a network. These nodes generate, transmit, and process data to be exchanged.
• Protocols: Communication protocols define the rules that govern how data is formatted, transmitted, received,
and interpreted. Examples include TCP/IP and HTTP.
• Data Encoding and Decoding: Data encoding methods convert digital information into a format suitable for
transmission over the chosen medium. Decoding reverses this process at the receiver end.
• Transmitter: The transmitter encodes the information from the source into a suitable format for transmission.
It prepares the signal for efficient and reliable transmission over the communication channel. In digital
communication, this involves converting analog signals into digital form.
• Communication Channel: The channel is the medium through which the encoded signal travels from the
transmitter to the receiver. It could be a physical medium like a cable, optical fiber, or wireless airwaves.
• Receiver: The receiver captures the transmitted signal from the channel and decodes it back into a format
understandable by the recipient. In digital communication, this might involve converting digital signals back
into analog form.
• Destination: The destination is the intended recipient of the message. It could be a person, a computer, a
display device, or any entity that needs to interpret and use the information.
2. Data: Data is a collection of facts, statistics, measurements, observations, or information in raw or unorganised form. In the
context of computers and technology, data typically refers to digital information stored and processed by computers and
electronic devices.
3. Signal: In a computer network, a signal refers to an electromagnetic or optical waveform that carries data from one
point to another. Signals are used to transmit information, such as text, images, audio, and video, between devices,
nodes, or components within a network.
Signals can travel over various types of transmission media, including wired (e.g., copper cables, fiber optics) and wireless
(e.g., radio waves, microwaves) channels.
Types of Signals---
Analog signal: An analog signal is continuous in nature and is represented by continuous electronic waves. Ex:-
Radio waves, television, sound waves.
ADV-
-It is easier to process.
-Analog signals are best suited to audio and video transmission.
DISADV-
-Analog tends to possess a lower quality signal than digital.
-The cables are sensitive to external influences.
-Analog wire is expensive and not easily portable.
Digital signal: A digital signal is discrete in nature and is represented by a sequence of voltage pulses. A digital signal encodes data as a
sequence of discrete values; it can only take one value from a finite set of possible values at any given time. Many physical quantities
can be used to represent information in digital signals, such as a variable electric current or voltage.
Transmission Mode:
Transmission modes in computer networks refer to the ways in which data is transmitted between two devices. These
modes define the direction of data flow and the relationship between the two devices during communication. There are
three primary transmission modes: simplex, half-duplex, and full-duplex.
Simplex Mode:
Simplex mode is a one-way communication mode in which data flows in only one direction. The sender
can transmit data, but the receiver cannot send any data back to the sender. This mode is similar to a one-way street,
where traffic can only flow in one direction. Examples: radio and television broadcasts.
ADV-
-Easy to implement.
-(Unidirectional flow) Since data flows in only one direction, there is no need to manage or coordinate bidirectional
traffic. This can simplify network design and reduce the potential for collisions or conflicts.
-Cost efficiency.
-Reliability.
DISADV-
-Limited Applicability.
Half-Duplex Mode:
Half-duplex mode is a two-way communication mode where data can flow in both directions, but not simultaneously.
Devices can take turns transmitting and receiving data, but only one device can transmit at a time. While one device is
transmitting, the other device(s) must wait until the transmission is complete before sending their own data. Walkie-
talkies and older Ethernet networks using a shared medium are examples of half-duplex communication.
ADV-
-Simplicity.
-Reduced complexity.
-Predictable timing.
DISADV-
-Latency.
-Complexity in coordination.
Full-Duplex Mode:
Full-duplex mode is a two-way communication mode where data can flow in both directions simultaneously. Both the
sender and receiver can transmit and receive data independently without having to wait for each other. This mode
provides efficient and simultaneous bidirectional communication. Modern Ethernet networks, telephone conversations,
and most wireless communication systems operate in full-duplex mode.
ADV-
-Maximum utilization of the channel, since data flows in both directions at the same time.
-No waiting: sender and receiver can transmit independently, which reduces delay.
DISADV-
-Limited Scalability.
A computer network consists of various components that work together to enable communication and data exchange
between devices. These components can be categorized into hardware, software, and communication protocols. Here
are the key components of a computer network:
1. Devices:
Computers: These include personal computers, servers, workstations, laptops, and other devices that are part of the
network.
Network Devices: Routers, switches, hubs, access points, gateways, and modems are used to manage and route network
traffic.
2. Network Media:
Wired Media: This includes Ethernet cables (Cat5e, Cat6, etc.) and fiber optic cables used to transmit data physically.
Wireless Media: Technologies like Wi-Fi, Bluetooth, and cellular networks provide wireless connectivity.
3. Network Interfaces:
Network Interface Cards (NICs): These hardware components enable devices to connect to the network by providing a
physical interface for data transmission.
4. Network Software:
Network Operating System (NOS): Specialized operating systems that manage network resources, security, and
communication between devices. Examples include Windows Server, Linux, and Cisco IOS.
5. Protocols:
Communication Protocols: Standards that define how data is formatted, transmitted, received, and interpreted.
Examples include TCP/IP, HTTP, FTP, and SMTP.
Routing Protocols: Algorithms used by routers to determine the best path for data packets to travel through the
network.
6. Network Services:
File Sharing: Services like network file systems (NFS) or SMB (Server Message Block) enable sharing files and resources
across the network.
Email and Messaging: Services like SMTP (Simple Mail Transfer Protocol) and IMAP (Internet Message Access Protocol)
enable email communication.
Web Services: HTTP (Hypertext Transfer Protocol) enables web browsing, and HTTPS adds security for online
transactions.
7. Network Security:
Firewalls: Hardware or software-based security measures that filter and control incoming and outgoing network traffic.
Encryption: Techniques like SSL (Secure Sockets Layer) or TLS (Transport Layer Security) provide data encryption for
secure communication.
4. Transmission impairments:
When a signal is transmitted from one medium to another, the signal that is received may differ from the signal that
was transmitted because of various impairments. The main types of transmission impairments include:
Attenuation: Attenuation refers to the loss of signal strength as it travels through a medium like a cable or fiber optic
line. It occurs due to factors like resistance, scattering, and absorption. Attenuation can result in a weaker signal at the
receiving end, leading to errors and degraded signal quality.
Distortion: Distortion means a change in the form or shape of the signal. It is generally seen in composite signals made up of
different frequencies. Each frequency component has its own propagation speed through a medium, so the components arrive
at the final destination at different times, which leads to distortion.
Noise: Noise is unwanted interference that disrupts the signal. It can be caused by external factors such as
electromagnetic interference (EMI), radio frequency interference (RFI), or crosstalk between adjacent cables. Noise can
introduce errors and reduce the signal-to-noise ratio (SNR), impacting data integrity.
5. Network Criteria:
Data Rate or Throughput: Data rate refers to the speed at which data is transmitted through the communication
system. It is usually measured in bits per second (bps) or other suitable units. Higher data rates indicate faster and more
efficient communication.
Bit Error Rate (BER): BER measures the accuracy of data transmission by calculating the ratio of erroneous bits to the
total number of transmitted bits. Lower BER values indicate better signal quality and reliability.
Latency: Latency, also known as delay, is the time it takes for a data packet to travel from the sender to the receiver. Low
latency is important for real-time applications, such as video conferencing and online gaming.
Bandwidth: Bandwidth refers to the range of frequencies that a communication channel can carry. Higher bandwidth
allows for faster data transmission and supports larger amounts of information.
Capacity: Capacity refers to the maximum amount of data that a communication system can handle within a given time
period. It depends on factors such as channel bandwidth, modulation techniques, and network congestion.
Reliability: Reliability assesses the consistency and dependability of the communication system. It involves factors like
uptime, error rates, and the system's ability to maintain communication under varying conditions.
Availability: Availability measures the proportion of time that a communication system is operational and accessible.
High availability is crucial for critical applications and services.
Scalability: Scalability refers to the system's ability to accommodate increasing numbers of users, devices, or data traffic
without significant degradation in performance.
Security: Security criteria evaluate the measures in place to protect data from unauthorized access, interception, or
tampering. This includes encryption, authentication, and data integrity mechanisms.
Efficiency: Efficiency considers how effectively the system uses available resources to achieve its goals. It may involve
minimizing energy consumption, optimizing bandwidth usage, or reducing overhead.
Ease of Use and User Experience: For user-centric communication systems, criteria related to ease of use, user interface
design, and overall user experience are important.
Cost: Cost-related criteria assess the economic feasibility of the communication system, including factors such as
equipment costs, maintenance expenses, and overall return on investment (ROI).
6. Goals of Computer Network:
Communication: Computer networks enable seamless and timely communication between devices, users, and
applications. They allow users to exchange messages, share information, and collaborate in real time.
Data Sharing and Resource Access: Networks facilitate the sharing of files, documents, databases, and other resources
among connected devices. Users can access and retrieve data from centralized locations, enhancing productivity and
efficiency.
Cost Efficiency: Sharing resources over a network can reduce costs associated with hardware, software, and
infrastructure. Centralized management and resource sharing lead to cost savings.
Centralized Data Management: Networks support centralized data storage and management, making it easier to back
up, secure, and manage data centrally. This enhances data integrity and simplifies administration.
Data Backup and Recovery: Network-based data backup and recovery solutions help safeguard critical data and ensure
business continuity in case of data loss or hardware failure.
Security and Access Control: Networks offer security mechanisms such as firewalls, encryption, and access control to
protect data from unauthorized access, cyberattacks, and breaches.
Collaboration and Remote Work: Networks facilitate collaboration by allowing users to work together on projects,
share documents, and communicate, regardless of their physical locations.
Resource Optimization: Networks help optimize the use of resources by enabling load balancing, which distributes
network traffic evenly across devices and servers to prevent congestion and improve performance.
Global Connectivity: Computer networks, especially the internet, provide global connectivity, enabling communication
and interaction on a worldwide scale.
Innovation and Technological Advancement: Networks foster innovation by facilitating the development and
deployment of new applications, services, and technologies.
7. Network: ---Classification---
A computer network is a group of computers linked to each other that enables the computers to communicate with
one another and share resources, data, and applications.
A computer network can be categorized by its size. A computer network is mainly of four types:
o Local Area Network is a group of computers connected to each other in a small area such as building, office.
o LAN is used for connecting two or more personal computers through a communication medium such as twisted pair, coaxial
cable, etc.
o It is less costly as it is built with inexpensive hardware such as hubs, network adapters, and ethernet cables.
o The data is transferred at an extremely faster rate in Local Area Network.
o Local Area Network provides higher security.
o Personal Area Network is a network arranged within an individual person, typically within a range of 10 meters.
o The network used for connecting computer devices of personal use is known as a Personal Area
Network.
o Thomas Zimmerman was the first research scientist to bring the idea of the Personal Area Network.
o Personal Area Network covers an area of 30 feet.
Wireless Personal Area Network: Wireless Personal Area Network is developed by simply using wireless technologies
such as WiFi, Bluetooth. It is a low range network.
Wired Personal Area Network: Wired Personal Area Network is created by using the USB.
o A metropolitan area network is a network that covers a larger geographic area by interconnecting a different
LAN to form a larger network.
o Government agencies use MAN to connect to the citizens and private industries.
o In MAN, various LANs are connected to each other through a telephone exchange line.
o It has a higher range than Local Area Network(LAN).
o A Wide Area Network is a network that extends over a large geographical area such as states or countries.
o A Wide Area Network is quite bigger network than the LAN.
o A Wide Area Network is not limited to a single location, but it spans over a large geographical area through a
telephone line, fibre optic cable or satellite links.
o The internet is one of the biggest WAN in the world.
o A Wide Area Network is widely used in the field of Business, government, and education.
Advantages of WAN:
o Geographical area: A Wide Area Network covers a large geographical area. Suppose the branch of our
office is in a different city; then we can connect with it through a WAN. The internet provides a leased line
through which we can connect with another branch.
o Get updated files: Software companies work on the live server. Therefore, the programmers get the updated
files within seconds.
o Exchange messages: In a WAN network, messages are transmitted fast. The web application like Facebook,
Whatsapp, Skype allows you to communicate with friends.
o Sharing of software and resources: In WAN network, we can share the software and other resources like a
hard drive, RAM.
o Global business: We can do the business over the internet globally.
o High bandwidth: If we use the leased lines for our company then this gives the high bandwidth. The high
bandwidth increases the data transfer rate which in turn increases the productivity of our company.
Disadvantages of WAN:
o Security issue: A WAN network has more security issues as compared to LAN and MAN networks, as many
technologies are combined together, which creates security problems.
o Needs Firewall & antivirus software: The data is transferred on the internet which can be changed or hacked
by the hackers, so the firewall needs to be used. Some people can inject the virus in our system so antivirus is
needed to protect from such a virus.
o High Setup cost: An installation cost of the WAN network is high as it involves the purchasing of routers,
switches.
o Troubleshooting problems: It covers a large area so fixing the problem is difficult.
8. Network: ---Topology---
Topology defines the structure of the network of how all the components are interconnected to each other. There are
two types of topology: physical and logical topology.
---Physical topology is the geometric representation of all the nodes in a network.
Bus Topology: Devices are connected in a linear fashion along a single communication line, with each device having a
unique address. Data is broadcast to all devices on the bus.
Star Topology: All devices are connected to a central hub or switch. Data traffic flows through the hub, and devices
communicate indirectly through it.
Ring Topology: Devices are connected in a circular manner, with data flowing in one direction. Each device receives and
forwards data until it reaches its destination.
Mesh Topology: Every device is directly connected to every other device, providing redundant paths for data transmission
and increased reliability.
Tree Topology: This topology is the variation of the Star topology. This topology has a hierarchical flow of data. In Tree
Topology, protocols like DHCP and SAC (Standard Automatic Configuration) are used.
Hybrid Topology: This topological technology is the combination of all the various types of topologies we have studied
above. Hybrid Topology is used when the nodes are free to take any form. It means these can be individuals such as Ring
or Star topology or can be a combination of various types of topologies seen above.
9. Internet: ---History---
The history of the internet dates back to the 1960s when the United States Department of Defense's Advanced Research
Projects Agency (ARPA) initiated the development of a robust communication network. The goal was to create a
decentralized network that could withstand partial failures and continue functioning even if parts of the network were
damaged or disabled. This concept led to the creation of ARPANET, which is considered the precursor to the modern
internet.
1969: The first successful message transmission between two computers on ARPANET, marking the birth of the internet.
1971: Ray Tomlinson develops the first networked email system, using the "@" symbol to separate user names from host
names.
1973: The development of the TCP/IP protocol suite by Vint Cerf and Bob Kahn, which forms the foundation of the modern
internet.
1983: The deployment of the Domain Name System (DNS), allowing human-readable domain names to be used instead
of numeric IP addresses.
1989: Tim Berners-Lee proposes the World Wide Web (WWW), laying the groundwork for the creation of websites and
web browsers.
1990s: The World Wide Web gains popularity, leading to the proliferation of websites, online content, and e-commerce.
1993: The graphical web browser Mosaic is released, making the web more accessible and user-friendly.
Late 1990s: The dot-com bubble sees rapid growth and investment in internet-related businesses.
2000s: The internet becomes an integral part of daily life, with widespread adoption of email, search engines, social
media, and online services.
The internet has evolved into a global phenomenon that profoundly impacts various aspects of society, economy,
communication, and culture. Here are some key aspects of the internet today:
Global Connectivity: The internet connects billions of devices worldwide, including computers, smartphones, tablets,
smart TVs, and IoT devices.
Communication: Email, instant messaging, social media platforms, and video conferencing have revolutionized how
people communicate and interact.
Information Access: The internet provides instant access to a vast amount of information, resources, educational
content, and research materials.
E-Commerce: Online shopping and e-commerce have become significant drivers of the economy, with platforms like
Amazon, eBay, and Alibaba transforming retail.
Social Media: Platforms like Facebook, Twitter, Instagram, and TikTok enable users to connect, share content, and engage
with others globally.
Streaming and Entertainment: Video streaming services like Netflix, YouTube, and Spotify have transformed how people
consume entertainment and media.
Cloud Computing: Cloud services provide scalable and flexible computing resources, storage, and software-as-a-service
solutions.
Internet of Things (IoT): The IoT connects everyday objects and devices to the internet, enabling remote control, data
collection, and automation.
Cybersecurity and Privacy: Internet security and privacy concerns have grown, leading to increased focus on data
protection, encryption, and online safety.
Digital Transformation: Businesses and industries have undergone digital transformation, leveraging the internet for
operations, marketing, and customer engagement.
Search Engines: Google and other search engines play a crucial role in information discovery and online research.
Open Source and Collaboration: The open-source movement and collaborative platforms like Wikipedia and GitHub
foster global cooperation and knowledge sharing.
---Internet Protocols---
TCP/IP (Transmission Control Protocol/Internet Protocol): TCP/IP is the foundational protocol suite of the Internet. It
provides a set of rules for how data packets should be addressed, transmitted, routed, and received across networks.
TCP handles reliable and ordered delivery of data, while IP is responsible for addressing and routing packets.
HTTP (Hypertext Transfer Protocol): HTTP is the protocol used for transferring hypertext and multimedia documents on
the World Wide Web. It defines how web browsers request and retrieve web pages from web servers.
HTTPS (Hypertext Transfer Protocol Secure): HTTPS is a secure version of HTTP that encrypts data between a user's
browser and a web server using SSL/TLS protocols. It ensures secure and encrypted communication for sensitive
information, such as online transactions.
SMTP (Simple Mail Transfer Protocol): SMTP is used for sending and receiving email messages. It defines how email
clients and servers communicate to route and deliver messages.
POP3 (Post Office Protocol 3) and IMAP (Internet Message Access Protocol): These protocols are used by email clients
to retrieve email messages from a mail server. POP3 downloads messages to the client, while IMAP allows messages to
be stored on the server and accessed from multiple devices.
DNS (Domain Name System): DNS is responsible for translating human-readable domain names (like www.example.com)
into IP addresses that computers use to locate servers on the Internet.
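As a concrete illustration of what DNS does, here is a minimal Python sketch that asks the operating system's resolver to translate a hostname into an IP address. The hostname www.example.com is only an illustrative choice; any resolvable name works.

```python
# Minimal sketch: resolve a hostname to an IP address (the service DNS provides).
import socket

hostname = "www.example.com"                  # illustrative hostname
ip_address = socket.gethostbyname(hostname)   # DNS lookup via the OS resolver
print(f"{hostname} resolves to {ip_address}")
```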
FTP (File Transfer Protocol): FTP is used for transferring files between a client and a server on a network. It defines how
files are authenticated, transferred, and managed.
SSH (Secure Shell): SSH provides secure remote access to a networked device, enabling encrypted communication
between a client and a server. It is commonly used for secure command-line access to servers.
Telnet: Telnet allows remote access to a computer or device over a network. However, it is less secure than SSH and is
often replaced by SSH for secure remote access.
BGP (Border Gateway Protocol): BGP is a routing protocol used to exchange routing and reachability information
between autonomous systems on the Internet. It helps routers determine the most efficient paths for data transmission.
ICMP (Internet Control Message Protocol): ICMP is used for sending error messages and operational information about
network conditions. It is commonly associated with the "ping" command used to test network connectivity.
ARP (Address Resolution Protocol): ARP resolves IP addresses to MAC addresses in a local network, allowing devices to
communicate directly.
CIDR (Classless Inter-Domain Routing): CIDR is a method for allocating IP addresses and routing Internet Protocol
packets.
IPv6 (Internet Protocol version 6): IPv6 is the next-generation IP addressing scheme designed to replace IPv4 due to the
exhaustion of available IPv4 addresses.
OSI Model:
The OSI model was developed by the International Organization for Standardization (ISO) in the 1980s. It divides the
networking process into seven distinct layers, each responsible for a specific set of functions. The layers, from the bottom
to the top, are:
Physical Layer: Deals with the physical transmission of raw bits over a physical medium such as cables or wireless signals.
It defines characteristics like voltage levels, data rates, and modulation techniques.
Data Link Layer: Responsible for creating a reliable link between two directly connected nodes, ensuring error detection,
flow control, and framing of data.
Network Layer: Focuses on routing and forwarding data packets between different networks, addressing, and logical
path determination. IP (Internet Protocol) operates at this layer.
Transport Layer: Manages end-to-end communication and provides error detection, segmentation, flow control, and
reassembly of data. TCP and UDP (User Datagram Protocol) operate at this layer.
Session Layer: Establishes, manages, and terminates communication sessions between applications. It also handles
synchronization and recovery.
Presentation Layer: Translates, encrypts, or compresses data to ensure that data formats are understood by both sender
and receiver. It deals with data representation and encoding.
Application Layer: Provides application services and interfaces directly with end-user applications. Protocols like HTTP,
SMTP, and FTP operate at this layer.
TCP/IP Model:
The TCP/IP model, also known as the Internet protocol suite, is a more practical and widely used model that serves as
the foundation of the Internet itself. It consists of four layers, which are often aligned with some of the layers in the OSI
model:
Network Interface Layer: This layer corresponds to parts of both the OSI's Physical and Data Link layers. It handles the
physical connection to the network medium and data link protocols.
Transport Layer: This layer is similar to the OSI's Transport Layer and provides end-to-end communication services. TCP
and UDP operate here.
Internet Layer: Corresponding to the OSI's Network Layer, this layer is responsible for routing data packets between
different networks using IP. It includes protocols like ICMP.
Application Layer: Comparable to the OSI's top three layers, this layer provides various application services directly to
end-users. It includes protocols like HTTP, FTP, and SMTP.
Data Link Layer
Error Detection and Correction in the Data Link Layer
The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are transmitted from the
source to the destination with a certain degree of accuracy.
Errors
When bits are transmitted over a computer network, they may get corrupted due to interference
and network problems. The corrupted bits lead to spurious data being received at the destination and are
called errors.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
• Single bit error − In the received frame, only one bit has been corrupted, i.e. either changed
from 0 to 1 or from 1 to 0.
• Multiple bits error − In the received frame, more than one bit is corrupted.
• Burst error − In the received frame, more than one consecutive bit is corrupted.
Framing
In a point-to-point connection between two computers or devices, data is transmitted over the wire as a stream of bits.
However, these bits must be framed into discernible blocks of information.
Framing is a function of the data link layer.
The two types of variable-sized framing are −
• Character-oriented framing
• Bit-oriented framing
Character-Oriented Framing
Character (byte) stuffing is the process of adding one extra character whenever a flag or escape character appears in the text.
A character-oriented frame consists of the following fields:
• Frame Header − It contains the source and the destination addresses of the frame in form of
bytes.
• Payload field − It contains the message to be delivered. It is a variable sequence of data bytes.
• Trailer − It contains the bytes for error detection and error correction.
• Flags − Flags are the frame delimiters signalling the start and end of the frame. A flag is a 1-byte,
protocol-dependent special character.
Character - oriented protocols are suited for transmission of texts. The flag is chosen as a character that is
not used for text encoding. However, if the protocol is used for transmitting multimedia messages, there
are chances that the pattern of the flag byte is present in the message byte sequence. In order that the
receiver does not consider the pattern as the end of the frame, byte stuffing mechanism is used. Here, a
special byte called the escape character (ESC) is stuffed before every byte in the message with the same
pattern as the flag byte. If the ESC sequence is found in the message byte, then another ESC byte is stuffed
before it.
A problem with character-oriented framing is that it adds too much overhead to the message, thus
increasing the total size of the frame. Another problem is that the coding systems used in recent times have
16-bit or 32-bit characters, which conflict with the 8-bit characters this scheme assumes.
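The byte stuffing rule described above can be sketched in a few lines of Python. The FLAG and ESC values below are example choices, and real protocols may also transform the escaped byte; this is only a minimal illustration of the "stuff an ESC before every flag or escape byte" idea.

```python
FLAG = 0x7E   # example frame delimiter
ESC = 0x7D    # example escape character

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC byte before every FLAG or ESC byte in the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove the ESC bytes inserted by byte_stuff()."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1                 # skip the ESC; the next byte is literal data
        out.append(stuffed[i])
        i += 1
    return bytes(out)

data = b"AB\x7eCD\x7dEF"           # payload containing both special bytes
assert byte_unstuff(byte_stuff(data)) == data
```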
Bit stuffing
Bit stuffing is the process of adding one extra 0 whenever the data contains five consecutive 1s, so that the receiver does not
mistake the data for the flag pattern 01111110.
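A small sketch of this rule, representing the frame as a Python string of '0'/'1' characters purely for readability:

```python
# After five consecutive 1s in the data, a 0 is inserted so the data can
# never reproduce the flag pattern 01111110.
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")        # stuff a zero after five consecutive 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                 # skip the stuffed zero
            run = 0
        i += 1
    return "".join(out)

print(bit_stuff("0111111111"))     # -> 01111101111 (a 0 inserted after the fifth 1)
```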
Error Control
Error control can be done in two ways
• Error detection − Error detection involves checking whether any error has occurred or not.
The number of error bits and the type of error does not matter.
• Error correction − Error correction involves ascertaining the exact number of bits that has
been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along with the
data bits. The receiver performs necessary checks based upon the additional redundant bits. If it finds that
the data is free from errors, it removes the redundant bits before passing the message to the upper layers.
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make the total number of 1s either
even (in case of even parity) or odd (in case of odd parity).
• In case of even parity: If the number of 1s is even, then the parity bit value is 0. If the number of 1s
is odd, then the parity bit value is 1.
• In case of odd parity: If the number of 1s is odd, then the parity bit value is 0. If the number of 1s is
even, then the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it, including the parity bit. In an even parity check, if the count of
1s is even, the frame is accepted; otherwise, it is rejected. A similar rule is adopted for an odd parity check.
The parity check is suitable for single bit error detection only.
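A short sketch of even-parity generation and checking; the 7-bit data word is an arbitrary example:

```python
def even_parity_bit(bits: str) -> str:
    """Return '0' if the number of 1s is already even, else '1'."""
    return "0" if bits.count("1") % 2 == 0 else "1"

def check_even_parity(frame: str) -> bool:
    """A received frame (data + parity bit) is accepted if its count of 1s is even."""
    return frame.count("1") % 2 == 0

data = "1011011"                       # five 1s, so the even-parity bit is 1
frame = data + even_parity_bit(data)   # "10110111"
print(check_even_parity(frame))        # True: frame accepted
print(check_even_parity("10100111"))   # False: a single flipped bit is detected
```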
Checksum
In this error detection scheme, the following procedure is applied.
• Data is divided into fixed sized frames or segments.
• The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.
• The receiver adds the incoming segments along with the checksum using 1’s complement
arithmetic to get the sum and then complements it.
• If the result is zero, the received frames are accepted; otherwise, they are discarded.
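The following sketch applies this procedure to three illustrative 8-bit segments; the segment size and values are arbitrary choices for the example.

```python
def ones_complement_sum(values, bits=8):
    """Add values with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for v in values:
        total += v
        total = (total & mask) + (total >> bits)   # wrap the carry back in
    return total

def make_checksum(segments, bits=8):
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

segments = [0b10110011, 0b01101100, 0b11100010]
checksum = make_checksum(segments)

# Receiver side: sum all segments plus the checksum and complement the result;
# a result of zero means no error was detected.
total = ones_complement_sum(segments + [checksum])
print((~total) & 0xFF == 0)   # True
```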
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a predetermined
divisor agreed upon by the communicating systems. The divisor is generated using polynomials.
• Here, the sender performs binary division of the data segment by the divisor. It then appends
the remainder called CRC bits to the end of the data segment. This makes the resulting data
unit exactly divisible by the divisor.
• The receiver divides the incoming data unit by the divisor. If there is no remainder, the data
unit is assumed to be correct and is accepted. Otherwise, it is understood that the data is
corrupted and is therefore rejected.
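A sketch of CRC generation and verification using modulo-2 (XOR) division on bit strings. The data word 100100 and the divisor 1011 (the polynomial x^3 + x + 1) are example values only.

```python
def mod2div(dividend: str, divisor: str) -> str:
    """Modulo-2 division; returns the remainder of len(divisor)-1 bits."""
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])

data, divisor = "100100", "1011"
n = len(divisor) - 1
crc = mod2div(data + "0" * n, divisor)   # remainder "101" becomes the CRC bits
codeword = data + crc                    # data unit actually transmitted

# Receiver: dividing the received codeword by the same divisor leaves zero remainder.
print(mod2div(codeword, divisor) == "0" * n)   # True when no error occurred
```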
Error Correction Techniques
Error correction techniques find out the exact number of bits that have been corrupted as well as their
locations. There are two principal ways −
• Backward Error Correction (Retransmission) − If the receiver detects an error in the incoming
frame, it requests the sender to retransmit the frame. It is a relatively simple technique. But
it can be used efficiently only where retransmission is not expensive, as in fiber optics, and the
time for retransmission is low relative to the requirements of the application.
• Forward Error Correction − If the receiver detects some error in the incoming frame, it
executes error-correcting code that generates the actual frame. This saves bandwidth
required for retransmission. It is inevitable in real-time systems. However, if there are too
many errors, the frames need to be retransmitted.
The four main error correction codes are
• Hamming Codes
• Binary Convolution Code
• Reed – Solomon Code
• Low-Density Parity-Check Code
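Of the codes listed above, the Hamming(7,4) code is the simplest to illustrate: three even-parity bits protect four data bits so that any single-bit error can be located and flipped back. The sketch below follows the classic bit layout (parity bits at positions 1, 2 and 4) and is for illustration only.

```python
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # even parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4              # even parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4              # even parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = 4 * s3 + 2 * s2 + s1   # position of the corrupted bit (0 = no error)
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1          # flip the corrupted bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
received = code.copy()
received[2] ^= 1                      # corrupt one bit in transit
print(hamming74_correct(received) == code)   # True: the error is corrected
```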
Flow control
Flow control is a technique that allows two stations working at different speeds to communicate with each
other. It is a set of measures taken to regulate the amount of data that a sender sends so that a fast sender
does not overwhelm a slow receiver. In data link layer, flow control restricts the number of frames the
sender can send before it waits for an acknowledgment from the receiver.
Stop and Wait ARQ:
1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence number 1 (the
sequence number of the next expected data frame or packet).
There is only a one-bit sequence number, which implies that both sender and receiver have a buffer for one
frame or packet only.
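A toy simulation of this exchange is sketched below. The lossy_channel function and its 30% loss probability are invented purely to exercise the retransmission path; duplicate deliveries caused by lost acknowledgements are not modelled.

```python
import random

def lossy_channel(message, loss_prob=0.3):
    """Return the message, or None if the channel 'loses' it."""
    return None if random.random() < loss_prob else message

def stop_and_wait(frames):
    seq = 0
    for payload in frames:
        while True:
            received = lossy_channel((seq, payload))    # sender transmits the frame
            if received is not None:
                ack = lossy_channel(1 - seq)             # receiver ACKs the next expected seq
                if ack == 1 - seq:
                    print(f"delivered {payload!r} with seq {seq}")
                    break
            print(f"timeout, retransmitting seq {seq}")  # lost frame or lost ACK
        seq = 1 - seq                                    # toggle the 1-bit sequence number

stop_and_wait(["frame A", "frame B", "frame C"])
```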
Constraints:
Stop and Wait ARQ has very low efficiency; it can be improved by increasing the window size. For
better efficiency, Go-Back-N and Selective Repeat protocols are used.
The Stop and Wait ARQ solves the main three problems but may cause big performance issues, as the sender
always waits for an acknowledgement even if it has the next packet ready to send. Consider a situation where
you have a high-bandwidth connection and the propagation delay is also high (you are connected to some server
in another country through a high-speed connection). To solve this problem, we can send more than one
packet at a time with a larger sequence number, as the sliding window protocols (Go-Back-N and Selective Repeat) do.
So Stop and Wait ARQ may work fine where the propagation delay is very low, for example LAN connections, but
performs badly for distant connections like satellite connections.
Advantages of Stop and Wait ARQ:
• Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to implement in both
hardware and software. It does not require complex algorithms or hardware components, making it
an inexpensive and efficient option.
• Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using checksums or
cyclic redundancy checks (CRC). If an error is detected, the receiver sends a negative
acknowledgment (NAK) to the sender, indicating that the data needs to be retransmitted.
• Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in order. The receiver
cannot move on to the next data packet until it receives the current one. This ensures that the data
is received in the correct order and eliminates the possibility of data corruption.
• Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver can control the
rate at which the sender transmits data. This is useful in situations where the receiver has limited
buffer space or processing power.
• Backward Compatibility: Stop and Wait ARQ is compatible with many existing systems and protocols,
making it a popular choice for communication over unreliable channels.
Disadvantages of Stop and Wait ARQ:
• Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to wait for an
acknowledgment from the receiver before sending the next data packet. This results in a low data
transmission rate, especially for large data sets.
• High Latency: Stop and Wait ARQ introduces additional latency in the transmission of data, as the
sender must wait for an acknowledgment before sending the next packet. This can be a problem for
real-time applications such as video streaming or online gaming.
• Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available bandwidth
efficiently, as the sender can transmit only one data packet at a time. This results in underutilization
of the channel, which can be a problem in situations where the available bandwidth is limited.
• Limited Error Recovery: Stop and Wait ARQ has limited error recovery capabilities. If a data packet
is lost or corrupted, the sender must retransmit the entire packet, which can be time-consuming and
can result in further delays.
• Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise, which can cause
errors in the transmitted data. This can result in frequent retransmissions and can impact the overall
efficiency of the protocol.
Medium Access Sublayer
Point-to-Point Protocol (PPP) is a data link layer protocol used to establish a direct connection between two nodes or devices
over a serial interface, such as a telephone line, serial cable, or DSL modem. PPP is a widely-used protocol for establishing and
managing internet connections.
PPP provides a way to encapsulate data packets and transmit them over the communication link, ensuring reliable and secure
data transfer. It can work with various network layer protocols, such as IP (Internet Protocol), IPv6 (Internet Protocol version 6),
IPX (Internetwork Packet Exchange), and more.
PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one or more bytes. The fields of a PPP frame are −
• Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the flag is 01111110.
• Address − 1 byte which is set to 11111111 in case of broadcast.
• Control − 1 byte set to the constant value 00000011 (0x03).
• Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
• Payload − This carries the data from the network layer. The maximum length of the payload field is 1500 bytes.
However, this may be negotiated between the endpoints of communication.
• FCS − It is a 2 byte or 4 bytes frame check sequence for error detection. The standard code used is CRC (cyclic
redundancy code)
Byte Stuffing in PPP Frame − Byte stuffing is used in the PPP payload field whenever the flag sequence appears in the message, so
that the receiver does not consider it as the end of the frame. The escape byte, 01111101, is stuffed before every byte that
contains the same pattern as the flag byte or the escape byte. The receiver, on receiving the message, removes the escape bytes before
passing the payload on to the network layer.
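To tie the fields together, here is an illustrative sketch that assembles a PPP-style frame from the values listed above (flag 0x7E, address 0xFF, control 0x03) using the simple stuffing rule described in this text. Real PPP additionally transforms each escaped byte and carries a genuine CRC in the FCS field; the zero FCS below is only a placeholder.

```python
FLAG, ESC = 0x7E, 0x7D
ADDRESS, CONTROL = 0xFF, 0x03

def stuff(body: bytes) -> bytes:
    out = bytearray()
    for b in body:
        if b in (FLAG, ESC):
            out.append(ESC)          # escape byte stuffed before flag/escape bytes
        out.append(b)
    return bytes(out)

def build_frame(protocol: int, payload: bytes) -> bytes:
    body = bytes([ADDRESS, CONTROL]) + protocol.to_bytes(2, "big") + payload
    fcs = b"\x00\x00"                # placeholder; a real frame carries a CRC here
    return bytes([FLAG]) + stuff(body + fcs) + bytes([FLAG])

print(build_frame(0x0021, b"hello \x7e world").hex())   # 0x0021 identifies IP in PPP
```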
Advantages of Point-to-Point Protocol (PPP):
• Widely Supported: PPP is a well-established and widely supported protocol. It is implemented in various operating
systems and networking devices.
• Secure Authentication: PPP supports multiple authentication methods, such as PAP and CHAP, providing a level of
security during the connection establishment phase.
Disadvantages of Point-to-Point Protocol (PPP):
• Lack of Scalability: PPP is primarily designed for point-to-point connections, which may not be scalable for larger
networks.
• Limited Support for Broadcast Traffic: PPP is a point-to-point protocol and does not inherently support broadcast traffic.
• Overhead: PPP introduces some overhead due to encapsulation, error correction, and authentication processes. While
these are necessary for reliable communication, they can consume additional bandwidth.
• Slower Connection Establishment: The process of authentication and negotiation required for establishing a PPP
connection can introduce some delays compared to other protocols.
FDDI
FDDI stands for Fiber Distributed Data Interface. It is a set of ANSI and ISO standards for data transmission on fiber-optic
lines in a Local Area Network (LAN) that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the token
ring protocol.
In addition to covering a large geographical area, an FDDI local area network can support thousands of users. FDDI is
frequently used as the backbone for a Wide Area Network (WAN).
An FDDI network contains two token rings, one serving as a backup in case the primary ring fails.
The primary ring offers up to 100 Mbps capacity. If the secondary ring is not required for backup, it can also carry
data, raising the capacity to 200 Mbps. A single ring can extend over the maximum distance; a dual ring can extend
100 km (62 miles).
Characteristics of FDDI
Allows all stations an equal amount of time to transmit data.
Advantages of FDDI
Fiber optic cables transmit signals over greater distances, up to approximately 200 km.
It offers a higher transmission capacity than copper-based LANs and can handle data rates up to 100 Mbps.
Fiber optic cable does not break as easily as other types of cable.
Disadvantages of FDDI
FDDI is complex, so installation and maintenance require a great deal of expertise.
FDDI is expensive, since fiber optic cable, connectors and concentrators are very costly.
Token Ring
Token ring (IEEE 802.5) is a communication protocol in a local area network (LAN) where all stations are
connected in a ring topology and pass one or more tokens for channel acquisition. A token is a special frame
of 3 bytes that circulates along the ring of stations. A station can send data frames only if it holds a token.
The tokens are released on successful receipt of the data frame.
Token Passing Mechanism in Token Ring
If a station has a frame to transmit when it receives a token, it sends the frame and then passes the token
to the next station; otherwise it simply passes the token to the next station. Passing the token means
receiving the token from the preceding station and transmitting it to the successor station. The data flow is
unidirectional in token passing, as the small simulation sketch below illustrates.
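The mechanism can be simulated in a few lines of Python. The station names, queued frames, and the rule of sending at most one frame per token possession are simplifications for illustration only.

```python
stations = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"], "D": []}
ring = list(stations)                # order in which the token is passed

holder = 0
for _ in range(8):                   # pass the token around the ring twice
    name = ring[holder]
    if stations[name]:
        print(f"{name} holds the token and sends {stations[name].pop(0)}")
    else:
        print(f"{name} has nothing to send and passes the token on")
    holder = (holder + 1) % len(ring)
```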
Advantages of Token Ring:
• Deterministic Access: Token Ring offers deterministic access to the network, meaning each device has a predictable and
guaranteed time slot to transmit data.
• Fairness: Every device on the Token Ring network has an equal opportunity to access the communication medium and
transmit data. This fairness is achieved through the token passing mechanism.
• Stability: Token Ring networks tend to be more stable and less prone to congestion compared to Ethernet networks,
especially in smaller LAN environments.
Disadvantages of Token Ring:
• Complexity: Token Ring networks are generally more complex to set up and maintain than Ethernet networks. The token
passing mechanism and the requirement for devices to be synchronized can add to the complexity.
• Lower Adoption: Token Ring had lower adoption compared to Ethernet, which became the dominant LAN technology
due to its simplicity, scalability, and lower cost.
• Speed Limitations: Token Ring networks faced speed limitations, with typical speeds ranging from 4 Mbps to 16 Mbps.
Ethernet, on the other hand, evolved to support higher speeds, making it more suitable for modern high-bandwidth
applications.
Token Bus
Token Bus (IEEE 802.4) is a standard for implementing token passing over a virtual ring in LANs. The physical medium has a
bus or a tree topology and uses coaxial cables. A virtual ring is created with the nodes/stations and the token is
passed from one node to the next in a sequence along this virtual ring. Each node knows the address of its preceding
station and its succeeding station. A station can only transmit data when it has the token. The working principle of
token bus is similar to Token Ring.
Token Passing Mechanism in Token Bus
A token is a small message that circulates among the stations. If a station has data to transmit when it
receives the token, it sends the data and then passes the token to the next station; otherwise, it simply passes
the token to the next station.
Advantages of Token Bus:
• Deterministic Access: Token Bus, like Token Ring, offers deterministic access to the network. Devices have a predictable
time slot to transmit data, reducing the chance of collisions and improving network performance.
• Fairness: Similar to Token Ring, Token Bus provides fair access to the communication medium, ensuring that each device
has an equal opportunity to transmit data.
• Simplicity: Token Bus networks have a simpler topology compared to some other LAN technologies, making them
relatively easy to set up and manage.
Disadvantages of Token Bus:
• Limited Adoption: Token Bus never gained widespread popularity and adoption, and it remained less common than
Ethernet and Token Ring. As a result, finding compatible network equipment and support for Token Bus can be
challenging.
• Performance: Token Bus networks suffer from similar performance limitations as Token Ring networks. As technology
advanced, Ethernet became more popular due to its higher speeds and scalability.
• Single Point of Failure: In a Token Bus network, if the main bus communication line fails, the entire network can become
inaccessible.
Reservation
Reservation protocols are the class of protocols in which the stations wishing to transmit data broadcast their
intention before the actual transmission. These protocols operate in the medium access control (MAC) sublayer
of the OSI model.
In these protocols, there is a contention period prior to transmission. In the contention period, each station
broadcasts its desire for transmission. Once each station announces itself, one of them gets the desired
network resources based upon any agreed criteria. Since each station has complete knowledge whether
every other station wants to transmit or not before actual transmission, all possibilities of collisions are
eliminated.
Advantages of Reservation:
The main advantage of reservation is that the maximum and minimum access times and data rates on the channel can be
predicted easily, because the time slots and rates are fixed.
Predictable network performance: Reservation-based access methods can provide predictable network performance,
which is important in applications where latency and jitter must be minimized, such as in real-time video or audio
streaming.
Reduced contention: Reservation-based access methods can reduce contention for network resources, as access to
the network is pre-allocated based on reservation requests. This can improve network efficiency and reduce packet
loss.
Quality of Service (QoS) support: Reservation-based access methods can support QoS requirements, by providing
different reservation types for different types of traffic, such as voice, video, or data. This can ensure that high-priority
traffic is given preferential treatment over lower-priority traffic.
Efficient use of bandwidth: Reservation-based access methods can enable more efficient use of available bandwidth,
as they allow for time and frequency multiplexing of different reservation requests on the same channel.
Support for multimedia applications: Reservation-based access methods are well-suited to support multimedia
applications that require guaranteed network resources, such as bandwidth and latency, to ensure high-quality
performance.
Disadvantages of Reservation:
Decrease in capacity and channel data rate under light loads; increase in turn-around time.
Controlled Access
In controlled access, the stations seek information from one another to find which station has the right to send. It allows only one
node to send at a time, to avoid the collision of messages on a shared medium. The three controlled-access methods are
Reservation
Polling
Token Passing
Polling
Polling process is similar to the roll-call performed in class. Just like the teacher, a controller sends a message to each
node in turn.
In this, one acts as a primary station(controller) and the others are secondary stations. All data exchanges must be
made through the controller.
The message sent by the controller contains the address of the node being selected for granting access.
Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data,
usually a “poll reject” (NAK) message is sent back.
Problems include high overhead of the polling messages and high dependence on the reliability of the controller.
Advantages of Polling:
The maximum and minimum access times and data rates on the channel are fixed and predictable.
Disadvantages of Polling:
Since every station has an equal chance of winning in every round, link sharing is biased.
An increase in the turnaround time leads to a drop in the data rates of the channel under low loads.
Efficiency: Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt / (Tt + Tpoll)
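For example, with illustrative values Tpoll = 1 ms and Tt = 9 ms, the efficiency is 9 / (9 + 1) = 0.9, i.e. about 90% of the channel time carries useful data.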
Concentration
In the context of computer networks, "concentration" refers to a networking technique used to consolidate multiple lower-speed connections
into a single higher-speed connection. The primary goal of concentration is to optimize the utilization of network resources and improve overall
network performance and efficiency.
Concentration is often used in scenarios where there is a significant difference in data rates between the devices or networks that need to
communicate. By aggregating multiple lower-speed connections, concentration allows them to share a higher-speed link, thus achieving better
utilization and reducing potential bottlenecks.
• Time Division Multiplexing (TDM): TDM is a technique where multiple lower-speed channels are combined into a single high-speed
channel by dividing the time into discrete slots. Each lower-speed channel is allocated a specific time slot during which it can transmit
data. The high-speed channel cycles through these time slots in a predetermined order, allowing each lower-speed channel to take
turns transmitting its data. This approach is often used in traditional circuit-switched networks.
• Statistical Time Division Multiplexing (STDM): STDM is a more dynamic version of TDM that allows channels to use time slots flexibly
based on demand. Instead of fixed time slots, time slots are allocated on a needs basis. If a lower-speed channel has data to transmit,
it is assigned a time slot. If a channel has no data to send, its time slot remains idle, and other channels can use it if needed.
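The difference between the two schemes can be sketched briefly; the channel names, queue contents, and the 4-slot frame below are invented for illustration (a real STDM frame also carries an address with each slot so the demultiplexer knows which channel the data belongs to).

```python
import copy

queues = {"ch1": ["a1", "a2"], "ch2": [], "ch3": ["c1"], "ch4": ["d1", "d2", "d3"]}

# Fixed TDM: every channel owns one slot per frame, even if it has nothing to send.
q = copy.deepcopy(queues)
tdm_frame = [q[ch].pop(0) if q[ch] else "idle" for ch in q]
print("TDM frame: ", tdm_frame)      # ['a1', 'idle', 'c1', 'd1'] - one slot wasted

# STDM: the four slots go to whichever channels actually have data queued.
q = copy.deepcopy(queues)
stdm_frame = []
for ch in q:
    while q[ch] and len(stdm_frame) < 4:
        stdm_frame.append(q[ch].pop(0))
print("STDM frame:", stdm_frame)     # ['a1', 'a2', 'c1', 'd1'] - no idle slots
```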
Concentration is particularly useful in scenarios where the network has varying levels of traffic demand across different connections or when
there is a need to interconnect networks with different data rates. For example:
• In telecommunications, multiple lower-speed phone lines might be concentrated into a high-speed digital trunk for efficient data
transmission.
• In computer networking, multiple lower-speed Ethernet connections might be aggregated into a higher-speed link, a technique known
as link aggregation or port-channeling.
By concentrating lower-speed connections into a higher-speed link, overall network performance can be improved, and more cost-effective
utilization of resources can be achieved. Concentration techniques help avoid waste and maximize the efficiency of network infrastructure.
Multiple access protocols are a set of protocols operating in the Medium Access Control sublayer (MAC sublayer) of the Open Systems
Interconnection (OSI) model. These protocols allow a number of nodes or users to access a shared network channel. Several data streams
originating from several nodes are transferred through the multi-point transmission channel.
The objectives of multiple access protocols are optimization of transmission time, minimization of collisions and avoidance of crosstalk.
The random access (contention) protocols include −
• ALOHA
• Carrier sense multiple access (CSMA)
• Carrier sense multiple access with collision detection (CSMA/CD)
• Carrier sense multiple access with collision avoidance (CSMA/CA)
Controlled Access Protocols
Controlled access protocols allow only one node to send data at a given time. Before initiating transmission, a node seeks
information from other nodes to determine which station has the right to send. This avoids collision of messages on the shared
channel.
ALOHA is a multiple access protocol for transmission of data via a shared network channel. It operates in the medium access control
sublayer (MAC sublayer) of the open systems interconnection (OSI) model. Using this protocol, several data streams originating from
multiple nodes are transferred through a multi-point transmission channel.
Aloha Rules
Any station can transmit data at any time, without sensing the channel first. Collisions may therefore occur, and data frames
may be lost when multiple stations transmit at the same time.
Pure Aloha
Pure ALOHA: In Pure ALOHA, devices can transmit data whenever they have information to send. If two devices transmit at
the same time and collide, they both detect the collision (through the absence of an acknowledgement) and wait for a random
period of time before retransmitting their data. This random backoff mechanism helps reduce collisions in subsequent
transmission attempts.
Slotted Aloha
Slotted ALOHA: Slotted ALOHA improves the efficiency of the protocol by dividing time into discrete slots or time intervals. Devices
are only allowed to transmit at the beginning of a time slot. If a collision occurs, devices wait until the next time slot to retransmit.
Slotted ALOHA reduces the probability of collisions, as all devices are synchronized to the time slots.
The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G), compared with
S = G * e^(-2G) for pure Aloha.
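A quick numerical check of these formulas, where G is the offered load in frames per frame time:

```python
import math

def pure_aloha(G):    return G * math.exp(-2 * G)     # S = G * e^(-2G)
def slotted_aloha(G): return G * math.exp(-G)         # S = G * e^(-G)

print(round(pure_aloha(0.5), 3))     # ~0.184, the maximum throughput of pure Aloha
print(round(slotted_aloha(1.0), 3))  # ~0.368, the maximum throughput of slotted Aloha
```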
(b) CSMA – In Carrier Sense Multiple Access, a station senses (listens to) the channel before transmitting, which reduces but does not
eliminate collisions. For example, if station A wants to send data, it will first sense the medium. If it finds the channel idle, it will start
sending data. However, before the first bit of data from station A reaches station B (because of propagation delay), station B may also
sense the medium, find it idle, and start sending data. This will result in a collision between the data from stations A and B.
1-persistent: The node senses the channel, if idle it sends the data, otherwise it continuously keeps on checking the medium for being idle and
transmits unconditionally (with 1 probability) as soon as the channel gets idle.
0/Non-Persistent: The node senses the channel, if idle it sends the data, otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
P-persistent: The node senses the medium; if idle, it sends the data with probability p. If the data is not transmitted (probability 1-p), it
waits for some time and checks the medium again; if it is now found idle, it again sends with probability p. This process repeats until the frame
is sent. It is used in Wi-Fi and packet radio systems.
(c) CSMA/CD – CSMA/CD is one such technique where different stations that follow this protocol agree on some terms and collision detection
measures for effective transmission. This protocol decides which station will transmit when so that data reaches the destination without
corruption.
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually monitors the channel to detect a collision.
• If no collision is detected, the station resets its retransmission counter and completes frame transmission.
• If a collision is detected, the station starts the collision resolution procedure:
• The station continues transmission of the current frame for a specified time along with a jam signal, to ensure that all the other stations
detect the collision.
• The station increments its retransmission counter.
• If the maximum number of retransmission attempts is reached, the station aborts transmission.
• Otherwise, the station waits for a backoff period, which is generally a function of the number of collisions (sketched below), and restarts
the main algorithm.
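The backoff period in the last step is classically computed with truncated binary exponential backoff, as in Ethernet. The sketch below assumes the 10 Mbps Ethernet slot time of 51.2 microseconds and the usual limit of 16 attempts; these constants are illustrative here.

```python
import random

SLOT_TIME = 51.2e-6   # seconds; classic 10 Mbps Ethernet slot time

def backoff_delay(collision_count: int) -> float:
    """Wait a random number of slot times in 0 .. 2^min(n,10) - 1 after the n-th collision."""
    if collision_count > 16:
        raise RuntimeError("too many collisions, abort transmission")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME

for n in (1, 2, 3):
    print(f"after collision {n}: wait {backoff_delay(n) * 1e6:.1f} microseconds")
```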
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. Collision detection relies on the sender listening to the channel while it
transmits: if it hears just one signal (its own), the data was sent successfully, but if it hears two signals (its own and the one it collided with),
a collision has occurred. To distinguish between these two cases, the collision must have a significant impact on the received signal. However,
that is not the case in wireless networks, so CSMA/CA is used there instead of collision detection.
Interframe space – Station waits for medium to become idle and if found idle it does not immediately send data (to avoid collision due to
propagation delay) rather it waits for a period of time called Interframe space or IFS. After this time it again checks the medium for being idle.
The IFS duration depends on the priority of station.
Contention Window – It is the amount of time divided into slots. If the sender is ready to send data, it chooses a random number of slots as
wait time, which doubles every time the medium is not found idle. If the medium is found busy, the station does not restart the entire process;
it pauses the backoff timer and resumes it when the channel becomes idle again.
Acknowledgement – The sender re-transmits the data if acknowledgement is not received before time-out.
Channelization:
In this, the available bandwidth of the link is shared in time, frequency, or code among multiple stations so that they can access the
channel simultaneously.
FDMA (Frequency Division Multiple Access):
The available bandwidth is divided into equal bands so that each station can be allocated its own band. Guard bands are added
between the bands so that no two bands overlap, to avoid crosstalk and noise.
FDMA is a data link layer (MAC) protocol that uses FDM (frequency division multiplexing) at the physical layer.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between multiple stations. To avoid collisions, time is
divided into slots and stations are allotted these slots to transmit data. However, there is an overhead of synchronization, as each
station needs to know its time slot. This is resolved by adding synchronization bits to each slot. Another issue with TDMA is
propagation delay, which is resolved by adding guard times between slots.
Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously. There is neither division of
bandwidth nor division of time. For example, if many people in a room all speak at the same time, perfect reception is still possible
as long as each pair of speakers uses a different language. Similarly, data from different stations can be transmitted
simultaneously using different code sequences.
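A tiny Python sketch of the CDMA idea, using made-up 4-chip orthogonal (Walsh-style) codes for three hypothetical stations A, B and C. Each station multiplies its bit by its code, the channel simply adds all the signals, and a receiver recovers one station's bit by taking the inner product with that station's code.

# Each station gets an orthogonal chip sequence.
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def encode(station, bit):
    # Bit 1 is sent as the code itself, bit 0 as its negation.
    sign = 1 if bit == 1 else -1
    return [sign * c for c in codes[station]]

def channel(*signals):
    # The shared medium adds all simultaneous transmissions chip by chip.
    return [sum(chips) for chips in zip(*signals)]

def decode(station, combined):
    # Inner product with the station's code recovers that station's bit.
    dot = sum(x * c for x, c in zip(combined, codes[station]))
    return 1 if dot > 0 else 0

# A sends 1, B sends 0, C sends 1 -- all at the same time on one channel.
combined = channel(encode("A", 1), encode("B", 0), encode("C", 1))
print(decode("A", combined), decode("B", combined), decode("C", combined))  # 1 0 1

Because the codes are mutually orthogonal, each station's contribution cancels out when decoding another station, which is what lets all of them share the channel at once.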
What is Ethernet?
Ethernet is the most widely used LAN technology and is defined under IEEE standards 802.3. The reason behind its wide usability
is that Ethernet is easy to understand, implement, and maintain, and allows low-cost network implementation. Also, Ethernet
offers flexibility in terms of the topologies that are allowed. Ethernet generally uses a bus topology. Ethernet operates in two
layers of the OSI model, the physical layer and the data link layer. For Ethernet, the protocol data unit is a frame, since we mainly
deal with the data link layer. In order to handle collisions, the access control mechanism used in Ethernet is CSMA/CD.
Although wireless networks have replaced Ethernet in many settings, wired networking still relies mostly on Ethernet. Wi-Fi
eliminates the need for cables by enabling users to connect their smartphones or laptops to a network wirelessly. The 802.11ac
Wi-Fi standard offers faster maximum data transfer rates when compared to Gigabit Ethernet. However, wired connections are
more secure and less susceptible to interference than wireless networks. This is the main justification for why so many
companies and organizations continue to use Ethernet.
Network Layer
Network Devices: Network devices, also known as networking hardware, are physical devices that allow
hardware on a computer network to communicate and interact with one another. Ex- Repeater, Hub, Bridge, Switch,
Routers, Gateway, Brouter, and NIC, etc.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same network
before the signal becomes too weak or corrupted to extend the length to which the signal can be transmitted over
the same network. An important point to be noted about repeaters is that they not only amplify the signal but also
regenerate it. When the signal becomes weak, they copy it bit by bit and regenerate it at the original strength. It is a 2-port device.
Features of Repeaters:
• It strengthens the system signals by transmitting signals to the weaker locations.
• The Repeaters can continuously monitor the signals generated between the two LANs.
• Repeaters can help with networking flexibility.
• All of the Repeaters are linked together using an IP site connection network. Any problem in the repeater
network can be quickly resolved by using that IP network.
• Repeaters do not necessitate any additional processing. The only time they need to be investigated is
when performance suffers.
2. Hub – A hub is basically a multiport repeater. It connects multiple wires coming from different branches, for example, the
connector in a star topology that connects different stations. Hubs cannot filter data, so data packets are sent to all connected devices.
Types of Hubs:
Active Hub:- These are the hubs that have their power supply and can clean, boost, and relay the signal along
with the network. It serves both as a repeater as well as a wiring centre. These are used to extend the
maximum distance between nodes.
Passive Hub:- These are the hubs that collect wiring from nodes and power supply from the active hub.
These hubs relay signals onto the network without cleaning and boosting them and can’t be used to extend
the distance between nodes.
Intelligent Hub:- It works like an active hub and includes remote management capabilities. They also provide
flexible data rates to network devices. It also enables an administrator to monitor the traffic passing through
the hub and to configure each port in the hub.
• Advantages / Features:
Centralization − All communication between devices on the network is funneled through a single point, and the hub provides
that central point.
Easy to use − Hubs are simple devices that do not require any configuration or software installation to
manage them.
Cost-effectiveness − Hubs are relatively less expensive compared to other networking devices such as
switches and routers.
Basic network connectivity − A hub provides basic network connectivity by allowing devices to communicate
with each other in a LAN. This is useful for small-size networks, where only a few devices need to be
connected.
Widely available − Hubs are widely available in the market and can be found in many different configurations to suit different
network requirements.
Compatibility − Hubs are compatible with most devices and operating systems, making them a widely applicable networking
solution.
Easy to install − Hubs are easy to install compared to routers and switches.
Limited network traffic − One advantage of hubs is that they limit network traffic in small networks with low
traffic.
• Disadvantages:
Limited Bandwidth − Hubs have a limited amount of bandwidth, which is shared among all the devices
connected to them.
Single Point of Failure − A hub is a central point in a network where all the devices connect. If the hub fails,
the entire network goes down.
Security Vulnerabilities − hubs work at the physical layer of the ISO/OSI model, which means they don’t have
any security features. This makes them vulnerable to attacks.
Broadcast Storms − Because a hub forwards every frame out of all its ports, it can contribute to broadcast storms, where too
many messages are sent at once, clogging up the network and slowing down communication.
3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering
content by reading the MAC addresses of the source and destination. It is also used for interconnecting two LANs
working on the same protocol. It has a single input and single output port, thus making it a 2 port device.
Types of Bridges:
Transparent Bridges:- These are the bridge in which the stations are completely unaware of the bridge’s
existence i.e., whether or not a bridge is added or deleted from the network, reconfiguration of the stations
is unnecessary. These bridges make use of two processes i.e., bridge forwarding and bridge learning.
Source Routing Bridges:- In these bridges, routing operation is performed by the source station and the
frame specifies which route to follow. The host can discover the route by sending a special frame called the
discovery frame, which spreads through the entire network using all possible paths to the destination.
• Advantages of Bridges / Features:
Segmentation and Reduced Collision Domains: Bridges help in segmenting a large network into smaller,
more manageable segments. Each segment operates as its own collision domain, reducing the chances of
collisions and improving overall network performance.
Isolation of Network Traffic: Bridges can isolate traffic within segments, preventing unnecessary broadcast
traffic from affecting the entire network. This isolation improves network efficiency and reduces congestion.
Improved Performance: Bridges can lead to better network performance and reduced latency.
Interconnection of Different Network Types: Bridges can connect different types of network media or
technologies, such as Ethernet and Wi-Fi, allowing seamless communication between different parts of a
network.
Filtering: Bridges can filter and control traffic based on MAC addresses.
• Disadvantages of Bridges:
Limited Scalability: As the network grows, managing bridges and maintaining efficient communication can
become complex.
Complex Network Management: The more bridges you have in a network, the more complex the network
management becomes. Configuring and troubleshooting bridges can require specialized knowledge.
Limited Intelligence: Bridges primarily operate based on MAC addresses and do not have the intelligence to
make decisions based on higher-level information such as IP addresses.
Propagation of Network Issues: While bridges can isolate traffic to some extent, network issues or errors in
one segment can potentially affect other segments connected by the bridge.
Lack of Advanced Features: Bridges are relatively simple devices compared to modern network devices like
switches and routers. They lack features like Quality of Service (QoS) management, VLAN support, and
advanced routing capabilities.
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency (a large number of
ports implies less traffic) and performance. A switch is a data link layer device. The switch can perform error checking
before forwarding data, which makes it very efficient, as it does not forward packets that have errors and forwards
good packets selectively to the correct port only.
Types of Switches:
i. Unmanaged switches: These switches have a simple plug-and-play design and do not offer advanced
configuration options
ii. Smart switches: These switches have features similar to managed switches but are typically easier to
set up and manage. They are suitable for small- to medium-sized networks.
iii. Layer 2 switches: These switches operate at the Data Link layer of the OSI model and are responsible
for forwarding data between devices on the same network segment.
iv. Layer 3 switches: These switches operate at the Network layer of the OSI model and can route data
between different network segments. They are more advanced than Layer 2 switches and are often
used in larger, more complex networks.
v. PoE switches: These switches have Power over Ethernet capabilities, which allows them to supply
power to network devices over the same cable that carries data.
vi. Gigabit switches: These switches support Gigabit Ethernet speeds, which are faster than traditional
Ethernet speeds.
vii. Rack-mounted switches: These switches are designed to be mounted in a server rack and are
suitable for use in data centers or other large networks.
viii. Desktop switches: These switches are designed for use on a desktop or in a small office environment
and are typically smaller in size than rack-mounted switches.
ix. Modular switches: These switches have modular design, which allows for easy expansion or
customization. They are suitable for large networks and data centers.
5. Routers – The first true IP router was developed by Ginny Strazisar at BBN during 1975-1976, and the wireless
router was invented by Vic Hayes in 1997. A router is a device in computer networking that forwards data packets to
their destinations, based on their addresses. The work a router does is called routing; it is somewhat similar to switching,
but a router is different from a switch.
• Routers work with IP packets, meaning that it works at the level of the IP protocol. A router works on the
3rd layer(Network Layer) of the OSI Model.
• A router uses an internal routing table —It’s a list of paths to various network destinations.
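A minimal sketch of how such a routing table can be consulted, using Python's standard ipaddress module. The three routes and next-hop addresses are made up for illustration; the router picks the longest (most specific) matching prefix, which is the behaviour real routers implement.

import ipaddress

# A toy routing table: (destination network, next hop).
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.1"),   # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "192.0.2.2"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.3"),
]

def next_hop(destination):
    """Pick the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    net, hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))   # 192.0.2.3 (most specific match)
print(next_hop("8.8.8.8"))    # 192.0.2.1 (default route)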
Features of Router:
• A router provides high-speed internet connectivity with different types of ports, such as Gigabit Ethernet, Fast
Ethernet, and STM link ports.
• It allows users to configure each port as per their requirements in the network.
• Routers provide redundancy, as they can work in master and slave mode. They allow users to connect
several LANs and WANs.
Types of routers:
i. Broadband Routers: Broadband routers can be used to do several different types of things. They can be
used to connect two different computers or to connect two computers to the internet.
ii. Wireless Routers: Wireless routers connect to your modem and create a wireless signal in your home or
office. So, any computer within range can connect to your wireless router and use your broadband
internet for free.
Advantage of Routers:
i. Router limits the collision domain.
ii. Router can function on LAN & WAN.
iii. Router can connect different media & architectures.
iv. Router can determine best path/route for data to reach the destination.
v. Router can filter the broadcasts.
Disadvantage of Routers:
i. Router is more expensive than Hub, Bridge & Switch.
ii. Router only works with routable protocol.
iii. Routing updates consume bandwidth.
iv. Increase latency due to greater degree of packet filtering.
6. Gateway – A gateway is a device or software component that acts as an entry or exit point between different
networks, facilitating communication, data exchange, and protocol translation. It plays a crucial role in connecting
networks with varying technologies, addressing schemes, and communication protocols.
Advantages of Gateways:
i. Protocol Translation: Gateways enable communication between networks that use different protocols.
ii. Network Bridging: They bridge the gap between diverse networks, connecting local networks to the broader
Internet or other remote networks.
iii. Address Mapping: Gateways perform address translation, ensuring compatibility between differing
addressing schemes.
iv. Enhanced Security: Many gateways incorporate firewall features, providing an extra layer of protection
against unauthorized access and malicious activities.
v. Legacy System Integration: They facilitate the integration of legacy systems with modern networks,
extending the lifespan of older infrastructure.
vi. Application-Specific Services: Certain gateways operate at the application layer, offering specialized services
such as content filtering or protocol-specific enhancements.
vii. Network Optimization: By routing traffic effectively, gateways enhance network performance, reducing
latency and improving overall responsiveness.
Disadvantages of Gateways:
i. Complex Configuration: Gateway setup and management can be intricate due to the diverse tasks they
perform, requiring skilled personnel.
ii. Single Point of Failure: In some designs, a gateway failure can disrupt communication between networks,
emphasizing the need for redundancy.
iii. Latency and Overhead: Gateways may introduce additional latency and processing overhead due to protocol
translation and routing, affecting network performance.
iv. Costly Implementation: Deploying and maintaining gateways can be costly, involving hardware, software,
and personnel expenses.
v. Compatibility Issues: Ensuring compatibility between various protocols and technologies can be challenging,
leading to potential interoperability problems.
vi. Performance Bottlenecks: In scenarios with heavy traffic, gateways can become performance bottlenecks,
affecting data transfer rates.
vii. Security Concerns: While gateways enhance security, misconfigurations or vulnerabilities can expose
networks to risks, requiring careful monitoring.
viii. Limited Intelligence: Gateways may lack advanced routing capabilities compared to dedicated routers,
potentially impacting complex network designs.
Addressing: network addressing is one of the major tasks of Network Layer. Network Addresses are always logical
i.e., these are software-based addresses which can be changed by appropriate configurations.
A network address always points to host / node / server or it can represent a whole network. Network address is
always configured on network interface card and is generally mapped by system with the MAC address (hardware
address or layer-2 address) of the machine for Layer-2 communication.
There are different kinds of network addresses in existence: IP, IPX, AppleTalk.
• Internet address: In computer networks, an "Internet address" generally refers to an IP (Internet Protocol)
address. An IP address is a numerical label assigned to each device connected to a computer network that
uses the Internet Protocol for communication. IP addresses serve two main purposes:
Device Identification: IP addresses uniquely identify devices within a network. They are similar to phone
numbers in a telephone network, helping computers and other devices locate and communicate with each
other.
Routing: IP addresses play a crucial role in routing data packets across networks. When you send data over
the internet, it gets broken down into packets, and these packets are routed from one network node to
another based on the destination IP address.
IPv4 (Internet Protocol version 4): This is the most widely used version of IP addresses. IPv4 addresses are
32-bit numerical addresses expressed as four sets of numbers separated by dots (e.g., 192.168.1.1).
However, due to the limited number of available IPv4 addresses, the world is transitioning to IPv6.
IPv6 (Internet Protocol version 6): IPv6 was introduced to address the shortage of IPv4 addresses. IPv6 uses
128-bit addresses and is represented in hexadecimal format with colons (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334). IPv6 provides a significantly larger pool of addresses, ensuring
the continued growth of the internet.
Private IP Address: Private IP addresses are used within private networks, such as a home or office network.
Devices within the same private network can communicate with each other using these addresses, but they
are not directly accessible from the public internet.
• Classful address: An IP address is an address having information about how to reach a specific host, especially
outside the LAN. An IP address is a 32-bit unique address having an address space of 2^32. Generally, there are
two notations in which the IP address is written, dotted decimal notation and hexadecimal notation.
i. The value of any segment (byte) is between 0 and 255 (both included).
ii. No leading zeroes are allowed in any segment (054 is wrong, 54 is correct).
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and experimental
purposes respectively. The order of bits in the first octet determines the classes of the IP address.
The IPv4 address is divided into two parts:
Network ID
Host ID
The class of IP address is used to determine the bits used for network ID and host ID and the number of total
networks and hosts possible in that particular class. Each ISP or network administrator assigns an IP address to each
device that is connected to its network.
Class A
IP addresses belonging to class A are assigned to the networks that contain a large number of hosts.
Number of network IDs: 2^7 - 2 = 126 (two addresses are subtracted because 0.0.0.0 and 127.x.y.z are special addresses).
Number of host IDs per network: 2^24 - 2 = 16,777,214.
IP addresses belonging to class A range from 1.x.x.x – 126.x.x.x.
Class B
IP addresses belonging to class B are assigned to networks that range from medium-sized to large-sized networks. They range from 128.0.x.x – 191.255.x.x.
Class C
IP addresses belonging to class C are assigned to small-sized networks. They range from 192.0.0.x – 223.255.255.x.
Class D
IP address belonging to class D is reserved for multi-casting. The higher-order bits of the first octet of IP addresses
belonging to class D is always set to 1110. The remaining bits are for the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 –
239.255.255.255.
Class E
IP addresses belonging to class E are reserved for experimental and research purposes. IP addresses of class E range
from 240.0.0.0 – 255.255.255.254. This class doesn’t have any subnet mask. The higher-order bits of the first octet of
class E are always set to 1111.
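The class ranges above can be checked mechanically from the first octet of an address. The following Python sketch is only an illustration of that rule (the function name is made up):

def ipv4_class(address):
    """Return the classful category of a dotted-decimal IPv4 address."""
    first_octet = int(address.split(".")[0])
    if first_octet == 0:
        return "reserved"
    if 1 <= first_octet <= 126:
        return "A"
    if first_octet == 127:
        return "loopback"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D (multicast)"
    return "E (experimental)"

print(ipv4_class("10.5.5.5"))     # A
print(ipv4_class("172.16.0.1"))   # B
print(ipv4_class("224.0.0.5"))    # D (multicast)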
➢ Routing:
Routing in computer networks refers to the process of determining the optimal path for data packets to travel
from a source to a destination across a network. It involves making decisions about the most efficient route that
data should take through the network in order to reach its intended destination. Routing ensures that data
packets are delivered accurately and efficiently while taking into account factors such as network topology,
available paths, traffic conditions, and routing protocols.
Routing Techniques:
Routing techniques are methods and algorithms used to determine the paths that data packets should follow in
order to reach their destinations. Various routing techniques are employed in computer networks to achieve
efficient and reliable data transmission. Here are some common routing techniques:
Static Routing: In static routing, network administrators manually configure the routing tables of routers to
define specific paths for data. This approach is suitable for small networks with stable topologies but requires
manual intervention to update routes in case of network changes.
Dynamic Routing: Dynamic routing involves routers automatically exchanging information about network
conditions and updating their routing tables accordingly. Dynamic routing protocols, such as OSPF (Open Shortest
Path First) and RIP (Routing Information Protocol), enable routers to adapt to changes in the network, making it
suitable for larger and more dynamic networks.
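Link-state dynamic routing protocols such as OSPF compute routes with a shortest-path algorithm (Dijkstra). A minimal sketch over a made-up four-router topology follows; the router names and link costs are purely illustrative, not from the notes.

import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source to every router.
    graph: {node: {neighbour: link_cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

# Hypothetical topology: link costs between routers R1..R4.
topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}
print(dijkstra(topology, "R1"))   # e.g. {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 4}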
➢ Protocols: A network protocol is a set of rules for formatting data so that all connected devices can process
it.
• IP: IP stands for "Internet Protocol," and it is a fundamental set of rules and conventions that govern how
data is sent and received across computer networks, including the internet. IP is a core component of the
TCP/IP (Transmission Control Protocol/Internet Protocol) suite, which is the basis for modern networking and
data communication.
Addressing: IP addresses uniquely identify devices on a network. Just as postal addresses help identify the
location of a recipient, IP addresses identify the location of a device in a network. There are two main versions
of IP addresses: IPv4 and IPv6.
i. IPv4 (Internet Protocol version 4): IPv4 addresses are 32-bit numerical labels separated by dots (e.g.,
192.168.1.1). Due to the limited number of IPv4 addresses, the world is transitioning to IPv6.
ii. IPv6 (Internet Protocol version 6): IPv6 addresses are 128-bit hexadecimal labels separated by colons
(e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334). IPv6 provides a much larger address space to accommodate the
continued growth of connected devices.
Routing and Forwarding: When a device wants to send data to another device, it creates a data packet,
encapsulating the data along with the destination IP address. Routers play a crucial role in IP-based networks.
They examine the destination IP address of incoming packets and make decisions about how to forward them
to their next destination. Routers maintain routing tables that help them determine the best path for data to
reach its destination.
Subnetting: IP addresses are often grouped into subnets to manage network resources efficiently. Subnetting
involves dividing a larger network into smaller segments, each with its own range of IP addresses. Subnet
masks define which portion of the IP address represents the network and which portion represents the host
within that network.
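A quick sketch of subnetting using Python's standard ipaddress module, splitting a /24 network into four /26 subnets; the 192.168.1.0/24 network and the host address are just example values.

import ipaddress

# Divide the 192.168.1.0/24 network into four /26 subnets.
network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=26):
    print(subnet, "netmask:", subnet.netmask,
          "usable hosts:", subnet.num_addresses - 2)

# The subnet mask determines which part of an address is the network portion.
addr = ipaddress.ip_interface("192.168.1.77/26")
print("address", addr.ip, "lies in subnet", addr.network)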
Network Address Translation (NAT): NAT is a technique used to allow multiple devices within a private
network to share a single public IP address when accessing the internet. NAT gateways map internal private IP
addresses to a single public IP address.
Data Delivery: Data packets travel from source to destination by passing through multiple routers and
network devices. Each device examines the packet's destination IP address, determines the next hop, and
forwards the packet accordingly. This process continues until the packet reaches its final destination.
Error Handling: IP provides minimal error detection and correction mechanisms. If a packet is lost or
corrupted during transmission, higher-level protocols (such as TCP) may handle retransmission and error
recovery.
• IPv6: IPv6 (Internet Protocol version 6) is the latest version of the Internet Protocol, which is used for
identifying and locating devices on a network and enabling data communication across the internet. IPv6 was
developed to address the limitations of its predecessor, IPv4 (Internet Protocol version 4), and to ensure the
continued growth of the internet as the number of connected devices expands.
i. Larger Address Space: One of the primary motivations for developing IPv6 was to expand the
available pool of IP addresses. IPv4 uses 32-bit addresses, limiting the total number of unique
addresses to around 4.3 billion. In contrast, IPv6 uses 128-bit addresses, providing an astronomically
larger address space that can accommodate trillions upon trillions of devices.
ii. Hexadecimal Notation: IPv6 addresses are represented in hexadecimal format, making them longer
but more flexible. This format allows for more efficient allocation of addresses and makes it easier to
manage large address spaces.
iii. Address Notation: IPv6 addresses are typically written in eight groups of four hexadecimal digits
separated by colons, such as 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Leading zeros within each
group can be omitted, and consecutive groups of zeros can be represented as "::" (see the sketch after this list).
iv. Autoconfiguration: IPv6 includes features that simplify the configuration of devices on a network.
Devices can automatically generate their own unique IP addresses, reducing the need for manual
configuration or reliance on external services like Dynamic Host Configuration Protocol (DHCP).
v. Security and Privacy: IPv6 introduces improvements to network security and privacy. For example, it
includes support for IPsec (IP Security), which provides encryption and authentication for network
communications.
vi. End-to-End Connectivity: IPv6 promotes end-to-end connectivity by allowing devices to have globally
routable IP addresses. This facilitates direct communication between devices on different networks
without the need for Network Address Translation (NAT).
vii. Multicast Support: IPv6 enhances support for multicast communication, allowing data to be sent to
multiple recipients simultaneously.
viii. Transition Mechanisms: IPv6 transition mechanisms enable the gradual adoption of IPv6 alongside
existing IPv4 networks. These mechanisms facilitate coexistence and interoperability between
devices using different versions of the protocol.
ix. Future-Proofing: IPv6 was designed with the future in mind, anticipating the growth of the internet
and the proliferation of internet-connected devices, including the Internet of Things (IoT).
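A small sketch, again with Python's standard ipaddress module, showing the address-notation rules from item iii above (leading-zero suppression and the "::" shorthand) applied to the example address:

import ipaddress

# The full form and the compressed form refer to the same 128-bit address.
full = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(full)               # 2001:db8:85a3::8a2e:370:7334  (zeros compressed with ::)
print(full.exploded)      # 2001:0db8:85a3:0000:0000:8a2e:0370:7334
print(full.packed.hex())  # the raw 128-bit value as 32 hexadecimal digits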
Transport layer
The transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI model. It is an end-to-end
layer used to deliver messages to a host. It is termed an end-to-end layer because it provides a point-to-point
connection rather than hop-to-hop, between the source host and destination host to deliver the services reliably. The
unit of data encapsulation in the Transport Layer is a segment.
The transport layer takes services from the Application layer and provides services to the Network layer.
At the sender’s side: The transport layer receives data (message) from the Application layer and then performs
Segmentation, divides the actual message into segments, adds the source and destination’s port numbers into the
header of the segment, and transfers the message to the Network layer.
At the receiver’s side: The transport layer receives data from the Network layer, reassembles the segmented data,
reads its header, identifies the port number, and forwards the message to the appropriate port in the Application
layer.
The transport layer also provides congestion control and flow control, which are described later in this section.
TCP :
TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite. It lies between the
Application and Network Layers which are used in providing reliable delivery services. It is a connection-oriented
protocol for communications that helps in the exchange of messages between different devices over a network. The
Internet Protocol (IP), which establishes the technique for sending data packets between computers, works with TCP.
Working of TCP
To make sure that each message reaches its target location intact, the TCP/IP model breaks down the data into small
bundles and afterward reassembles the bundles into the original message on the opposite end. Sending the
information in little bundles of information makes it simpler to maintain efficiency as opposed to sending everything
in one go.
After a particular message is broken down into bundles, these bundles may travel along multiple routes if one route
is jammed but the destination remains the same.
For example, When a user requests a web page on the internet, somewhere in the world, the server processes that
request and sends back an HTML Page to that user. The server makes use of a protocol called the HTTP Protocol. The
HTTP then requests the TCP layer to set the required connection and send the HTML file.
Now, the TCP breaks the data into small packets and forwards it toward the Internet Protocol (IP) layer. The packets
are then sent to the destination through different routes.
The TCP layer in the user’s system waits for the transmission to get finished and acknowledges once all packets have
been received.
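The request/response flow described above can be sketched with Python's standard socket module. Here, example.com and port 80 are placeholders, and the handshake, segmentation into packets, and acknowledgements all happen inside the operating system's TCP implementation.

import socket

# Open a TCP connection (the three-way handshake happens inside connect()),
# send an HTTP request, and read the reply.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while True:
        chunk = sock.recv(4096)   # TCP delivers the byte stream in arbitrary-sized chunks
        if not chunk:             # empty read means the server closed the connection
            break
        response += chunk
print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'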
Features of TCP/IP
Some of the most prominent features of the Transmission Control Protocol are:
1. Segment Numbering System
• TCP keeps track of the segments being transmitted or received by assigning numbers to each and every single
one of them.
• A specific Byte Number is assigned to data bytes that are to be transferred while segments are assigned
sequence numbers.
• Acknowledgment Numbers are assigned to received segments.
2. Connection Oriented
• It means sender and receiver are connected to each other till the completion of the process.
• The order of the data is maintained i.e. order remains same before and after transmission.
3. Full Duplex
• In TCP data can be transmitted from receiver to the sender or vice – versa at the same time.
• It increases efficiency of data flow between sender and receiver.
4. Flow Control
• Flow control limits the rate at which a sender transfers data. This is done to ensure reliable delivery.
• The receiver continually hints to the sender on how much data can be received (using a sliding window)
5. Error Control
• TCP uses checksums, acknowledgements, and retransmissions to detect and recover from corrupted or lost segments.
6. Congestion Control
• TCP also regulates how much data the sender injects into the network at a time, so that the network itself does not become congested.
Advantages
1. It is a reliable protocol.
2. It provides an error-checking mechanism as well as one for recovery.
3. It gives flow control.
4. It makes sure that the data reaches the proper destination in the exact order that it was sent.
5. Open Protocol, not owned by any organization or individual.
Disadvantages
1. TCP is made for Wide Area Networks, thus its size can become an issue for small networks with low
resources.
2. TCP runs several layers so it can slow down the speed of the network.
3. It is not generic in nature, meaning it cannot represent any protocol stack other than the TCP/IP suite; for example, it
cannot work with a Bluetooth connection.
4. The protocol has seen no major modifications since its development around 30 years ago.
TCP Header Fields:
o Source port: It defines the port of the application which is sending the data. This field
contains the source port address, which is 16 bits.
o Destination port: It defines the port of the application on the receiving side. So, this field
contains the destination port address, which is 16 bits.
o Sequence number: This field contains the sequence number of data bytes in a particular
session.
o Acknowledgment number: When the ACK flag is set, then this contains the next sequence
number of the data byte and works as an acknowledgment for the previous data received.
For example, if the receiver receives the segment number 'x', then it responds 'x+1' as an
acknowledgment number.
o HLEN: It specifies the length of the header in 4-byte (32-bit) words. The size of the header
lies between 20 and 60 bytes; therefore, the value of this field lies between 5 and 15.
o Reserved: It is a 4-bit field reserved for future use, and by default, all are set to zero.
o Flags
There are six control bits or flags:
1. URG: It represents an urgent pointer. If it is set, then the data is processed urgently.
2. ACK: If the ACK is set to 0, then it means that the data packet does not contain an
acknowledgment.
3. PSH: If this field is set, then it requests the receiving device to push the data to the
receiving application without buffering it.
4. RST: If it is set, then it resets (aborts) the connection.
5. SYN: It is used to establish a connection between the hosts.
6. FIN: It is used to release a connection, and no further data exchange will happen.
o Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This field is used
for the flow control between the sender and receiver and also determines the amount of
buffer allocated by the receiver for a segment. The value of this field is determined by the
receiver.
o Checksum
It is a 16-bit field. This field is optional in UDP, but in the case of TCP/IP, this field is mandatory.
o Urgentiipointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It defines a value
that will be added to the sequence number to get the sequence number of the last urgent
byte.
o Options
It provides additional options. The optional field is represented in 32-bits. If this field contains
the data less than 32-bit, then padding is required to obtain the remaining bits.
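A minimal Python sketch that unpacks the 20-byte fixed part of a TCP header into the fields described above (options are ignored). The field layout follows the standard TCP header; the sample segment at the end is fabricated purely to exercise the function.

import struct

def parse_tcp_header(segment):
    """Unpack the 20-byte fixed part of a TCP header (no options parsing)."""
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window,
     checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    hlen = (offset_reserved >> 4) * 4      # HLEN field counts 4-byte words
    return {
        "source port": src_port,
        "destination port": dst_port,
        "sequence number": seq,
        "acknowledgment number": ack,
        "header length (bytes)": hlen,
        "flags": {
            "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
            "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
            "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        },
        "window size": window,
        "checksum": checksum,
        "urgent pointer": urgent,
    }

# Example: a made-up SYN segment from port 54321 to port 80.
sample = struct.pack("!HHIIBBHHH", 54321, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(sample)["flags"]["SYN"])   # True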
UDP:
In computer networking, UDP stands for User Datagram Protocol. David P. Reed developed the UDP
protocol in 1980. It is defined in RFC 768, and it is a part of the TCP/IP protocol suite, so it is a standard protocol
over the internet. The UDP protocol allows the computer applications to send the messages in the form of
datagrams from one machine to another machine over the Internet Protocol (IP) network.
In UDP, the receiver does not generate an acknowledgement of packet received and in turn, the sender does not wait
for any acknowledgement of the packet sent. This shortcoming makes the protocol unreliable, but also lighter on
processing.
Features
• UDP is used when acknowledgement of data does not hold any significance.
• UDP is good protocol for data flowing in one direction.
• UDP is simple and suitable for query based communications.
• UDP is not connection oriented.
• UDP does not provide congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is stateless.
• UDP is suitable protocol for streaming applications such as VoIP, multimedia streaming.
UDP Header
The UDP header is as simple as its function: it contains just four 16-bit fields, source port, destination port, length,
and checksum, for a total of 8 bytes.
Limitations
o It provides an unreliable connection delivery service. It does not provide any services of IP except that
it provides process-to-process communication.
o The UDP message can be lost, delayed, duplicated, or can be out of order.
o It does not provide a reliable transport delivery service. It does not provide any acknowledgment or
flow control mechanism. However, it does provide error control to some extent.
Advantages
o It produces a minimal number of overheads.
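A minimal UDP sketch with Python's standard socket module: a datagram is sent with no connection setup and received with no acknowledgement. The loopback address 127.0.0.1 and port 9999 are arbitrary choices for the example.

import socket

# Receiver: bind a datagram socket to a local address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender: no connection, no handshake, just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(4096)   # no acknowledgement is sent back
print(data, "from", addr)

sender.close()
receiver.close()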
What is congestion
A state occurring in network layer when the message traffic is so heavy that it slows down network
response time.
Congestion Control and Traffic Shaping
• Congestion Control is a mechanism that controls the entry of data packets into the network,
enabling a better use of a shared network infrastructure and avoiding congestive collapse.
• Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the mechanism
to avoid congestive collapse in a network.
• The leaky bucket algorithm discovers its use in the context of network traffic shaping or rate-
limiting.
• A leaky bucket execution and a token bucket execution are predominantly used for traffic
shaping algorithms.
• This algorithm is used to control the rate at which traffic is sent to the network and shape the
burst traffic to a steady traffic stream.
• A disadvantage of the leaky-bucket algorithm is the inefficient use of available network resources: because the leak
rate is fixed, large amounts of network resources such as bandwidth can be left unused when traffic is light.
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket,
the outflow is at a constant rate. When the bucket is full of water, additional water entering spills
over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved in leaky bucket
algorithm:
• When host wants to send packet, packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
• Bursty traffic is converted to a uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
The outflow is considered constant when there is any packet in the bucket and zero when it is empty. This
defines that if data flows into the bucket faster than data flows out through the hole, the bucket overflows.
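A small Python sketch of the leaky bucket behaviour just described, with a bursty arrival pattern smoothed to a constant outflow. The bucket size, leak rate, and arrival pattern are made-up values for illustration only.

from collections import deque

def leaky_bucket(arrivals, bucket_size, leak_rate):
    """arrivals[t] = packets arriving in time tick t.
    The bucket queues at most bucket_size packets and sends
    at most leak_rate packets per tick; excess arrivals are dropped."""
    queue = deque()
    sent, dropped = [], 0
    for packets in arrivals:
        for _ in range(packets):
            if len(queue) < bucket_size:
                queue.append(1)
            else:
                dropped += 1                 # bucket overflow
        out = min(leak_rate, len(queue))     # constant outflow
        for _ in range(out):
            queue.popleft()
        sent.append(out)
    return sent, dropped

# A burst of 10 packets in the first tick is smoothed to 2 packets per tick.
print(leaky_bucket([10, 0, 0, 0, 0, 0], bucket_size=8, leak_rate=2))
# -> ([2, 2, 2, 2, 0, 0], 2)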
A disadvantage of the leaky-bucket algorithm is the inefficient use of available network resources. The leak rate is
a fixed parameter, so when the traffic volume is low, large amounts of network resources such as bandwidth are not
being used effectively.
The leaky bucket algorithm enforces output pattern at the average rate, no matter how bursty the traffic is.
So in order to deal with the bursty traffic we need a flexible algorithm so that the data is not lost. One such
algorithm is token bucket algorithm.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet
to be transmitted, it must capture and destroy one token. In figure (B) We see that three of the five packets
have gotten through, but the other two are stuck waiting for more tokens to be generated.
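For comparison, a sketch of the token bucket idea from the figure description above: tokens accumulate at a fixed rate up to the bucket size, and each transmitted packet consumes one token, so saved-up tokens let part of a burst go out immediately. All values here are illustrative.

def token_bucket(arrivals, bucket_size, token_rate):
    """arrivals[t] = packets wanting to go out in tick t.
    token_rate tokens are added per tick (capped at bucket_size);
    each transmitted packet consumes one token, so bursts up to
    bucket_size packets can pass at once, unlike the leaky bucket."""
    tokens = bucket_size        # start with a full bucket of tokens
    waiting = 0
    sent = []
    for packets in arrivals:
        waiting += packets
        out = min(waiting, tokens)
        tokens -= out
        waiting -= out
        sent.append(out)
        tokens = min(bucket_size, tokens + token_rate)   # refill for the next tick
    return sent

# The first 3 packets of a 5-packet burst go out immediately using saved-up tokens.
print(token_bucket([5, 0, 0, 0], bucket_size=3, token_rate=1))   # [3, 1, 1, 0]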
QOS
Quality of Service (QoS) refers to a set of techniques and mechanisms used in computer
networks and telecommunications to manage and prioritize the delivery of different types
of traffic based on their specific requirements. QoS ensures that the network can meet the
needs of various applications and users by allocating resources appropriately and
maintaining consistent performance levels.
Here are some key aspects and components of Quality of Service:
1. Bandwidth Management: QoS allows administrators to allocate specific amounts of
bandwidth to different types of traffic. This helps ensure that critical applications get
the necessary resources, even during periods of high network utilization.
2. Latency Control: QoS techniques aim to reduce latency (delay) for time-sensitive
applications. This is crucial for real-time communication applications, where even
small delays can impact user experience.
3. Packet Loss Prevention: QoS mechanisms can reduce packet loss, which is essential
for maintaining the integrity of voice and video streams.
4. Jitter Control: Jitter is the variation in packet arrival times. QoS can mitigate jitter,
which is important for real-time applications to ensure a consistent flow of data
packets.
5. Traffic Prioritization: Different types of traffic have different requirements. Real-time
applications like VoIP and video conferencing need low latency, while bulk data
transfer can tolerate higher delays. QoS mechanisms prioritize time-sensitive traffic
over less critical traffic.
6. Congestion Management: QoS helps manage network congestion by dropping less
important packets first or using algorithms like Random Early Detection (RED) to
control congestion before it becomes severe.
7. Multimedia Applications: QoS is crucial for multimedia applications, where
consistent performance is essential for good user experience in streaming, online
gaming, video conferencing, and more.
Importance of QoS:
1. Diverse Applications: Networks carry a variety of traffic types, each with specific
requirements.
2. User Expectations: Users expect seamless performance for applications like video
conferencing and real-time communication.
3. Business Criticality: Certain applications are critical for business operations and
require consistent performance.
4. Network Congestion: QoS prevents network congestion and ensures efficient
resource utilization.
Challenges of QoS:
1. Implementation Complexity: Configuring QoS mechanisms requires technical
expertise.
2. Misconfiguration Risks: Incorrect settings could lead to unintended degradation of
service.
3. Increased Overhead: Additional processing for QoS mechanisms might lead to
increased resource usage.
4. Heterogeneous Networks: Variability in QoS implementation across vendors and
technologies
Physical layer
1. Analog Data Transmission:
Analog data transmission involves the continuous representation of information using analog signals. Analog signals
can vary smoothly and infinitely over a range of values. Here are some key points about analog data transmission:
• Signal Representation: Analog signals represent data as continuous waveforms, such as sine waves. The
amplitude, frequency, and phase of the waveform carry the information.
• Mediums: Analog transmission can occur over various mediums, including copper wires, coaxial cables, and
radio frequency (RF) waves.
• Examples: Analog telephony, analog radio broadcasting, and analog television broadcasting are examples of
analog data transmission.
• Advantages: Analog signals can convey information in a more natural way for certain applications, such as
voice communication. They can also cover long distances, although with potential signal degradation.
• Disadvantages: Analog signals are susceptible to noise and interference, which can degrade signal quality.
Also, analog signals are less efficient for data compression and error correction.
2. Digital Data Transmission:
Digital data transmission involves representing information as discrete, binary signals (0s and 1s). Digital signals are
more robust against noise and interference, making them well-suited for modern communication systems. Here's an
overview of digital data transmission:
• Signal Representation: Digital signals represent data as discrete voltage levels or pulses. Each level or pulse
corresponds to a specific binary value (0 or 1).
• Mediums: Digital transmission can occur over various mediums, including copper wires, optical fibers, and
wireless channels.
• Examples: Digital telephony (VoIP), digital radio broadcasting, internet communication, and digital television
broadcasting are examples of digital data transmission.
• Advantages: Digital signals are highly resistant to noise and interference, leading to improved signal quality
and data integrity. They can also be easily compressed, encrypted, and error-corrected.
• Disadvantages: Digital signals require more complex encoding and decoding processes compared to analog
signals. Additionally, certain applications may require digital-to-analog conversion (DAC) and analog-to-digital
conversion (ADC) steps.
1. Transmission: Transmission is the process of sending and propagating analog or digital information signals.
Transmission technology generally refers to physical layer protocol duties like modulation, demodulation, line
coding, and many more.
Analog Transmission:
Analog transmission involves the continuous representation of data using analog signals. Analog signals are
continuous waveforms that can take any value within a range. Examples of analog transmission include:
• Analog Telephone Lines: Traditional landline telephones use analog transmission to carry voice signals over
copper wires.
• Analog Radio Broadcasting: AM (Amplitude Modulation) and FM (Frequency Modulation) radio signals are
transmitted using analog modulation techniques.
• Analog Television Broadcasting: Older television systems used analog transmission to transmit video and
audio signals over the airwaves.
Digital Transmission:
Digital transmission involves representing data using discrete, binary signals (0s and 1s). Digital signals are more
robust against noise and interference, making them suitable for long-distance communication. Examples of digital
transmission include:
• Digital Telephone Lines: Many modern telephone systems use digital transmission, such as ISDN (Integrated
Services Digital Network) or VoIP (Voice over Internet Protocol).
• Digital Radio and TV Broadcasting: Digital broadcasting (e.g., DAB for radio and DVB-T for television)
provides higher quality and more efficient use of spectrum compared to analog broadcasting.
• Internet and Networking: Data transmission over the internet and computer networks primarily uses digital
signals.
• Digital Satellite TV: Signals from satellite TV providers are transmitted digitally, allowing for higher quality
video and audio.
Guided transmission media involve using physical cables or wires to transmit signals. Here are some common types
of guided transmission media:
• Twisted Pair Cable: This is a type of cable that consists of pairs of insulated copper wires twisted together. It
is commonly used for telephone lines and Ethernet networks.
• Coaxial Cable: Coaxial cables have a central conductor surrounded by insulation, a metal shield, and an outer
insulating layer. They are used for cable television and high-speed data transmission.
• Optical Fiber: Optical fibers are thin strands of glass or plastic that carry signals using light pulses. They
provide high data rates and are used in long-distance telecommunications and high-speed internet
connections.
Unguided transmission media involve transmitting signals through the air or space without the use of physical cables.
Here are some common types of unguided transmission media:
• Radio Waves: Radio waves are electromagnetic waves used for wireless communication, including AM and
FM radio, Wi-Fi, and cellular networks.
• Microwaves: Microwaves have higher frequencies than radio waves and are used for point-to-point
communication, such as microwave relay links and satellite communication.
• Infrared: Infrared signals use light waves in the infrared spectrum for short-range communication, often used
for remote control devices and short-range data transfer.
• Light Waves: Visible light or laser beams can be used for communication in free space, such as in free-space
optical communication (FSO) systems.
Circuit switching: time division & space division switch
Circuit switching is a method of establishing a dedicated communication path between two devices for the duration
of their conversation. It was widely used in traditional telephone networks. Within circuit switching, there are
variations such as time division switching and space division switching, which determine how the circuit is
established and managed.
Time division switching involves dividing the available communication channel into time slots and allocating these
slots to different conversations. Each conversation uses its allocated time slot to transmit data. Time division
switching is often used in digital networks. Here's how it works:
• Time Slots: The communication channel is divided into discrete time slots.
• Time Division Multiplexing (TDM): Multiple conversations share the same channel by taking turns
transmitting in their assigned time slots.
• Synchronization: All devices must be synchronized to ensure that they transmit and receive data in the
correct time slots.
• Example: T1 and E1 digital lines used in telecommunication networks utilize time division switching.
Space division switching involves physically separating different communication paths to establish circuits. This
method is often used in analog networks and requires multiple physical paths for each circuit. Here's how it works:
Switching Matrix: A switch is used to establish connections between incoming and outgoing communication lines.
Physical Paths: Separate paths (e.g., wires) are dedicated to each conversation. The switch connects the appropriate
paths to establish a circuit.
Dedicated Paths: Once the connection is established, the paths remain dedicated to that circuit for the duration of
the communication.
Example: Crossbar switches used in older telephone networks are an example of space division switching
TDM
TDM, which stands for Time Division Multiplexing. TDM is a technique used in telecommunications and digital
communication to transmit multiple signals or data streams over a single communication channel.
In TDM, the available transmission time is divided into fixed or variable time slots. Each time slot is assigned to a
different data source or device. These devices take turns transmitting their data within their designated time slots.
This allows multiple signals to share the same physical channel without interfering with each other.
Telephony: TDM is often used in traditional telephone systems to combine multiple voice calls onto a single physical
line.
Digital Communication: TDM is used in digital networks to combine data from multiple sources, such as computers
or sensors, into a single data stream for transmission.
Multiplexing: TDM is a form of multiplexing, where multiple signals are combined into a single signal for transmission
and then demultiplexed at the receiving end to separate the original signals
Video and Audio Broadcasting: TDM is employed in broadcasting to combine video and audio signals for
transmission over radio or television channels.
Networking: TDM can be used in networking to allocate bandwidth among different devices or users.
Embedded Systems: TDM can be used in embedded systems where multiple devices share a common
communication bus.
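A toy Python sketch of the TDM idea: each source gets one fixed slot per frame on the shared line, and idle slots are padded. The one-character slots and the dash padding are purely illustrative.

def tdm_multiplex(sources, slot_size=1):
    """Interleave data from several sources into one stream of frames.
    Each source gets one fixed slot per frame (a simplified sketch)."""
    frames = []
    length = max(len(s) for s in sources)
    for i in range(0, length, slot_size):
        frame = []
        for s in sources:
            chunk = s[i:i + slot_size]
            frame.append(chunk if chunk else "-" * slot_size)  # idle slot padding
        frames.append("".join(frame))
    return frames

# Three "voice channels" sharing one line, one character per time slot.
print(tdm_multiplex(["AAAA", "BBBB", "CC"]))   # ['ABC', 'ABC', 'AB-', 'AB-']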
Major Components of Telephone Network: There are three major components of the telephone network:
• Local loops
• Trunks
• Switching Offices
Local Loops: Local Loops are the twisted pair cables that are used to connect a subscriber telephone to the nearest
end office or local central office. For voice purposes, its bandwidth is 4000 Hz.
Trunks: It is a type of transmission medium used to handle the communication between offices. Through
multiplexing, trunks can handle hundreds or thousands of connections. Mainly transmission is performed through
optical fibers or satellite links.
Switching Offices: A permanent physical link between every pair of subscribers would be impractical. To avoid this, the telephone
company uses switches that are located in switching offices. A switch is able to connect various loops or trunks and
allows a connection between different subscribers.