CHP 2 CNND
Ethernet Protocol:
At the physical layer, Ethernet specifies the hardware components and signaling methods
used to transmit data over the network medium, such as twisted pair copper cables or fiber
optic cables.
At the data link layer, Ethernet defines the frame format, addressing scheme, and rules for
accessing the network medium. Each device connected to an Ethernet network has a unique
MAC (Media Access Control) address, which is used to identify the source and destination of
data packets.
Ethernet uses CSMA/CD (Carrier Sense Multiple Access with Collision Detection) as its
access method, allowing devices to share the network medium by listening for traffic before
transmitting data and detecting collisions if multiple devices attempt to transmit
simultaneously.
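The CSMA/CD behaviour described above can be sketched as a toy decision function for a single transmission attempt (a simplified illustration, not a real MAC implementation; the function name and parameters are hypothetical):

```python
import random

# Toy sketch of CSMA/CD for one transmission attempt: sense the carrier
# before sending, and on collision back off for a random number of slot
# times chosen by binary exponential backoff.
def csma_cd_attempt(channel_busy: bool, collision: bool, attempt: int) -> str:
    if channel_busy:
        return "wait"  # carrier sensed: defer until the medium is idle
    if collision:
        # Back off a random number of slot times in [0, 2^attempt - 1].
        slots = random.randint(0, 2 ** attempt - 1)
        return f"backoff {slots} slots"
    return "sent"

print(csma_cd_attempt(channel_busy=False, collision=False, attempt=1))  # → sent
```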
Overall, the Ethernet protocol provides a reliable and efficient means of communication
within a LAN environment.
Ethernet Standards:
Each Ethernet standard builds upon its predecessors, offering faster speeds, improved
performance, and backward compatibility with older standards to accommodate the evolving
needs of network users.
SLOTTED ALOHA
• It was developed to improve the efficiency of pure ALOHA, since the chance of
collision in pure ALOHA is high.
• The time of the shared channel is divided into discrete time slots.
• Sending of data is allowed only at the beginning of these time slots.
• If a station misses its allowed time slot, it must wait for the next slot. This
reduces the chances of collision.
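The efficiency gain can be made concrete with the standard textbook model: if N stations each transmit independently with probability p in a slot, a slot succeeds when exactly one station transmits. The sketch below (function name assumed for illustration) computes that success probability, which peaks near 1/e (about 0.37) for p = 1/N:

```python
# Probability that a slot carries exactly one successful transmission in
# slotted ALOHA, under the standard model: N stations, each transmitting
# independently with probability p in a given slot.
def slotted_aloha_efficiency(n: int, p: float) -> float:
    # Success requires exactly one of the n stations to transmit.
    return n * p * (1 - p) ** (n - 1)

# Efficiency is maximised at p = 1/N and approaches 1/e (~0.37) as N grows.
print(round(slotted_aloha_efficiency(100, 1 / 100), 2))  # → 0.37
```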
b) Token Passing
• A station is authorized to send data when it receives a special frame called a token.
• Here there is no master node.
• A small, special-purpose frame known as a token is exchanged among the nodes in
some fixed order.
• When a node receives the token, it holds onto the token only if it has some frames
to transmit, otherwise it immediately forwards the token to next node.
• If the node does have frames to transmit when it receives the token, it sends up
to a maximum number of frames and then forwards the token to the next node.
• Token passing is decentralized and highly efficient.
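One circulation of the token over the fixed node order can be sketched as follows (the node names, queue layout, and frame limit are all hypothetical, chosen just to illustrate the hold-then-forward rule):

```python
from collections import deque

# Sketch of one full token circulation: the token visits nodes in fixed
# order; a node holding the token sends up to MAX_FRAMES queued frames,
# then forwards the token to the next node.
MAX_FRAMES = 2

def token_round(queues: dict) -> list:
    """Run one circulation of the token; return the frames sent, in order."""
    sent = []
    for node, frames in queues.items():
        # Hold the token only while there are frames, up to the limit.
        for _ in range(min(MAX_FRAMES, len(frames))):
            sent.append((node, frames.popleft()))
        # The token is then forwarded to the next node (next iteration).
    return sent

queues = {"A": deque(["a1", "a2", "a3"]), "B": deque(), "C": deque(["c1"])}
print(token_round(queues))  # → [('A', 'a1'), ('A', 'a2'), ('C', 'c1')]
```

Note how B forwards the token immediately because its queue is empty, while A stops at the two-frame limit even though it has a third frame waiting.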
3)CHANNELIZATION
Channelization protocols are methods used in telecommunications to divide a transmission
medium, such as a radio frequency spectrum or a network cable, into multiple channels.
These channels allow for the simultaneous transmission of multiple signals, increasing the
efficiency of communication systems. The main channelization protocols are FDMA
(Frequency-Division Multiple Access), TDMA (Time-Division Multiple Access), and
CDMA (Code-Division Multiple Access).
Q)HDLC
Packet Switching:
Circuit Switching:
User Datagram Protocol (UDP) is a simpler, connectionless transport protocol that offers
fewer services compared to TCP. Here are the main services provided by UDP:
1. Quick Messaging: UDP lets you send messages without needing to establish a
connection first. It's like sending a letter without waiting for confirmation that it's
been received.
2. Speedy Delivery: UDP has less "extra stuff" in its messages compared to other
protocols, so it's faster to send and receive data.
3. No Waiting: With UDP, you can keep sending data without waiting for the receiver
to say "got it". This can be handy for things like live video streaming or online
gaming where speed matters more than making sure every piece of data arrives.
4. No Guarantees: Unlike some other protocols, UDP doesn't promise that your data
will always arrive or arrive in order. It's more like shouting across a room – you hope
the other person hears you, but you can't be sure.
5. Broadcasting: UDP allows you to send messages to multiple people at once, which is
useful for things like sending out updates to a group of devices all at once.
6. Simple and Lightweight: UDP is easy to use and doesn't have a lot of complicated
rules, which makes it good for simple tasks where you don't need all the extra features
of other protocols.
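The "quick messaging, no waiting, no guarantees" character of UDP shows up directly in the standard socket API: a minimal sketch over the loopback interface, with no handshake before sending and no acknowledgment after:

```python
import socket

# Minimal sketch of UDP's connectionless messaging using Python's standard
# socket API: no connection setup, no delivery guarantee.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))   # let the OS pick a free port
recv.settimeout(5)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)   # just send; no handshake, no "got it"

data, _ = recv.recvfrom(1024)
print(data)                   # → b'hello'
recv.close()
send.close()
```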
UDP Applications:
User Datagram Protocol (UDP) is a connectionless transport protocol that offers minimal
functionality compared to TCP. Its simplicity and low overhead make it suitable for various
applications where reliability and congestion control are less critical. Here are some common
applications of UDP:
1. Streaming Media: Live video and audio streaming services often use UDP for its
low-latency transmission, where a small amount of packet loss is acceptable.
2. Online Gaming: Multiplayer online games rely on UDP for real-time communication
between players and servers, prioritizing speed over reliability.
3. VoIP (Voice over Internet Protocol): Voice and video calling applications utilize
UDP for its low overhead and reduced latency, making real-time conversations
smoother.
4. DNS (Domain Name System): UDP is employed for DNS queries, translating
domain names into IP addresses swiftly, crucial for web browsing.
5. DHCP (Dynamic Host Configuration Protocol): UDP facilitates DHCP for
assigning IP addresses and network configuration to devices on a network
dynamically.
6. SNMP (Simple Network Management Protocol): UDP is used for SNMP, allowing
network administrators to monitor and manage network devices efficiently.
7. TFTP (Trivial File Transfer Protocol): Simple file transfer tasks, like updating
firmware on network devices, use UDP due to its lightweight and straightforward
nature.
8. NTP (Network Time Protocol): UDP is utilized for time synchronization across
networked devices, ensuring accurate timekeeping for various applications.
9. Syslog: Syslog servers and clients use UDP for transmitting system log messages,
allowing for centralized logging and monitoring in network environments.
10. Network Audio/Video Communication: UDP is used in applications such as video
conferencing and IP-based intercom systems, where real-time audio and video
communication are essential, and small delays or packet loss can be tolerated.
TCP Services:
Transmission Control Protocol (TCP) provides several services to applications and users to
ensure reliable, ordered, and error-checked delivery of data across networks. Here are the
primary services offered by TCP:
1. Reliable Communication: TCP ensures that data sent from one computer reaches the
other computer without errors and in the right order.
2. Connection Setup: Before sending data, TCP establishes a connection between the
sender and receiver to ensure smooth communication.
3. Flow Control: It manages the speed of data transmission so that the sender doesn't
overwhelm the receiver, preventing data loss.
4. Congestion Control: TCP monitors the network to avoid traffic jams and ensures fair
sharing of network resources among users.
5. Error Handling: It checks for errors in transmitted data and asks for retransmission
if any errors are found, ensuring data integrity.
6. Two-Way Communication: TCP allows data to be sent and received simultaneously,
enabling real-time communication.
7. Multiple Connections: It supports multiple connections between computers, letting
different applications run smoothly on the same network.
8. Connection Termination: TCP ends connections gracefully, freeing up resources
and ensuring no data loss.
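By contrast with UDP, the connection setup and reliable byte stream described above are visible in the socket API as well: `connect()` triggers the handshake, and bytes arrive intact and in order. A minimal loopback sketch (server-in-a-thread structure chosen just for a self-contained example):

```python
import socket
import threading

# Sketch of TCP's connection-oriented, reliable byte stream: connect()
# performs the handshake, and the data arrives in order over the stream.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()   # completes the handshake on the server side
    conn.sendall(b"ack")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(5)
client.connect(addr)            # three-way handshake happens here
reply = client.recv(1024)
print(reply)                    # → b'ack'
client.close()
t.join()
server.close()
```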
CONGESTION CONTROL
Causes of Congestion:
1. Network Overload: When the volume of data being transmitted exceeds the capacity
of the network infrastructure (routers, switches, links), congestion occurs. This can
happen during peak usage periods or when the network experiences sudden spikes in
traffic.
2. Slow Network Devices: Network devices such as routers or switches may become
bottlenecks if they're not capable of processing data quickly enough. Outdated or
misconfigured equipment can contribute to congestion.
3. Packet Loss and Retransmissions: When packets are lost or corrupted during
transmission, they need to be retransmitted. This can lead to increased traffic on the
network, contributing to congestion.
4. Buffer Overflow: Network devices use buffers to temporarily store incoming
packets. If these buffers become full, incoming packets are dropped, leading to
congestion and potential packet loss.
5. Network Topology: The layout and design of the network can also contribute to
congestion. For example, if multiple devices are connected to a single switch port
(known as "oversubscription"), it can lead to congestion at that point.
Leaky Bucket Algorithm:
The leaky bucket algorithm is a simple yet effective technique used in computer networks for
traffic shaping and congestion control. Imagine a bucket with a small hole at the bottom.
Water (or data packets) pours into the bucket, and if it fills up too quickly, excess water spills
out of the hole at a constant rate.
In networking, the "bucket" represents a buffer where incoming data packets are temporarily
stored. If packets arrive faster than the network can handle, they're placed in the bucket.
However, if the bucket fills up and overflows, excess packets are discarded or delayed.
1. Incoming Data: Data packets arrive at the network device (router, switch, etc.) and
are placed in the bucket (buffer).
2. Bucket Capacity: The bucket has a maximum capacity, representing the maximum
amount of data the network can handle at any given time.
3. Leaky Bucket Operation: If the bucket is full and incoming packets continue to
arrive, excess packets are discarded or delayed. This prevents the network from
becoming overwhelmed and helps regulate the flow of data.
4. Constant Rate: The leaky bucket empties at a constant rate, ensuring a steady output
of data from the buffer. This helps smooth out bursts of traffic and prevents
congestion.
The leaky bucket algorithm is commonly used for traffic shaping, where data transmission
rates are controlled to match the capacity of the network. It's also used for rate limiting,
ensuring that users or applications don't exceed a certain data rate.
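The four steps above can be sketched as a small simulation (the function name, tick-based arrival model, and parameters are illustrative assumptions, not a production shaper):

```python
from collections import deque

# Sketch of a leaky bucket shaper: arrivals fill a finite buffer; each tick
# drains at most `rate` packets at a constant pace; overflow is dropped.
def leaky_bucket(arrivals, capacity, rate):
    bucket, out, dropped = deque(), [], 0
    for burst in arrivals:                # packets arriving in each time tick
        for pkt in burst:
            if len(bucket) < capacity:
                bucket.append(pkt)
            else:
                dropped += 1              # bucket full: packet discarded
        # Constant-rate output: at most `rate` packets leave per tick.
        out.append([bucket.popleft() for _ in range(min(rate, len(bucket)))])
    return out, dropped

# A burst of 4 packets against capacity 3 and drain rate 1 per tick:
out, dropped = leaky_bucket([[1, 2, 3, 4], [], [5]], capacity=3, rate=1)
print(out, dropped)  # → [[1], [2], [3]] 1
```

The burst of four packets is smoothed into one packet per tick, and the packet that overflowed the bucket is the only loss.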
Token Bucket Algorithm:
The token bucket algorithm is another traffic-shaping technique used in computer
networks. Instead of a leaky bucket that drains at a constant rate, the token
bucket holds tokens that represent permission to send data packets. If tokens are
available, packets can be sent immediately; if not, packets must wait until tokens
become available.
1. Token Generation: Tokens are generated at a fixed rate and added to the token
bucket. Each token represents permission to send one data packet.
2. Packet Transmission: When a data packet needs to be sent, it must "spend" a token
from the bucket. If there are tokens available, the packet can be transmitted
immediately.
3. Token Consumption: If the token bucket is empty and no tokens are available,
packets must wait until tokens are replenished. This helps regulate the flow of data
and prevent congestion.
4. Rate Limiting: By controlling the rate at which tokens are generated and consumed,
the token bucket algorithm effectively limits the rate of data transmission, ensuring
that it doesn't exceed a predefined limit.
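The generate/spend/wait cycle above can be sketched as a small rate limiter (class and method names are illustrative; real implementations vary):

```python
import time

# Sketch of a token bucket rate limiter: tokens accrue at `rate` per second
# up to `capacity`; sending a packet spends one token; an empty bucket means
# the packet must wait. Bursts up to `capacity` are allowed.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # spend one token per packet
            return True
        return False                      # no token: packet must wait

tb = TokenBucket(rate=10, capacity=2)
print([tb.allow() for _ in range(3)])     # burst of 2 allowed, third refused
```

Unlike the leaky bucket, the token bucket permits a short burst (up to `capacity` packets at once) while still enforcing the long-term rate.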
Both the leaky bucket and token bucket algorithms are used for traffic shaping and
congestion control in computer networks, helping to regulate the flow of data and prevent
network overload.
TCP HEADER
1. Source Port (16 bits): Identifies the port number of the application or service
on the sender's device that generated the TCP segment.
2. Destination Port (16 bits): Specifies the port number of the receiving device's
application or service that should handle the TCP segment.
3. Sequence Number (32 bits): Indicates the byte position of the first data byte in the
TCP segment within the entire stream of data being sent from the sender to the
receiver. It helps in ensuring data integrity and ordering during transmission.
4. Acknowledgment Number (32 bits): Used by the receiver to acknowledge receipt of
data from the sender. It contains the next sequence number that the sender expects to
receive, acknowledging all bytes with smaller sequence numbers.
5. Data Offset (4 bits): Specifies the size of the TCP header in 32-bit words. It indicates
where the data begins, allowing variable-length options to be included in the TCP
header.
6. Reserved (6 bits): Reserved for future use. Must be set to zero.
7. Flags (6 bits): Flags control the state and behavior of the TCP connection. Key flags
include:
• URG (Urgent): Indicates urgent data in the TCP segment.
• ACK (Acknowledgment): Acknowledges the receipt of data.
• PSH (Push): Indicates immediate delivery of data.
• RST (Reset): Resets the connection.
• SYN (Synchronize): Initiates a connection.
• FIN (Finish): Terminates the connection.
8. Window Size (16 bits): Specifies the size of the receive window, indicating the
amount of data the sender can transmit before requiring an acknowledgment from the
receiver.
9. Checksum (16 bits): Used for error detection, ensuring data integrity during
transmission. It covers the TCP header and data.
10. Urgent Pointer (16 bits): Only valid if the URG flag is set. Indicates the
offset from the sequence number to the last urgent data byte in the TCP segment.
11. Options: Optional fields that provide additional information or configuration
parameters for the TCP connection. Options can include maximum segment size,
timestamp, window scale factor, etc.
The TCP header, along with the TCP data, forms a TCP segment, which is encapsulated
within an IP packet for transmission over the network. This header provides the necessary
control and addressing information for reliable and ordered data delivery between devices.
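The field layout above can be checked by unpacking the fixed 20-byte portion of a header with Python's `struct` module (the example segment is hand-built for illustration, not captured traffic; the function name is an assumption):

```python
import struct

# Unpack the fixed 20-byte TCP header using the field layout described
# above: ports, sequence/ack numbers, data offset + flags, window,
# checksum, and urgent pointer.
def parse_tcp_header(raw: bytes) -> dict:
    src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", raw[:20]
    )
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": off_flags & 0x3F,        # URG|ACK|PSH|RST|SYN|FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urg,
    }

# A hand-built SYN segment: port 12345 -> 80, data offset 5 (no options).
raw = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["src_port"], hdr["data_offset"], hdr["flags"])  # → 12345 5 2
```

A data offset of 5 means a 20-byte header (five 32-bit words) with no options, and flag value 2 corresponds to the SYN bit alone.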