CHP 2 CNND

ETHERNET PROTOCOLS:

Ethernet Protocol:

Ethernet is a family of computer networking technologies used in local area networks (LANs). The Ethernet protocol defines the rules and procedures for how devices on the network communicate with each other. It operates primarily at the physical layer (Layer 1) and data link layer (Layer 2) of the OSI model.

At the physical layer, Ethernet specifies the hardware components and signaling methods
used to transmit data over the network medium, such as twisted pair copper cables or fiber
optic cables.

At the data link layer, Ethernet defines the frame format, addressing scheme, and rules for
accessing the network medium. Each device connected to an Ethernet network has a unique
MAC (Media Access Control) address, which is used to identify the source and destination of
data packets.

Ethernet uses CSMA/CD (Carrier Sense Multiple Access with Collision Detection) as its
access method, allowing devices to share the network medium by listening for traffic before
transmitting data and detecting collisions if multiple devices attempt to transmit
simultaneously.

Overall, the Ethernet protocol provides a reliable and efficient means of communication
within a LAN environment.

Ethernet Standards:

1. Standard Ethernet (10BASE-T):


• Data Transmission Speed: Up to 10 megabits per second (Mbps)
• Medium: Twisted pair copper cables
• Description: Standard Ethernet, also known as 10BASE-T, was an early
Ethernet standard. It supports data transmission speeds of up to 10
Mbps over twisted pair copper cables. Standard Ethernet is now largely
obsolete and has been replaced by faster Ethernet standards.
2. Fast Ethernet (100BASE-TX):
• Data Transmission Speed: Up to 100 megabits per second (Mbps)
• Medium: Twisted pair copper cables
• Description: Fast Ethernet, also known as 100BASE-TX, was introduced as an
improvement over Standard Ethernet. It offers data transmission speeds of up
to 100 Mbps, providing ten times the speed of Standard Ethernet. Fast
Ethernet became widely adopted and is commonly used in LAN environments.
3. Gigabit Ethernet (1000BASE-T):
• Data Transmission Speed: Up to 1 gigabit per second (Gbps)
• Medium: Twisted pair copper cables
• Description: Gigabit Ethernet, also known as 1000BASE-T, is the next
evolution of Ethernet standards after Fast Ethernet. It supports data
transmission speeds of up to 1 Gbps, offering a significant increase in speed
compared to Fast Ethernet. Gigabit Ethernet is commonly used in modern
LANs, providing high-speed connectivity for various applications.
4. 10-Gigabit Ethernet (10GBASE-T):
• Data Transmission Speed: Up to 10 gigabits per second (10 Gbps)
• Medium: Twisted pair copper cables
• Description: 10-Gigabit Ethernet, also known as 10GBASE-T, offers
even faster data transmission speeds of up to 10 Gbps over twisted
pair cabling. It is suitable for high-performance computing
environments, data centers, and enterprise networks where ultra-fast
connectivity is required. (Still faster standards, such as 40 and 100
Gigabit Ethernet, exist, mostly over fiber.)

Each Ethernet standard builds upon its predecessors, offering faster speeds, improved
performance, and backward compatibility with older standards to accommodate the evolving
needs of network users.
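To get a rough feel for these speeds, the sketch below computes the ideal (overhead-free) time to move a file at each line rate; the 1 GB file size is just an illustrative assumption, not a figure from the standards.

```python
# Rough transfer-time comparison across the Ethernet generations above.
SPEEDS_MBPS = {
    "10BASE-T": 10,
    "100BASE-TX": 100,
    "1000BASE-T": 1_000,
    "10GBASE-T": 10_000,
}

def transfer_seconds(size_bytes: int, speed_mbps: int) -> float:
    """Ideal time to move size_bytes at the given line rate (no overhead)."""
    bits = size_bytes * 8
    return bits / (speed_mbps * 1_000_000)

one_gigabyte = 10**9  # illustrative file size
for name, mbps in SPEEDS_MBPS.items():
    print(f"{name:>10}: {transfer_seconds(one_gigabyte, mbps):8.1f} s")
```

In practice, framing overhead and protocol behavior keep real throughput below these ideal numbers.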

MEDIUM ACCESS PROTOCOLS:

1) Random Access Protocols:


a) ALOHA
• ALOHA is a random access protocol.
• It was originally designed for wireless LANs (ALOHAnet) but is also applicable to any
shared medium.
• Multiple stations can transmit data at the same time, which can lead to collisions
and garbled data.
Collision in ALOHA
• Suppose there are two stations A and B that share a common medium.
• If A transmits its frame (A-frame) and B transmits its frame (B-frame) at the same
time, a collision occurs.
• The colliding frames are lost or corrupted.
Types of ALOHA
PURE ALOHA
• Pure ALOHA allows stations to transmit whenever they have data to send.
• Whenever a station sends data, it waits for an acknowledgement.
• If the acknowledgement doesn't arrive within the allotted time, the station waits for a
random amount of time, called the back-off time, and re-sends the data.
• Since different stations wait for different amounts of time, the probability of further
collision decreases.
• The standard throughput analysis assumes frames of equal (fixed) length; under that
assumption pure ALOHA's throughput is S = G·e^(−2G), which peaks at about 18.4%
of the channel capacity.

SLOTTED ALOHA
• It was developed to improve the efficiency of pure ALOHA, since the chance of
collision in pure ALOHA is high.
• The time of the shared channel is divided into time slots.
• Sending of data is allowed only at the beginning of these time slots.
• If a station misses its time slot, it must wait for the beginning of the next one. This
reduces the chances of collision.
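The classic throughput formulas behind these two variants (S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load) can be checked numerically:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

# Maxima from the standard analysis: G = 0.5 for pure, G = 1 for slotted.
print(f"pure ALOHA peak:    {pure_aloha_throughput(0.5):.3f}")
print(f"slotted ALOHA peak: {slotted_aloha_throughput(1.0):.3f}")
```

The peaks come out to about 0.184 and 0.368, i.e. slotted ALOHA doubles the maximum usable fraction of the channel.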

b) CSMA (CARRIER SENSE MULTIPLE ACCESS)


• To minimize the chance of collision and therefore increase the performance CSMA
method was developed.
• Principle of CSMA: “sense before transmit” or “listen before talk”.
• Carrier may be busy – Transmission may be taking place.
• Carrier may be idle – Transmission is not taking place.
• The possibility of collision still exists because of propagation delay: a station may
sense the medium and find it idle only because the first bit sent by another station
has not yet reached it.
1) CSMA/CD ( CSMA with Collision Detection)
• If two stations sense the channel to be idle and start transmitting simultaneously,
they will both detect the collision almost immediately.
• As soon as the stations detect collisions they should stop transmitting, as the corrupted
frames are of no use.
• Quickly terminating damaged frames saves time and bandwidth.
• This protocol known as CSMA/CD is widely used in LAN in the MAC sublayer.

• COLLISION HANDLING IN CSMA/CD

• At a time t0, a station has finished transmitting its frame.


• Any other station that has frames to send may then attempt to do so; if two or more
stations try to send frames at the same time, there will be a collision.
• A collision can be detected by comparing the power or pulse width of the received
signal with that of the transmitted signal: during a collision, the signal on the
medium differs from what the station itself sent.
• After a station detects a collision, it aborts its transmission, waits for a random
period of time, and tries again, assuming no other station is transmitting in the meantime.
• Therefore the model for CSMA/CD consists of alternating contention and
transmission periods, with idle periods occurring when all stations are quiet.
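The random wait after a collision is typically governed by truncated binary exponential backoff. A minimal sketch, using the classic 10 Mbps Ethernet slot time (the 16-attempt cutoff and the cap at 2^10 slots follow the usual textbook description):

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mbps Ethernet slot time (512 bit times)

def backoff_delay(attempt: int) -> float:
    """Truncated binary exponential backoff: after the n-th collision, wait a
    random number of slot times in [0, 2^min(n, 10) - 1]. After 16 failed
    attempts the frame is discarded."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, 10)
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME_US

random.seed(42)  # deterministic for demonstration only
for n in (1, 2, 3, 10):
    print(f"collision {n:2d}: wait {backoff_delay(n):8.1f} us")
```

Doubling the contention window after each collision is what makes colliding stations spread out in time, so repeated collisions between the same pair become increasingly unlikely.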

2) CSMA/CA (CSMA with Collision Avoidance)


• It is a carrier sensing method in which nodes attempt to avoid collisions by
beginning transmission only after the channel is sensed to be idle.
• It is particularly important for wireless networks, where collision detection (as in
CSMA/CD) is not practical because a wireless transmitter cannot reliably listen
while it is transmitting.
• CSMA/CA can still fail because of the hidden node problem and the exposed
terminal problem.
• Solution: RTS/CTS exchange.
• CSMA/CA is a protocol that operates in the data link layer of the OSI model.
• The access method used by IEEE 802.11 Wi-Fi is CSMA/CA.
2) CONTROLLED ACCESS PROTOCOLS
a) Reservation
• A station needs to make reservation before sending data.
• In each interval, a reservation frame precedes the data frame sent in that interval.
• If there are N stations in the system, each interval begins with exactly N
reservation minislots.
• Each minislot belongs to one station.
• When a station needs to send a data frame, it makes a reservation in its own
minislot.
• The stations that have made reservations can send their data frames after the
reservation frame.

b)Token Passing
• A station is authorized to send data when it receives a special frame called a token.
• There is no master node.
• The token, a small, special-purpose frame, is exchanged among the nodes in
some fixed order.
• When a node receives the token, it holds onto it only if it has some frames
to transmit; otherwise it immediately forwards the token to the next node.
• If the node does have frames to transmit when it receives the token, it sends up
to a maximum number of frames and then forwards the token to the next node.
• Token passing is decentralized and highly efficient.
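One pass of the token around the ring can be sketched as a toy simulation; the node names and the per-turn frame limit below are illustrative assumptions, not part of any real token-ring standard.

```python
from collections import deque

def token_ring_round(queues, max_frames=2):
    """One pass of the token around the ring: each node, in fixed order,
    sends up to max_frames queued frames while it holds the token, then
    passes the token to the next node."""
    sent = []
    for node, q in queues.items():  # dict order = ring order
        for _ in range(min(max_frames, len(q))):
            sent.append(f"{node}:{q.popleft()}")
    return sent

queues = {"A": deque(["f1", "f2", "f3"]), "B": deque(), "C": deque(["f1"])}
print(token_ring_round(queues))  # ['A:f1', 'A:f2', 'C:f1']
print(token_ring_round(queues))  # ['A:f3']
```

Note how B, having nothing to send, effectively forwards the token immediately, and A's third frame must wait for the next round: that per-turn cap is what keeps one busy node from monopolizing the medium.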
3) CHANNELIZATION
Channelization protocols are methods used in telecommunications to divide a transmission
medium, such as a radio frequency spectrum or a network cable, into multiple channels.
These channels allow for the simultaneous transmission of multiple signals, increasing the
efficiency of communication systems. There are several channelization protocols used in
various communication technologies:

1. Frequency Division Multiple Access (FDMA):


• FDMA divides the available frequency spectrum into multiple non-
overlapping frequency bands or channels.
• Each channel is allocated to a single user or communication stream, and only
one user can use each channel at a time.
• FDMA is commonly used in analog systems like traditional radio
broadcasting.
2. Time Division Multiple Access (TDMA):
• TDMA divides the available transmission time into fixed-length time slots.
• Multiple users share the same frequency channel, but each user is assigned a
unique time slot during which they can transmit data.
• TDMA is often used in digital cellular networks, such as GSM (Global System
for Mobile Communications).
3. Code Division Multiple Access (CDMA):
• CDMA assigns a unique spreading code to each user, which is used to spread
the user's signal across the entire available frequency spectrum.
• Multiple users can transmit simultaneously on the same frequency band, but
their signals are distinguished by their unique codes.
• CDMA is widely used in modern cellular networks, such as CDMA2000 and
WCDMA (Wideband CDMA).
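The CDMA idea of separating users by orthogonal spreading codes can be demonstrated with length-4 Walsh codes. This is a simplified sketch that ignores noise, power control, and chip synchronization:

```python
# Length-4 Walsh codes: mutually orthogonal chip sequences (dot product 0).
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits):
    """Each station spreads its data bit (+1 or -1) over its code;
    the shared channel simply adds the chip streams together."""
    return [sum(bits[s] * CODES[s][i] for s in bits) for i in range(4)]

def receive(channel, station):
    """Despread: inner product with the station's own code, normalized
    by the code length, recovers that station's bit."""
    dot = sum(c * k for c, k in zip(channel, CODES[station]))
    return dot // len(CODES[station])

combined = transmit({"A": +1, "B": -1, "C": +1})
print([receive(combined, s) for s in "ABC"])  # [1, -1, 1]
```

Because the codes are orthogonal, each receiver's inner product cancels every other station's contribution, which is how all three bits travel simultaneously on the same band.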

Q) PACKET SWITCHING vs CIRCUIT SWITCHING
Packet Switching:

1. Data is broken into packets for transmission.


2. Packets travel independently across the network.
3. Each packet may take a different path to reach its destination.
4. Efficiently utilizes network resources.
5. Used in modern computer networks like the Internet.

Circuit Switching:

1. Establishes a dedicated connection between sender and receiver.


2. Connection remains dedicated for the entire communication session.
3. Bandwidth is reserved for the duration of the session.
4. Commonly used in traditional telephone networks.
5. Less efficient compared to packet switching.
CHP 4 TRANSPORT AND SESSION LAYER
Session layer design issues:
Session Layer, the fifth layer in the OSI model, is primarily responsible for managing
sessions between communicating entities. While it facilitates reliable communication,
several design issues can arise:

1. Overhead: Establishing sessions can introduce delays and consume resources.


2. State Management: Keeping track of session states can be complex, especially in
distributed systems.
3. Timeouts and Recovery: Dealing with session timeouts and failures requires robust
mechanisms.
4. Scalability: Systems need to handle increasing numbers of sessions efficiently.
5. Multiplexing: Efficiently managing multiple sessions over a single connection is
essential.
6. Security: Ensuring data security during sessions adds complexity.
7. Interoperability: Making different systems work together smoothly can be
challenging.
8. Termination: Ending sessions gracefully without data loss or resource leaks is
important.

USER DATAGRAM PROTOCOL:


UDP Services:

User Datagram Protocol (UDP) is a simpler, connectionless transport protocol that offers
fewer services compared to TCP. Here are the main services provided by UDP:
1. Quick Messaging: UDP lets you send messages without needing to establish a
connection first. It's like sending a letter without waiting for confirmation that it's
been received.
2. Speedy Delivery: UDP has less "extra stuff" in its messages compared to other
protocols, so it's faster to send and receive data.
3. No Waiting: With UDP, you can keep sending data without waiting for the receiver
to say "got it". This can be handy for things like live video streaming or online
gaming where speed matters more than making sure every piece of data arrives.
4. No Guarantees: Unlike some other protocols, UDP doesn't promise that your data
will always arrive or arrive in order. It's more like shouting across a room – you hope
the other person hears you, but you can't be sure.
5. Broadcasting: UDP allows you to send messages to multiple people at once, which is
useful for things like sending out updates to a group of devices all at once.
6. Simple and Lightweight: UDP is easy to use and doesn't have a lot of complicated
rules, which makes it good for simple tasks where you don't need all the extra features
of other protocols.
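The connectionless, no-handshake style described above can be seen with a pair of UDP sockets on the loopback interface. This is a minimal sketch; a real application would add its own error handling and, if needed, its own acknowledgements:

```python
import socket

# Two UDP sockets on loopback: no handshake, no connection state --
# a datagram is simply fired at the receiver's address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", addr)    # no ACK expected, no delivery guarantee

data, peer = receiver.recvfrom(1024)
print(data.decode())
receiver.close()
sender.close()
```

On loopback the datagram arrives in practice, but nothing in UDP itself guarantees it: that is exactly the "no guarantees" trade-off the list above describes.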
UDP Applications:
User Datagram Protocol (UDP) is a connectionless transport protocol that offers minimal
functionality compared to TCP. Its simplicity and low overhead make it suitable for various
applications where reliability and congestion control are less critical. Here are some common
applications of UDP:

1. Streaming Media: Live video and audio streaming services often use UDP for its
low-latency transmission, where a small amount of packet loss is acceptable.
2. Online Gaming: Multiplayer online games rely on UDP for real-time communication
between players and servers, prioritizing speed over reliability.
3. VoIP (Voice over Internet Protocol): Voice and video calling applications utilize
UDP for its low overhead and reduced latency, making real-time conversations
smoother.
4. DNS (Domain Name System): UDP is employed for DNS queries, translating
domain names into IP addresses swiftly, crucial for web browsing.
5. DHCP (Dynamic Host Configuration Protocol): UDP facilitates DHCP for
assigning IP addresses and network configuration to devices on a network
dynamically.
6. SNMP (Simple Network Management Protocol): UDP is used for SNMP, allowing
network administrators to monitor and manage network devices efficiently.
7. TFTP (Trivial File Transfer Protocol): Simple file transfer tasks, like updating
firmware on network devices, use UDP due to its lightweight and straightforward
nature.
8. NTP (Network Time Protocol): UDP is utilized for time synchronization across
networked devices, ensuring accurate timekeeping for various applications.
9. Syslog: Syslog servers and clients use UDP for transmitting system log messages,
allowing for centralized logging and monitoring in network environments.
10. Network Audio/Video Communication: UDP is used in applications such as video
conferencing and IP-based intercom systems, where real-time audio and video
communication are essential, and small delays or packet loss can be tolerated.

TRANSMISSION CONTROL PROTOCOL:

TCP Services:

Transmission Control Protocol (TCP) provides several services to applications and users to
ensure reliable, ordered, and error-checked delivery of data across networks. Here are the
primary services offered by TCP:
1. Reliable Communication: TCP ensures that data sent from one computer reaches the
other computer without errors and in the right order.
2. Connection Setup: Before sending data, TCP establishes a connection between the
sender and receiver to ensure smooth communication.
3. Flow Control: It manages the speed of data transmission so that the sender doesn't
overwhelm the receiver, preventing data loss.
4. Congestion Control: TCP monitors the network to avoid traffic jams and ensures fair
sharing of network resources among users.
5. Error Handling: It checks for errors in transmitted data and asks for retransmission
if any errors are found, ensuring data integrity.
6. Two-Way Communication: TCP allows data to be sent and received simultaneously,
enabling real-time communication.
7. Multiple Connections: It supports multiple connections between computers, letting
different applications run smoothly on the same network.
8. Connection Termination: TCP ends connections gracefully, freeing up resources
and ensuring no data loss.
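By contrast with UDP, a minimal TCP exchange on loopback shows connection setup, reliable byte-stream delivery, and graceful termination in a few lines (a sketch only; a real server would accept connections in a loop):

```python
import socket

# Minimal TCP exchange on loopback: handshake, ordered delivery, teardown.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # three-way handshake happens here
server_side, _ = listener.accept()

client.sendall(b"hello via TCP")         # reliable, ordered byte stream
msg = server_side.recv(1024)
print(msg.decode())

client.close()                           # graceful FIN/ACK teardown
server_side.close()
listener.close()
```

Unlike the UDP case, delivery here is acknowledged and retransmitted under the hood: `sendall` will not silently lose data on an established connection.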

CONGESTION CONTROL
Causes of Congestion:

1. Network Overload: When the volume of data being transmitted exceeds the capacity
of the network infrastructure (routers, switches, links), congestion occurs. This can
happen during peak usage periods or when the network experiences sudden spikes in
traffic.
2. Slow Network Devices: Network devices such as routers or switches may become
bottlenecks if they're not capable of processing data quickly enough. Outdated or
misconfigured equipment can contribute to congestion.
3. Packet Loss and Retransmissions: When packets are lost or corrupted during
transmission, they need to be retransmitted. This can lead to increased traffic on the
network, contributing to congestion.
4. Buffer Overflow: Network devices use buffers to temporarily store incoming
packets. If these buffers become full, incoming packets are dropped, leading to
congestion and potential packet loss.
5. Network Topology: The layout and design of the network can also contribute to
congestion. For example, if multiple devices are connected to a single switch port
(known as "oversubscription"), it can lead to congestion at that point.
Leaky Bucket Algorithm:

The leaky bucket algorithm is a simple yet effective technique used in computer networks for
traffic shaping and congestion control. Imagine a bucket with a small hole at the bottom.
Water (or data packets) pours into the bucket, and if it fills up too quickly, excess water spills
out of the hole at a constant rate.

In networking, the "bucket" represents a buffer where incoming data packets are temporarily
stored. If packets arrive faster than the network can handle, they're placed in the bucket.
However, if the bucket fills up and overflows, excess packets are discarded or delayed.

Here's how the leaky bucket algorithm works:

1. Incoming Data: Data packets arrive at the network device (router, switch, etc.) and
are placed in the bucket (buffer).
2. Bucket Capacity: The bucket has a maximum capacity, representing the maximum
amount of data the network can handle at any given time.
3. Leaky Bucket Operation: If the bucket is full and incoming packets continue to
arrive, excess packets are discarded or delayed. This prevents the network from
becoming overwhelmed and helps regulate the flow of data.
4. Constant Rate: The leaky bucket empties at a constant rate, ensuring a steady output
of data from the buffer. This helps smooth out bursts of traffic and prevents
congestion.

The leaky bucket algorithm is commonly used for traffic shaping, where data transmission
rates are controlled to match the capacity of the network. It's also used for rate limiting,
ensuring that users or applications don't exceed a certain data rate.
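The bucket behavior described above can be simulated tick by tick; the capacity and leak rate below are arbitrary illustrative numbers:

```python
def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate one tick at a time: arrivals[t] packets pour in, the bucket
    holds at most `capacity` packets (excess is dropped), and up to
    `leak_rate` packets drain out per tick at a constant rate."""
    level, sent, dropped = 0, [], 0
    for n in arrivals:
        accepted = min(n, capacity - level)
        dropped += n - accepted          # overflow: packets discarded
        level += accepted
        out = min(leak_rate, level)      # constant-rate drain
        level -= out
        sent.append(out)
    return sent, dropped

# A burst of 8 packets, then silence: output is smoothed to 2 per tick.
sent, dropped = leaky_bucket([8, 0, 0, 0], capacity=6, leak_rate=2)
print(sent, "dropped:", dropped)  # [2, 2, 2, 0] dropped: 2
```

The burst is absorbed by the buffer and released at the constant leak rate, which is exactly the traffic-shaping effect described above; the two dropped packets show the overflow case.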

Token Bucket Algorithm:

The token bucket algorithm is another traffic shaping technique used in computer networks.
Instead of a leaky bucket with a constant drain rate, the token bucket contains tokens that
represent permission to send data packets. If there are tokens available, packets can be sent
immediately. If not, packets must wait until tokens become available.

Here's how the token bucket algorithm works:

1. Token Generation: Tokens are generated at a fixed rate and added to the token
bucket. Each token represents permission to send one data packet.
2. Packet Transmission: When a data packet needs to be sent, it must "spend" a token
from the bucket. If there are tokens available, the packet can be transmitted
immediately.
3. Token Consumption: If the token bucket is empty and no tokens are available,
packets must wait until tokens are replenished. This helps regulate the flow of data
and prevent congestion.
4. Rate Limiting: By controlling the rate at which tokens are generated and consumed,
the token bucket algorithm effectively limits the rate of data transmission, ensuring
that it doesn't exceed a predefined limit.
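A tick-by-tick sketch of the token bucket (the rate and burst values are illustrative) shows the key difference from the leaky bucket: a saved-up burst may go out at once, up to the bucket size, before output settles to the token rate.

```python
def token_bucket(arrivals, rate, burst):
    """Tokens accrue at `rate` per tick, capped at `burst`; sending a packet
    spends one token, and packets without tokens wait in a queue."""
    tokens, queue, sent = burst, 0, []   # start with a full bucket
    for n in arrivals:
        tokens = min(burst, tokens + rate)  # replenish, capped at bucket size
        queue += n
        out = min(queue, tokens)            # one token per packet sent
        tokens -= out
        queue -= out
        sent.append(out)
    return sent

# A full bucket lets 4 packets burst out at once; after that, output is
# limited to the token generation rate of 1 per tick.
print(token_bucket([5, 5, 0, 0], rate=1, burst=4))  # [4, 1, 1, 1]
```

This burst allowance is why token buckets are preferred when short bursts are acceptable but the long-term average rate must be bounded.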
Both the leaky bucket and token bucket algorithms are used for traffic shaping and
congestion control in computer networks, helping to regulate the flow of data and prevent
network overload.

RPC (REMOTE PROCEDURE CALL)


The key points about the Remote Procedure Call (RPC) session layer protocol are:

1. Functionality: RPC allows a program on one computer to execute a subroutine on a
remote server as if it were local, abstracting network communication complexities.
2. Components: Involves a client initiating a request and a server executing the
procedure, with data exchange managed by the RPC protocol.
3. Marshalling and Unmarshalling: Parameters for procedure calls are converted into
a suitable format for transmission (marshalling) and converted back upon reception
(unmarshalling).
4. Connection Management: Handles establishment, maintenance, and termination of
connections between client and server, ensuring synchronization.
5. Error Handling: Incorporates mechanisms for detecting and handling
communication errors gracefully, including network failures and server errors.
6. Security: Supports security features like authentication and encryption to ensure data
confidentiality and integrity.
7. Concurrency Control: Supports concurrent execution of multiple remote procedures,
managing resources efficiently while ensuring data consistency.
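The marshalling/unmarshalling step (point 3) can be illustrated with a toy JSON-based pair of client and server stubs. The procedure names here are made up for illustration, and a real RPC system would ship the marshalled bytes over a network connection rather than call the handler directly:

```python
import json

# A toy dispatcher standing in for the remote server's procedure table.
PROCEDURES = {"add": lambda a, b: a + b, "upper": str.upper}

def marshal_call(proc, *args):
    """Client stub: pack the procedure name and arguments into wire bytes."""
    return json.dumps({"proc": proc, "args": list(args)}).encode()

def handle_request(wire_bytes):
    """Server stub: unmarshal the request, dispatch to the local procedure,
    and marshal the reply for the trip back."""
    req = json.loads(wire_bytes)
    result = PROCEDURES[req["proc"]](*req["args"])
    return json.dumps({"result": result}).encode()

reply = handle_request(marshal_call("add", 2, 3))
print(json.loads(reply)["result"])  # 5
```

The caller never touches the bytes directly: the stubs hide the conversion, which is precisely the abstraction point 1 describes.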

TCP HEADER
1. Source Port (16 bits): Identifies the port number of the application or service on
the sender's device that generated the TCP segment.
2. Destination Port (16 bits): Specifies the port number of the receiving device's
application or service that should handle the TCP segment.
3. Sequence Number (32 bits): Indicates the byte position of the first data byte in the
TCP segment within the entire stream of data being sent from the sender to the
receiver. It helps in ensuring data integrity and ordering during transmission.
4. Acknowledgment Number (32 bits): Used by the receiver to acknowledge receipt of
data from the sender. It contains the next sequence number that the sender expects to
receive, acknowledging all bytes with smaller sequence numbers.
5. Data Offset (4 bits): Specifies the size of the TCP header in 32-bit words. It indicates
where the data begins, allowing variable-length options to be included in the TCP
header.
6. Reserved (6 bits): Reserved for future use. Must be set to zero.
7. Flags (6 bits): Flags control the state and behavior of the TCP connection. Key flags
include:
• URG (Urgent): Indicates urgent data in the TCP segment.
• ACK (Acknowledgment): Acknowledges the receipt of data.
• PSH (Push): Indicates immediate delivery of data.
• RST (Reset): Resets the connection.
• SYN (Synchronize): Initiates a connection.
• FIN (Finish): Terminates the connection.
8. Window Size (16 bits): Specifies the size of the receive window, indicating the
amount of data the sender can transmit before requiring an acknowledgment from the
receiver.
9. Checksum (16 bits): Used for error detection, ensuring data integrity during
transmission. It covers the TCP header and data.
10. Urgent Pointer (16 bits): Only valid if the URG flag is set. Indicates the offset from
the sequence number of the last urgent data byte in the TCP segment.
11. Options: Optional fields that provide additional information or configuration
parameters for the TCP connection. Options can include maximum segment size,
timestamp, window scale factor, etc.

The TCP header, along with the TCP data, forms a TCP segment, which is encapsulated
within an IP packet for transmission over the network. This header provides the necessary
control and addressing information for reliable and ordered data delivery between devices.
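The fixed 20-byte portion of the header can be packed and unpacked with Python's struct module, following the field layout above; the field values below are arbitrary illustrative numbers:

```python
import struct

# Build a 20-byte TCP header (no options) matching the layout described above.
header = struct.pack(
    "!HHLLBBHHH",
    12345,        # source port
    80,           # destination port
    1000,         # sequence number
    2000,         # acknowledgment number
    5 << 4,       # data offset = 5 words (20 bytes); low reserved bits zero
    0b000010,     # flags URG|ACK|PSH|RST|SYN|FIN: only SYN set
    65535,        # window size
    0,            # checksum (left zero in this sketch)
    0,            # urgent pointer
)

src, dst, seq, ack, off_res, flags, win, cksum, urg = struct.unpack(
    "!HHLLBBHHH", header
)
print(f"src={src} dst={dst} seq={seq} "
      f"data_offset={off_res >> 4} syn={(flags >> 1) & 1}")
```

The `!` prefix forces network (big-endian) byte order, and the data offset of 5 words tells a parser that the payload begins immediately after byte 20.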
