Worksheet I Answer Data

The document discusses circuit switching versus packet switching for data communication. It provides details on the steps involved in circuit switching, and reasons for why circuit switching is not suitable for computer-to-computer traffic, including inefficient resource allocation and lack of flexibility. The document also compares virtual circuits and datagrams in terms of header information and delivery guarantees.

Uploaded by

Abraham Gadissa

Worksheet I

Exercise/review questions on introduction section


1. Why is circuit switching not suitable for computer-to-computer traffic? What are the three steps that are required for data communication using circuit switching?
Ans:
Circuit switching is not suitable for computer-to-computer traffic primarily because it is designed to establish a dedicated communication channel between two endpoints for the duration of a session. Computer-to-computer traffic is typically transmitted in short bursts and does not need a dedicated connection for the whole exchange.
Additionally, circuit switching reserves a fixed amount of bandwidth for each connection, which leads to inefficient use of network resources when traffic demand varies.
The three steps involved in data communication using circuit switching are as follows:
1. Establishment of the circuit: In this step, a dedicated path or circuit is established
between the source and destination nodes. This involves reserving the necessary network
resources, such as bandwidth, along the entire route to ensure uninterrupted
communication.
2. Data transmission: Once the circuit is established, data is transmitted between the source
and destination nodes.
The data flows as a continuous stream over the dedicated path; unlike packet switching,
it is not divided into independently routed packets.
3. Circuit teardown: After the data transmission is complete, the circuit is released or torn
down. The network resources that were reserved for the circuit are made
available for other connections.
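The three steps above can be sketched as a toy Python class (purely illustrative; the class and method names are invented, and real circuit switching is performed by network signaling, not application code):

```python
class Circuit:
    """Toy model of the three phases of circuit switching."""

    def __init__(self, source, destination, bandwidth_kbps):
        self.source = source
        self.destination = destination
        self.bandwidth_kbps = bandwidth_kbps  # reserved for the whole session
        self.established = False

    def establish(self):
        # Phase 1: set up a dedicated path and reserve resources along it.
        self.established = True

    def transmit(self, data):
        # Phase 2: send data as a stream over the dedicated circuit.
        if not self.established:
            raise RuntimeError("circuit not established")
        return len(data)  # bytes carried

    def teardown(self):
        # Phase 3: release the circuit and its reserved resources.
        self.established = False

circuit = Circuit("A", "B", bandwidth_kbps=64)
circuit.establish()
sent = circuit.transmit(b"hello")
circuit.teardown()
print(sent, circuit.established)  # 5 False
```

Note that the bandwidth stays reserved between establish() and teardown() even if transmit() is never called, which is exactly the inefficiency described above.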
It is not suitable for computer-to-computer traffic due to several reasons:
1. Resource Allocation: Circuit switching requires the allocation of dedicated resources, such as
bandwidth, for the entire duration of a communication session. This means that resources
remain reserved even during periods of inactivity, leading to inefficient utilization. In
computer-to-computer traffic, where data transfer can be bursty and sporadic, this static
allocation of resources is not efficient.
2. Scalability: Circuit switching is not easily scalable to accommodate a large number of
simultaneous connections. Each connection requires a dedicated circuit, and as the number of
connections increases, the infrastructure needed to support them becomes more complex and
costly. Computer networks typically have a large number of hosts, making circuit switching
impractical at the scale of connections required.
3. Flexibility and Dynamic Routing: Circuit switching is designed for static connections, where
the path between the sender and receiver remains fixed for the duration of the communication
session. In computer networks, data traffic often requires dynamic routing, where packets can
take different paths to reach their destination based on network conditions and congestion.
Circuit switching does not easily support such dynamic routing.
2. With respect to the sharing of links, which of circuit switching or packet switching is more
suitable?
Ans:
In the context of sharing links,
Packet switching is generally more suitable, especially for scenarios where multiple
connections need to share the available bandwidth efficiently.
It allows for more flexible and scalable communication, making it well suited for data-intensive
applications such as internet browsing, file transfers, video streaming, and real-time
communication services. However, if the specific requirements prioritize guaranteed bandwidth
and low latency, circuit switching may be a more appropriate choice.
3. Between virtual circuits and datagrams, which approach requires less information in the packet
header?
Ans:
In terms of the amount of information required in the packet header, the virtual circuit approach
requires less information than the datagram approach.
In a datagram-based network, each packet (also known as a datagram) is treated as an
independent entity.
Since each packet is handled independently, every packet must carry the full addressing
information (source and destination addresses) needed to route it to its destination.
In a virtual circuit-based network, a connection is established before data transfer occurs;
after setup, each packet only needs to carry a short virtual-circuit identifier (VCI) instead of
full addresses, so its header is smaller.
4. Which of virtual circuit and datagram makes better utilization of the links?
Ans:
Virtual circuit and datagram networks utilize links differently.
In a virtual circuit network, a fixed path is chosen before data transmission begins, and every
packet of the connection follows that path.
If resources such as bandwidth are reserved along the path, they remain tied to the circuit even
when it is idle, which can leave link capacity unused.
In a datagram network, such as one based on the Internet Protocol (IP), each packet is treated
independently and may take a different path to its destination.
Links carry packets from many sources on demand (statistical multiplexing), so no capacity sits
reserved for an idle connection.

Overall, the datagram approach generally makes better utilization of the links, because link
capacity is shared dynamically among all traffic rather than being tied to pre-established
circuits. Virtual circuits, in exchange, offer more predictable per-connection service. Both
approaches have their own advantages and are suitable for different types of applications and
scenarios.
5. Which of virtual circuit and datagram will guarantee ordered delivery of packets in the
absence of any errors?
ANS:

 In the absence of any errors, a virtual circuit network guarantees ordered delivery of
packets.
 In a virtual circuit network, a dedicated path is established between the sender and
receiver before any data transmission takes place, so all packets follow the same route
and arrive in the order in which they were sent.
o Therefore, if ordered delivery of packets is a requirement, a virtual circuit
network is more suitable, as it provides a connection-oriented and deterministic
approach to packet delivery.

Which layers of the OSI model are the host-to-host layers?

Ans:-
The host-to-host layers on the OSI model are Layer 4 (Transport Layer) and Layer 5
(Session Layer).
1. Layer 4 - Transport Layer:
This layer is responsible for end-to-end communication between hosts. It ensures the
reliable delivery of data by establishing connections, segmenting data into smaller units
(if needed), and handling flow control and error recovery. Examples of protocols at this
layer include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
2. Layer 5 - Session Layer:
This layer manages the session or connection between two hosts. It establishes,
maintains, and terminates connections, allowing processes on different hosts to
communicate with each other. It also handles session synchronization and checkpointing.
However, it is important to note that the Session Layer is often combined with the
Transport Layer in many modern protocol stacks, such as the TCP/IP model.
6. What is the responsibility of the network layer in the OSI model?
The network layer, which is Layer 3 in the OSI model, is responsible for the routing and
forwarding of data packets across different networks. Its primary responsibilities include:
1. Logical Addressing: The network layer assigns logical addresses (IP addresses) to devices
on the network. These addresses uniquely identify each device and are used for the delivery
of data packets.
2. Routing: The network layer determines the optimal path for data packets to travel from the
source to the destination across different networks. It uses routing algorithms and maintains
routing tables to make routing decisions.
3. Packet Forwarding: Once the path is determined, the network layer is responsible for
forwarding the data packets from one network to another until they reach their destination.
It encapsulates the data packets into network-layer packets (IP packets) and adds necessary
routing information.
4. Fragmentation and Reassembly: The network layer may fragment large data packets into
smaller units to accommodate the maximum transmission unit (MTU) of the underlying
networks. At the destination, it reassembles the fragmented packets to reconstruct the
original data.
5. Quality of Service (QoS): The network layer can implement QoS mechanisms to prioritize
certain types of traffic, ensuring that critical data, such as real-time voice or video, receives
appropriate bandwidth and latency guarantees.
6. Address Resolution: The network layer may provide address resolution services, such as
the Address Resolution Protocol (ARP) in TCP/IP, which maps IP addresses to physical
addresses (MAC addresses) for communication within a local network.
Overall, the network layer plays a crucial role in enabling communication between different
networks by handling addressing, routing, and forwarding of data packets.
7. What do the various layers in the simplified TCP/IP protocol stack correspond to
with respect to the OSI seven-layer model?
Ans:-
The simplified TCP/IP protocol stack has four layers, which correspond to the OSI seven-layer
model roughly as follows:
1. Application Layer (TCP/IP):
This layer corresponds to the Application (Layer 7), Presentation (Layer 6), and Session
(Layer 5) layers of the OSI model. Protocols such as HTTP, FTP, SMTP, and DNS
operate here.
2. Transport Layer (TCP/IP):
This layer corresponds to the Transport Layer (Layer 4) of the OSI model. It provides
end-to-end communication between hosts; examples of protocols at this layer are TCP
(Transmission Control Protocol) and UDP (User Datagram Protocol).
3. Internet Layer (TCP/IP):
This layer corresponds to the Network Layer (Layer 3) of the OSI model. It handles
logical addressing, routing, and forwarding of packets; its main protocol is IP.
4. Network Access (Link) Layer (TCP/IP):
This layer corresponds to the Data Link (Layer 2) and Physical (Layer 1) layers of the
OSI model. It handles framing, physical (MAC) addressing, and transmission over the
physical medium.
These layers work together to provide reliable and efficient communication between hosts over a
network.
Exercise questions on IP layer

1. How many bits are there in the IP address?

SOL

The most common version of the IP address used today is the IPv4 (Internet Protocol version 4)
address.

 An IPv4 address is a 32-bit binary number, typically represented in a human-readable
format as four sets of decimal numbers separated by periods (e.g., 192.168.0.1).
 Each decimal number represents 8 bits, and when you add up the bits from each set, you
get a total of 32 bits.

However, it's worth noting that there is also a newer version of the IP address called IPv6 (Internet
Protocol version 6), which is designed to address the limited number of available addresses in
IPv4. An IPv6 address is 128 bits long, represented as eight sets of four hexadecimal digits
separated by colons (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334). IPv6 provides a
significantly larger address space compared to IPv4, allowing for a vast number of unique
addresses.
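Python's standard ipaddress module can confirm these widths (its max_prefixlen attribute is the address length in bits):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.0.1")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

# max_prefixlen equals the width of the address in bits.
print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128
```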

2. What does the IP address specify?

SOL:

An IP address (Internet Protocol address)

 Is a numerical label assigned to each device connected to a computer network that
uses the Internet Protocol for communication.

o It serves two main purposes:

1. Network Identification:

 An IP address uniquely identifies a device and its location within a network.
 It specifies the network to which the device is connected and allows other devices
on the network to identify and communicate with it.
 IP addresses form the foundation of the routing system that enables data packets to
be transmitted across networks and reach their intended destinations.

2. Host Identification:

 In addition to identifying the network, an IP address also identifies an individual
device (host) within that network.
 It distinguishes one device from others on the same network, allowing data to be
sent specifically to that device.
 This enables devices to receive and process data intended for them.

IP addresses are fundamental to the functioning of the Internet and are used for various purposes,
such as routing data packets, establishing connections, and enabling communication between
devices across networks.

3. What are the minimum and maximum header sizes of an IP packet?

SOL

In the IPv4 protocol,

 The minimum header size of an IP packet is 20 bytes (160 bits), and
 The maximum header size is 60 bytes (480 bits).

The header size can vary depending on the options present in the IP packet.

The standard IPv4 header is 20 bytes long and consists of several fields, including the version,
total length, time to live (TTL), protocol, source and destination IP addresses, header checksum,
and other control flags. This basic header structure is used in most IPv4 packets.

However, the IPv4 header can be extended with optional fields called IP options. These options
provide additional functionality and flexibility but are not present in every IP packet. If IP
options are included, they increase the overall header size. The maximum header size of 60 bytes
accounts for the maximum possible size of IP options.

It's important to note that with the introduction of IPv6, the IP header structure has changed
significantly, and IPv6 uses a fixed header size of 40 bytes (320 bits) for all packets. IPv6
eliminates the need for IP options by incorporating optional extension headers that can be added
after the fixed header.

4. What is the purpose of the “time to live” field in the IP header?

SOL

The "Time to Live" (TTL) field in the IP header serves an important purpose in the IP protocol.
Its primary function is:

 To prevent packets from circulating indefinitely in a network, and
 To ensure the efficient use of network resources.
Here's a breakdown of the purpose and functionality of the TTL field:

1. Hop Limit:
 The TTL field is sometimes referred to as the "hop limit" in IPv6.
 It represents the maximum number of network hops (routers) that an IP packet can
traverse before being discarded.
 Each time a router forwards the packet, it decrements the TTL value by one.
 If the TTL reaches zero, the packet is considered expired and is typically discarded.
2. Preventing Infinite Loops:
 The TTL field helps prevent packets from getting trapped in routing loops or
endlessly circulating through a network.
 If a routing loop occurs, where a packet continuously visits the same set of routers
without reaching its destination, the TTL field ensures that the packet will
eventually expire and be dropped from the network.
3. Time-Based Discard:
 The TTL field can also be used to implement time-based discarding of packets.
 In some scenarios, packets with a high TTL value may be given priority over those
with a low TTL value.
 This prioritization can be used to optimize network performance, ensure timely
delivery of packets, or enforce quality of service (QoS) policies.

4. Path Tracing and Troubleshooting:
 The TTL field is useful for network diagnostics and troubleshooting.
 By setting the initial TTL value to a known value (e.g., 1), a sender can track the
path taken by the packet and receive ICMP Time Exceeded messages from routers
along the way, allowing for path tracing and identifying potential issues.

Overall, the TTL field in the IP header is a mechanism to limit the lifespan of IP packets and
prevent them from circulating indefinitely. It helps ensure efficient routing, prevent network
congestion and aid in network troubleshooting.
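The hop-limit behavior can be sketched in a few lines of Python (a simplification; real routers also send an ICMP Time Exceeded message back to the source when they drop the packet):

```python
def forward(ttl):
    """Model one router hop: decrement TTL, drop the packet at zero."""
    ttl -= 1
    if ttl == 0:
        return None  # packet expired; the router discards it
    return ttl

ttl = 3
hops = 0
while ttl is not None:
    ttl = forward(ttl)
    hops += 1
print(hops)  # 3: a packet sent with TTL=3 is dropped at the third router
```

This is also the mechanism that traceroute-style tools exploit: by sending probes with TTL = 1, 2, 3, ... the sender learns the address of each router on the path from the Time Exceeded replies.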

5. If the IP header is 192 bytes long, what will be the value of the “HLEN” field?
Sol
In the IP header, there is a field called "HLEN" (Header Length) that specifies the length of the
IP header in 32-bit words (4-byte units). The HLEN field uses 4 bits to represent the header
length.
To determine the value of the HLEN field when the IP header is 192 bytes long, we need to
divide the length by 4 (since each word is 4 bytes).

192 bytes / 4 bytes per word = 48 words

Since the HLEN field uses 4 bits, it can represent values from 0 to 15, which corresponds to a
maximum header length of 15 * 4 = 60 bytes. The value 48 is outside this range and cannot be
represented in the HLEN field.

Therefore, an IP header of 192 bytes is not valid: the IPv4 header can never exceed 60 bytes,
so no legal HLEN value exists for a 192-byte header.
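A quick check of the 4-bit limit (simple arithmetic, no protocol library needed):

```python
HLEN_BITS = 4
max_hlen = (1 << HLEN_BITS) - 1   # largest 4-bit value: 15
max_header_bytes = max_hlen * 4   # 60-byte ceiling on the IPv4 header

header_bytes = 192
needed_hlen = header_bytes // 4   # 48, far beyond the 4-bit range
print(max_header_bytes, needed_hlen > max_hlen)  # 60 True
```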

6. What is the maximum size of data that can be accommodated in an IP datagram?

SOL

In IPv4, the maximum size of data that can be accommodated in an IP datagram is determined by
the Maximum Transmission Unit (MTU) of the underlying network. The MTU represents the
maximum size of a packet that can be transmitted over the network without fragmentation.

 The standard MTU for Ethernet networks, which is the most common network type, is
1500 bytes.
 However, it's important to consider that the IP datagram header itself consumes a certain
amount of space within the packet.
 The IP header is 20 bytes long, and additional headers or options may be included,
depending on the specific protocol and configuration.
 Taking the IP header into account, the maximum amount of data that can be
accommodated in an IP datagram without fragmentation on an Ethernet network is:

MTU - IP Header Size = 1500 bytes - 20 bytes = 1480 bytes

Therefore, in this scenario, the maximum amount of data that can be carried in a single IP
datagram without fragmentation is 1480 bytes.

It is worth noting that the absolute upper bound is set by the 16-bit total length field in the IP
header: a datagram can be at most 65,535 bytes, so with a minimal 20-byte header it can carry at
most 65,515 bytes of data. Other network types or configurations may also have different MTU
values, which affect how much data fits in a single unfragmented datagram.

7. An IP packet arrives at a router with the first eight bits as 01000011. The router discards the
packet. Why?

SOL
 The first eight bits of an IPv4 header contain the 4-bit version field followed by the 4-bit
header length (HLEN/IHL) field.
 In "01000011", the first four bits (0100) give version 4 (IPv4), and the next four bits
(0011) give a header length value of 3.
 A HLEN of 3 means a header of 3 * 4 = 12 bytes, but the minimum valid IPv4 header is
20 bytes, which requires a HLEN of at least 5.

Since the header length field is below the legal minimum, the packet is malformed (most likely
corrupted in transit), so the router discards it.

8. An IP packet arrives at a router with the first eight bits as 01001000. How many bytes of
options are there in the packet?

SOL

To determine the number of bytes of options in an IP packet based on the first eight bits, we need
to examine the IP header's IHL (Internet Header Length) field. The IHL field specifies the length
of the IP header, including any optional fields or options.

The IHL field is a 4-bit field that represents the number of 32-bit words in the header. To
calculate the header length in bytes, we multiply the value of the IHL field by 4.

In this case, the first eight bits are "01001000." Let's break it down:

 0100 - The first four bits (0100) represent the version. In this case, it indicates IPv4.
 1000 - The next four bits (1000) represent the IHL field. In binary, 1000 is equal to 8 in
decimal.

Since the IHL field is 8, the total header length is 8 * 4 = 32 bytes.

The base IPv4 header (without options) is 20 bytes, so the options occupy the remainder:

32 bytes - 20 bytes = 12 bytes

Therefore, based on the first eight bits, the IP packet has 12 bytes of options in the header.
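Both this question and the previous one reduce to splitting the first header byte into its version and IHL nibbles. A small helper makes the arithmetic explicit (the function name is invented for illustration):

```python
def parse_first_byte(byte):
    """Split the first IPv4 header byte into version and header length."""
    version = byte >> 4          # high nibble: IP version
    ihl = byte & 0x0F            # low nibble: header length in 32-bit words
    header_bytes = ihl * 4
    valid = version == 4 and ihl >= 5   # IPv4 headers are at least 20 bytes
    option_bytes = header_bytes - 20 if valid else None
    return version, header_bytes, option_bytes, valid

print(parse_first_byte(0b01001000))  # (4, 32, 12, True): 12 option bytes
print(parse_first_byte(0b01000011))  # (4, 12, None, False): discarded
```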

9. In an IP packet, the value of HLEN is 5, and the value of the total length field is 1000. How many
bytes of data the packet is carrying?
SOL

To determine the number of bytes of data that an IP packet is carrying, we need to consider the
value of the "HLEN" field (Header Length) and the "total length" field in the IP header.

The HLEN field in the IP header represents the length of the IP header in 32-bit words (4-byte
units). In this case, the HLEN value is 5, indicating that the IP header is 5 * 4 = 20 bytes long.

The total length field in the IP header represents the total length of the IP packet, including both
the header and the payload (data). In this case, the total length value is 1000.

To calculate the number of bytes of data carried by the packet, we subtract the IP header length
from the total length:

Total length - Header length = Data length


1000 bytes - 20 bytes = 980 bytes

Therefore, the IP packet is carrying 980 bytes of data.
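The same subtraction in code form:

```python
hlen = 5
total_length = 1000

header_bytes = hlen * 4                    # 20-byte header
data_bytes = total_length - header_bytes   # payload carried by the packet
print(data_bytes)  # 980
```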

10. A packet has arrived at the destination with the M bit as zero. What can you say about the packet?

SOL:

When the M bit (More Fragments) in the IP header is zero, it indicates that no further fragments
follow this packet.

IP fragmentation is a process used when a packet is too large to be transmitted over a network
without being divided into smaller fragments.

 The original packet is fragmented into smaller pieces, each with its own IP header, and
these fragments are reassembled at the destination.
 When the M bit is zero, the current packet is either the last fragment of a fragmented
packet or a packet that was never fragmented at all.
 The fragment offset field distinguishes the two cases: a zero offset together with M = 0
means the packet was not fragmented, while a non-zero offset means it is the last
fragment.

Therefore, from the M bit alone we can only say that no more fragments will follow this packet;
we cannot tell whether the original packet was fragmented without also examining the fragment
offset field.

11. A packet has arrived at the destination with the M bits as one, and also fragment offset field as zero.
What can you say about the packet?

SOL:
When the M bit (More Fragments) in the IP header is set to one and the fragment offset field is
zero, it indicates that the packet is part of a fragmented IP packet, and there are more fragments
to follow.

IP fragmentation is a process used when a packet is too large to be transmitted over a network
without being divided into smaller fragments. The original packet is fragmented into smaller
pieces, each with its own IP header, and these fragments are reassembled at the destination.

In the case of the packet you described, with the M bit set to one and the fragment offset field as
zero:

 The M bit is set to one indicating that there are more fragments to be received after this
particular fragment.
 The fragment offset field being zero indicates that this fragment is the first fragment in
the series.

Therefore, the packet is an initial fragment of a fragmented IP packet, and more fragments are
expected to arrive at the destination to complete the original packet. The destination will need to
collect and reassemble all the fragments based on the fragment offset and M bits to reconstruct
the original packet.
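The possible combinations of the M bit and the fragment offset can be summarized in a small classifier (illustrative; the function name and return strings are invented):

```python
def classify(more_fragments, fragment_offset):
    """Interpret the M bit and fragment offset of an arriving IP packet."""
    if more_fragments and fragment_offset == 0:
        return "first fragment"
    if more_fragments:
        return "middle fragment"
    if fragment_offset == 0:
        return "unfragmented packet"
    return "last fragment"

print(classify(1, 0))    # first fragment (the case in this question)
print(classify(0, 0))    # unfragmented packet
print(classify(0, 150))  # last fragment
```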

12. A packet has arrived at the destination with the HLEN value of 5, the fragment offset field as 150,
and the total length field as 2000. What can you say about the packet?

SOL

Based on the information provided about the packet:

1. HLEN value as 5: The HLEN field (Header Length) in the IP header specifies the length
of the IP header in 32-bit words (4-byte units). In this case, the HLEN value is 5,
indicating that the IP header is 5 * 4 = 20 bytes long.
2. Fragment offset field as 150: The fragment offset field in the IP header indicates the
position of the current fragment relative to the original packet. In this case, the fragment
offset is specified as 150. The offset is measured in units of 8 bytes, so the actual offset
would be 150 * 8 = 1200 bytes.
3. Total length field as 2000: The total length field in the IP header represents the total
length of the IP packet, including both the header and the payload (data). In this case, the
total length value is 2000 bytes.

Based on this information, we can deduce the following about the packet:
 The IP header is 20 bytes long.
 The packet is part of a fragmented IP packet, because the fragment offset field is non-zero
(150), indicating that it is not the first fragment in the series.
 Since the total length is 2000 bytes and the header is 20 bytes, this fragment carries
2000 - 20 = 1980 bytes of data.
 The fragment's data occupies bytes 1200 through 1200 + 1980 - 1 = 3179 of the original
packet's payload.
 The M bit (not given here) would tell us whether this is the last fragment or whether more
fragments follow.

To reassemble the original packet, the destination must receive all the fragments and place each
one at the position given by its fragment offset.
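The arithmetic for this fragment, in code (using the 8-byte offset unit and 4-byte HLEN unit described above):

```python
hlen = 5
fragment_offset = 150
total_length = 2000

header_bytes = hlen * 4                    # 20
offset_bytes = fragment_offset * 8         # 1200: where this data starts
data_bytes = total_length - header_bytes   # 1980 bytes in this fragment
first_byte = offset_bytes
last_byte = offset_bytes + data_bytes - 1  # 3179: where this data ends
print(header_bytes, first_byte, last_byte)  # 20 1200 3179
```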

13. An IP packet with 2500 bytes of data (plus header) passes through an IP network with MTU =500.
How many additional bytes will be delivered at the destination?

SOL:

In this scenario, the IP packet has 2500 bytes of data (plus header) and passes through an IP
network with an MTU (Maximum Transmission Unit) of 500 bytes. This means that the
maximum size of each fragment that can be transmitted without fragmentation is 500 bytes.

To calculate the number of additional bytes delivered at the destination, we need to determine the
number of fragments the packet will be divided into and calculate the additional bytes in the
fragments' headers.

First, let's calculate the number of fragments. Each fragment must fit within the MTU and
carries its own 20-byte IP header, so each fragment can carry at most MTU - 20 = 480 bytes of
data (480 is a multiple of 8, as the fragment offset field requires):

Number of fragments = ceil(2500 / 480) = ceil(5.21) = 6

The packet will be divided into 6 fragments.

Now, the additional bytes delivered at the destination come from the extra IP headers. The
original packet carried one 20-byte header; after fragmentation there are 6 headers, i.e., 5 extra
ones:

Additional bytes = (6 - 1) * 20 bytes = 100 bytes

Therefore, the destination receives the original 2500 bytes of data plus 100 additional bytes of
header overhead introduced by fragmentation.
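The fragment count and header overhead can be checked with a short calculation (a sketch; it assumes a fixed 20-byte IPv4 header with no options, so each fragment carries MTU - 20 bytes of data):

```python
import math

data = 2500   # payload bytes in the original datagram
header = 20   # IPv4 header size, assuming no options
mtu = 500

per_fragment = mtu - header                 # 480 data bytes per fragment
assert per_fragment % 8 == 0                # offsets count in 8-byte units

fragments = math.ceil(data / per_fragment)  # 6 fragments
extra_headers = (fragments - 1) * header    # 5 extra 20-byte headers
print(fragments, extra_headers)  # 6 100
```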

Exercise questions on TCP/UDP layer


1. What does the port number in a TCP connection specify?
Ans:

In TCP (Transmission Control Protocol), the port number specifies a specific endpoint or service
within a device that is participating in a network connection. It is a 16-bit unsigned integer ranging
from 0 to 65535.

In a TCP connection, both the source and destination devices are identified by IP addresses, which
help in locating the devices in the network. However, IP addresses alone are not sufficient to
determine which specific service or application within a device should handle the incoming data.
This is where port numbers come into play.

Port numbers act as identifiers for specific services or applications running on a device. They allow
multiple services to operate simultaneously on a single device, each with its own unique port
number. For example, web servers typically listen for incoming connections on port 80, while
email servers use port 25 for SMTP (Simple Mail Transfer Protocol).

When establishing a TCP connection, both the source and destination devices include port numbers
in their respective packets to indicate which services they want to communicate with. The
combination of the source IP address, source port number, destination IP address, and destination
port number forms a unique socket, which enables data to be exchanged between the two devices.

Upon receiving a TCP packet, the destination device examines the destination port number to
determine which service or application should receive the data. It then forwards the packet to the
appropriate service based on the port number specified. This allows for proper handling and
delivery of network traffic to the intended service within a device.
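A minimal demonstration that a socket endpoint is an (IP address, port) pair, using Python's standard socket module (binding to port 0 asks the OS for any free port, which avoids colliding with a real service):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a UDP socket
sock.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
ip, port = sock.getsockname()    # the endpoint this socket answers on
print(ip, 0 < port <= 65535)     # 127.0.0.1 True (ports are 16-bit)
sock.close()
```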
2. Why is it necessary to have both IP address and port number in a packet?

Ans:-
An IP address and a port number are necessary in a packet so that the packet can be properly
routed to its destination.
 The IP address identifies the destination device or network.
 The port number is used to identify a specific process or application running on that
device or network.
Together, the IP address and port number allow the network to properly deliver the packet
to the correct destination.

3. Which of the protocols TCP, UDP and IP provides reliable communication?


ANS:

 Among TCP, UDP, and IP, only TCP (Transmission Control Protocol) provides reliable
communication.
 IP (Internet Protocol) is responsible for routing packets of data across networks, but it
offers only best-effort delivery: packets may be lost, duplicated, or delivered out of order.
 TCP, running on top of IP, adds acknowledgements, retransmissions, and sequencing to
ensure that data is delivered correctly and in the right order.
 UDP (User Datagram Protocol), on the other hand, is a simpler protocol that does not
guarantee reliable delivery of data.
Instead, it prioritizes speed over reliability and is often used for applications that can tolerate some
loss of data, such as streaming video or audio.
In short, TCP is the reliable protocol; UDP is faster but unreliable; and IP provides the best-effort
packet delivery on which both are built.
4. Both UDP and IP transmit datagrams. In what ways do they differ?

ANS:-

UDP (User Datagram Protocol) and IP (Internet Protocol) are both protocols used in computer
networks to transmit data, but they have distinct characteristics and serve different purposes. Here
are the key differences between UDP and IP:

1. Transport Layer vs. Network Layer:


o UDP: UDP operates at the transport layer of the TCP/IP protocol stack. It provides
a simple, connectionless, and unreliable transport service. UDP does not guarantee
the delivery, sequencing, or error checking of packets.
o IP: IP operates at the network layer of the TCP/IP protocol stack. It provides the
core functionality for addressing and routing packets across networks. IP is
responsible for the fragmentation, reassembly, and routing of packets.
2. Connectionless vs. Connection-oriented:
o UDP: UDP is connectionless, meaning it does not establish a dedicated connection
before transmitting data. Each UDP datagram is independent and can be sent
without prior setup. However, because of its connectionless nature, UDP does not
provide reliable data delivery and does not guarantee packet ordering.
o IP: IP is also connectionless. It treats each packet independently and does not
maintain a connection state between sender and receiver. IP's role is to route packets
across networks based on the destination IP address.
3. Reliability:
o UDP: UDP does not provide built-in mechanisms for error detection, correction, or
retransmission of lost packets. It is considered an unreliable protocol. If a UDP
packet is lost or damaged during transmission, it is not retransmitted, and the
receiving application will not be aware of the loss.
o IP: IP also does not provide built-in reliability mechanisms. It relies on higher-level
protocols (such as UDP or TCP) to handle error detection, correction, and
retransmission if required.
4. Header Information:
o UDP: UDP has a relatively simple header, containing source and destination port
numbers, length, and a checksum field. The UDP header is added on top of the IP
header when encapsulated within an IP packet.
o IP: IP has a more complex header that includes source and destination IP addresses,
packet identification, time-to-live (TTL), protocol type, checksum, and other fields
necessary for routing and fragmentation.
5. Usage:
o UDP: UDP is commonly used for real-time applications, such as video streaming,
voice over IP (VoIP), online gaming, and DNS (Domain Name System) queries.
These applications prioritize speed and low latency over reliability.
o IP: IP is the fundamental protocol used for routing and delivering packets across
networks. It is utilized by all higher-level protocols, including UDP, TCP, ICMP
(Internet Control Message Protocol), and others.

In summary, UDP and IP differ in their position within the TCP/IP protocol stack, their reliability
features, their header information, and their intended usage.

 UDP provides a lightweight, connectionless transport service primarily used for real-time
applications, while
 IP handles the network layer functionality of addressing, routing, and packet
fragmentation.
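UDP's connectionless, "ports plus checksum on top of IP" character is visible in code. The following is a minimal loopback round trip; note that sendto() works with no prior connection setup, and that IP routing underneath is handled entirely by the kernel.

```python
import socket

# Minimal UDP round trip over the loopback interface.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
recv_sock.settimeout(5)
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))   # no handshake needed

data, addr = recv_sock.recvfrom(1024)
print(data)                              # b'hello'
recv_sock.close()
send_sock.close()
```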

6. What are well-known port numbers?

ANS:

Well-known port numbers are standardized port numbers that are commonly used for specific
network services. These port numbers range from 0 to 1023 and are assigned by the Internet
Assigned Numbers Authority (IANA). Here are some examples of well-known port numbers:

 Port 20 and 21: FTP (File Transfer Protocol). Port 20 is used for data transfer, while port
21 is used for control commands.
 Port 22: SSH (Secure Shell). It is used for secure remote administration and secure file
transfers.
 Port 25: SMTP (Simple Mail Transfer Protocol). It is used for email transmission between
mail servers.
 Port 53: DNS (Domain Name System). It is used for translating domain names into IP
addresses and vice versa.
 Port 80: HTTP (Hypertext Transfer Protocol). It is used for unencrypted web browsing.
 Port 443: HTTPS (HTTP Secure). It is used for secure web browsing with encryption.
 Port 110: POP3 (Post Office Protocol version 3). It is used for retrieving email from a
remote server.
 Port 143: IMAP (Internet Message Access Protocol). It is used for accessing and managing
email on a remote mail server.
 Port 3389: RDP (Remote Desktop Protocol). It is used for remote desktop connections to
a Windows-based system.
These are just a few examples of well-known port numbers, and there are many more assigned by
the IANA for various network services and protocols.
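The examples above can be collected into a small lookup table; the service labels here are informal descriptions, not official IANA names.

```python
# Well-known ports (0-1023, assigned by IANA) from the list above.
WELL_KNOWN_PORTS = {
    20: "FTP (data)", 21: "FTP (control)", 22: "SSH", 25: "SMTP",
    53: "DNS", 80: "HTTP", 110: "POP3", 143: "IMAP",
    443: "HTTPS", 3389: "RDP",
}

def service_for(port: int) -> str:
    if not 0 <= port <= 1023:
        return "not a well-known port"
    return WELL_KNOWN_PORTS.get(port, "unassigned/unknown")

print(service_for(443))    # HTTPS
print(service_for(49200))  # not a well-known port
```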

7. What are ephemeral port numbers

ANS:

Ephemeral port numbers, also known as dynamic or temporary port numbers, are a range of port
numbers used by the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)
for outbound connections. These port numbers are assigned dynamically by the operating system
to client applications when they initiate a connection to a server.

When a client application wants to establish a network connection with a server, it needs to specify
a port number on which it will communicate. In the case of ephemeral ports, the client does not
need to explicitly choose a port number. Instead, the operating system assigns an available port
number from a designated range.

The range of ephemeral ports varies depending on the operating system. In most systems, the
ephemeral port range is from 49152 to 65535. However, this range can be configured and may
differ in certain setups.

Once the client application establishes a connection with the server using an ephemeral port, the
server responds by sending data back to the client's ephemeral port. This allows the client to receive
the server's response and establish a bidirectional communication channel.

Ephemeral ports are temporary in nature because they are only used for the duration of a specific
connection. Once the connection is closed, the operating system reclaims the ephemeral port
number and makes it available for future use by other applications.

The use of ephemeral ports helps facilitate the management of multiple simultaneous network
connections on a single machine. By dynamically assigning port numbers, the operating system
can handle a large number of client connections without conflicts or the need for manual port
management.
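The OS-assigned ephemeral port can be observed directly by binding a socket to port 0, which asks the operating system to pick a free port, just as it does implicitly when a client calls connect() without binding first.

```python
import socket

# Bind to port 0 and let the OS choose an ephemeral port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
ephemeral = s.getsockname()[1]
print(ephemeral)           # e.g. 52814; varies per run and per OS
s.close()
assert ephemeral >= 1024   # never from the well-known range
```

The exact range depends on the OS configuration (Linux defaults to 32768 to 60999), but it never overlaps the well-known ports.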

8. With respect to a transport-level connection, what are the five components in an association?

ANS

In the context of a transport-level connection, an association typically refers to the relationship
between two endpoints that communicate with each other. While the specific components may
vary depending on the communication protocol or framework being used, here are five common
components that can be associated with a transport-level connection:
1. Source Address/Endpoint: This component represents the address or endpoint from
which the communication is initiated. It identifies the sender or the source of the data
packets in the association.
2. Destination Address/Endpoint: This component represents the address or endpoint to
which the communication is directed. It identifies the receiver or the destination of the data
packets in the association.
3. Protocol: The protocol defines the rules and procedures that govern the communication
between the source and destination endpoints. It specifies the format of the data, the order
of transmission, error handling mechanisms, and other aspects necessary for successful
communication.
4. Source Port Number: This identifies the specific application or process on the source
endpoint that is sending the data. On the client side this is typically an ephemeral port
assigned by the operating system.
5. Destination Port Number: This identifies the specific application or service on the
destination endpoint that is to receive the data, such as port 80 for a web server.

Together these form the five-tuple {protocol, source address, source port, destination address,
destination port}, which uniquely identifies one transport-level association and lets two endpoints
hold many concurrent connections apart.

It's important to note that these components are general and may vary depending on the specific
transport protocol or communication framework in use. Different protocols, such as TCP
(Transmission Control Protocol) or UDP (User Datagram Protocol), may have additional or
slightly different components associated with their transport-level connections.
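The five-tuple view of an association can be sketched with a NamedTuple; the field names and example addresses below are illustrative.

```python
from typing import NamedTuple

# Model of a transport-level association as the classic five-tuple.
class Association(NamedTuple):
    protocol: str   # e.g. "TCP" or "UDP"
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

a = Association("TCP", "192.0.2.10", 52814, "198.51.100.7", 80)
b = Association("TCP", "192.0.2.10", 52815, "198.51.100.7", 80)

# Differing source ports alone make these distinct associations, which is
# how one client host holds many simultaneous connections to the same
# server port.
print(a != b)   # True
```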

9. Why is the pseudo-header used in calculating TCP checksum?


ANS

The pseudo-header is used in calculating the TCP checksum to provide additional information
about the IP-level context of the TCP segment being transmitted. The same technique is also
used for the UDP checksum, so it is not unique to TCP.

The TCP checksum is a mechanism used to detect errors in the transmission of TCP segments. It
verifies the integrity of the data by performing a mathematical calculation on the TCP header,
payload, and some additional information. The pseudo-header is part of this calculation.

The pseudo-header includes fields that are not present in the TCP header itself but are necessary
for the checksum calculation. These fields typically include the source and destination IP
addresses, the protocol number (which is 6 for TCP), and the TCP segment length. By including
these fields, the checksum calculation incorporates the IP layer information, ensuring that any
changes or errors in the IP header or the IP addresses will be detected.

The purpose of including the IP addresses is to ensure that the TCP segment is associated with the
correct IP connection. This is important in scenarios where multiple TCP connections are being
transmitted over the same network interface. By including the IP addresses and other relevant
information in the pseudo-header, the checksum calculation becomes more accurate and reliable.

In summary, the pseudo-header is used in calculating the TCP checksum to include additional
information (such as IP addresses and other relevant fields) that helps verify the integrity of the
TCP segment and ensure it is associated with the correct IP connection.

10. What are the different fields in the pseudo-header

ANS

In networking, the pseudo-header is a concept used in some protocols, such as the Transmission
Control Protocol (TCP) and User Datagram Protocol (UDP).

The pseudo header is not an actual header that is transmitted over the network, but rather a
construct used during the calculation of certain checksums.

The fields in the pseudo-header typically include:

1. Source IP Address: The IP address of the source host or device.


2. Destination IP Address: The IP address of the destination host or device.
3. Reserved: This field is set to zero and reserved for future use.
4. Protocol: The protocol number indicating the transport layer protocol being used (e.g., TCP
or UDP).
5. Length: The length of the transport layer segment (TCP or UDP) in bytes. This includes
both the transport layer header and the data.
6. Transport Layer Segment: The actual transport layer segment (TCP or UDP) for which the
checksum is being calculated.

The pseudo-header is used in the calculation of the transport layer checksum, such as the TCP
checksum or UDP checksum. By including fields from the IP header and the transport layer
segment, the pseudo-header provides additional information to ensure data integrity and detect
transmission errors during network communication.
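The checksum over the pseudo-header can be sketched as follows. This uses UDP (protocol number 17) for brevity; the TCP case is identical except for protocol number 6 and the segment format. The segment bytes and addresses are invented for illustration.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: ones'-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                     # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the pseudo-header plus the segment (UDP, protocol 17)."""
    pseudo = struct.pack("!4s4sBBH",
                         bytes(map(int, src_ip.split("."))),  # source IP
                         bytes(map(int, dst_ip.split("."))),  # destination IP
                         0,                                   # reserved (zero)
                         17,                                  # protocol: UDP
                         len(segment))                        # UDP length
    return internet_checksum(pseudo + segment)

# Toy UDP segment: src port 1234, dst port 53, length 10, checksum 0, "ab".
seg = b"\x04\xd2\x00\x35\x00\x0a\x00\x00ab"
c = udp_checksum("192.0.2.1", "192.0.2.2", seg)
print(hex(c))   # 0x156d
```

A receiver runs the same sum with the transmitted checksum filled in; the result is 0 when nothing was corrupted in transit.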

11. Suppose that 5000 bytes are transferred over TCP. The first byte is numbered 20050. What are
the sequence numbers for each segment if data is sent in four segments with the first two
segments carrying 1000 bytes and the last two segments carrying 1500 bytes?
ANS

To determine the sequence numbers for each segment, we need to understand how sequence
numbers are assigned in TCP.

In TCP, each byte of data is assigned a sequence number. The sequence numbers are used to ensure
that the data is received in the correct order and to detect any missing or duplicate data.

Given that the first byte is numbered 20050 and a total of 5000 bytes are transferred over TCP, we
can calculate the sequence numbers for each segment based on the byte offsets.

Segment 1 carries 1000 bytes:


The first byte of segment 1 is numbered 20050.
The last byte of segment 1 is numbered 21049.
So, the sequence numbers for segment 1 range from 20050 to 21049.

Segment 2 carries 1000 bytes:


The first byte of segment 2 is numbered 21050.
The last byte of segment 2 is numbered 22049.
So, the sequence numbers for segment 2 range from 21050 to 22049.

Segment 3 carries 1500 bytes:


The first byte of segment 3 is numbered 22050.
The last byte of segment 3 is numbered 23549.
So, the sequence numbers for segment 3 range from 22050 to 23549.

Segment 4 carries 1500 bytes:


The first byte of segment 4 is numbered 23550.
The last byte of segment 4 is numbered 25049.
So, the sequence numbers for segment 4 range from 23550 to 25049.

Here's the breakdown of the sequence numbers for each segment:

Segment 1: 20050 to 21049 (1000 bytes)


Segment 2: 21050 to 22049 (1000 bytes)
Segment 3: 22050 to 23549 (1500 bytes)
Segment 4: 23550 to 25049 (1500 bytes)
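The same breakdown can be computed programmatically; the sequence number carried by each segment is the byte number of its first byte.

```python
# Compute (first byte, last byte) for each segment given the initial
# sequence number and the segment sizes.
def segment_ranges(first_seq, sizes):
    ranges, seq = [], first_seq
    for size in sizes:
        ranges.append((seq, seq + size - 1))   # first and last byte numbers
        seq += size
    return ranges

for i, (lo, hi) in enumerate(segment_ranges(20050, [1000, 1000, 1500, 1500]), 1):
    print(f"Segment {i}: {lo} to {hi}")
```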

12. What is the purpose of the PSH flag in TCP header?

ANS

In the TCP (Transmission Control Protocol) header, the PSH (Push) flag is one of the control flags
used to manage the flow of data between TCP endpoints.
The purpose of the PSH flag is to indicate that the receiving TCP stack should deliver the received
data to the application immediately, rather than waiting to accumulate more data.

When the sender sets the PSH flag in the TCP header of a segment, it signals to the receiver that
the data within the segment should be pushed up to the receiving application as soon as possible.
This is particularly useful for applications that require real-time or interactive communication,
where minimizing the delay between data arrival and delivery is crucial.

Upon receiving a TCP segment with the PSH flag set, the receiving TCP stack will promptly
deliver the segment's payload to the application layer, regardless of whether it has received a
complete message or not. This immediate delivery helps reduce the latency in the data transmission
and ensures that the application can process the received data in a timely manner.

It's important to note that the PSH flag does not guarantee the immediate delivery of data to the
application layer, as the TCP stack may still buffer and process the data before passing it up.
However, setting the PSH flag instructs the receiving TCP stack to prioritize the delivery of the
data, which can be beneficial for time-sensitive applications.

13. What is the purpose of ACK flag in the TCP header


ANS

The ACK flag (short for acknowledgment) is a flag in the Transmission Control Protocol (TCP)
header that serves an important purpose in ensuring reliable data transmission between two
communicating devices.

The ACK flag is used to acknowledge the receipt of data segments. When the ACK flag is set to
1 in a TCP segment, it indicates that the acknowledgment number field in the TCP header contains
a valid acknowledgment number. The acknowledgment number represents the next expected
sequence number that the receiver of the TCP segment is expecting to receive.

Here's how the ACK flag works in the TCP handshake process and data transmission:

1. TCP Handshake: When a TCP connection is established between a client and a server, a
three-way handshake occurs. The client sends a TCP segment with the SYN (synchronize)
flag set to 1, indicating the initial sequence number. The server responds with a TCP
segment that has both the SYN and ACK flags set to 1, confirming the receipt of the client's
segment and providing its own initial sequence number. Finally, the client sends an ACK
segment with the ACK flag set to 1, acknowledging the server's segment and confirming
the establishment of the connection.
2. Data Transmission: After the TCP connection is established, data can be transmitted
between the client and server. Each TCP segment sent contains a sequence number that
represents the position of the data within the stream. The receiver acknowledges the receipt
of the data by sending an ACK segment back with the ACK flag set to 1 and the
acknowledgment number set to the next expected sequence number.

By using the ACK flag and acknowledgment numbers, TCP ensures reliable data delivery. If the
sender does not receive an acknowledgment within a specified timeout period, it retransmits the
unacknowledged data segment. This mechanism allows TCP to detect and recover from lost or
corrupted segments, ensuring that the data is reliably transmitted across the network.

In summary, the purpose of the ACK flag in the TCP header is to acknowledge the receipt of data
segments and maintain the reliability of data transmission in TCP connections.

Exercise/Review questions on IP addressing and subnetting


1. Change the following IP address from binary notation to dotted decimal notation
11000100 10001111 00110000 10000001
Sol:

The IP address "11000100 10001111 00110000 10000001" in binary notation can be converted
to dotted decimal notation as follows:

11000100 -> 196


10001111 -> 143
00110000 -> 48
10000001 -> 129

Therefore, the IP address in dotted decimal notation is "196.143.48.129".
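The conversion can be reproduced with Python's int(x, 2), converting each 8-bit group independently.

```python
# Convert each binary octet to decimal and join with dots.
binary = "11000100 10001111 00110000 10000001"
dotted = ".".join(str(int(octet, 2)) for octet in binary.split())
print(dotted)   # 196.143.48.129
```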

2. Find the error if any in the IP address: 144.15.256.7

SOL:

 The error in the IP address "144.15.256.7" is that the octet "256" is outside the valid
range of 0 to 255.
 In the dotted decimal notation, each octet can have a value between 0 and 255.
 To correct the error, the octet "256" needs to be adjusted to a valid value.
 If it is intended to represent a value greater than 255, it would require more than 8 bits to
represent it, which is not possible in the standard IPv4 addressing scheme.

Assuming that the intended value was "144.15.255.7" instead, the corrected IP address would be
"144.15.255.7".

3. Find the class of the IP address: 227.15.75.111


Sol:

The IP address "227.15.75.111" belongs to Class D in the classful IP addressing scheme.

In the classful IP addressing scheme, the first octet of the IP address determines the class. Here's
a breakdown of the classes and their corresponding range of values for the first octet:

 Class A: 0.0.0.0 to 127.255.255.255


 Class B: 128.0.0.0 to 191.255.255.255
 Class C: 192.0.0.0 to 223.255.255.255
 Class D: 224.0.0.0 to 239.255.255.255
 Class E: 240.0.0.0 to 255.255.255.255 (reserved for experimental purposes)

Since the first octet of the given IP address is 227, which falls within the range of 224 to 239, it
belongs to Class D. Class D addresses are used for multicast purposes, where data is intended to
be sent to a group of hosts rather than a specific destination.
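The classful ranges above reduce to a simple test on the first octet:

```python
# Determine the classful address class from the first octet.
def address_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(address_class("227.15.75.111"))   # D (multicast range)
```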

4. Given the network address, 135.75.0.0, find the class, the network ID and the range of the
addresses.

Sol

The network address "135.75.0.0" belongs to Class B in the classful IP addressing scheme.

In Class B, the first two octets are used for the network ID, while the remaining two octets are
used for host IDs. Let's break down the information:

 Class: Class B
 Network ID: The network ID is the first two octets of the given address. In this case, the
network ID is "135.75".
 Range of addresses: In Class B, the last two octets form the host portion, giving 16 host
bits. The host portion can range from 0.0 to 255.255, with the all-zeros and all-ones host
IDs reserved for the network and broadcast addresses.

Therefore, the range of addresses for the network "135.75.0.0" would be:

 Network Address: 135.75.0.0


 First Usable Address: 135.75.0.1
 Last Usable Address: 135.75.255.254
 Broadcast Address: 135.75.255.255
Note that the network address and the broadcast address are reserved addresses representing the
network itself and all hosts on the network, respectively. The first usable address is the lowest
assignable address within the network range, and the last usable address is the highest assignable
address within the network range.
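These boundaries can be cross-checked with Python's standard ipaddress module, since Class B implies a default /16 prefix.

```python
import ipaddress

# Enumerate the boundaries of the Class B network 135.75.0.0/16.
net = ipaddress.ip_network("135.75.0.0/16")
print(net.network_address)     # 135.75.0.0
print(net.broadcast_address)   # 135.75.255.255
print(net.num_addresses - 2)   # 65534 usable host addresses
```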

5. For the subnet mask 255.255.192.0, how many hosts per subnet are possible?

Sol:

The subnet mask 255.255.192.0 is a /18 mask, leaving 18 bits for the network portion (a Class B
network plus 2 subnet bits) and 14 bits for the host portion.

To determine the number of hosts per subnet, we need to calculate the number of unique host
addresses that can be assigned within the subnet. The number of host addresses is determined by
the number of available host bits.

In this case, with 14 bits for the host portion, we have 2^14 = 16,384 possible unique host
addresses.

However, we need to account for the fact that the first and last addresses in a subnet are reserved
for the network address and the broadcast address, respectively. Therefore, the number of usable
host addresses per subnet would be 16,384 - 2 = 16,382.

Hence, with the subnet mask 255.255.192.0, there are 16,382 possible hosts per subnet.
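The same result can be confirmed with the ipaddress module, which accepts a dotted netmask in place of a prefix length; the 10.0.0.0 base address below is arbitrary.

```python
import ipaddress

# 255.255.192.0 is a /18 mask: 14 host bits per subnet.
net = ipaddress.ip_network("10.0.0.0/255.255.192.0")
print(net.prefixlen)           # 18
print(net.num_addresses - 2)   # 16382 usable hosts per subnet
```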

6. In classful addressing, if we are using the subnet mask 255.255.192.0, which address class
does it correspond to?

Sol:

The subnet mask 255.255.192.0 corresponds to a subnetted Class B network in the classful
addressing scheme.

In classful addressing, the class of an address is determined by the first octet of the address
itself, not by the subnet mask. Each class has a default (natural) mask:

 Class A: 255.0.0.0 (/8)
 Class B: 255.255.0.0 (/16)
 Class C: 255.255.255.0 (/24)

The mask 255.255.192.0 is a /18 mask. It is longer than the Class B default of /16 but shorter
than the Class C default of /24, so it can only arise by subnetting a Class B network: the first
16 bits identify the Class B network, and the next 2 bits are borrowed from the host portion to
identify the subnet.

7. What is the subnet address if the destination IP address is 144.16.34.124 and the subnet
mask is 255.255.240.0?

Sol:

To find the subnet address, we perform a bitwise AND operation between the destination IP
address and the subnet mask. Here's the calculation:

Destination IP address: 144.16.34.124


Subnet mask: 255.255.240.0

Converting both addresses to binary:


Destination IP address: 10010000.00010000.00100010.01111100
Subnet mask: 11111111.11111111.11110000.00000000

Performing the bitwise AND operation:


10010000.00010000.00100000.00000000

Converting the result back to decimal:


Subnet address: 144.16.32.0

Therefore, the subnet address corresponding to the destination IP address 144.16.34.124 with a
subnet mask of 255.255.240.0 is 144.16.32.0.
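The bitwise AND can be reproduced octet by octet:

```python
# Subnet address = destination IP AND subnet mask, per octet.
ip   = [144, 16, 34, 124]
mask = [255, 255, 240, 0]
subnet = ".".join(str(o & m) for o, m in zip(ip, mask))
print(subnet)   # 144.16.32.0
```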

8. What is the natural mask for a class c network?

Sol:

The natural mask for a Class C network is 255.255.255.0.

In the classful addressing scheme, Class C networks are identified by a first octet ranging from
192 to 223. Class C networks are typically assigned to small to medium-sized organizations. The
natural mask for a Class C network allocates 24 bits for the network portion and 8 bits for the
host portion.

The natural mask, 255.255.255.0, indicates that the first three octets of the IP address are used to
identify the network, while the last octet is reserved for host addressing. This allows for a
maximum of 256 unique host addresses within a Class C network (since 2^8 = 256), with the
network and broadcast addresses occupying two of those addresses. Therefore, Class C networks
can support up to 254 usable host addresses.

9. Using VLSM, give a scheme to split a Class C address into four subnets where the numbers
of hosts required are 100, 55, 20, and 30.

Sol:

To split a Class C address into four subnets with the given number of required hosts, we can use
Variable Length Subnet Masking (VLSM) to allocate the appropriate subnet masks for each
subnet. Here's a possible scheme:

Given Class C address: 192.168.0.0

Subnet 1: Requires 100 hosts


Subnet 2: Requires 55 hosts
Subnet 3: Requires 20 hosts
Subnet 4: Requires 30 hosts

To determine the required subnet mask for each subnet, we need to find the smallest subnet mask
that can accommodate the required number of hosts.

1. Subnet 1: Requires 100 hosts


The smallest subnet mask that can accommodate 100 hosts is /25 (255.255.255.128).
2. Subnet 2: Requires 55 hosts
The smallest subnet mask that can accommodate 55 hosts is /26 (255.255.255.192).
3. Subnet 3: Requires 20 hosts
The smallest subnet mask that can accommodate 20 hosts is /27 (255.255.255.224).
4. Subnet 4: Requires 30 hosts
The smallest subnet mask that can accommodate 30 hosts is /27 (255.255.255.224).

Using these subnet masks, we can allocate the subnets as follows:

Subnet 1: 192.168.0.0/25 (255.255.255.128)

 Network address: 192.168.0.0


 Broadcast address: 192.168.0.127
 Usable host addresses: 192.168.0.1 to 192.168.0.126 (126 hosts)

Subnet 2: 192.168.0.128/26 (255.255.255.192)

 Network address: 192.168.0.128


 Broadcast address: 192.168.0.191
 Usable host addresses: 192.168.0.129 to 192.168.0.190 (62 hosts)

Subnet 3: 192.168.0.192/27 (255.255.255.224)

 Network address: 192.168.0.192


 Broadcast address: 192.168.0.223
 Usable host addresses: 192.168.0.193 to 192.168.0.222 (30 hosts)

Subnet 4: 192.168.0.224/27 (255.255.255.224)

 Network address: 192.168.0.224


 Broadcast address: 192.168.0.255
 Usable host addresses: 192.168.0.225 to 192.168.0.254 (30 hosts)

By using VLSM, we have effectively split the Class C address into four subnets with the
required number of hosts for each subnet.
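The "smallest subnet mask that can accommodate N hosts" step can be computed directly; the +2 in this sketch accounts for the network and broadcast addresses reserved in every subnet.

```python
import math

# Longest prefix whose subnet still fits the required number of usable hosts.
def smallest_prefix(hosts: int) -> int:
    return 32 - math.ceil(math.log2(hosts + 2))

for need in (100, 55, 20, 30):
    print(f"{need} hosts -> /{smallest_prefix(need)}")
```

This reproduces the allocations above: /25 for 100 hosts, /26 for 55, and /27 for both 20 and 30.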

10. Can the following be the beginning addresses in CIDR-based addressing?
a. 144.16.192.32/28 b. 10.17.18.42/28 c. 188.15.170.55/28 d. 200.0.100.80/28

SOL:

For a block written W.X.Y.Z/28, the last 32 - 28 = 4 bits of the address are host bits, so a /28
block can only begin at an address whose 32-bit value is a multiple of 16. Equivalently, the last
octet must be a multiple of 16. The address class is irrelevant here: CIDR is classless, and only
the alignment of the address to the block size matters.

Let's analyze each option:

a. 144.16.192.32/28 (reading the printed "1.44.16.192.32" as 144.16.192.32)
The last octet, 32, is a multiple of 16, so this address is aligned and can be the beginning of a
/28 block (144.16.192.32 to 144.16.192.47).

b. 10.17.18.42/28
The last octet, 42, is not a multiple of 16 (42 = 2 x 16 + 10), so this cannot be the beginning of
a /28 block.

c. 188.15.170.55/28
The last octet, 55, is not a multiple of 16 (55 = 3 x 16 + 7), so this cannot be the beginning of
a /28 block.

d. 200.0.100.80/28
The last octet, 80, is a multiple of 16 (80 = 5 x 16), so this address is aligned and can be the
beginning of a /28 block (200.0.100.80 to 200.0.100.95).

In summary, options (a) and (d) can be beginning addresses of a /28 CIDR block; options (b)
and (c) cannot.
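The alignment rule can be checked mechanically by treating the address as a 32-bit integer; option (a) is taken here as 144.16.192.32, reading the worksheet's garbled "1.44.16.192.32" as a typo.

```python
# A /p block can only begin at an address whose 32-bit value is a multiple
# of 2**(32 - p); for /28 that is every 16th address.
def can_start_block(address: str, prefix: int) -> bool:
    o = [int(x) for x in address.split(".")]
    value = (o[0] << 24) | (o[1] << 16) | (o[2] << 8) | o[3]
    return value % (2 ** (32 - prefix)) == 0

for addr in ("144.16.192.32", "10.17.18.42", "188.15.170.55", "200.0.100.80"):
    print(addr, can_start_block(addr, 28))
```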

11. For a CIDR address of the form W.X.Y.Z/20 what is the maximum number of hosts
possible in the network?

SOL:

For a CIDR address of the form W.X.Y.Z/20, the maximum number of hosts possible in the
network can be calculated by subtracting the prefix length (20 in this case) from the total number
of bits in the IP address (32 bits for IPv4).

The formula to calculate the number of hosts is 2 raised to the power of (32 - prefix length).

In this case, the prefix length is 20, so the calculation would be:

Number of addresses = 2^(32 - 20) = 2^12 = 4096

Therefore, a network with a CIDR address of the form W.X.Y.Z/20 contains 4096 addresses in
total. Excluding the network and broadcast addresses, the maximum number of usable hosts is
4096 - 2 = 4094.

12. Which of the following can be the starting address of a CIDR block that contains 512
addresses? a. 144.16.24.128 b. 144.16.24.0 c. 144.16.75.0 d. 144.16.0.0

SOL:

To determine which of the given addresses can be the starting address of a CIDR block that
contains 512 addresses, we need to find the appropriate prefix length that can accommodate 512
addresses.

The number of addresses in a CIDR block is calculated as 2^(32 - prefix length). We can find the
required prefix length by solving the equation 2^(32 - prefix length) = 512.

Let's calculate the prefix lengths for each option:

A block of 512 = 2^9 addresses corresponds to a prefix length of 32 - 9 = 23, since
2^(32 - 23) = 512. The starting address of a /23 block must be aligned on a 512-address
boundary, which means its last 9 bits must be zero: the fourth octet must be 0 and the third
octet must be even.

a. 144.16.24.128
The fourth octet is 128, not 0, so this address is not aligned and cannot be the starting address
of the block.

b. 144.16.24.0
The fourth octet is 0 and the third octet (24) is even, so this address is aligned and can be the
starting address of a /23 block (144.16.24.0 to 144.16.25.255).

c. 144.16.75.0
The fourth octet is 0, but the third octet (75) is odd, so this address is not aligned and cannot
be the starting address of the block.

d. 144.16.0.0
Both the third and fourth octets are 0, so this address is also aligned and can be the starting
address of a /23 block (144.16.0.0 to 144.16.1.255).

Therefore, both b. 144.16.24.0 and d. 144.16.0.0 can be the starting address of a CIDR block
containing 512 addresses.
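The alignment test can also be delegated to the standard ipaddress module: with strict mode (the default), ip_network() rejects any address whose host bits are not all zero for the given prefix.

```python
import ipaddress

# Try each candidate as the start of a /23 (512-address) block.
results = {}
for addr in ("144.16.24.128", "144.16.24.0", "144.16.75.0", "144.16.0.0"):
    try:
        ipaddress.ip_network(addr + "/23")     # strict=True by default
        results[addr] = True
    except ValueError:                         # host bits set: not aligned
        results[addr] = False
    print(addr, "->", "can start a /23 block" if results[addr] else "not aligned")
```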

Exercise/Review questions on congestion

1. What is congestion? Why does congestion occur?

Sol:

Congestion refers to a situation in computer networks where there is a significant increase in the
amount of traffic or data being transmitted through a network compared to its capacity to handle
that traffic efficiently. It occurs when the demand for network resources, such as bandwidth,
exceeds the available capacity, leading to performance degradation, increased delays, packet loss,
and reduced network throughput.

Congestion can occur due to various reasons, including:

 Network Limitations: The physical infrastructure of the network, such as routers,
switches, and links, has a finite capacity. If the volume of data being transmitted exceeds
the capacity of these network elements, congestion can occur.
 Increased Traffic: A sudden surge in network traffic, such as during peak hours or during
a high-demand event, can overwhelm the network and lead to congestion. This can happen
in both local area networks (LANs) and wide area networks (WANs).
 Bottlenecks: Certain points in a network may have limited capacity compared to the
overall network capacity. These bottleneck points can become overwhelmed when traffic
is concentrated, leading to congestion. For example, a slow link between two network
segments can create a bottleneck.
 Network Misconfiguration: Incorrect network configurations, such as improper
bandwidth allocation, routing issues, or inefficient queuing mechanisms, can contribute to
congestion. These misconfigurations can lead to an inefficient utilization of network
resources and exacerbate congestion.
 Denial of Service (DoS) Attacks: Deliberate malicious activities, such as DoS attacks, can
flood a network with an excessive amount of traffic, overwhelming its capacity and causing
congestion.

Congestion can have detrimental effects on network performance, causing increased latency,
packet loss, decreased throughput, and degraded user experience. Network administrators and
engineers employ various mechanisms and techniques, such as traffic shaping, quality of service
(QoS) policies, and congestion control algorithms, to mitigate and manage congestion in order to
maintain optimal network performance.

2. What are the two basic mechanisms of congestion control?

Sol:

The two basic mechanisms of congestion control are:

1. Traffic Regulation: This mechanism focuses on regulating the rate at which traffic is
injected into the network to prevent congestion from occurring or worsening. It involves
controlling the amount of data transmitted by network devices and endpoints based on the
network's current congestion state. The goal is to ensure that the rate of data transmission
does not exceed the available network capacity.

There are several techniques used for traffic regulation, including:

 Traffic shaping: This technique involves smoothing and controlling the rate of outgoing
traffic from a network device by buffering and delaying packets. It helps in controlling
bursty traffic and ensuring a more uniform transmission rate.
 Admission control: This mechanism determines whether new network flows or
connections should be allowed based on the available network resources. By selectively
admitting or rejecting new flows, it helps in preventing congestion by managing the overall
demand on the network.
2. Congestion Avoidance and Control: This mechanism focuses on detecting and reacting
to congestion that has already occurred in the network. It aims to prevent congestion from
worsening and to reduce the adverse effects caused by congestion. Congestion avoidance
and control mechanisms typically operate at the network layer and involve active feedback
and signaling between network devices.

Common techniques used for congestion avoidance and control include:

 Explicit Congestion Notification (ECN): ECN is a mechanism that allows network
devices to indicate congestion to endpoints by setting a flag in the IP header. This enables
endpoints to react to congestion proactively and adjust their transmission rates accordingly.
 Congestion Window: Congestion window-based algorithms, such as TCP congestion
control algorithms (e.g., TCP Reno, TCP Vegas), adjust the size of the congestion window
dynamically based on network conditions. They aim to strike a balance between sending
enough data to utilize available bandwidth without causing congestion.
 Random Early Detection (RED): RED is a queue management mechanism used in routers
to detect and control congestion. It randomly drops packets from the queue before it
becomes completely full, signaling to endpoints that congestion is occurring and causing
them to reduce their transmission rates.
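As an illustration of RED's drop decision described above, here is a minimal sketch. The threshold and probability values are assumptions for illustration, and a real router would use an exponentially weighted average queue length rather than the instantaneous length:

```python
import random

def red_drop(queue_len, min_th=5, max_th=15, max_p=0.1):
    """Decide whether to drop an incoming packet under (simplified) RED.

    Below min_th: never drop. At or above max_th: always drop.
    In between: drop with probability rising linearly up to max_p.
    """
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p

# A nearly empty queue never drops; an overfull queue always does.
print(red_drop(2))    # False
print(red_drop(20))   # True
```

The early, probabilistic drops are what signal senders to slow down before the queue overflows completely.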

By combining traffic regulation techniques to control the rate of traffic entering the network
and congestion avoidance and control mechanisms to react and adapt to congestion, network
systems can effectively manage and mitigate congestion, improving overall network
performance and stability.

3. What is backpressure congestion control? How is it used for congestion control?

Sol:

Backpressure congestion control is a mechanism used in computer networks to regulate traffic
and manage congestion.

It operates by applying pressure or feedback from congested downstream nodes or links to the
upstream nodes, effectively controlling the flow of data and reducing congestion.

In backpressure congestion control, when a downstream node or link becomes congested, it
signals this congestion back to the upstream nodes. This feedback informs the upstream nodes
about the congestion state and prompts them to reduce their transmission rates. By reducing the
rate of data transmission, the congestion can be alleviated, allowing the network to recover and
maintain optimal performance.
The backpressure mechanism can be implemented using various techniques, including:

1. Explicit Signaling: The congested node or link explicitly signals the congestion to the
upstream nodes. This can be done through feedback messages, such as congestion
notification packets or explicit congestion notification (ECN) flags in IP headers. Upon
receiving these signals, the upstream nodes adjust their transmission rates accordingly.
2. Queue Length Monitoring: The congested node or link monitors the length of its queue and
uses it as an indicator of congestion. When the queue length exceeds a certain threshold, it
sends signals to the upstream nodes to reduce their transmission rates.
3. Rate-based Feedback: Instead of relying on explicit signaling or queue length, the
congested node or link measures the rate of incoming traffic and provides feedback to the
upstream nodes based on the observed rate. This feedback can be in the form of rate control
messages or congestion control algorithms.

Backpressure congestion control is particularly useful in scenarios where the network topology
or routing paths are dynamic, and the congestion can occur at different points in the network. It
enables a distributed approach to congestion control, where each node can independently react to
congestion and regulate its transmission rates based on the feedback received from downstream
nodes.

By applying backpressure congestion control, the network can achieve a self-regulating
behavior, where the congestion is detected and mitigated in a decentralized manner. This helps in
preventing congestion from propagating throughout the network, improving overall network
performance, and ensuring fairness in resource allocation.
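The queue-length monitoring form of backpressure can be sketched as a toy simulation. The threshold, service rate, and the halve-on-congestion reaction are illustrative assumptions, not a real protocol implementation:

```python
class DownstreamNode:
    """A node that serves packets at a fixed rate and signals when its queue is long."""

    def __init__(self, threshold=8, service_rate=3):
        self.queue = 0
        self.threshold = threshold
        self.service_rate = service_rate

    def receive(self, n_packets):
        self.queue += n_packets
        self.queue = max(0, self.queue - self.service_rate)  # serve some packets
        return self.queue > self.threshold  # True = backpressure signal upstream

class UpstreamSender:
    """A sender that halves its rate on a congestion signal, else ramps up slowly."""

    def __init__(self, rate=10):
        self.rate = rate

    def react(self, congested):
        if congested:
            self.rate = max(1, self.rate // 2)  # slow down under pressure
        else:
            self.rate += 1                      # cautiously speed up

sender, node = UpstreamSender(), DownstreamNode()
for _ in range(10):
    sender.react(node.receive(sender.rate))
print(sender.rate)  # the rate oscillates near the node's service capacity
```

The key idea is that the regulating decision is made hop by hop from the feedback, with no central coordinator.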

4. What is a choke packet? How is it used for congestion control?

Sol:

A choke packet is a specialized type of packet used in congestion control mechanisms to signal
congestion back to the sender and request a reduction in the data transmission rate. It acts as a
feedback mechanism to inform the sender about the congestion state in the network.

 When a congested node or link detects congestion, it generates and sends choke packets to
the upstream sender.
 These choke packets typically contain specific information or signaling that indicates
congestion. The sender receives these choke packets and interprets them as a signal to
reduce its transmission rate.
 Choke packets are used in various congestion control algorithms and mechanisms, such as
Explicit Congestion Notification (ECN) and Random Early Detection (RED).
 The specific usage and implementation of choke packets depend on the congestion control
algorithm employed.
In ECN, choke packets may contain explicit flags or markings in the IP header, such as the ECN
field, to indicate congestion. These markings can be set by congested routers or network devices
and are then communicated back to the sender. Upon receiving the choke packets with ECN
markings, the sender can reduce its transmission rate to alleviate congestion.

In RED, choke packets are generated when the length of a queue at a congested node exceeds a
certain threshold. The congested node randomly drops packets from the queue, and these dropped
packets effectively act as choke packets. The sender, upon observing a packet drop, infers the
occurrence of congestion and reduces its transmission rate.

Choke packets play a crucial role in providing feedback to the sender and facilitating congestion
control. By signaling congestion back to the sender, choke packets enable the sender to adjust its
transmission rate, preventing further congestion and maintaining network stability.

5. How is congestion control performed by the leaky bucket algorithm?

Sol:

The Leaky Bucket algorithm is a congestion control mechanism used to regulate the flow of data
and manage network congestion. It controls the rate at which data is transmitted from a source by
employing a "bucket" analogy.

Here's how the Leaky Bucket algorithm performs congestion control:

1. Bucket Initialization: The algorithm starts by initializing a bucket with a fixed capacity,
which represents the maximum allowed burst size or transmission rate. The bucket can be
visualized as a container that holds the data packets.
2. Incoming Data: As data packets arrive at the source, they are added to the bucket. Each
packet has a certain size associated with it.
3. Leaking Data: The algorithm continuously removes data packets from the bucket at a fixed
rate, which represents the maximum allowed transmission rate. This process is called
"leaking" the bucket. The rate at which the bucket leaks is determined by the network
capacity or the desired transmission rate.
4. Congestion Detection: If the bucket becomes full, indicating that it has reached its capacity,
any additional incoming packets are considered excess and are either dropped or marked
as congestion.
5. Congestion Response: When congestion is detected, the Leaky Bucket algorithm employs
different strategies to control the flow of data:
o Packet Discard: Excess packets can be selectively discarded, known as packet
dropping or packet loss. This reduces the amount of traffic in the network,
preventing congestion from worsening.
o Traffic Shaping: The algorithm can regulate the transmission rate by delaying or
buffering excess packets before releasing them into the network. This smooths out
the traffic and helps prevent bursts of data that can lead to congestion.
The Leaky Bucket algorithm provides a simple mechanism for controlling the rate of data
transmission and preventing network congestion. By setting the bucket size, leak rate, and
handling excess packets, it helps maintain a stable flow of data within the network's capacity
limits.
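The steps above can be sketched in Python. This is a simplified packet-counting model; real implementations usually meter bytes and run the leak on a timer:

```python
from collections import deque

class LeakyBucket:
    """Simplified leaky bucket: buffers packets and releases them at a fixed rate."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # max packets the bucket can hold
        self.leak_rate = leak_rate  # packets released per tick
        self.bucket = deque()

    def add(self, packet):
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)
            return True             # packet accepted into the bucket
        return False                # bucket full: packet dropped (congestion)

    def leak(self):
        """Release up to leak_rate packets this tick."""
        out = []
        for _ in range(min(self.leak_rate, len(self.bucket))):
            out.append(self.bucket.popleft())
        return out

bucket = LeakyBucket(capacity=3, leak_rate=2)
accepted = [bucket.add(p) for p in range(5)]  # burst of 5 packets arrives
print(accepted)       # [True, True, True, False, False]
print(bucket.leak())  # [0, 1]  -- smooth, rate-limited output
```

However bursty the arrivals, the output never exceeds `leak_rate` packets per tick, which is exactly the smoothing behavior described above.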

6. In what ways is the token bucket algorithm superior to the leaky bucket algorithm?

Sol:

The Token Bucket algorithm is often considered superior to the Leaky Bucket algorithm in
certain aspects. Here are some ways in which the Token Bucket algorithm is advantageous:

1. Burstiness Handling: The Token Bucket algorithm is designed to handle bursty traffic more
effectively compared to the Leaky Bucket algorithm. In the Token Bucket algorithm,
tokens are added to the bucket at a fixed rate, and each token represents a unit of data that
can be transmitted. This allows for occasional bursts of data transmission, as long as tokens
are available in the bucket. In contrast, the Leaky Bucket algorithm is more restrictive and
does not allow bursts beyond the bucket's capacity.
2. Flexibility in Transmission Rate: The Token Bucket algorithm provides more flexibility in
controlling the transmission rate. By adjusting the rate at which tokens are added to the
bucket, the algorithm can regulate the transmission rate dynamically. This allows for
adaptive congestion control based on the network conditions or specific requirements. The
Leaky Bucket algorithm, on the other hand, has a fixed leak rate, limiting its adaptability.
3. Quality of Service (QoS) Support: The Token Bucket algorithm is commonly used in
Quality of Service (QoS) implementations. It allows for the specification of different token
arrival rates and bucket sizes for different classes of traffic, enabling differentiated service
levels. This fine-grained control over traffic shaping and rate limiting is beneficial in
scenarios where different types of traffic require varying levels of bandwidth and
prioritization.
4. Traffic Policing and Shaping: The Token Bucket algorithm is well-suited for traffic
policing and shaping purposes. It can be used at network ingress points to enforce specific
traffic contracts or service level agreements (SLAs). By controlling the rate of token arrival
and the bucket size, the algorithm ensures that traffic adheres to the specified limits and
prevents excessive or unauthorized usage of network resources.

Overall, the Token Bucket algorithm offers more flexibility, adaptability, and fine-grained
control over traffic transmission compared to the Leaky Bucket algorithm. It is widely used in
networking applications and QoS implementations to regulate traffic and enforce desired policies
effectively.
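The burst-handling difference can be seen in a minimal token bucket sketch, where tokens saved up during idle periods permit short bursts. The one-token-per-packet cost and tick granularity are simplifying assumptions:

```python
class TokenBucket:
    """Simplified token bucket: tokens accrue at a fixed rate; sending spends them."""

    def __init__(self, capacity, fill_rate):
        self.capacity = capacity    # max tokens, i.e. max burst size
        self.fill_rate = fill_rate  # tokens added per tick
        self.tokens = capacity      # start full

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.fill_rate)

    def send(self, n_packets):
        """Try to send n_packets; each packet costs one token."""
        if n_packets <= self.tokens:
            self.tokens -= n_packets
            return True
        return False                # not enough tokens: must wait or drop

tb = TokenBucket(capacity=5, fill_rate=1)
print(tb.send(4))  # True  -- a burst of 4 is allowed (tokens were saved up)
print(tb.send(4))  # False -- only 1 token left
tb.tick(); tb.tick(); tb.tick()
print(tb.send(4))  # True  -- tokens accrued back to 4
```

A leaky bucket with the same average rate would have forced the first burst out at a strictly constant pace instead of letting it through at once.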

7. Explain TCP congestion control with an example


Sol:

TCP (Transmission Control Protocol) congestion control is a fundamental mechanism used to
regulate data transmission in TCP-based networks. It aims to prevent network congestion,
manage congestion when it occurs, and ensure fair sharing of network resources among
competing connections. One of the most well-known TCP congestion control algorithms is TCP
Reno.

TCP Reno uses a combination of techniques, including additive increase and multiplicative
decrease, to dynamically adjust the transmission rate based on network conditions and
congestion signals. Here's a simplified explanation of how TCP Reno congestion control works
with an example:

1. Slow Start:
o Initially, when a TCP connection is established, it enters the slow start phase. The
sender starts by sending a small number of packets, typically one segment, into the
network.
o Upon receiving acknowledgments (ACKs) from the receiver, the sender increases
its transmission rate exponentially. For example, if the sender receives one ACK
for each packet sent, it doubles the number of packets it sends in the next round.
o This exponential growth continues until a congestion event occurs, which is
typically indicated by the absence of ACKs or the receipt of duplicate ACKs.
2. Congestion Avoidance:
o Upon detecting a congestion event, TCP Reno transitions from the slow start phase
to the congestion avoidance phase. The sender reduces its transmission rate to avoid
exacerbating congestion.
o In congestion avoidance, the sender increases its transmission rate linearly, rather
than exponentially, by adding one segment per round-trip time (RTT).
o The sender monitors the occurrence of congestion events by observing packet loss
or receiving explicit congestion notification (ECN) signals from routers along the
path.
3. Fast Retransmit and Fast Recovery:
o When TCP Reno detects packet loss through the receipt of duplicate ACKs, it
assumes that some packets have been lost in the network.
o Upon receiving a specified number of duplicate ACKs, usually three or more, TCP
Reno performs a fast retransmit. It retransmits the missing packet without waiting
for a retransmission timeout (RTO).
o After the fast retransmit, TCP Reno enters the fast recovery phase. It reduces its
transmission rate, usually by cutting it in half, to alleviate congestion.
o During fast recovery, the sender continues to transmit new segments but at a
reduced rate. It also uses a congestion window (cwnd) to limit the number of
unacknowledged packets in flight.
4. Congestion Avoidance and Recovery:
o TCP Reno combines congestion avoidance and recovery mechanisms in subsequent
rounds.
o During congestion avoidance, the sender continues to increase its transmission rate
linearly until it detects another congestion event, such as packet loss or ECN
signals.
o Upon detecting congestion, TCP Reno enters a multiplicative decrease phase. It
reduces its transmission rate by halving the congestion window (cwnd) and starts
the slow start phase again.

By dynamically adjusting the transmission rate based on congestion signals, TCP Reno's
congestion control algorithm helps maintain network stability, prevent congestion collapse, and
allocate network resources fairly among TCP connections. The exact behavior and parameters of
TCP congestion control algorithms can vary across implementations and versions of TCP.
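The slow start, congestion avoidance, and multiplicative decrease phases can be traced with a toy model. The loss rounds here are fabricated for illustration, and the timeout-versus-triple-duplicate-ACK distinction of real Reno is omitted:

```python
def reno_cwnd_trace(rounds, ssthresh=8, loss_rounds=frozenset({6, 12})):
    """Trace TCP Reno's congestion window (in segments) over RTT rounds.

    Slow start doubles cwnd each round until ssthresh; congestion avoidance
    then adds 1 per round. On a loss, ssthresh = cwnd // 2 and cwnd resumes
    from the new ssthresh (as in fast recovery after triple duplicate ACKs).
    """
    cwnd, trace = 1, []
    for rnd in range(rounds):
        trace.append(cwnd)
        if rnd in loss_rounds:            # congestion event detected
            ssthresh = max(2, cwnd // 2)  # multiplicative decrease
            cwnd = ssthresh               # fast recovery: resume at ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: additive increase
    return trace

print(reno_cwnd_trace(10))  # [1, 2, 4, 8, 9, 10, 11, 5, 6, 7]
```

The trace shows the characteristic sawtooth: exponential ramp-up, linear probing for bandwidth, then a halving when congestion is detected.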

Exercise/Review questions on routing


1. Why routing is important in a packet-switched network?
Routing is crucial in a packet-switched network because it enables the efficient and reliable
delivery of data packets from the source to the destination. Here are several reasons why routing
is important:
1. Packet Delivery: In a packet-switched network, data is divided into smaller packets before
transmission. These packets can take different paths to reach their destination. Routing
determines the optimal path for each packet, ensuring that they reach the destination
efficiently and in the correct order.
2. Network Efficiency: Routing helps to optimize network resources by selecting the most
appropriate paths for packet transmission. It considers factors such as network congestion,
link availability, and bandwidth to ensure efficient utilization of network resources. By
avoiding congested or unreliable routes, routing improves the overall performance and
efficiency of the network.
3. Scalability: Routing enables the network to scale and accommodate a large number of
devices and users. By dynamically determining the best path for each packet, routing
allows the network to handle increasing traffic and adapt to changes in network topology
or link conditions.
4. Fault Tolerance: In a packet-switched network, individual links or network nodes may fail
or become unavailable. Routing protocols can detect such failures and reroute traffic
through alternative paths, ensuring that packets can still reach their destination even in the
presence of network failures. This fault tolerance improves the network's reliability and
resilience.
5. Quality of Service (QoS): Routing plays a significant role in supporting Quality of Service
requirements. Different types of network traffic, such as voice, video, or data, may have
different QoS needs. Routing protocols can prioritize certain types of traffic, allocate
bandwidth accordingly, and ensure that QoS requirements are met.
6. Security: Routing protocols can incorporate security mechanisms to protect the network
from malicious activities such as unauthorized access, data interception, or network
attacks. Secure routing protocols can authenticate network devices, encrypt traffic, and
implement access controls, enhancing the overall security of the network.
Overall, routing is essential in packet-switched networks as it enables efficient, reliable,
and secure packet delivery, optimizes network resources, supports scalability, and provides
fault tolerance and QoS capabilities.

2. What are the primary conditions that affect routing?


Ans:
Routing in computer networks can be affected by various conditions and factors. Here are some
primary conditions that can affect routing:

1. Network Congestion: When a network becomes congested, meaning there is a high volume
of traffic or limited resources available, it can affect routing decisions. Congestion may
result in increased latency, packet loss, or suboptimal routes being chosen by routing
protocols.
2. Link or Node Failures: When there are failures in network links or nodes, routing protocols
need to dynamically adapt and find alternate paths. The failure of a link or node can disrupt
the normal routing paths and require rerouting to maintain connectivity.
3. Network Topology Changes: Changes in the network topology, such as adding or removing
links or nodes, can affect routing.

3. What is static routing? What are its advantages and limitations?


Ans:

 Static routing uses preconfigured routes to send traffic to its destination, while dynamic
routing uses algorithms to determine the best path.
 Static routes are configured in advance of any network communication. Dynamic
routing, on the other hand, requires routers to exchange information with other routers to
learn about paths through the network.
 The routes are fixed (they do not change with time); they may only change if there is a
change in the topology of the network.
 Routes change slowly over time.

Advantages of static routing
 Simple
 Works well in a reliable network with a stable load
 Same for virtual-circuit and datagram networks
Disadvantages of static routing
 Lack of flexibility
 Does not react to failure or network congestion
 Not a very reliable approach; if the central node fails, all routing collapses

4. What is flooding? Why is the flooding technique not commonly used for routing?
Ans:

 Flooding is a very simple routing algorithm that sends every arriving packet out on each
outgoing link.
 Flooding is used in computer networking routing algorithms where each incoming packet
is transmitted through every outgoing link, except for the one on which it arrived.
 It is wasteful if a single destination needs the packet since it delivers the data packet to all
nodes irrespective of the destination.
 The network may be clogged with unwanted and duplicate data packets. This may hamper
the delivery of other data packets.
 Flooding generates a vast number of duplicate packets, so a suitable damping
mechanism must be used.
Flooding is a network communication technique where every incoming packet is forwarded to
every outgoing link except the one it arrived on.
In other words, when a node receives a packet, it retransmits the packet to all of its neighbors,
except for the neighbor from which the packet was received.
This process continues until the packet reaches its destination or is discarded after a certain number
of hops.
While flooding can be a simple and robust method for delivering packets in a network,

It is not commonly used for routing due to several reasons:

1. Inefficiency:
Flooding generates a large number of duplicate packets in the network, as each node
retransmits the packet to all neighbors.
This results in unnecessary bandwidth consumption, increased network congestion, and
reduced overall network performance.
2. Broadcast Storms:
If the network contains loops or redundant paths, flooding can lead to broadcast storms.
In such scenarios, packets continuously circulate in the network, consuming resources and
degrading network performance.
3. Scalability:
Flooding is not scalable for large networks. As the number of nodes and links increases,
the number of duplicate packets grows exponentially, leading to further congestion and
inefficiency.
4. Lack of Control:
Flooding does not provide any control mechanism to determine the best path for packet
delivery. It blindly forwards packets to all neighbors, even if they are not on the path
towards the destination. This lack of control can result in inefficient routing and increased
packet delivery delays.
Instead of flooding, routing protocols are commonly used in network communication.

These protocols, such as the Border Gateway Protocol (BGP) for the Internet or the Open Shortest
Path First (OSPF) protocol for internal networks, employ specific algorithms and techniques to
determine the best path for packet forwarding based on factors like network topology, link quality,
and traffic conditions. Routing protocols aim to optimize network performance, minimize
congestion, and provide efficient packet delivery while avoiding the issues associated with
flooding.

5. In what situations is flooding most appropriate? How can the drawbacks of flooding be
minimized?
Ans:
 Flooding is used in routing protocols such as OSPF (Open Shortest Path First), peer-to-peer
file transfers, systems such as Usenet, bridging, etc.
 Flooding is of three types: controlled, uncontrolled, and selective.
 Selective flooding is a slightly more practical variation: the routers do not send every
incoming packet out on every line, only on those lines that go approximately in the
direction of the destination.
 Flooding always finds the shortest path, since at least one copy of the packet follows it.
 All nodes, directly or indirectly connected, are visited
Here's a breakdown of the appropriate situations for flooding and how its drawbacks can
be minimized:
Appropriate Situations for Flooding:
1. Network Discovery: Flooding can be used in the initial stages of network setup to discover
all the nodes and their connections. This is particularly useful in self-organizing networks
or in scenarios where the network topology is dynamic.
2. Broadcasting: Flooding is suitable for broadcasting information to all nodes in a network.
For example, in scenarios where a network administrator needs to distribute time-sensitive
updates or alerts to all devices on the network, flooding can ensure that every node receives
the information.
Minimizing Drawbacks of Flooding:
1. Time-to-Live (TTL) Mechanism: One of the main drawbacks of flooding is the potential
for excessive network traffic and packet duplication, leading to congestion and
inefficiency. To minimize these drawbacks, a TTL mechanism can be implemented: each
packet carries a hop counter that is decremented at every node, and the packet is discarded
once the counter reaches zero.
2. Sequence Numbers: Another way to minimize the drawbacks of flooding is by using
sequence numbers in packets. Each packet carries a unique sequence number, and when a
packet is received at a node, the sequence number is checked to determine if it has already
been processed; duplicates are discarded instead of being flooded again.
3. Selective Flooding: Instead of flooding packets indiscriminately on all paths, selective
flooding can be employed. This approach involves using heuristics or routing tables to
determine the most likely paths that lead to the destination. By selectively flooding packets
on these paths, the number of unnecessary transmissions can be reduced while still
ensuring packet delivery.
4. Network Partitioning: In large networks, it may be beneficial to divide the network into
smaller logical sections or partitions. Within each partition, flooding can be used for local
communication, while inter-partition communication can employ other routing algorithms.
This approach helps contain the effects of flooding within smaller subsets of the network,
minimizing its impact on the overall performance.
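The TTL and duplicate-suppression safeguards can be combined in a short sketch. The graph and TTL value are made up for illustration, and a simple "seen" set stands in for per-packet sequence numbers:

```python
def flood(graph, src, ttl):
    """Flood a packet from src with a TTL and per-node duplicate suppression.

    graph: adjacency dict, e.g. {'A': ['B', 'C'], ...}
    Returns (set of nodes reached, number of link transmissions used).
    """
    seen = {src}            # nodes that have already processed the packet
    frontier = [(src, ttl)]
    transmissions = 0
    while frontier:
        node, hops_left = frontier.pop()
        if hops_left == 0:  # TTL expired: stop forwarding this copy
            continue
        for nbr in graph[node]:
            transmissions += 1
            if nbr not in seen:  # sequence-number-style duplicate suppression
                seen.add(nbr)
                frontier.append((nbr, hops_left - 1))
    return seen, transmissions

# Hypothetical 4-node topology.
g = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'], 'C': ['A', 'B'], 'D': ['B']}
reached, tx = flood(g, 'A', ttl=3)
print(sorted(reached))  # ['A', 'B', 'C', 'D']
```

Without the `seen` check, copies would ping-pong between neighbors until every TTL expired, which is exactly the broadcast-storm behavior these safeguards are meant to prevent.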

6. Why is dynamic routing preferred over static routing?


Ans:
 Dynamic routing automates routing table updates and provides optimum paths for data
transfer, making it the most cost-effective routing technique.
 Dynamic routing is simple to set up on extensive networks and is more intuitive when
choosing the best route, detecting route modifications, and discovering faraway networks.
 Routes change more quickly: routers issue periodic updates and react to link cost changes.
 The purposes of dynamic routing protocols include discovering remote networks,
maintaining up-to-date routing information, choosing the best path to destination
networks, and finding a new best path if the current path is no longer available.
 Compared to static routing, dynamic routing protocols require less administrative
overhead and help the network administrator manage the time-consuming process of
configuring and maintaining static routes.

7. What are the limitations of dynamic routing?
Ans:
 Routing decisions are more complex, placing a greater processing burden on the
switching nodes.
 Dynamic routing depends on status information that is collected at one place but used at
another, so traffic overhead increases.
 It may react too quickly to a changing network state, producing congestion-inducing
oscillation.
 Despite these drawbacks, adaptive routing is widely used: it improves performance and
can aid in congestion control.
 Part of a router’s resources are dedicated to protocol operation, including CPU time and
network link bandwidth, so there are times when static routing is more appropriate.

8. Compare and contrast distance vector routing with link state routing
Ans:
 Distance vector protocols send their entire routing table to directly connected neighbors.
 Distance vector protocols suffer from the count-to-infinity problem.
 Link state routing protocols are widely used in large networks due to their fast convergence
and high reliability.
 Link state protocols send information about directly connected links to all the routers in
the network.

Distance vector protocols

When using a distance vector protocol -- such as

 Routing Information Protocol (RIP) or

 Interior Gateway Routing Protocol (IGRP) -- each routing table entry specifies the number
of hops to each destination.

The router sends its routing table to each directly connected router and receives the tables of the
other routers in return. Routers using distance vector protocols periodically exchange their routing
tables with neighboring routers.

Distance vector protocols have their advantages and disadvantages.

Routers that use distance vector protocols periodically send out their entire routing tables, which
produces a significant load when used in a large network and could create a security risk if the
network became compromised. Because distance vector protocols determine routes based on hop
count, they can choose a slow link over a high data rate link when the hop count is lower.

Link state protocols

Link state protocols -- such as

 Open Shortest Path First (OSPF) and

 Intermediate System to Intermediate System (IS-IS) -- determine routes by exchanging a
link state packet (LSP) with each neighboring router. Each router constructs an LSP that
contains its preconfigured identifier along with information about connected networks and
subnets. The router then sends the LSP to nearby routers. Received LSPs contain additional
information about paths to other networks and link data rates. Routers combine this
information with previously known information and store it in their routing tables.

9. Based on the given figure, find the least-cost path to send data from node A to all other nodes
a) Using Dijkstra’s algorithm
b) Using the Bellman-Ford algorithm
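The worksheet's figure is not reproduced here, so as an illustration, part (a) can be sketched on a hypothetical weighted graph:

```python
import heapq

def dijkstra(graph, src):
    """Least-cost distances from src using Dijkstra's algorithm.

    graph: {node: [(neighbor, cost), ...]} with non-negative link costs.
    """
    dist = {src: 0}
    pq = [(0, src)]                 # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                # stale queue entry: a shorter path was found
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w     # relax the edge (u, v)
                heapq.heappush(pq, (d + w, v))
    return dist

# Hypothetical graph standing in for the worksheet's figure.
g = {
    'A': [('B', 2), ('C', 5)],
    'B': [('A', 2), ('C', 1), ('D', 4)],
    'C': [('A', 5), ('B', 1), ('D', 2)],
    'D': [('B', 4), ('C', 2)],
}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 2, 'C': 3, 'D': 5}
```

Note that the direct A-C link (cost 5) loses to the two-hop path A-B-C (cost 3), which is the kind of result both Dijkstra and Bellman-Ford should agree on for the worksheet's actual figure.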

10. What is the difference between interior and exterior routing protocols?
Ans:
 Interior gateway protocols are used inside an organization's network and are limited to the
border router.
 Exterior gateway protocols are used to connect the different Autonomous Systems (ASs).
Interior protocols:
 Routing Information Protocol (RIP)
 Open Shortest Path First (OSPF)
 Multicast Open Shortest Path First (MOSPF)
Exterior protocols:
 Border Gateway Protocol (BGP)
11. What is hierarchical routing?
Ans:

 In hierarchical routing, the routers are divided into regions.
 Each router has complete details about how to route packets to destinations within its own
region.
 But it does not have any idea about the internal structure of other regions.

As we know, in both LS and DV algorithms, every router needs to save some information about
other routers.

As the network grows, the number of routers increases. The routing tables therefore grow larger,
and routers can no longer handle network traffic as efficiently.

 To overcome this problem, we use hierarchical routing.
 In hierarchical routing, routers are classified in groups called regions.
 Each router has information about the routers in its own region and it has no information
about routers in other regions.
 So, routers save one record in their table for every other region.
 For huge networks, a two-level hierarchy may be insufficient hence, it may be necessary
to group the regions into clusters, the clusters into zones, the zones into groups and so on.

12. What is an autonomous system?


Ans:
A set of aggregated routers and networks managed by a single organization is called an
Autonomous System (AS).
The routers within the AS exchange information using a common routing protocol.
Routers in the same AS run the same routing protocol, known as the "intra-AS" routing protocol.
Routers in different ASs can run different intra-AS routing protocols.
13. How do routers update information in RIP?
Ans:

RIP (Routing Information Protocol) is a distance-vector routing protocol used in computer
networks to exchange routing information between routers.
RIP routers update information in the routing table using a process called
 route advertisement and
 Route convergence.
Here is a general overview of how routers update information in RIP:
1. Route Advertisement: RIP routers periodically broadcast their entire routing table to
neighboring routers.
These updates are sent as RIP packets using the User Datagram Protocol (UDP) on port
520.
The routing table contains information about networks and their associated metrics (hop
count) that the router has learned from other routers.
2. Hop Count: RIP uses hop count as its metric. Each router determines the number of hops
(intermediate routers) required to reach a particular network. The maximum hop count in
RIP is 15, meaning that a destination is considered unreachable if it is more than 15 hops
away.
3. Routing Table Updates: When a router receives a RIP update from a neighboring router, it
examines the information in the update and updates its routing table accordingly. If the
update contains information about a network that is not already in the routing table, the
router adds an entry for that network. If the update contains information about an existing
network, the router compares the hop count in the update with the hop count in its routing
table. If the update has a lower hop count, the router updates the routing table entry with
the new hop count and updates the next-hop router.
4. Split Horizon: RIP routers employ a technique called split horizon to prevent routing loops.
Split horizon means that a router does not advertise routes back to the router from which it
learned them. This prevents routing loops where packets keep getting forwarded between
two routers indefinitely.
5. Route Convergence: After receiving updates, routers update their routing tables and
exchange information with neighboring routers. This process continues until all routers in
the network have converged on consistent routing information. Convergence means that
all routers have reached a state where they have the same routing information and agree on
the best paths to reach different networks.
6. Timers: RIP routers also use timers to control the frequency of updates and to determine
when a route is considered invalid. Routers send periodic updates at regular intervals, and
if a router does not receive an update from a neighboring router within a certain time period,
it assumes that route is no longer reachable and removes it from the routing table.
It's important to note that RIP is an older routing protocol and has limitations, such as slow
convergence and limited scalability.
More modern routing protocols like OSPF (Open Shortest Path First) and BGP (Border Gateway
Protocol) are commonly used in larger networks today.
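The distance-vector update rule and split horizon described above can be sketched in Python (a simplified illustration; real RIP also handles timers, triggered updates, and route poisoning, and the router names below are hypothetical):

```python
INFINITY = 16  # in RIP, a hop count of 16 means "unreachable"

def process_rip_update(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into our routing table.

    table:      {network: (hop_count, next_hop)}
    advertised: {network: hop_count} as seen by the neighbor
    """
    for network, hops in advertised.items():
        new_hops = min(hops + 1, INFINITY)  # one extra hop to go via the neighbor
        current = table.get(network)
        # Adopt the route if it is new, if it is cheaper, or if our current
        # route already goes through this neighbor (it is authoritative for it).
        if current is None or new_hops < current[0] or current[1] == neighbor:
            table[network] = (new_hops, neighbor)

def advertise(table, to_neighbor):
    # Split horizon: never advertise a route back to the neighbor
    # it was learned from.
    return {net: hops for net, (hops, nh) in table.items() if nh != to_neighbor}

table = {"10.0.0.0/8": (2, "R2")}  # known route: 2 hops via router R2
process_rip_update(table, "R3", {"10.0.0.0/8": 3, "172.16.0.0/16": 1})
print(table)                   # 172.16.0.0/16 added at 2 hops via R3; old route kept
print(advertise(table, "R3"))  # only 10.0.0.0/8 (172.16.0.0/16 was learned from R3)
```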
14. What is the difference between OSPF and MOSPF?
Ans:
 OSPF is a link-state routing protocol in which the routers advertise the state of their directly
attached links and, based on these advertisements, each router builds up a link-state
database. It is widely used as the interior router protocol in TCP/IP networks.
 MOSPF, a multicast routing protocol, is an enhancement of the unicast OSPF protocol.
OSPF at steady state:
 All routers know the same network topology.
 "Hello" packets are sent every 10 seconds to neighbors.
 Link state advertisements (LSAs) are initially flooded from each router.
 Absence of "Hello" packets for 40 seconds indicates failure of a neighbor and causes an
LSA to be flooded again.
 LSAs are re-flooded every 20 minutes anyway.
OSPF (Open Shortest Path First) and MOSPF (Multicast Open Shortest Path First) are both routing
protocols used in computer networks, but they serve different purposes.
OSPF (Open Shortest Path First):
 OSPF is an Interior Gateway Protocol (IGP) that is primarily used for routing unicast
traffic within an autonomous system (AS).
 It operates based on the link-state algorithm and is designed to determine the
shortest path between network nodes.
 OSPF uses a metric called cost to calculate the best path, considering factors
such as bandwidth and network congestion.
 It supports multiple areas within an AS, allowing for better scalability and easier
management of large networks.
 OSPF is widely used in enterprise networks and internet service provider (ISP) networks.
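The shortest-path computation OSPF performs over its link-state database is Dijkstra's algorithm. A minimal Python sketch (the four-router topology and link costs below are hypothetical):

```python
import heapq

def shortest_paths(topology, source):
    """Dijkstra's algorithm over a link-state database.

    topology: {router: {neighbor: link_cost}}
    Returns {router: total_cost} of the cheapest path from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in topology.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical link-state database shared by all routers at steady state:
lsdb = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_paths(lsdb, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```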
MOSPF (Multicast Open Shortest Path First):
 MOSPF, on the other hand, is an extension of OSPF that specifically addresses multicast
routing. Multicast allows the transmission of data from a single sender to multiple
recipients simultaneously.
 MOSPF enables routers to exchange information about multicast group memberships and
to calculate the shortest path for delivering multicast packets.
 It uses the same link-state algorithm as OSPF but includes additional mechanisms for
handling multicast traffic. MOSPF is typically used in environments where multicast
applications, such as video conferencing or multimedia streaming, are prevalent.
 In summary, OSPF is used for unicast routing, determining the shortest path for sending
data between individual devices within a network.
 MOSPF, an extension of OSPF, focuses on multicast routing, facilitating the efficient
delivery of multicast traffic to multiple recipients.
15. What are the four types of BGP messages?
Ans:
The Border Gateway Protocol (BGP) is a routing protocol used in the Internet to exchange routing
information between autonomous systems (ASes). BGP uses several types of messages to
communicate and exchange routing information. The four main types of BGP messages are:
1. OPEN: The OPEN message is the first message exchanged between two BGP speakers to
establish a BGP session. It carries information about the BGP version number, the sender's
BGP identifier (ID), and various BGP capabilities supported by the sender.
2. UPDATE: The UPDATE message is the most important BGP message type. It carries the
actual routing information and is used to advertise new routes, withdraw existing routes,
or modify attributes of routes. The UPDATE message includes the network layer
reachability information (NLRI), which specifies the destination prefixes (network
prefixes) for the advertised routes, along with associated attributes.
3. NOTIFICATION: The NOTIFICATION message is used to report errors or exceptional
conditions in the BGP session. When a BGP speaker detects an error, it sends a
NOTIFICATION message to the peer indicating the specific error condition encountered.
The message includes an error code and a diagnostic message to provide details about the
error.
4. KEEPALIVE: The KEEPALIVE message is sent periodically to maintain the liveliness of
the BGP session. It is a simple message with no payload, and its purpose is to inform the
peer that the BGP speaker is still reachable and functioning. If a BGP speaker does not
receive a KEEPALIVE message within a certain interval, it assumes that the connection to
the peer has failed.
These four message types, OPEN, UPDATE, NOTIFICATION, and KEEPALIVE, form the basis
of BGP communication and enable the exchange of routing information and the establishment of
reliable BGP sessions between routers in different autonomous systems.
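All four message types share the fixed 19-byte header defined in RFC 4271: a 16-byte marker (all ones), a 2-byte total length, and a 1-byte type code (1 = OPEN, 2 = UPDATE, 3 = NOTIFICATION, 4 = KEEPALIVE). A small parsing sketch in Python:

```python
import struct

BGP_TYPES = {1: "OPEN", 2: "UPDATE", 3: "NOTIFICATION", 4: "KEEPALIVE"}

def parse_bgp_header(data):
    """Parse the fixed 19-byte BGP message header (RFC 4271)."""
    if len(data) < 19:
        raise ValueError("BGP header is 19 bytes")
    # Network byte order: 16-byte marker, 2-byte length, 1-byte type.
    marker, length, msg_type = struct.unpack("!16sHB", data[:19])
    if marker != b"\xff" * 16:
        raise ValueError("bad marker")
    if not 19 <= length <= 4096:
        raise ValueError("length out of range")
    return BGP_TYPES.get(msg_type, "UNKNOWN"), length

# A KEEPALIVE carries no payload, so it is just the header itself:
keepalive = b"\xff" * 16 + struct.pack("!HB", 19, 4)
print(parse_bgp_header(keepalive))  # ('KEEPALIVE', 19)
```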
16. How is a BGP connection between two routers maintained?
Ans:
 In Cisco IOS terms, the BGP process is established and the local AS number specified with
the router bgp command, and a neighbor and the neighbor's AS number are specified with
neighbor remote-as.
 A BGP (Border Gateway Protocol) connection between two routers is then maintained
through a series of steps and mechanisms.
Here's a general overview of how a BGP connection is established and maintained:
1. Establishing TCP Connection: BGP uses TCP (Transmission Control Protocol) as its
transport protocol. The two routers establish a TCP connection using a designated port
(179) to communicate with each other.
2. BGP Peer Discovery: Each router must be configured with the IP address of its BGP
neighbor (the other router it wants to establish a BGP connection with). BGP routers use
the established TCP connection to send BGP open messages containing information about
their BGP capabilities and the autonomous system (AS) they belong to.
3. BGP Peering: Once the BGP open messages have been exchanged and the routers have
agreed upon the parameters, a BGP peer relationship is established. This relationship can
be either internal (within the same AS) or external (between different ASes).
4. BGP Route Exchange: After the peering is established, the routers exchange BGP routing
updates. Each router sends its BGP neighbor information about the network prefixes it can
reach and the associated attributes, such as the AS path, next hop, and other path attributes.
5. Best Path Selection: When a router receives BGP updates from its neighbor, it evaluates
the received routes based on a set of criteria defined by the BGP policy. The router selects
the best path for each network prefix based on factors like the length of the AS path, routing
policies, and the origin of the route.
6. BGP Route Advertisement: Once the best path for each prefix is determined, the router
advertises these routes to its BGP neighbors. The advertising router encapsulates the BGP
update message in a TCP segment and sends it to the neighboring router.
7. BGP Keepalive and Hold Timer: To maintain the BGP session, routers periodically
exchange keepalive messages. These messages serve as a heartbeat to confirm that the BGP
peer is still active. Additionally, BGP routers implement a hold timer mechanism to detect
if the BGP neighbor fails to respond within a certain time. If the hold timer expires, the
BGP session is considered down, and the routers attempt to reestablish the connection.
8. BGP Convergence: BGP convergence refers to the process where routers reach a
consistent and stable state where they have exchanged all required routing information and
have selected the best paths for each network prefix. Convergence may take some time
depending on the size of the network and the complexity of the BGP policies.
9. BGP Monitoring and Maintenance: Administrators monitor BGP connections to ensure
their stability and troubleshoot any issues that may arise. BGP routers can log neighbor
state changes and session events to support this monitoring.
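The keepalive/hold-timer interaction from step 7 can be sketched as follows (a simplified illustration; the 90-second hold time and the keepalive interval of one third of the hold time are common conventions, not fixed requirements):

```python
import time

class BgpSession:
    """Minimal hold-timer bookkeeping for one BGP peer (illustrative only)."""

    def __init__(self, hold_time=90):
        self.hold_time = hold_time                 # seconds
        self.keepalive_interval = hold_time // 3   # conventional ratio
        self.last_heard = time.monotonic()

    def on_message(self):
        # Any KEEPALIVE or UPDATE from the peer restarts the hold timer.
        self.last_heard = time.monotonic()

    def is_expired(self, now=None):
        # If the hold timer expires, the session is declared down and the
        # routers attempt to reestablish the connection.
        now = time.monotonic() if now is None else now
        return (now - self.last_heard) > self.hold_time

session = BgpSession(hold_time=90)
print(session.keepalive_interval)                   # 30
print(session.is_expired())                         # False: peer just heard
print(session.is_expired(session.last_heard + 91))  # True: hold timer expired
```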