Unit 3: Transport Protocols and QoS in Ad Hoc Wireless Networks
• TCP’s challenges and Design Issues in Ad Hoc Networks – Transport
QoS – MAC Layer QoS solutions – Network Layer QoS solutions – QoS
Model.
Introduction
• The objectives of a transport layer protocol include the setting up of an end-to-end connection, end-to-end delivery of data packets, flow control, and congestion control. Two traditional classes of transport layer protocols exist:
• UDP: an unreliable, connectionless transport layer protocol, and
• TCP: a reliable, byte-stream-based, connection-oriented transport layer protocol.
• These traditional wired transport layer protocols are not suitable for ad hoc wireless networks due to the inherent problems associated with such networks.
ISSUES IN DESIGNING A TRANSPORT LAYER
PROTOCOL FOR AD HOC WIRELESS NETWORKS
1. Induced Traffic:
• In a path having multiple links, the traffic at any given link (or path) due to the traffic through neighboring links (or paths) is referred to as induced traffic.
• This is due to the broadcast nature of the channel and the location-dependent
contention on the channel
• Induced Traffic affects the throughput achieved by the transport layer protocol.
2. Induced throughput unfairness:
• This refers to the throughput unfairness at the transport layer due to the throughput/delay unfairness existing at the lower layers, such as the network and MAC layers.
• A transport layer protocol should take this into account in order to provide a fair share of throughput across contending flows.
3. Separation of congestion control, reliability and flow control:
• A transport layer protocol can provide better performance if end-to-end reliability,
flow control and congestion control are handled separately.
• Reliability and flow control are end-to-end activities, whereas congestion can at times
be a local activity
• The objective is to minimize the additional control overhead generated by these mechanisms.
4. Power and bandwidth constraints:
• Nodes in ad hoc wireless networks face resource constraints including the two most
important resources: (i) power source and (ii) bandwidth
• The performance of a Transport layer protocol is significantly affected by these
resource constraints
5. Interpretation of congestion:
• Interpretation of network congestion as used in traditional networks is not
appropriate in ad hoc networks.
• This is because, besides congestion, the high error rate of the wireless channel, location-dependent contention, the hidden terminal problem, packet collisions, path breaks due to node mobility, and node failures due to drained batteries can also lead to packet loss in ad hoc wireless networks.
6. Completely decoupled transport layer:
• Another challenge faced by a transport layer protocol is its interaction with the lower layers.
• Cross-layer interaction between the transport layer and lower layers is important
to adapt to the changing network environment
7. Dynamic topology:
• Ad hoc wireless networks experience rapidly changing network topology due to the mobility of nodes.
• This leads to frequent path breaks, partitioning and remerging of the network, and high delay in re-establishing paths.
• Performance is affected by rapid changes in network topology.
DESIGN GOALS OF A TRANSPORT LAYER
PROTOCOL FOR AD HOC WIRELESS NETWORKS
• The protocol should maximize the throughput per connection.
• It should provide throughput fairness across contending flows.
• It should incur minimum connection set up and connection maintenance overheads.
• It should have mechanisms for congestion control and flow control in the network.
• It should be able to provide both reliable and unreliable connections as per the requirements of
the application layer.
• It should be able to adapt to the dynamics of the network such as rapid changes in topology.
• Bandwidth must be used efficiently.
• It should be aware of resource constraints such as battery power and buffer sizes and make
efficient use of them.
• It should make use of information from the lower layers for improving the network throughput.
• It should have a well-defined cross-layer interaction framework.
• It should maintain End-to-End Semantics.
CLASSIFICATION OF TRANSPORT LAYER
SOLUTIONS
TCP OVER AD HOC WIRELESS NETWORKS
• The transmission control protocol (TCP) is the most predominant transport layer
protocol in the Internet today.
• It carries more than 90 percent of the traffic on the Internet.
• Its reliability, end-to-end congestion control mechanism, byte stream transport
mechanism, and, above all, its elegant and simple design have not only contributed
to the success of the Internet, but also have made TCP an influencing protocol in the
design of many of the other protocols and applications.
• Its adaptability to the congestion in the network has been an important feature
leading to graceful degradation of the services offered by the network at times of
extreme congestion.
• TCP in its traditional form was designed and optimized only for wired networks.
• Since TCP is widely used today and the efficient integration of an ad hoc wireless
network with the Internet is paramount wherever possible, it is essential to have
mechanisms that can improve TCP's performance in ad hoc wireless networks.
Why Does TCP Not Perform Well in Ad Hoc
Wireless Networks?
• The major reasons behind throughput degradation that TCP faces when used in
ad hoc wireless networks are the following:
1.Misinterpretation of packet loss:
• Traditional TCP was designed for wired networks where the packet loss is mainly
attributed to network congestion.
• Ad hoc wireless networks experience a much higher packet loss due to factors
such as high bit error rate (BER) in the wireless channel, increased collisions due
to the presence of hidden terminals, presence of interference, location-
dependent contention, uni-directional links, frequent path breaks due to mobility
of nodes, and the inherent fading properties of the wireless channel.
2.Frequent path breaks:
• If the route re-establishment time is greater than the retransmission timeout (RTO) period of the TCP sender, the sender assumes congestion in the network, retransmits the lost packets, and initiates the congestion control algorithm. This leads to wastage of bandwidth and battery power.
3.Effect of path length:
• It is found that the TCP throughput degrades rapidly with an increase in path
length in string (linear chain) topology ad hoc wireless networks
• This is shown in Figure 9.3. The possibility of a path break increases with path
length.
4. Misinterpretation of congestion window:
• When there are frequent path breaks, the congestion window may not reflect
the maximum transmission rate acceptable to the network and the receiver.
5. Asymmetric link behavior:
• The radio channel used in ad hoc wireless networks has properties such as location-dependent contention and directional characteristics that lead to asymmetric links.
• This can lead to TCP invoking the congestion control algorithm and several
retransmissions.
6. Unidirectional path:
• TCP relies on end-to-end ACKs for ensuring reliability. A break in the reverse path, which may be entirely different from the forward path, can affect performance as much as a path break in the forward path.
7. Multipath Routing:
• For TCP, multipath routing leads to a significant number of out-of-order packets, which in turn generate duplicate acknowledgments (DUPACKs), causing additional power consumption and invocation of congestion control.
8. Network partitioning and remerging:
• The randomly moving nodes in an ad hoc wireless network can lead to network
partitions.
• As long as the TCP sender, the TCP receiver, and all the intermediate nodes in the
path between the TCP sender and the TCP receiver remain in the same partition,
the TCP connection will remain intact.
• When a partition occurs, it is likely that the sender and receiver of the TCP session end up in different partitions; in certain cases, only the intermediate nodes are affected by the partitioning.
• Figure 9.5 illustrates the effect of network partitions in ad hoc wireless networks.
9. The use of sliding window based transmission:
• TCP uses a sliding window for flow control.
• This can contribute to degraded performance in bandwidth constrained ad hoc
wireless network.
• It can also lead to burstiness in traffic due to the subsequent transmission of TCP segments.
Feedback-Based TCP (TCP-F)
• Improves performance of TCP.
• Uses a feedback based approach.
• The routing protocol is expected to repair the broken path within a reasonable
time period
Operation:
• In TCP-F, an intermediate node, upon detection of a path break, originates a route failure notification (RFN) packet. This intermediate node is called the failure point (FP).
• This RFN packet is routed toward the sender of the TCP session; the sender's address is obtained from the TCP packets being forwarded along the path.
• If an intermediate node that receives the RFN has an alternate route to the same destination, it discards the RFN packet and uses the alternate path for forwarding further data packets, thus reducing the control overhead involved in the route reconfiguration process.
• When the TCP sender receives an RFN packet, it goes into a state called snooze. In this state, the sender:
• Stops sending any more packets to the destination.
• Cancels all timers.
• Freezes its congestion window.
• Freezes the retransmission timer.
• Sets up a route failure timer.
• When the route failure timer expires, the TCP sender changes from the snooze state to the connected state.
• When the route has been re-established, the failure point sends a route re-establishment notification (RRN) packet to the sender, and the TCP state is changed back to the connected state.
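• The connected/snooze behaviour described above can be summarised in a minimal sketch; the class, event names, and timer handling below are illustrative assumptions, not part of the TCP-F specification.

```python
# Minimal sketch of the TCP-F sender state transitions described above.
# All names (TcpFSender, on_rfn, ...) are illustrative; this is not a TCP stack.
import time

CONNECTED, SNOOZE = "connected", "snooze"

class TcpFSender:
    def __init__(self, route_failure_timeout=2.0):
        self.state = CONNECTED
        self.cwnd_frozen = None                    # frozen congestion window
        self.route_failure_timeout = route_failure_timeout
        self.snooze_entered_at = None

    def on_rfn(self, cwnd):
        """Route failure notification: freeze state and stop transmitting."""
        self.state = SNOOZE
        self.cwnd_frozen = cwnd                    # freeze congestion window
        self.snooze_entered_at = time.monotonic()  # start route failure timer

    def on_rrn(self):
        """Route re-establishment notification: resume with the frozen window."""
        self.state = CONNECTED
        return self.cwnd_frozen

    def on_timer_tick(self):
        """If the route failure timer expires, fall back to the connected state."""
        if self.state == SNOOZE and (
            time.monotonic() - self.snooze_entered_at > self.route_failure_timeout
        ):
            self.state = CONNECTED

    def can_send(self):
        return self.state == CONNECTED

# Example: the sender freezes on RFN and resumes on RRN.
s = TcpFSender()
s.on_rfn(cwnd=8)
assert not s.can_send()
s.on_rrn()
assert s.can_send()
```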
Advantages :
• Simple feedback solution for problem arising from path breaks.
• Permits TCP congestion control mechanism to respond to congestion in
the network.
Disadvantages:
• If a route to sender is not available at the FP, then additional control
packets may need to be generated for routing RFN packets.
• TCP-F has an additional state compared to traditional TCP state
mechanism.
• Congestion window used after a new route is obtained may not reflect the
achievable transmission rate acceptable to the network and the TCP-F
receiver.
TCP with Explicit Link Failure Notification (TCP-ELFN)
• Improves TCP performance in ad hoc wireless network.
• Similar to TCP-F except for the handling of explicit link failure notification (ELFN)
and the use of TCP probe packets for detecting the route reestablishment.
• The ELFN packet is originated by the node detecting the link failure and is sent to the TCP sender.
• This can be implemented in two ways:
• (i) by sending an ICMP destination unreachable (DUR) message to the sender, or
• (ii) by piggy-backing this information on the RouteError message that is sent to the sender.
• Once the TCP sender receives the ELFN packet, it disables its retransmission
timers and enters a standby state.
• In this state, it periodically originates probe packets to see if a new route has been re-established. Upon receiving an ACK from the TCP receiver for a probe packet, it leaves the standby state, restores the retransmission timers, and continues to function as normal.
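• A rough sketch of the standby/probe behaviour just described: on receiving an ELFN the sender stops retransmitting and probes periodically until an ACK returns. The function names, probe interval, and callback interface are assumptions made for illustration.

```python
# Illustrative TCP-ELFN standby loop: probe periodically until an ACK arrives.
# send_probe and wait_for_ack are placeholder callbacks, not real socket calls.

def standby_until_route_restored(send_probe, wait_for_ack,
                                 probe_interval=1.0, max_probes=20):
    """Return True once an ACK for a probe is seen (route re-established)."""
    for _ in range(max_probes):
        send_probe()                          # one probe packet per interval
        if wait_for_ack(timeout=probe_interval):
            return True                       # leave standby, restore timers
    return False                              # give up (e.g., network partitioned)

# Example with stub callbacks: the route "comes back" after three probes.
attempts = {"n": 0}
restored = standby_until_route_restored(
    send_probe=lambda: attempts.__setitem__("n", attempts["n"] + 1),
    wait_for_ack=lambda timeout: attempts["n"] >= 3,
    probe_interval=0.0,
)
print("route restored after", attempts["n"], "probes:", restored)
```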
• Advantages:
• Improves TCP performance by decoupling the path break information from
the congestion information by the use of ELFN.
• Less dependent on routing protocol & requires only link failure notification
about the path break.
• Disadvantages:
• When the network is temporarily partitioned, the path failure may last
longer & this can lead to the origination of periodic probe packets
consuming bandwidth & power.
• Congestion window used after a new route is obtained may not reflect the
achievable transmission rate acceptable to the network and the TCP
receiver.
TCP-BuS
• It is similar to TCP-F and TCP-ELFN in its use of feedback information from an
intermediate node on detection of a path break. But it is more dependent on the
routing protocol.
• TCP-BuS was proposed with the associativity-based routing (ABR) protocol as the underlying routing scheme. Hence it makes use of special messages such as LQ and REPLY for finding a partial path.
• Operation:
• Upon detection of a path break, an upstream intermediate node, called the pivot node (PN), originates an explicit route disconnection notification (ERDN) message to the TCP-BuS sender.
• The ERDN packet is propagated in a reliable way.
• Upon receiving ERDN packet, the TCP-BuS sender stops transmission and freezes
all timers and windows as in TCP-F.
• The packets in transit at the intermediate nodes from the TCP-BuS sender to the PN are buffered until a new partial path from the PN to the TCP-BuS receiver is obtained by the PN.
• Upon detection of a path break, the downstream node originates a Route
Notification (RN) packet to the TCP-BuS receiver, which is forwarded by all
the downstream nodes in the path.
• The PN attempts to find a new partial path (route) to the TCP-BuS receiver, and the availability of such a partial path to the destination is conveyed to the TCP-BuS sender through an explicit route successful notification (ERSN) packet. TCP-BuS utilizes the route reconfiguration mechanism of ABR to obtain the partial path to the destination.
• Upon a successful LQ-REPLY process to obtain a new route to the TCP-BuS receiver, the PN informs the TCP-BuS sender of the new partial path using an ERSN packet (which is sent reliably).
• TCP-BuS sender also periodically originates probe packets to check the
availability of a path to the destination.
• The figure below illustrates the operation of TCP-BuS.
• Advantages:
• Performance improvement.
• Avoidance of fast retransmission due to the use of buffering, sequence
numbering, and selective acknowledgement.
• Also takes advantage of the underlying routing protocols.
• Disadvantages:
• Increased dependency on the routing protocol and the buffering at the
intermediate nodes.
• The failure of intermediate nodes that buffer the packets may lead to loss
of packets and performance degradation.
• The dependency on the routing protocol may degrade its performance with other routing protocols that do not have control messages similar to those of ABR.
AD HOC TCP (ATCP)
• Based on feedback information received from the intermediate nodes, the
TCP sender changes its state to the
• Persist state.
• Congestion control state or
• Retransmission state.
• When an intermediate node finds that the network is partitioned, then the
TCP sender state is changed to the persist state where it avoids
unnecessary retransmissions.
• Figure shows the thin layer implementation of ATCP between the
traditional TCP layer and the IP layer.
• This does not require changes in the existing TCP protocol.
• This layer is active only at the TCP sender.
• The major function of the ATCP layer is to monitor:
• The packets sent and received by the TCP sender,
• The state of the TCP sender,
• The state of the network.
• Fig (b) shows the state transition diagram for the ATCP at the TCP sender.
• The four states in the ATCP are:
• 1. NORMAL.
• 2. CONGESTED
• 3. LOSS
• 4. DISCONN
• When a TCP connection is established, the ATCP sender is in the NORMAL state; here ATCP does not interfere with the operation of TCP and remains invisible.
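• The four ATCP states listed above can be viewed as a small state machine at the sender; the event names and transition rules in this sketch are simplified assumptions condensed from the description, not the exact ATCP rules.

```python
# Illustrative ATCP-like state machine at the TCP sender. The four states come
# from the text above; the event names and transitions are simplified guesses.

NORMAL, CONGESTED, LOSS, DISCONN = "NORMAL", "CONGESTED", "LOSS", "DISCONN"

TRANSITIONS = {
    (NORMAL, "congestion_notification"): CONGESTED,  # congestion indication
    (NORMAL, "packet_loss"): LOSS,                   # loss not due to congestion
    (NORMAL, "network_partition"): DISCONN,          # partition detected
    (CONGESTED, "recovered"): NORMAL,
    (LOSS, "new_ack"): NORMAL,
    (DISCONN, "route_restored"): NORMAL,             # leave the persist state
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)    # unknown events: stay put

# Example: a partition forces DISCONN (persist); reconnection returns to NORMAL.
state = NORMAL
for event in ["network_partition", "route_restored", "packet_loss", "new_ack"]:
    state = next_state(state, event)
    print(event, "->", state)
```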
• Advantages:
• It maintains the end to end semantics of TCP.
• It is compatible with traditional TCP.
• Improves the throughput of TCP in ad hoc wireless networks.
• Disadvantages:
• Dependency on the network layer protocol to detect the route
changes and partitions.
• Addition of thin ATCP layer to TCP/IP protocol stack requires changes
in the interface functions currently being used
Split TCP
• A major issue that affects the performance of TCP over ad hoc wireless networks is the degradation of throughput with increasing path length.
• This can also lead to unfairness among TCP sessions, where one session may obtain much higher throughput than other sessions.
• This unfairness problem is further worsened by the use of MAC protocols,
which are found to give a higher throughput for certain link level sessions,
leading to an effect known as channel capture.
• Split TCP provides a unique solution to this problem by splitting the transport
layer objectives into:
• Congestion control.
• End to End reliability.
• In addition, split TCP splits a long TCP connection into a set of short
concatenated TCP connections (called segments or zones) with a number of
selected intermediate nodes (known as proxy nodes) as terminating points of
these short connections.
• Figure illustrates the operation of split-TCP where a three segment
split –TCP connection exists between source node1 and destination
node 15.
• A proxy node receives the TCP packets, reads their contents, stores them in its local buffer, and sends an acknowledgment to the source (or the previous proxy node).
• This acknowledgment, called a local acknowledgment (LACK), does not guarantee end-to-end delivery.
• The responsibility of further delivery of packets is assigned to the
proxy node.
• In the figure, node 1 initiates a TCP session to node 15; nodes 4 and 13 are chosen as proxy nodes.
• The number of proxy nodes in a TCP session is determined by the
length of the path between source & destination node.
• Based on a distributed algorithm, the intermediate nodes that
receive TCP packets determine whether to act as a proxy node or just
as a simple forwarding node.
• In the figure, the path between nodes 1 and 4 is the first zone (segment), the path between nodes 4 and 13 is the second zone (segment), and the last zone is between nodes 13 and 15.
• Proxy node 4, upon receipt of each TCP packet from source node 1, acknowledges it with a LACK packet and buffers the received packets. The buffered packets are forwarded to the next proxy node at a transmission rate proportional to the rate of arrival of LACKs from the next proxy node or the destination.
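• The proxy behaviour just described can be sketched as a toy model: buffer each segment, LACK it toward the previous zone, and forward buffered segments only as fast as LACKs arrive from the next zone. The class and the one-credit-per-LACK pacing rule are illustrative assumptions, not the actual Split-TCP rate control.

```python
# Toy model of a Split-TCP proxy node: it LACKs each segment received from the
# previous zone, buffers it, and forwards buffered segments at a pace governed
# by the LACKs arriving from the next zone.
from collections import deque

class SplitTcpProxy:
    def __init__(self, forward, send_lack):
        self.buffer = deque()
        self.credits = 0            # one forwarding credit per LACK from next zone
        self.forward = forward      # callable: send a segment toward the next zone
        self.send_lack = send_lack  # callable: LACK back toward the previous zone

    def on_segment_from_previous_zone(self, segment):
        self.send_lack(segment)     # local ACK, not an end-to-end guarantee
        self.buffer.append(segment)
        self._drain()

    def on_lack_from_next_zone(self):
        self.credits += 1
        self._drain()

    def _drain(self):
        while self.credits > 0 and self.buffer:
            self.forward(self.buffer.popleft())
            self.credits -= 1

# Example: two segments arrive, but only one is forwarded until a LACK returns.
sent, lacks = [], []
p = SplitTcpProxy(forward=sent.append, send_lack=lacks.append)
p.on_lack_from_next_zone()          # initial credit
p.on_segment_from_previous_zone("seg1")
p.on_segment_from_previous_zone("seg2")
print(sent)                         # ['seg1']  (seg2 waits for the next LACK)
p.on_lack_from_next_zone()
print(sent)                         # ['seg1', 'seg2']
```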
• Advantages:
• Improved throughput.
• Improved throughput fairness.
• Lessened impact of mobility.
• Disadvantages:
• Requires modifications to TCP protocol.
• End to End connection handling of traditional TCP is violated.
• The failure of proxy nodes can lead to throughput degradation.
QUALITY OF SERVICE IN AD HOC
WIRELESS NETWORKS
• Quality of service (QoS) is the performance level of a service offered by the network to the user.
• The goal of QoS provisioning is to achieve a more deterministic network behavior, so that
information carried by the network can be better delivered and network resources can be better
utilized.
• A network or a service provider can offer different kinds of services to the users.
• Here, a service can be characterized by a set of measurable, pre-specified service requirements such as
• minimum bandwidth,
• maximum delay,
• maximum delay variance (jitter),and
• maximum packet loss rate.
• After accepting a service request from the user, the network has to ensure that the service
requirements of the user's flow are met, as per the agreement, throughout the duration of the
flow (a packet stream from the source to the destination).
• In other words, the network has to provide a set of service guarantees while transporting a flow.
ISSUES AND CHALLENGES IN PROVIDING QOS
IN AD HOC WIRELESS NETWORKS
• Ad hoc wireless networks have certain unique characteristics that pose several difficulties
in provisioning QoS. Some of them are
Dynamically varying network topology:
• Since the nodes in an ad hoc wireless network do not have any restriction on mobility,
the network topology changes dynamically.
• Hence, the admitted QoS sessions may suffer due to frequent path breaks, thereby
requiring such sessions to be reestablished over new paths.
• The delay incurred in reestablishing a QoS session may cause some of the packets
belonging to that session to miss their delay targets/deadlines, which is not acceptable
for applications that have stringent QoS requirements.
Lack of central coordination:
• Unlike wireless LANs and cellular networks, ad hoc wireless networks do not have central
controllers to coordinate the activity of nodes.
• This further complicates QoS provisioning in ad hoc wireless networks.
Imprecise state information:
• In most cases, the nodes in an ad hoc wireless network maintain both the link-
specific state information and flow-specific state information.
• The state information is inherently imprecise due to dynamic changes in network
topology and channel characteristics.
• Hence, routing decisions may not be accurate, resulting in some of the real-time
packets missing their deadlines.
Error-prone shared radio channel:
• The radio channel is a broadcast medium by nature. During propagation through
the wireless medium, the radio waves suffer from several impairments such as
attenuation, multipath propagation, and interference
Limited resource availability:
• Resources such as bandwidth, battery life, storage space, and processing
capability are limited in ad hoc wireless networks.
• Out of these, bandwidth and battery life are critical resources, the availability of
which significantly affects the performance of the QoS provisioning mechanism.
• Hence, efficient resource management mechanisms are required for optimal
utilization of these scarce resources.
Hidden terminal problem:
• The hidden terminal problem is inherent in ad hoc wireless networks. This
problem occurs when packets originating from two or more sender nodes, which
are not within the direct transmission range of each other, collide at a common
receiver node.
• It necessitates the retransmission of the packets, which may not be acceptable for flows that have stringent QoS requirements.
Insecure medium:
• Due to the broadcast nature of the wireless medium, communication through a
wireless channel is highly insecure.
• Therefore, security is an important issue in ad hoc wireless networks, especially
for military and tactical applications.
• Ad hoc wireless networks are susceptible to attacks such as eavesdropping,
spoofing, denial of service, message distortion, and impersonation.
• Without sophisticated security mechanisms, it is very difficult to provide secure
communication guarantees acceptable for flows that have stringent QoS
requirements
Design choices for providing QoS support
1. Hard State vs. Soft State Resource Reservation
• Hard State Reservation:
• Resources are explicitly reserved at all nodes along the path for the session duration.
• Requires explicit release mechanisms when the path breaks, leading to additional
control overhead.
• Risk of resource wastage if nodes become unreachable without releasing
reservations.
• High call blocking ratio during high network loads.
Soft State Reservation:
• Resources are reserved temporarily and refreshed based on packet arrivals within a
timeout period.
• Resources are automatically deallocated if no packets arrive before the timeout.
• Reduces control overhead as no explicit teardown is required.
• Offers better call acceptance at degraded quality under heavy load conditions.
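• The soft-state idea above can be illustrated at a single node: every packet arrival refreshes the reservation, and the node silently reclaims the resources once no packet has been seen within the timeout. The class, timeout value, and method names are assumptions for illustration.

```python
# Illustrative soft-state reservation at one node: arrivals refresh the state;
# if no packet arrives within the timeout, the slots are released without any
# explicit teardown message.

class SoftStateReservation:
    def __init__(self, slots, timeout):
        self.slots = slots            # e.g., reserved TDMA slots
        self.timeout = timeout        # refresh window in seconds
        self.last_refresh = None

    def on_packet(self, now):
        self.last_refresh = now       # every data packet refreshes the state

    def expired(self, now):
        return self.last_refresh is None or (now - self.last_refresh) > self.timeout

    def maybe_release(self, now):
        """Called periodically; deallocates the reservation after the timeout."""
        if self.expired(now):
            released, self.slots = self.slots, []
            return released
        return []

# Example: a packet at t=0 keeps the reservation alive at t=1; it is gone by t=3.
r = SoftStateReservation(slots=[2, 5], timeout=2.0)
r.on_packet(now=0.0)
print(r.maybe_release(now=1.0))   # []       -- still refreshed
print(r.maybe_release(now=3.0))   # [2, 5]   -- timed out, slots reclaimed
```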
2. Stateful vs. Stateless Approaches
Stateful Approach:
• Nodes maintain state information (global or local) such as topology and flow-specific details.
• Global state allows centralized routing but incurs high control overhead for maintaining
accuracy.
• Local state enables distributed routing with reduced overhead but requires care to avoid
routing loops.
Stateless Approach:
• Nodes do not maintain flow-specific or link-specific state information.
• Solves scalability issues and reduces storage and computational burdens.
• Challenges arise in providing QoS guarantees due to the lack of state information.
3. Hard QoS vs. Soft QoS Approaches
Hard QoS:
• Guarantees QoS requirements for the entire session duration.
• Difficult to implement in dynamic environments like ad hoc wireless networks.
Soft QoS:
• Provides QoS guarantees within statistical bounds rather than absolute certainty.
• More feasible for networks with high variability and mobility.
CLASSIFICATIONS OF QOS SOLUTIONS
Layer-wise classification of QoS solutions.
MAC LAYER SOLUTIONS
• The MAC protocol determines which node should transmit next on the broadcast
channel when several nodes are competing for transmission on that channel.
• The existing MAC protocols for ad hoc wireless networks use channel sensing and
random back-off schemes, making them suitable for best-effort data traffic.
• Real-time traffic (such as voice and video) requires bandwidth guarantees.
• Supporting real-time traffic in these networks is a very challenging task.
• Several proprietary MAC protocols have been developed to address QoS challenges in ad
hoc wireless networks
• These protocols aim to provide efficient resource allocation and real-time support,
tailored to the decentralized and dynamic nature of such networks.
• QoS support MAC protocols are
• Cluster TDMA
• IEEE 802.11e
• DBASE (distributed bandwidth allocation/sharing/extension)
Cluster TDMA
• In this clustering approach, nodes are split into different groups.
• Each group has a cluster-head (elected by members of that group), which acts as a
regional broadcast node and as a local coordinator to enhance the channel throughput.
• Every node within a cluster is one hop away from the cluster-head.
• The formation of clusters and selection of cluster-heads are done in a distributed
manner.
• Clustering algorithms split the nodes into clusters so that they are interconnected and
cover all the nodes
• Cluster-Head Selection Algorithms:
• Lowest-ID Algorithm: A node with the lowest ID among neighbors becomes the cluster-head.
• Highest-Degree Algorithm: A node with the highest degree (most neighbors) becomes the cluster-
head.
• Least Cluster Change (LCC) Algorithm: Cluster-head changes only occur if:
• Two cluster-heads merge into one cluster.
• A node moves out of range of all cluster-heads.
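• The lowest-ID and highest-degree election rules above can be sketched in a few lines; the neighbour-map representation used below is an assumption made only for this illustration.

```python
# Sketch of the lowest-ID and highest-degree cluster-head selection rules.
# 'neighbors' maps each node ID to the set of nodes within one hop of it.

def lowest_id_head(node, neighbors):
    """The node with the lowest ID in the one-hop neighbourhood is cluster-head."""
    return min(neighbors[node] | {node})

def highest_degree_head(node, neighbors):
    """The neighbourhood node with the most one-hop neighbours is cluster-head."""
    candidates = neighbors[node] | {node}
    return max(candidates, key=lambda n: (len(neighbors[n]), -n))  # ties: lower ID

# Example topology: node 1 has the lowest ID; node 3 has the highest degree.
neighbors = {
    1: {2, 3},
    2: {1, 3},
    3: {1, 2, 4},
    4: {3},
}
print(lowest_id_head(3, neighbors))       # 1
print(highest_degree_head(2, neighbors))  # 3
```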
•MAC Techniques in Clusters:
•Intra-Cluster Access Control:
•TDMA (Time Division Multiple Access): Manages channel access within a cluster.
•CDMA (Code Division Multiple Access): Allows multiple sessions to share a TDMA slot.
•Inter-Cluster Access Control:
•Spatial reuse of time-slots or different spreading codes mitigate inter-cluster interference.
•Synchronous Time Division Frame:
•Divided into control phase and data phase.
•Control Phase Tasks:
•Frame and slot synchronization.
•Routing and clustering.
•Power management and code assignment.
•Virtual Circuit (VC) setup.
•Data Phase Tasks:
•Supports real-time and best-effort traffic.
•Allocates slots based on VC bandwidth requirements.
•Free slots are used for best-effort traffic with a slotted-ALOHA scheme.
•Cluster-Head Responsibilities:
•Reserves VC slots and assigns codes.
•Maintains a power gain matrix for controlling transmission power and
managing intra-cluster codes.
•Node Operations:
•Broadcasts control information in predefined control slots.
•Updates routing tables, slot reservation statuses, and power gain matrices.
•Schedules free slots and verifies reserved slot statuses.
•Real-Time and Best-Effort Traffic:
•Real-time sessions are allocated sufficient slots during the data phase.
•Idle reserved slots are released after a timeout period.
•Expired real-time packets are dropped.
•Fast Reservation Scheme:
•Reservations are made during the transmission of the first packet.
•Same slots are reused for subsequent frames of the connection.
•Idle reserved slots are released automatically if unused for a timeout period.
IEEE 802.11 MAC Protocol
•Modes of Operation:
•Distributed Coordination Function (DCF):
•Operates without centralized control.
•Mandatory in all 802.11 WLAN implementations.
•Point Coordination Function (PCF):
•Requires a central Access Point (AP) to coordinate node activities.
•Optional in the 802.11 standard.
•Inter-Frame Space (IFS):
•The time interval between the transmission of consecutive frames.
•Types of IFS:
1.Short IFS (SIFS): Shortest delay, used for high-priority transmissions like ACKs.
2.PCF IFS (PIFS): Slightly longer than SIFS, used in PCF mode for AP-controlled transmissions.
3.DCF IFS (DIFS): Longer than PIFS, used in DCF mode for regular frame transmissions.
4.Extended IFS (EIFS): Longest delay, used after error detection to prevent collisions.
Distributed Coordination Function (DCF):
• When a station has a frame to transmit, it waits for a random backoff time. The random backoff time is defined by a contention window consisting of a random number of time slots. The backoff time is given by the following equation:
Time_backoff = random() × Time_slot
• Here, random() generates a random number of slots (drawn from the contention window) and Time_slot is the time period for one slot.
• If the station senses that the channel is busy during the contention period, it pauses its timer till
the channel is clear.
• At the end of the backoff period, if the channel is clear, the station will wait for an amount of
time equal to DIFS (Distributed Inter-Frame Space) and sense the channel again.
• If the channel is still clear, the station transmits a RTS (request to send) frame.
• The destination station responds using a CTS (clear to send) frame if it is available.
• Then the transmitting station sends the data frames.
• After the frames are sent, the transmitting station waits for a time equal to SIFS (Short Inter-
Frame Space) for the acknowledgement.
• At the end of this transmission process, the station again waits for the backoff time before the
next transmission.
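• The back-off computation and the sequence of events above can be sketched as follows; the slot time, contention window value, and the absence of any channel model are simplifying assumptions, not values from the standard.

```python
# Sketch of the DCF access procedure described above:
# Time_backoff = random() x Time_slot, followed by DIFS sensing, RTS/CTS,
# DATA, SIFS, and ACK. Numeric values are illustrative only.
import random

SLOT_TIME = 20e-6        # assumed slot duration in seconds (illustrative)

def backoff_time(cw):
    """Back-off of a random number of slots drawn from [0, cw]."""
    return random.randint(0, cw) * SLOT_TIME

def dcf_exchange(cw=15):
    """Order of events for one DCF transmission with RTS/CTS (no channel model)."""
    return [
        ("backoff_wait_s", backoff_time(cw)),  # random back-off while channel idle
        ("wait", "DIFS"),                      # sense the channel again after DIFS
        ("send", "RTS"),
        ("recv", "CTS"),
        ("send", "DATA"),
        ("wait", "SIFS"),
        ("recv", "ACK"),
    ]

for step in dcf_exchange():
    print(step)
```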
• Point coordination function (PCF)
• In PCF operation, a special scheduler called the point coordinator (PC), located at the access point, takes control of the wireless channel and schedules the data transmission and reception of all the stations (WLAN user devices). In PCF, the access point seizes the radio channel by transmitting a beacon frame after the PIFS interval, during which no other station is allowed to transmit.
• Step 1 − PC sends a beacon frame after waiting for PIFS. The beacon frame reaches every station
in the wireless network.
• Step 2 − If AP has data for a particular station, say station X, it sends the data and a grant to
station X.
• Step 3 − When station X gets the grant from the AP, if it has a data frame for AP, it transmits data
and acknowledgement (ACK) to the AP.
• Step 4 − On receiving data from station X, the AP sends an ACK to it.
• Step 5 − The AP then moves on to the next station, say station Y. If the AP has data for Y, it sends the data and a grant to Y; otherwise it sends only a grant to Y.
• Step 6 − On receiving grant from AP, station Y transmits its data (if any) to AP.
• Step 7 − This process continues for all the stations in the poll.
• Step 8 − At the end of granting access to all the stations, the AP sends an ACK to the last station. It
then notifies all stations that this is the end of polling.
• PCF and DCF frame sharing
• Figure shows the operation of the network in the combined PCF and
DCF modes.
• The channel access switches alternately between the PCF mode and
the DCF mode, but the CFP may shrink due to stretching when DCF
takes more time than expected.
QoS Support Mechanisms of IEEE 802.11e
• The IEEE 802.11 WLAN standard supports only best-effort service.
• The IEEE 802.11 Task Group e (TGe) has been set up to enhance the
current 802.11 MAC protocol so that it is able to support multimedia
applications.
• The TGe has specified a hybrid coordination function (HCF) that combines enhanced DCF (EDCF) with the features of PCF to simplify QoS provisioning.
Enhanced Distributed Coordination Function
• EDCF is an extension of the DCF, enabling differentiated and distributed access to the
wireless medium.
• Supports up to eight user priorities (UPs), which are mapped to access categories (ACs).
Each AC is assigned a priority and unique channel access parameters.
• EDCF supports up to eight ACs, allowing flexible mapping of UPs to ACs.
• Each AC is an enhanced version of the Distributed Coordination Function (DCF) and has
its own set of access parameters, such as CWmin, CWmax, AIFS, and TXOP limit.
• Stations use the AC of the frame being transmitted to compete for channel access, with
identical priorities assigned to flows within the same AC.
• Quality of Service (QoS) is ensured by QoS Access Points (QAPs), which provide at least
four ACs.
• Stations contend for Transmission Opportunities (TXOPs), defined as time intervals with a
specified start and maximum duration (TXOPLimit).
• This allows stations to transmit one or more MSDUs during a TXOP. The priority of an AC
is determined by the lowest UP mapped to it.
• During the contention period (CP) in EDCF, each access category (AC) of a station
independently contends for a transmission opportunity (TXOP) using a back-off
mechanism.
• After the channel is sensed idle for an arbitration inter-frame space (AIFS), determined
by the AIFSN (arbitration inter-frame slot count), the station initiates a random back-off
counter.
• High-priority ACs are assigned lower AIFSN values to ensure quicker access.
• Specific ranges are defined to prevent collisions with poll packets from the hybrid
coordinator (HC), which manages QoS in a QoS basic service set (QBSS) under the HCF.
• If multiple ACs in a station are contending, the highest priority AC is given precedence
when its counter reaches zero, while lower-priority ACs pause and resume later.
• TXOPs can be allocated either through contention (EDCF-TXOP) or granted by the HC
during the contention-free period (CFP).
• The HC specifies the duration and timing of polled TXOPs through beacon frames or CF-
Poll frames.
• If two or more ACs in the same station reach zero simultaneously, an internal scheduler
resolves the conflict, granting TXOP to the highest-priority AC, while others act as if
external collisions occurred.
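• As a small illustration of how EDCF differentiates access categories, the sketch below computes a per-AC arbitration inter-frame space as AIFS[AC] = SIFS + AIFSN[AC] x slot time, so a lower AIFSN means earlier channel access. The SIFS, slot time, AIFSN, and CW values used here are assumptions for the example, not the 802.11e default tables.

```python
# Illustrative per-AC differentiation in EDCF: smaller AIFSN (and CW limits)
# give an access category earlier, more aggressive channel access.

SIFS = 10e-6        # seconds (assumed for illustration)
SLOT_TIME = 9e-6    # seconds (assumed for illustration)

def aifs(aifsn):
    """AIFS[AC] = SIFS + AIFSN[AC] * slot_time."""
    return SIFS + aifsn * SLOT_TIME

# Hypothetical mapping: higher-priority ACs get smaller AIFSN and CW limits.
access_categories = {
    "voice":       {"aifsn": 2, "cw_min": 3,  "cw_max": 7},
    "video":       {"aifsn": 2, "cw_min": 7,  "cw_max": 15},
    "best_effort": {"aifsn": 3, "cw_min": 15, "cw_max": 1023},
    "background":  {"aifsn": 7, "cw_min": 15, "cw_max": 1023},
}

for name, p in access_categories.items():
    print(f"{name:12s} AIFS = {aifs(p['aifsn']) * 1e6:.0f} us, "
          f"CW = [{p['cw_min']}, {p['cw_max']}]")
```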
Hybrid Coordination Function
• The Hybrid Coordination Function (HCF) integrates features of EDCF and
PCF to ensure compatibility with both, offering enhanced handling of MAC
Service Data Units (MSDUs) in QoS-enabled Basic Service Sets (QBSS).
• It operates during both the Contention Period (CP) and Contention-Free
Period (CFP) using a QoS-aware point coordinator, the Hybrid Coordinator
(HC), typically collocated with the QoS Access Point (QAP).
• The HC allocates TXOPs and manages controlled Contention Access Periods
(CAPs) during CP, allowing stations to send reservation requests.
• The HC initiates MSDU transmissions after sensing the channel idle for a
Point Inter-Frame Space (PIFS) period.
• CAPs can occur anytime during CP, and each CAP may include one or more
TXOPs.
• During CFP, the HC grants polled TXOPs to stations via QoS CF-Poll
frames. Stations use allocated TXOPs to transmit MSDUs, fragmenting
large MSDUs if necessary, with each fragment acknowledged
individually.
• The CFP ends based on the time announced in the beacon frame or
via a CF-End frame from the HC. This dynamic management of TXOPs
ensures efficient use of the medium and supports QoS requirements.
DBASE
• The distributed bandwidth allocation/sharing/extension (DBASE) protocol supports
multimedia traffic [both variable bit rate (VBR) and constant bit rate (CBR)] over ad hoc
WLANs.
• In an ad hoc WLAN, there is no fixed infrastructure (i.e., AP) to coordinate the activity of
individual stations.
• The stations are part of a single-hop wireless network and contend for the broadcast
channel in a distributed manner.
• For real-time traffic (rt-traffic), a contention based process is used in order to gain access
to the channel.
• Once a station gains channel access, a reservation-based process is used to transmit the
subsequent frames.
• The non-real-time stations (nrt-stations) regulate their accesses to the channel according
to the standard CSMA/CA protocol used in 802.11 DCF.
• An nrt-station with data traffic has to keep sensing the channel for an additional random time called the data back-off time (DBT) after detecting the channel as being idle for a DIFS period; the DBT is chosen in a manner analogous to the standard DCF back-off.
• The Access Procedure for Real-Time Stations- Each rt-station
maintains a virtual reservation table (RSVT).
• In this virtual table, the information regarding all rt-stations that have
successfully reserved the required bandwidth is recorded.
• Before initiating an rt-session, the rt-station sends an RTS in order to
reserve the required bandwidth. Before transmitting the RTS, a
corresponding entry is made in the RSVT of the node.
NETWORK LAYER SOLUTIONS
• The bandwidth reservation and real-time traffic support capability of MAC protocols can ensure reservation at the link level only.
• Hence, network layer support for ensuring end-to-end resource negotiation, reservation, and reconfiguration is very essential.
QoS Routing Protocols
• Disadvantages
• No Resource Reservation:
• Resources are not reserved along the path, making the protocol unsuitable
for applications needing hard QoS guarantees.
• Delay Limitations:
• Node traversal time only accounts for processing delay.
• Actual delays are dominated by packet queuing and contention at the MAC
layer, which increase under high traffic load.
Bandwidth Routing Protocol
• Bandwidth is the primary QoS parameter, measured as the number of
free slots available at a node in TDMA-based networks.
• Bandwidth Calculation Algorithm
• Determines and informs the source node of the available bandwidth (free
slots) to any destination in the network.
• Bandwidth Reservation Algorithm
• Reserves sufficient free slots along the path for the QoS flow.
• Standby Routing Algorithm
• Provides a mechanism to reestablish the QoS flow in case of path breaks.
• TDMA Frame Structure
• Transmission Time Scale: Organized into frames, each with a fixed number of
time slots.
• Frame Phases:
• Control Phase:
• Handles slot and frame synchronization, virtual circuit (VC) setup, and routing.
• Each node broadcasts its routing information and slot requirements.
• By the end of this phase, nodes know their neighbors' channel reservations.
• Data Phase:
• Reserved for the transmission/reception of data packets.
• Operations
• Nodes use control-phase information to:
• Schedule free slots for data transmission.
• Verify if reserved slots have failed.
• Drop expired real-time packets.
An example of path bandwidth calculation in BR protocol.
• Bandwidth Calculation Process
• Step-by-Step Bandwidth Assignment:
• The path is divided into hops where slots are assigned such that nodes do not transmit and receive simultaneously.
• Example: Path S → A → B → C → D
• pathBW(S, A):
• Nodes S and A are adjacent.
• pathBW(S, A) = linkBW(S, A) = {2, 5, 6, 7} → 4 slots.
• pathBW(S, B):
• Assign slots {6, 7} for S → A.
• Node A can now use only slots {2, 5} for A → B.
• Result: pathBW(S, B) = {2, 5} → 2 slots.
• pathBW(S, C):
• Assign slots {4, 8} for B → C.
• Path scheduling:
• S → A: {6, 7},
• A → B: {2, 5},
• B → C: {4, 8}.
• Result: pathBW(S, C) = 2 slots.
• pathBW(S, D):
• Assign slots {3, 5} for C → D.
• Path scheduling:
• S → A: {6, 7},
• A → B: {2, 5},
• B → C: {4, 8},
• C → D: {3, 5}.
• Result: pathBW(S, D) = 2 slots.
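• The worked example above boils down to a simple constraint: adjacent hops must use disjoint slots, because the node they share cannot transmit and receive in the same slot, and the path bandwidth is the smallest per-hop allocation. The checker below reproduces the S → A → B → C → D result; finding a good assignment is the hard part of BR and is simply given as input here.

```python
# Sketch: verify a per-hop slot assignment against the adjacent-hop constraint
# and report the resulting end-to-end path bandwidth (in slots).

def path_bandwidth(assignment):
    """assignment: list of slot sets, one per hop, from source to destination."""
    for a, b in zip(assignment, assignment[1:]):
        if a & b:   # the common node of these two hops would have to tx and rx
            raise ValueError(f"adjacent hops share slots: {a & b}")
    return min(len(slots) for slots in assignment)

# The S -> A -> B -> C -> D example from above.
assignment = [
    {6, 7},   # S -> A
    {2, 5},   # A -> B
    {4, 8},   # B -> C
    {3, 5},   # C -> D  (slot 5 can be reused: it is two hops away from A -> B)
]
print("pathBW(S, D) =", path_bandwidth(assignment), "slots")   # 2
```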
• Advantages:
• Efficient Bandwidth Allocation: The protocol provides a structured way of allocating
bandwidth, which helps in managing the network's data transmission more efficiently.
• Standby Routing Mechanism: This feature reduces packet loss during path breaks by
maintaining standby routes, ensuring smoother data transmission even when primary
paths are disrupted.
• Minimized Collision Risk: By allocating unique slots for each node in the TDMA frame, it
reduces the likelihood of packet collisions, improving the network's stability.
• Disadvantages:
• Static Slot Assignment: The need to assign unique control slots to each node before the
network is commissioned means that it is not adaptable to new nodes joining the
network later. This limits network flexibility and scalability.
• Unusable Slots: If a node leaves the network, the corresponding control slot remains
unused and cannot be reclaimed or reassigned. This leads to inefficiency in resource
utilization.
• Full Synchronization Requirement: The protocol requires the network to be fully
synchronized for proper functioning, which can be challenging to maintain, especially in
dynamic or large-scale networks.
On-Demand QoS Routing Protocol
• The OQR (On-demand QoS Routing) protocol is designed to guarantee bandwidth
for real-time applications.
• It operates on-demand, meaning it does not require periodic control information
exchange or the maintenance of routing tables at each node.
• The protocol is time-slotted, similar to the BR protocol, and bandwidth is the
primary Quality of Service (QoS) parameter.
• Key Features:
• Bandwidth Calculation: The path bandwidth calculation algorithm from the BR protocol is
used to measure the available end-to-end bandwidth for QoS routing.
• Sequence Numbers: Each QRREQ packet includes a new, monotonically increasing sequence
number to prevent multiple forwarding of the same packet by intermediate nodes.
• Route List & Slot Array List: The route list tracks the nodes visited by the QRREQ packet,
while the slot array list records the free slots available at each of these nodes.
• TTL (Time-to-Live): The TTL field restricts the maximum length of the path that can be
discovered, ensuring that the route discovery process does not go on indefinitely.
• Route Discovery Process:
• Flooding of QRREQ: The source node initiates the route discovery by flooding a QoS
route request (QRREQ) packet to find a suitable path to the destination.
• QRREQ Packet Structure: The QRREQ packet includes fields like packet type, source
and destination IDs, sequence number, route list, slot array list, data, and TTL.
• Efficient Discovery: The use of sequence numbers and route lists ensures that nodes
only process a QRREQ packet once, improving efficiency and preventing redundant
routing information exchanges.
• When node N receives a QRREQ packet, it follows these steps to determine whether
to process or discard the packet:
• Check for Duplicate QRREQ: If a QRREQ with the same source ID and sequence
number has been received previously, the packet is discarded.
• Check for Looping: If the node's address is already present in the route list of the
QRREQ packet, the packet is discarded to prevent routing loops.
• Process New QRREQ Packet:
• Decrement TTL: Node N decreases the TTL (Time-to-Live) field by one. If TTL reaches zero, the
packet is discarded.
• Check QoS Requirement: Node N calculates the path bandwidth from the source to itself. If
the available bandwidth satisfies the QoS requirement, it updates the slot array list with the
free slots available at node N. If the bandwidth does not meet the QoS requirement, the
packet is discarded.
• Update Route List: Node N appends its address to the route list and rebroadcasts the QRREQ
packet unless it is the destination node.
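• The QRREQ handling steps above can be condensed into a short routine at an intermediate node; the packet layout and the two helper callables (path_bw_to_here, free_slots_at) are hypothetical names introduced only for this sketch.

```python
# Condensed sketch of QRREQ processing at an intermediate node, following the
# duplicate / loop / TTL / bandwidth checks described above.

def handle_qrreq(node_id, packet, seen, path_bw_to_here, free_slots_at,
                 is_destination=False):
    """Return the packet to rebroadcast, or None if it must be discarded."""
    key = (packet["src"], packet["seq"])
    if key in seen:                                   # duplicate QRREQ
        return None
    seen.add(key)
    if node_id in packet["route"]:                    # looping packet
        return None
    packet["ttl"] -= 1                                # decrement TTL
    if packet["ttl"] <= 0:
        return None
    if path_bw_to_here(packet) < packet["qos_bw"]:    # QoS bandwidth check
        return None
    packet["slots"].append(free_slots_at(node_id))    # record free slots here
    packet["route"].append(node_id)
    return None if is_destination else packet         # destination does not rebroadcast

# Example with stub helpers: node 7 forwards a QRREQ that still meets the QoS.
seen = set()
qrreq = {"src": 1, "dst": 9, "seq": 42, "ttl": 5, "qos_bw": 2,
         "route": [1, 3], "slots": [{2, 5, 6}, {2, 5}]}
out = handle_qrreq(7, qrreq, seen,
                   path_bw_to_here=lambda p: 2,
                   free_slots_at=lambda n: {4, 8})
print(out["route"], out["ttl"])   # [1, 3, 7] 4
```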
• Bandwidth Reservation:
• The destination node may receive multiple QRREQ packets, each representing a feasible
QoS path. It selects the least-cost path and sends a QoS Route Reply (QRREP) packet to
the source along this path, copying the route list and slot array list fields from the chosen
QRREQ packet.
• As the QRREP traverses back to the source, each node along the route reserves the free
slots listed in the slot array list. Once the source receives the QRREP, the bandwidth
reservation is complete.
• The reservations are soft state, meaning they can be released at the end of the session
to prevent resource lock-up. The source can start sending data once the reservation is
confirmed.
• Route Maintenance:
• If a route breaks, the nodes detecting the break send a RouteBroken packet to the
source and destination. The upstream node sends it to the source, and the downstream
node sends it to the destination.
• All intermediate nodes along the broken route release reserved slots and drop any
pending data packets.
• Upon receiving the RouteBroken packet, the source restarts the route discovery process
to find a new path, while the destination frees its reserved resources.
• Advantages:
• The OQR protocol uses an on-demand resource reservation scheme, reducing
control overhead compared to periodic schemes.
• It guarantees bandwidth for real-time applications with QoS requirements.
• Disadvantages:
• The network must be fully synchronized due to its reliance on the CDMA-
over-TDMA channel model.
• The on-demand nature of route discovery can lead to higher connection
setup times.
On-Demand Link-State Multipath QoS Routing
Protocol
• The On-demand Link-state Multipath QoS Routing (OLMQR) protocol is designed
to find multiple paths from the source to the destination that together satisfy the
QoS (Quality of Service) requirements, especially when finding a single path that
meets all QoS conditions is difficult. Here's a breakdown of how it works:
• Key Features:
• Multipath Routing: Unlike traditional routing protocols that focus on a single path, OLMQR
searches for multiple paths that collectively satisfy the required QoS. The original bandwidth
requirement is split into sub-bandwidth requirements, allowing for more flexibility and
improving the call acceptance rate in ad hoc wireless networks.
• Path Sharing: Paths found by the protocol may share sub-paths, making the approach more
efficient by reusing common network segments.
• CDMA-over-TDMA Channel Model: Similar to the BR and OQR protocols, OLMQR assumes
the use of the CDMA-over-TDMA channel model, where the network must be synchronized.
• Bandwidth Awareness: Each mobile node in the network is aware of the bandwidth available to its neighbors. When a source node requires a QoS session with bandwidth BW to the destination, it floods a QoS route request (QRREQ) packet that carries path history and link-state information.
• Network Topology View: The destination node collects link-state information from all received QRREQ packets and constructs its own view of the current network topology. This enables the destination to evaluate and select multiple paths that together meet the original bandwidth requirement BW.
• Resource Reservation: Once the destination node selects the paths, it sends reply
packets along these paths, which reserve resources (sub-bandwidth requirements)
on the corresponding paths as the packets travel back to the source.
• Phases of the Protocol:
• Phase 1 - On-demand Link-state Discovery: The source node floods the QRREQ
packets to discover the current link-state and topology information across the
network.
• Phase 2 - Unipath Discovery: The destination identifies and selects the best possible
path for the QoS flow.
• Phase 3 - Multipath Discovery and Reply: The destination identifies multiple paths
that together satisfy the QoS requirement. The corresponding resources are reserved
as the reply packets travel back to the source.
• Advantages of OLMQR Protocol:
• Improved Call Acceptance Rate (ACAR):
• OLMQR allows multiple paths to be used to satisfy the required QoS for a flow, which increases the likelihood
of meeting the bandwidth and other QoS requirements. This results in a higher ACAR (Call Acceptance Rate)
compared to protocols that rely on a single path.
• Flexible Path Selection:
• By splitting the bandwidth requirement and allowing multiple paths to share sub-paths, OLMQR can
effectively manage resources and optimize the use of available network capacity.
• Fault Tolerance:
• The protocol's ability to use multiple paths improves fault tolerance since it can continue to transmit data
even if one or more paths fail.
• Disadvantages of OLMQR Protocol:
• High Overhead for Path Maintenance and Repair:
• Since OLMQR uses multiple paths to satisfy a flow’s QoS, the overhead involved in maintaining and repairing
these paths is significantly higher than traditional unipath routing protocols. This overhead includes the
additional control messages and resources needed to handle multiple paths.
• Increased Complexity:
• The process of finding multiple paths and ensuring that all paths meet the required QoS can introduce
complexity in the routing and path maintenance procedures, making the protocol more resource-intensive
and harder to implement.
• Potential for Higher Resource Consumption:
• Due to the need for maintaining multiple paths, the protocol may consume more resources (such as
bandwidth and processing power) compared to simpler routing schemes.
Asynchronous Slot Allocation Strategies
• The asynchronous QoS routing (AQR) scheme addresses bandwidth reservation in ad hoc wireless networks without relying on global time synchronization.
• It uses the real-time MAC (RTMAC) protocol, allowing bandwidth reservation in an asynchronous environment.
• AQR extends Dynamic Source Routing (DSR) and consists of three major phases:
• Bandwidth Feasibility Test Phase:
• The source node floods RouteRequest packets to the destination.
• Intermediate nodes check for available bandwidth and add their reservation tables to the packet.
• Each node keeps track of synchronization offsets to handle asynchronous time differences.
• Bandwidth Allocation Phase:
• The destination node allocates free slots along the selected path using various slot allocation
strategies.
• The RouteReply packet is sent back to the source, containing slot allocation information for each
link.
• Bandwidth Reservation Phase:
• Intermediate nodes reserve bandwidth in an asynchronous manner using RTMAC.
• If a reservation fails, the RouteReply is dropped, and a control packet is sent to release resources
and initiate a new path discovery.
• Slot Allocation Strategies:
• Early Fit Reservation (EFR): Allocates the first available free slot in
sequence, minimizing end-to-end delay.
• Minimum Bandwidth-Based Reservation (MBR): Allocates
bandwidth starting from the link with the lowest free bandwidth,
potentially leading to higher delays.
• Position-Based Hybrid Reservation (PHR): Assigns free slots in
proportion to the link's position in the path.
• k-Hopcount Hybrid Reservation (k-HHR): Chooses EFR or PHR
dynamically, depending on the hop length of the path.
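• Two of the ideas above, early-fit allocation and the hop-count based switch used by k-HHR, can be sketched as below; the per-link free-slot lists and the position-based spread used as a stand-in for PHR are simplifying assumptions.

```python
# Sketch of slot allocation strategies over an asynchronous path. Per-link free
# slots are given as lists of slot indices (a simplification of RTMAC reservations).

def early_fit(free_slots_per_link, demand):
    """EFR: take the earliest free slots on every link, which tends to keep the
    per-hop forwarding delay (and hence the end-to-end delay) small."""
    return [sorted(slots)[:demand] for slots in free_slots_per_link]

def k_hhr(free_slots_per_link, demand, k=3):
    """k-HHR: use EFR on short paths (<= k hops); on longer paths spread the
    allocation according to the link's position (a stand-in for PHR)."""
    if len(free_slots_per_link) <= k:
        return early_fit(free_slots_per_link, demand)
    n = len(free_slots_per_link)
    alloc = []
    for i, slots in enumerate(free_slots_per_link):
        ordered = sorted(slots)
        start = (i * len(ordered)) // n          # later hops start further in
        alloc.append((ordered[start:] + ordered[:start])[:demand])
    return alloc

links = [[1, 4, 6, 9], [2, 3, 7, 8], [1, 2, 5, 9], [3, 4, 6, 8]]
print(early_fit(links, demand=2))   # earliest slots on every hop
print(k_hhr(links, demand=2, k=3))  # 4 hops > k, so position-based allocation
```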
• Advantages:
• Asynchronous Operation: Provides end-to-end bandwidth reservation
without the need for synchronized clocks across the network.
• Flexible Slot Allocation: Dynamic selection of slot allocation strategies based
on path characteristics and delay requirements.
• Scalable: Suitable for networks with dynamic topologies and variable
bandwidth requirements.
• Disadvantages:
• High Setup and Reconfiguration Time: As an on-demand protocol, AQR has
higher setup and reconfiguration times for real-time calls.
• Bandwidth Efficiency: May be less efficient than fully synchronized TDMA
systems due to the formation of bandwidth holes (unused short slots).
QoS Models
• In wired networks, two major service models for Quality of Service (QoS) are the
Integrated Services (IntServ) and Differentiated Services (DiffServ) models.
However, these models face scalability issues in the context of ad hoc wireless
networks due to the challenges of maintaining per-flow state information and
dealing with dynamic topologies.
• IntServ Model
• Overview: Provides QoS on a per-flow basis, where each flow represents a session between a
pair of end users.
• Components: The model involves routers maintaining flow-specific state information, such as
bandwidth requirements and delay bounds. The Resource Reservation Protocol (RSVP) is
used to reserve resources along the route.
• Challenges: The model is not scalable due to the high volume of information maintained at
each router, which is proportional to the number of flows. Additionally, in ad hoc wireless
networks, the limited processing power, frequent topology changes, and time-varying radio
link capacity make it difficult to maintain precise per-flow information.
• DiffServ Model
• Overview: Aggregates flows into a limited number of service classes. Each flow belongs to
one of these service classes, solving the scalability problem of the IntServ model.
• Challenges: Although more scalable than IntServ, DiffServ still struggles with the inherent
challenges of ad hoc wireless networks, including dynamic topologies and limited resources.
• Flexible QoS Model for Mobile Ad Hoc Networks (FQMM)
• To address these limitations, the FQMM hybrid service model was proposed. It
combines the strengths of IntServ (per-flow granularity) and DiffServ (service class
aggregation), tailored to the dynamic nature of ad hoc wireless networks.
• Components:
• Ingress Node (Source): Responsible for traffic shaping, which controls the flow of traffic to
conform to certain defined parameters (mean rate, burst size).
• Interior Node (Intermediate Relay Node): Passes along the traffic, possibly adjusting or
shaping it.
• Egress Node (Destination): The recipient of the traffic flow.
• Traffic Shaping: The source node shapes traffic to ensure it conforms to a traffic
profile, which includes the mean rate (average rate at which data can be sent) and
burst size (amount of data that can be sent in bursts).
• Service Classes: High-priority flows receive per-flow QoS, while low-priority flows
are aggregated into service classes for more efficient handling. This hybrid
approach allows flexibility in providing QoS based on the network's current traffic
load.
• Dynamic QoS Service: The service level of a flow can switch between per-flow
QoS and per-class QoS, depending on the traffic conditions and the flow’s priority.
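• The traffic shaping performed by the ingress node, defined above in terms of a mean rate and a burst size, can be illustrated with a simple token bucket; the class, parameter values, and interface below are assumptions for illustration and are not specified by FQMM.

```python
# Simple token-bucket shaper illustrating a "mean rate + burst size" traffic
# profile at the ingress node. Parameter values are illustrative.

class TokenBucket:
    def __init__(self, mean_rate, burst_size):
        self.rate = mean_rate          # tokens (e.g., bytes) added per second
        self.capacity = burst_size     # maximum burst that may be sent at once
        self.tokens = burst_size
        self.last = 0.0

    def allow(self, packet_size, now):
        """True if the packet conforms to the profile and may be sent now."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False                   # non-conforming: delay or mark the packet

# Example: 1000 B/s mean rate, 1500 B burst; the third back-to-back packet waits.
tb = TokenBucket(mean_rate=1000, burst_size=1500)
print(tb.allow(600, now=0.0))   # True
print(tb.allow(600, now=0.0))   # True
print(tb.allow(600, now=0.0))   # False -- exceeds the burst allowance
print(tb.allow(600, now=1.0))   # True  -- tokens refilled at the mean rate
```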
• Advantages of FQMM
• Scalability: The FQMM model overcomes scalability issues by aggregating low-
priority traffic into service classes, allowing only high-priority traffic to be handled on
a per-flow basis.
• Flexible QoS: Provides per-flow QoS for high-priority traffic while efficiently
managing lower-priority traffic with aggregated classes.
• Adaptability: The model dynamically adjusts the service level of a flow based on
current network conditions, providing greater flexibility.
• Disadvantages of FQMM
• Unresolved Issues:
• Traffic classification: Deciding which traffic should be treated per-flow and which
should be aggregated into classes.
• Mechanisms for intermediate nodes to gather flow information: How intermediate
nodes obtain and manage information about each flow.
• Scheduling and forwarding: How intermediate nodes schedule and forward traffic,
especially for aggregated classes.