Unit 5 Transport Layer
SENSOR NETWORKS
• Data-Centric and Contention-Based Networking – Transport Layer and
Examples.
Data-Centric and Contention-Based
Networking
• Data-centric networking shifts the focus from communicating between
specific nodes to transmitting and processing data itself.
• Instead of addressing individual devices, data is requested, processed, and
delivered based on its content, making it ideal for wireless sensor
networks (WSNs).
• Content-based networking enhances data-centric networking by allowing
subscriptions to specific content.
• Subscribers define conditions (filters) on the data they want.
• Publishers send data without addressing specific receivers.
• Example: A fire alarm system where sensors publish smoke levels, and fire
stations subscribe to alerts when smoke exceeds a threshold.
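The fire-alarm example above can be sketched as a minimal content-based publish/subscribe bus. This is an illustrative toy, not a real WSN stack; the `SoftwareBus` class, the 0.7 threshold, and the message fields are all assumptions for the example. Subscribers register predicate filters, and publishers send data without naming any receiver:

```python
# Minimal content-based publish/subscribe sketch (illustrative):
# subscribers register predicate filters on the data itself;
# publishers never address a specific receiver.

class SoftwareBus:
    def __init__(self):
        self.subscriptions = []  # list of (filter, callback) pairs

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def publish(self, data):
        # Deliver to every subscriber whose filter matches the content.
        for predicate, callback in self.subscriptions:
            if predicate(data):
                callback(data)

alerts = []
bus = SoftwareBus()
# Fire station subscribes to smoke readings above a threshold.
bus.subscribe(lambda d: d["type"] == "smoke" and d["level"] > 0.7,
              lambda d: alerts.append(d))

bus.publish({"type": "smoke", "level": 0.2})  # filtered out
bus.publish({"type": "smoke", "level": 0.9})  # delivered
```

Note the decoupling: the sensor publishing smoke levels never learns which, or how many, fire stations received the notification.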
Introduction
The publish/subscribe interaction paradigm
• The Publish/Subscribe model is a data-centric communication pattern
where:
• Publishers generate and send data without knowing the recipients.
• Subscribers express interest in specific types of data without knowing the data
sources.
• Main Features
• Decoupling in Space: Publishers and subscribers do not need to know each
other.
• Decoupling in Time: Data can be stored and forwarded asynchronously.
• Decoupling in Flow: The system manages the data flow dynamically.
Implementation of Publish/Subscribe in Wireless Sensor Networks
• Mechanism
• Data is published to a software bus.
• Figure 12.1 illustrates this concept; note how several publishers can publish data of the
same kind and how notifications can be delivered to various subscribers.
• Subscribers receive notifications when their subscribed data becomes available.
• Types of Publish/Subscribe Models
• Topic-based: Data is categorized by topics (e.g., stock prices).
• Content-based: Subscribers receive data based on conditions (e.g., temperature > 25°C).
• Addressing data
•In data-centric networking, data is identified and retrieved based on its
content, not node addresses.
•Addressing data properly allows efficient data publication,
subscription, and retrieval in sensor networks.
• Types of Addressing Methods
• Topic-Based Addressing
• Data is grouped into predefined topics or categories.
• Example: A stock market monitoring system where updates are categorized
by stock names (e.g., "AAPL").
• Content-Based Addressing
• Data is retrieved based on specific conditions or queries.
• Example: A query searching for temperature readings > 25°C, regardless of
which sensor provides them.
• Implementation options
• There are different ways to implement addressing in
publish/subscribe systems within wireless sensor networks (WSNs):
• Centralized Approach
• A single central node handles all subscriptions and publications.
• It evaluates content-based queries and forwards data accordingly.
• Drawback: Creates a single point of failure and scalability issues.
• Distributed Approach (Decentralized)
• Each node stores and processes data based on local subscriptions.
• Uses content-based routing to forward data only where needed.
• Advantage: Reduces network congestion and avoids central bottlenecks.
• Multicast-Based Approach (For Topic-Based Publish/Subscribe)
• Each topic is assigned to a multicast group, and interested nodes subscribe.
• Limitation: Difficult to implement for dynamic or content-based
subscriptions.
• Distribution versus gathering of data – In-network processing
• Publish/Subscribe decouples publishers and subscribers in space, so neither
side needs to know how many counterparts exist.
• Data can be distributed from a few sources to many nodes or gathered
from many sources to a few sinks (convergecast).
• Types of Data Flow:
• Data Distribution(Push Model)
• Data flows from a small set of sources to multiple nodes.
• Example: Dissemination of event notifications.
• Data Gathering (Pull Model - Convergecast)
• Data is collected from many nodes towards a single or few sinks.
• Gathering data admits an additional optimization when an aggregate of the data –
the maximum, average, or minimum, for example – is to be computed anyway. In such a
case, performing these aggregation operations within the network is a viable option
to reduce the amount of data that has to be transported.
• Example: Sensor readings reporting to a base station.
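The in-network aggregation idea can be sketched as follows, assuming a tree topology for the convergecast (the record format with `count`/`sum`/`max` fields is an illustrative choice). Each node forwards one partial-aggregate record instead of every raw reading, so upstream traffic stays constant in size:

```python
# Sketch of in-network aggregation during convergecast (assumed tree
# topology): nodes forward partial aggregates, not raw readings.

def aggregate(readings):
    """Combine local readings into one partial-aggregate record."""
    return {"count": len(readings), "sum": sum(readings), "max": max(readings)}

def merge(a, b):
    """Merge two partial aggregates received from child nodes."""
    return {"count": a["count"] + b["count"],
            "sum": a["sum"] + b["sum"],
            "max": max(a["max"], b["max"])}

# Leaf nodes aggregate their own readings...
left = aggregate([21.0, 22.5])
right = aggregate([25.1])
# ...and the parent merges the partials: one packet upstream, not three.
parent = merge(left, right)
average_at_sink = parent["sum"] / parent["count"]
```

The key design point is that `merge` is associative, so partials can be combined at any intermediate node without changing the final maximum or average computed at the sink.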
Data-centric routing
• Data-centric networking protocols can be classified based on how
frequently they interact with the network. Two main types can be distinguished:
• Repeated Interactions: These protocols handle periodic data requests,
such as continuously reading sensor values. Since these interactions occur
frequently, they justify investing effort in establishing an optimized routing
structure that can be used multiple times.
• One-Shot Queries: These protocols handle single, ad-hoc data requests.
Since the interaction happens only once, setting up a complex routing
structure is inefficient and cannot be justified.
• Two data-centric routing protocols for wireless sensor networks (WSNs):
• SPIN (Sensor Protocol for Information via Negotiation) and
• ACQUIRE (Active Query Forwarding in Sensor Networks).
SPIN (Sensor Protocol for Information via Negotiation)
•SPIN is a protocol designed to efficiently disseminate large data sets across a
network.
•Instead of flooding the network with redundant messages, SPIN negotiates
data transmission using a three-step process:
1.A node advertises new data to its neighbors.
2.A neighbor requests the data only if it is new or relevant.
3.The requested data is then transmitted.
•This approach prevents unnecessary redundancy and reduces network
congestion.
•Variants of SPIN also adapt to battery levels, allowing low-energy nodes to
participate less.
•SPIN can transmit 60–80% more data per unit energy than conventional
flooding-based protocols.
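The three-step ADV/REQ/DATA negotiation can be sketched with a toy simulation. This is a simplification under assumed conventions (data items identified by simple string meta-data names, `SpinNode` and its methods are invented for the example); real SPIN uses application-specific meta-data:

```python
# Toy simulation of SPIN's three-step negotiation (ADV/REQ/DATA).
# A neighbor requests only items it does not already hold,
# so redundant payload transmissions are avoided.

class SpinNode:
    def __init__(self, name):
        self.name = name
        self.store = {}        # meta-data name -> data
        self.transmissions = 0

    def advertise(self, neighbor, meta):
        # Step 1: ADV carries only meta-data, not the payload.
        if neighbor.wants(meta):                      # Step 2: REQ
            neighbor.receive(meta, self.store[meta])  # Step 3: DATA
            self.transmissions += 1

    def wants(self, meta):
        return meta not in self.store

    def receive(self, meta, data):
        self.store[meta] = data

a, b = SpinNode("A"), SpinNode("B")
a.store["temp@3pm"] = 23.5
a.advertise(b, "temp@3pm")   # B lacks the data -> one DATA transmission
a.advertise(b, "temp@3pm")   # B already has it -> no transmission
```

Contrast this with flooding, where the second advertisement would still trigger a full payload transmission.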
SPIN (Sensor Protocol for Information via Negotiation) is more energy-efficient
than simple flooding due to the following key reasons:
1. Avoids Redundant Transmissions
In simple flooding, every node blindly forwards received data, leading to multiple
nodes transmitting the same data unnecessarily.
SPIN advertises data first, and nodes request it only if they don’t already have it,
preventing duplicate transmissions.
2. Reduces Implosion Problem
In flooding, nodes may receive the same data multiple times from different
neighbors, wasting energy in redundant reception and transmission.
SPIN avoids this by ensuring that a node only requests data once if it is new,
minimizing repeated processing.
3. Eliminates Overlap in Data Reporting
Many sensor nodes monitor overlapping areas and may generate similar data.
SPIN allows nodes to selectively request only needed data rather than receiving
identical data multiple times.
4. Uses Data Negotiation Instead of Blind Forwarding
Flooding sends data without verifying if a node already has it.
SPIN’s three-step process (advertise, request, send) ensures that data is sent
only when needed, reducing wasted transmissions.
5. Adapts to Energy Constraints
Some SPIN variants allow nodes with low battery levels to reduce
participation, helping conserve energy across the network.
Flooding does not consider energy levels, leading to faster depletion of node
resources.
6. More Effective in Large Networks
As the network size grows, flooding leads to an exponential increase in
redundant transmissions, draining energy quickly.
SPIN’s controlled dissemination ensures that only required data flows,
maintaining efficiency even in large-scale deployments.
ACQUIRE (Active Query Forwarding)
•ACQUIRE is designed for one-shot complex queries in networks where data
is replicated across multiple nodes.
•The protocol works by progressively resolving queries:
1.A query is sent into the network.
2.Intermediate nodes attempt to answer part of the query using local data.
3.The query continues to be forwarded until fully resolved.
4.Once resolved, the answer is routed back to the requester.
•Nodes can fetch additional data from nearby nodes (within d hops) to
improve query resolution.
•The protocol is more efficient than traditional flooding since queries are
processed incrementally rather than broadcasting them network-wide.
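ACQUIRE's progressive resolution can be sketched as below, assuming a query is a set of named sub-questions and a fixed forwarding path (the dict-based query format and the `acquire` helper are illustrative, not the protocol's actual encoding):

```python
# Sketch of ACQUIRE's progressive query resolution: each node on the
# path answers the parts of the query it can from local data, and
# forwarding stops as soon as the query is fully resolved.

def acquire(query, path):
    """Forward `query` (dict of name -> None) along `path` of nodes,
    filling in answers from each node's local data."""
    hops = 0
    for node_data in path:
        for key in query:
            if query[key] is None and key in node_data:
                query[key] = node_data[key]
        hops += 1
        if all(v is not None for v in query.values()):
            break  # fully resolved; route the answer back to the requester
    return query, hops

path = [{"humidity": 40}, {"temp": 25}, {"light": 300}]
answer, hops = acquire({"temp": None, "humidity": None}, path)
```

Here the query resolves after two hops and never reaches the third node, which is exactly the saving over a network-wide broadcast. The real protocol additionally lets each node pull data from neighbors within d hops before forwarding.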
Repeated interactions
The transport layer and QoS in wireless
sensor networks
•QoS in the Internet:
•The Internet is mainly used for transporting byte streams between independent users.
•QoS is judged based on protocol-specific metrics like delay, jitter, throughput, and packet
loss.
•Transport protocols like TCP and UDP ensure reliability, congestion control, and packet
sequencing.
•The Internet follows the end-to-end principle, where the transport layer handles most of the
reliability mechanisms.
•QoS in Sensor Networks:
•Sensor networks are not just data transport systems; they are designed for monitoring and
controlling the physical environment.
•Nodes process data locally before forwarding it to sink nodes.
•QoS in sensor networks is application-dependent, focusing on reliability, event detection,
and efficient data transmission rather than just packet delivery.
•Unlike the Internet, sensor networks have energy, memory, and computational constraints,
which require a more integrated and cross-layer approach to protocol design.
• Why Traditional Transport Layer Approaches Don’t Work Well in
Sensor Networks:
• Sensor networks have a specific task, so protocols must work
together efficiently, rather than treating lower layers as black boxes.
• Due to limited resources (energy, memory, processing power),
sensor network protocols must be optimized jointly rather than being
purely end-to-end like in the Internet.
• Transport mechanisms in sensor networks are seen as collections of
mechanisms that provide services across multiple layers, rather than
being strictly defined at the transport layer.
Quality of service/reliability
• One of the most important qualities is reliability. In sensor networks,
the notion of reliability has several facets:
•Detection Reliability:
•Ensures that events are actually detected by the sensor network.
•Depends on node density, sensing range, and environmental
conditions (e.g., obstacles).
•Information Accuracy:
•Single sensor readings may be inaccurate, requiring multiple readings
over time or space.
•Too many similar readings waste energy, so a balance is needed.
•Reliable Data Transport:
•After detection, data must be reliably transmitted to the sink nodes
over multiple hops.
•Applications like code distribution also require reliable data delivery.
•Timely Data Delivery:
•Some applications require fast data transmission within specific time
bounds.
•Challenges include low energy budgets, sleep cycles, node failures,
and multi-hop delays.
•While reliability is well-studied, timeliness is less explored.
Transport protocols task
• Reliable data transport: This task requires the ability to detect and repair losses
of packets in a multihop wireless network.
• Flow control: The receiver of a data stream might temporarily be unable to
process incoming packets because of lack of memory or processor power.
• Congestion control: Congestion occurs when more packets are created than the
network can carry and the network starts to drop packets. Dropping packets is a
waste of energy and counteracts any efforts to achieve reliability or information
accuracy. Congestion-control schemes try either to avoid this situation or to react
to it in a reasonable manner. One important way to avoid congestion is to control
the rate at which sensor nodes generate packets.
• Network abstraction: The transport layer offers a programming interface to
applications, shielding the latter from the many complexities and vagaries of data
transport.
Some of the particular challenges for transport protocols in wireless
sensor networks are the following:
• Wireless sensor networks are multihop wireless networks of
homogeneous nodes. This is not an easy environment, as the well-known
problems of TCP over wireless channels illustrate.
• Any transport protocol must comply with the stringent energy
constraints, memory constraints, or computational constraints of
sensor nodes. Significant engineering efforts would be required to run
heavyweight protocols like TCP on such nodes.
• Transport protocols are faced with variable topologies.
Reliable data transport
Single packet delivery
Block delivery
Congestion Control in network processing
Mechanisms for congestion detection
• Sensor nodes detect congestion locally based on two key indicators:
• Buffer Occupancy: Measures how full a node's packet queue is.
• Channel Utilization: Estimates how much the wireless channel is being used.
1. Buffer Occupancy-Based Congestion Detection
• Simple Method: Compare instantaneous buffer level with a threshold.
• If buffer exceeds the threshold → Congestion detected.
• Issue: Late detection if the threshold is too high.
• Improved Method (e.g., ESRT Protocol):
• Monitors buffer growth trend over time.
• Congestion is detected if:
• Buffer exceeds threshold AND
• Buffer has been increasing recently.
• If buffer stops growing, congestion is likely resolving.
• Limitation:
• Buffer occupancy alone is not always reliable.
• Packets may get lost due to collisions before reaching the buffer.
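The buffer-trend idea described for ESRT can be sketched as a small detector (the threshold value and the one-sample "growth" test are illustrative simplifications): congestion is flagged only when the buffer exceeds the threshold AND has grown since the previous sample:

```python
# Sketch of buffer-trend congestion detection in the style of ESRT
# (parameters illustrative): flag congestion only when occupancy is
# above a threshold AND still increasing.

def detect_congestion(samples, threshold):
    """samples: buffer occupancies over time; returns one flag per sample."""
    flags = []
    for i, level in enumerate(samples):
        growing = i > 0 and level > samples[i - 1]
        flags.append(level > threshold and growing)
    return flags

# Buffer rises past the threshold (congestion), then plateaus (resolving).
flags = detect_congestion([2, 5, 9, 12, 12], threshold=8)
```

Note how the final sample is above the threshold but no longer growing, so no congestion is flagged; a pure threshold test would keep signaling congestion there.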
2. Channel Utilization-Based Congestion Detection
• Used in CODA (Congestion Detection and Avoidance) framework.
• Estimates channel usage U to detect congestion.
• Congestion is diagnosed when U approaches a critical level Umax.
• The relationship between U and congestion depends on the MAC
protocol:
• TDMA: Can tolerate high utilization without congestion.
• CSMA: Congestion occurs when U exceeds a threshold, causing more
collisions.
3. Channel Sampling for Congestion Estimation
• Trigger: When a node's packet queue becomes nonempty (ready to
transmit).
• Sampling Process:
• Time is divided into sampling epochs (each spanning multiple packets).
• In each epoch, the channel is sampled N times.
• If M out of N samples show a busy channel → Utilization estimate U = M/N.
• The estimates from K consecutive epochs are combined (e.g., using
exponential weighting).
• Adjustable parameters (K, weighting factor) allow tuning the congestion
estimator.
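The sampling procedure above can be sketched directly; the smoothing constant `alpha` and the epoch lists are assumed example parameters, with exponential weighting as one of the combination schemes mentioned:

```python
# Sketch of CODA-style channel-utilization estimation: each epoch yields
# U = M/N from N channel samples, and per-epoch estimates are combined
# with an exponential weight alpha (assumed parameter).

def epoch_utilization(samples):
    """samples: list of booleans (True = channel busy). Returns M/N."""
    return sum(samples) / len(samples)

def smoothed_utilization(epochs, alpha=0.5):
    """Exponentially weight the estimates of K consecutive epochs."""
    u = epoch_utilization(epochs[0])
    for ep in epochs[1:]:
        u = alpha * epoch_utilization(ep) + (1 - alpha) * u
    return u

epochs = [[True, False, False, False],   # U = 1/4
          [True, True, False, False],    # U = 2/4
          [True, True, True, False]]     # U = 3/4
u = smoothed_utilization(epochs, alpha=0.5)
# Congestion would be diagnosed if u approached Umax for the MAC in use.
```

Choosing a larger `alpha` makes the estimator react faster to the latest epoch, while a smaller value smooths out short utilization bursts; this is the tuning knob the text refers to.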
Mechanisms for congestion handling
• To manage congestion in WSNs, different mechanisms are employed to
either prevent congestion or react to it when it occurs.
1. Rate Control
• Definition: Adjusts the rate at which sensor nodes transmit data to avoid
congestion.
• Types:
• End-to-End Rate Control: The sink node detects congestion and signals transmitting
nodes to reduce their data rate using acknowledgments or control packets.
• Local Rate Control: A congested node signals its immediate upstream neighbor to
reduce its transmission rate. The signal can propagate backward if necessary.
• Trade-Off:
• Reducing transmission rates prevents congestion but may affect data accuracy.
• Applications must balance data fidelity vs. network stability.
2. Packet Dropping
• Definition: When a node’s buffer is full, it must drop packets to make
room for new ones.
• Types of Dropping Strategies:
• Random Dropping: Drops packets arbitrarily.
• Priority-Based Dropping: Assigns priority values to packets. The lowest
priority packet is dropped first.
• Oldest-First Dropping: Removes the oldest buffered packet to prioritize
recent data.
• Key Benefit: Prevents buffer overflow while ensuring critical packets
are preserved.
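The three dropping strategies can be sketched side by side; the packet tuple format `(priority, timestamp, payload)` and the `enqueue` helper are illustrative conventions invented for this example:

```python
# Sketch of the three buffer-dropping strategies (illustrative packet
# format: (priority, timestamp, payload); lower number = lower priority).

import random

def drop_random(buffer):
    buffer.pop(random.randrange(len(buffer)))

def drop_lowest_priority(buffer):
    buffer.remove(min(buffer, key=lambda p: p[0]))

def drop_oldest(buffer):
    buffer.remove(min(buffer, key=lambda p: p[1]))

def enqueue(buffer, packet, capacity, policy=drop_lowest_priority):
    if len(buffer) >= capacity:
        policy(buffer)          # make room before accepting the new packet
    buffer.append(packet)

buf = []
for pkt in [(3, 0, "a"), (1, 1, "b"), (2, 2, "c")]:
    enqueue(buf, pkt, capacity=2)
# Capacity 2: inserting "c" forces the lowest-priority packet "b" out.
```

Swapping `policy=drop_oldest` would instead evict packet "a", prioritizing recent data as described above.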
3. In-Network Processing and Aggregation
• Definition: Since sensor networks serve specific applications,
intermediate nodes can process and aggregate data instead of blindly
forwarding all packets.
• Methods:
• Data Compression: Reduces data size before transmission.
• Data Aggregation: Combines multiple packets into one to reduce congestion.
• Redundant Data Elimination: Removes duplicate or less relevant data.
• Key Benefit: Reduces the number of packets transmitted, minimizing
congestion while saving energy.
The CODA congestion-control framework
• COngestion Detection and Avoidance (CODA) is a congestion detection and
control framework designed to handle both transient and persistent
congestion in wireless sensor networks.
• It combines two congestion-control mechanisms that operate on different
timescales.
• 1. Congestion Detection in CODA
• CODA detects congestion using:
• Buffer Occupancy: Monitors the queue size in nodes.
• Channel Utilization: Measures wireless channel load.
• Congestion is confirmed when buffer thresholds are exceeded and the
channel is highly utilized.
• 2. Congestion Control Mechanisms in CODA
• a) Open-Loop Hop-by-Hop Backpressure (For Transient Congestion)
• Used to handle temporary congestion spikes (e.g., near event-detecting sensor nodes).
• How it works:
• If a node detects congestion, it sends backpressure signals to upstream nodes.
• Upstream nodes reduce their data transmission rate to prevent overflow.
• This process propagates backward to distribute the congestion load.
• Goal: Quickly resolve short-term congestion without major data loss.
• b) Closed-Loop Acknowledgment-Based Control (For Persistent Congestion)
• Used for long-term congestion management when transient control is insufficient.
• How it works:
• Sink nodes use ACK-based feedback to regulate traffic.
• A self-clocking mechanism dynamically adjusts transmission rates based on network conditions.
• Nodes adaptively reduce their transmission rates to prevent persistent congestion.
• Goal: Maintain a stable data flow and prevent prolonged network congestion.
Open-Loop Hop-by-Hop Backpressure Mechanism
in CODA
• This congestion control mechanism in CODA reacts to transient congestion by
propagating backpressure signals without requiring feedback (hence, "open-
loop").
1. How It Works:
•Congestion Detection: A node detects congestion based on buffer occupancy and
channel utilization.
•Backpressure Messages: The congested node broadcasts backpressure messages
to its neighbors.
•Congestion Mitigation Actions:
•Packet Dropping: Drops lower-priority or redundant packets.
•Rate Reduction: Reduces its own data transmission rate.
•Temporary Halt: Stops forwarding and resumes later when congestion eases.
2. Response of Neighboring Nodes
•A node (B) receiving a backpressure message from a congested node (A) can:
•Drop Packets: Since A cannot accept packets, B might drop some to reduce
network load.
•Reduce Transmission Rate: Adjusts its own data flow to ease congestion.
•Forward Backpressure Signal: If necessary, B sends the backpressure message
further upstream toward data sources.
•Count Congestion Area: If B is also congested, it increments a counter in the
backpressure message.
•Routing Adaptation: The counter value helps estimate the size of the
congested region, which routing protocols can use to avoid congested paths.
3. Selective Transmission Control
•Backpressure messages may indicate a "chosen node."
•The chosen node is allowed to continue transmissions, while others
apply backpressure policies (drop, reduce, or halt transmission).
4. Why Is It Called "Open-Loop"?
•The congested node does not receive direct feedback about whether
congestion has been resolved.
•No dynamic adjustments based on acknowledgment signals—it just
continues broadcasting backpressure until congestion subsides.
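The backpressure propagation and the congestion-area counter can be sketched on a simplified linear chain (the `Node` class, the halving of rates, and the chain topology are all assumptions for illustration; real CODA broadcasts to all neighbors):

```python
# Sketch of CODA's open-loop hop-by-hop backpressure on a linear chain:
# a congested node signals upstream; each congested receiver increments
# a counter (estimating the congested region's size) and forwards the
# signal, while a non-congested receiver just reduces its rate.

class Node:
    def __init__(self, name, congested=False, rate=10):
        self.name = name
        self.congested = congested
        self.rate = rate
        self.upstream = None

    def send_backpressure(self, counter=0):
        if self.upstream is None:
            return counter
        nbr = self.upstream
        nbr.rate //= 2                     # neighbor reduces its rate
        if nbr.congested:
            # Congested neighbor forwards the signal, counter incremented.
            return nbr.send_backpressure(counter + 1)
        return counter

# Chain: source -> relay -> hotspot (relay is also congested).
source = Node("source")
relay = Node("relay", congested=True)
hotspot = Node("hotspot", congested=True)
hotspot.upstream = relay
relay.upstream = source
region_size = hotspot.send_backpressure()
```

The returned counter is the quantity a routing protocol could use to estimate, and route around, the congested region.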
Closed-loop regulation mechanism
• The closed-loop regulation mechanism in CODA manages persistent
congestion near the sink or in other network regions by dynamically
adjusting source transmission rates based on acknowledgment
feedback.
• 1. Why is Backpressure Not Enough?
• The hop-by-hop backpressure mechanism helps temporarily, but
once congestion subsides, nodes resume sending at normal rates,
potentially causing recurring congestion.
• Backpressure messages would need to travel multiple hops from the
sink to the source, adding to network load.
• 2. How Closed-Loop Regulation Works
• Sources Monitor Channel Utilization:
• A source node detects congestion when its packet generation rate exceeds a certain
threshold fraction (r) of the available channel capacity.
• It then requests acknowledgments from the sink by setting a special "request ACK"
bit in its packets.
• Sink Responds with Acknowledgments:
• The sink generates ACKs at an application-specific rate (e.g., 1 ACK per 100 received
packets).
• The acknowledgment rate is not fixed and depends on congestion conditions.
• Sources Adjust Transmission Rate:
• Each source expects a minimum number of ACKs within a certain time.
• If fewer acknowledgments are received, the source reduces its transmission rate
according to a congestion control policy.
• If congestion decreases, the source stops requesting ACKs and resumes normal
operation.
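The source-side rate adjustment can be sketched as a single feedback step; the multiplicative decrease factor and the ACK counts are illustrative parameters, standing in for CODA's self-clocking policy:

```python
# Sketch of closed-loop rate regulation (parameters illustrative): a
# source expects a minimum number of ACKs per feedback interval, and
# missing ACKs trigger a multiplicative rate decrease.

def regulate(rate, acks_received, min_acks, decrease=0.5):
    """Return the new transmission rate after one feedback interval."""
    if acks_received < min_acks:
        return rate * decrease   # congestion suspected: back off
    return rate                  # enough ACKs arrived: hold the rate

rate = 100.0
rate = regulate(rate, acks_received=1, min_acks=3)   # ACKs lost -> back off
rate = regulate(rate, acks_received=4, min_acks=3)   # enough ACKs -> hold
```

This also shows why the sink can regulate traffic simply by withholding ACKs, or by letting a hotspot lose them: either way, the source sees too few acknowledgments and slows down.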
• 3. Regulation Strategies at the Sink
• The sink can regulate traffic in two ways:
• Reduce or Stop ACKs in Congested Regions:
• If the sink detects congestion, it reduces its acknowledgment rate or stops sending
ACKs.
• This forces sources to slow down their transmission due to missing ACKs.
• Allow Hotspots to Block ACKs:
• The sink continues sending ACKs at a normal rate, but congestion in "hotspot" regions
causes them to get lost before reaching sources.
• As a result, sources beyond the congested area automatically reduce their rates.
• 4. Key Benefits of Closed-Loop Regulation
• ✅ Effectively manages persistent congestion near the sink.
✅ Adapts dynamically to network conditions.
✅ Reduces unnecessary network load compared to backpressure.
✅ Can be restricted to specific sources based on the phenomenon
being observed.
Operating system for WSN
• WSNs require specialized operating systems due to resource
constraints.
• Must support energy-efficient execution, event-driven operations,
and dynamic power management.
• Cannot use traditional general-purpose OS.
• Role of an OS in WSNs
• Manages access to limited resources.
• Supports concurrent execution of tasks.
• Enables energy-efficient operations and low-power states.
Structure of operating system and protocol stack