Chapter 5 Peer-To-Peer Protocols and Data Link
Peer-to-Peer Protocols
and Data Link Layer
PART I: Peer-to-Peer Protocols
Peer-to-Peer Protocols and Service Models
ARQ Protocols and Reliable Data Transfer
Flow Control
Timing Recovery
TCP Reliable Stream Service & Flow Control
PART II: Data Link Controls
Framing
Point-to-Point Protocol
High-Level Data Link Control
Link Sharing Using Statistical Multiplexing
Chapter Overview
Peer-to-Peer protocols: many protocols involve the
interaction between two peers
Service Models are discussed & examples given
Detailed discussion of ARQ provides example of
development of peer-to-peer protocols
Flow control, TCP reliable stream, and timing recovery
Data Link Layer
Framing
PPP & HDLC protocols
Statistical multiplexing for link sharing
Peer-to-Peer Protocols and
Service Models
Peer-to-Peer Protocols
Peer-to-peer processes at layer n execute the layer-n protocol to provide service to the layer-(n+1) peer processes, which exchange messages across the network.
Segments can experience long delays, can be lost, or can arrive out-of-order because packets can follow different paths across the network, making end-to-end error control more difficult.
[Figure: two placements of error control. (a) ACK/NAK exchanged hop-by-hop inside the network, at every link along the path through end systems α and β; (b) simple end-to-end ACK/NAK between the end systems only. Placing the complexity at the edge is more scalable.]
ARQ Protocols and Reliable
Data Transfer
Automatic Repeat Request (ARQ)
Purpose: to ensure a sequence of information
packets is delivered in order and without errors or
duplications despite transmission errors & losses
We will look at:
Stop-and-Wait ARQ
Go-Back N ARQ
Selective Repeat ARQ
Basic elements of ARQ:
Error-detecting code with high error coverage
ACKs (positive acknowledgments)
NAKs (negative acknowledgments)
Timeout mechanism
Stop-and-Wait ARQ
Transmit a frame, wait for ACK
[Figure: the transmitter (process A) sends an information frame and waits for a control frame (ACK) from the receiver (process B). A timer is set after each frame transmission. The information frame carries a header, the information packet, and a CRC; the control frame carries only a header and a CRC.]
Need for Sequence Numbers
[Figure: (a) frame 1 is lost; the time-out expires and A retransmits frame 1, then continues with frame 2. (b) the ACK for frame 0 is lost; the time-out expires and A retransmits frame 0, but without sequence numbers B cannot tell whether this is a retransmission or a new frame.]
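The duplicate-frame ambiguity above is exactly what the 1-bit sequence number resolves. A small deterministic sketch (the channel model and function names are invented for illustration):

```python
def stop_and_wait(frames, lost_acks):
    """Stop-and-Wait with a 1-bit sequence number.

    lost_acks: set of transmission indices whose ACK is lost,
    forcing a retransmission of the same frame."""
    delivered = []          # what the receiver hands to its user
    expected = 0            # receiver's Rnext (0 or 1)
    seq = 0                 # sender's current sequence bit
    tx = 0                  # global transmission counter
    for payload in frames:
        while True:
            # frame (seq, payload) arrives; receiver checks the sequence bit
            if seq == expected:
                delivered.append(payload)       # new frame: deliver it
                expected ^= 1
            # else: duplicate of a frame already delivered; discard it
            acked = tx not in lost_acks
            tx += 1
            if acked:                           # ACK arrives: move on
                seq ^= 1
                break
            # ACK lost: time-out expires, retransmit the same frame
    return delivered

# ACK of the very first transmission is lost: frame "a" is sent twice,
# but the duplicate is discarded thanks to the sequence bit.
print(stop_and_wait(["a", "b"], lost_acks={0}))   # → ['a', 'b']
```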
[Figure: timing diagram. A transmits a frame of n_f bits (t_f = n_f/R), the frame propagates (t_prop), B processes it (t_proc) and returns an ACK frame of n_a bits (t_ack = n_a/R), which propagates back.]
Basic time per frame:
t_0 = t_f + 2 t_prop + 2 t_proc + t_ack = n_f/R + 2(t_prop + t_proc) + n_a/R
Transmission efficiency:
η = R_eff / R = ((n_f − n_o)/t_0) / R = (1 − n_o/n_f) / (1 + n_a/n_f + 2(t_prop + t_proc)R/n_f)
The n_o/n_f term is the effect of frame overhead, the n_a/n_f term the effect of the ACK frame, and 2(t_prop + t_proc)R/n_f the effect of the delay-bandwidth product.
Example: Impact of Delay-Bandwidth Product
n_f = 1250 bytes = 10,000 bits; n_a = n_o = 25 bytes = 200 bits

2 x Delay:            1 ms      10 ms     100 ms     1 sec
Distance:             200 km    2000 km   20,000 km  200,000 km
1 Mbps  delay-BW:     10^3      10^4      10^5       10^6
        efficiency:   88%       49%       9%         1%
1 Gbps  delay-BW:     10^6      10^7      10^8       10^9
        efficiency:   1%        0.1%      0.01%      0.001%

Stop-and-Wait does not work well for very high speeds or long propagation delays.
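The efficiency row follows directly from the formula above; a quick sketch with the example's parameter values (the function name is hypothetical):

```python
def sw_efficiency(nf=10_000, na=200, no=200, delay_bw=0):
    """Stop-and-Wait efficiency; delay_bw = 2*(tprop+tproc)*R in bits."""
    return (1 - no / nf) / (1 + na / nf + delay_bw / nf)

# 1 Mbps link, 2 x delay = 1 ms, 10 ms, 100 ms, 1 s
for dbw in (1e3, 1e4, 1e5, 1e6):
    print(f"delay-BW {dbw:>9.0f} bits: efficiency {sw_efficiency(delay_bw=dbw):.1%}")
```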
S&W Efficiency in Channel with
Errors
Let 1 – Pf = probability frame arrives w/o errors
Avg. # of transmissions to first correct arrival is then 1/ (1–Pf )
“If 1-in-10 get through without error, then avg. 10 tries to success”
Avg. Total Time per frame is then t0/(1 – Pf)
η_SW = ((n_f − n_o) / (t_0/(1 − P_f))) / R = (1 − P_f)(1 − n_o/n_f) / (1 + n_a/n_f + 2(t_prop + t_proc)R/n_f)
The (1 − P_f) factor is the effect of frame loss.
Example: Impact of Bit Error Rate
n_f = 1250 bytes = 10,000 bits; n_a = n_o = 25 bytes = 200 bits
Find the efficiency for random bit errors with p = 0, 10^-6, 10^-5, 10^-4.
1 − P_f = (1 − p)^n_f ≈ e^(−n_f p) for large n_f and small p
Go-Back-N ARQ
[Figure: A pipelines frames 0-6; frame 3 is lost, so B keeps Rnext = 3 and discards the out-of-sequence frames 4, 5, 6. When the window of 4 outstanding frames is exhausted, A goes back and resends frames 3, 4, 5, 6, then continues with 7, 8, 9.]
Frame transmissions are pipelined to keep the channel busy
Frames with errors and subsequent out-of-sequence frames are ignored
Transmitter is forced to go back when the window of 4 is exhausted
Window size must be long enough to cover the round-trip time
[Figure: (top) Stop-and-Wait ARQ: frame 0 is lost; the receiver, looking for Rnext = 0, receives nothing until the time-out expires and frame 0 is resent, followed by frame 1. (bottom) Go-Back-N: frame 0 is lost; the receiver, still looking for Rnext = 0, discards the out-of-sequence frames that follow while acknowledging each arrival.]
Go-Back-N with Timeout
Problem with Go-Back-N as presented:
If frame is lost and source does not have frame to
send, then window will not be exhausted and
recovery will not commence
Use a timeout with each frame
When timeout expires, resend all outstanding
frames
Go-Back-N Transmitter & Receiver
Transmitter: the send window runs from Slast (the oldest unACKed frame) to Slast + Ws − 1 (the maximum Seq # allowed); Srecent is the most recent transmission. Outstanding frames are kept in buffers, each with its own timer.
Receiver: the receive window contains only Rnext. The receiver accepts only a frame that is error-free and has sequence number Rnext; when such a frame arrives, Rnext is incremented by one, so the receive window slides forward by one.
Sliding Window Operation
[Figure: sequence numbers are taken modulo 2^m; the send window is the arc from i = Slast to i + Ws − 1 of the circular sequence space {0, 1, ..., 2^m − 1}.]
The transmitter waits for an error-free ACK frame acknowledging Slast. When such an ACK frame arrives, Slast is incremented by one, and the send window slides forward by one.
Maximum allowable window size is Ws = 2^m − 1.
M = 2^2 = 4, Go-Back-4: transmitter goes back 4
[Figure: (top) Ws = 4: A sends frames 0, 1, 2, 3; the ACKs are lost, so A goes back and resends frame 0. The receiver has Rnext = 0 again, but it does not know whether its ACK for frame 0 was received, so it does not know whether this is the old frame 0 or a new frame 0. (bottom) Ws = 3: A sends frames 0, 1, 2; the ACKs are lost and A resends frame 0. The receiver has Rnext = 3, so it rejects the old frame 0.]
ACK Piggybacking in Bidirectional GBN
When information frames flow in both directions, each station is both a transmitter and a receiver: station A maintains SA_recent and RA_next, station B maintains SB_recent and RB_next, and each has its own send window. ACKs (the local Rnext) are piggybacked in the headers of information frames traveling in the reverse direction.
Go-Back-N Efficiency
Average time per delivered frame (taking the time-out to be one window, Ws t_f):
t_GBN = (1 − P_f) t_f + P_f ( t_f + Ws t_f/(1 − P_f) ) = t_f + Ws t_f P_f/(1 − P_f)
Efficiency:
η_GBN = ((n_f − n_o)/t_GBN) / R = (1 − n_o/n_f)(1 − P_f) / (1 + (Ws − 1) P_f)
[Figure: Selective Repeat ARQ: A sends frames 0-6; frame 2 is lost. B ACKs Rnext = 2 repeatedly while buffering the out-of-sequence frames 3-6 and sends a NAK for frame 2. A retransmits only frame 2, after which Rnext jumps to 7 and transmission continues with frames 7-12.]
Selective Repeat ARQ
Transmitter: the send window runs from Slast to Slast + Ws − 1, with Srecent the most recent transmission; frames below Slast have been transmitted and ACKed, and each outstanding frame has its own buffer and timer.
Receiver: the receive window runs from Rnext to Rnext + Wr − 1 (the maximum Seq # accepted); error-free frames anywhere in the window are accepted, and out-of-sequence frames are buffered.
Send & Receive Windows
[Figure: both windows are arcs of the circular sequence space {0, ..., 2^m − 1}: the send window from i = Slast to i + Ws − 1, the receive window from j = Rnext to j + Wr − 1.]
The send window moves forward by k when an ACK arrives with Rnext = Slast + k, k = 1, ..., Ws − 1. The receive window moves forward by 1 or more when a frame arrives with Seq # = Rnext.
What size Ws and Wr are allowed?
Example: M = 2^2 = 4, Ws = 3, Wr = 3
[Figure: A sends frames 0, 1, 2; the send window shrinks {0,1,2} → {1,2} → {2} → {} as ACK1 and ACK2 arrive, and frame 0 is then resent. Meanwhile the receive window has advanced {0,1} → {1,2} → {2,3}, so the old frame 0 is rejected because it falls outside the receive window.]
Why Ws + Wr = 2^m works
Transmitter sends frames 0 to Ws − 1; its send window empties. The receiver window starts at {0, ..., Wr − 1}.
All frames arrive, so the receive window slides forward to {Ws, ..., Ws + Wr − 1}.
All ACKs are lost, so the transmitter resends frame 0.
The receiver rejects frame 0 because it is outside the receive window: with Ws + Wr = 2^m, the retransmitted sequence numbers and the new receive window do not overlap in the circular sequence space.
Applications of Selective Repeat
ARQ
TCP (Transmission Control Protocol):
transport layer protocol uses variation of
selective repeat to provide reliable stream
service
Service Specific Connection Oriented
Protocol: error control for signaling
messages in ATM networks
Efficiency of Selective Repeat
Let P_f be the frame loss probability. The average number of transmissions required to deliver a frame is 1/(1 − P_f), so the average time per delivered frame is t_f/(1 − P_f).
η_SR = ((n_f − n_o) / (t_f/(1 − P_f))) / R = (1 − n_o/n_f)(1 − P_f)
Example: Impact of Bit Error Rate on Selective Repeat
n_f = 1250 bytes = 10,000 bits; n_a = n_o = 25 bytes = 200 bits
Compare S&W, GBN & SR efficiency for random bit errors with p = 0, 10^-6, 10^-5, 10^-4, R = 1 Mbps, and a 100 ms reaction time.
Selective Repeat outperforms GBN and S&W, but efficiency drops as the error rate increases.
Comparison of ARQ Efficiencies
Assume n_a and n_o are negligible relative to n_f, and let L = 2(t_prop + t_proc)R/n_f = Ws − 1. Then:
Selective Repeat:
η_SR = (1 − P_f)(1 − n_o/n_f) ≈ 1 − P_f
Go-Back-N:
η_GBN = (1 − P_f) / (1 + (Ws − 1)P_f) = (1 − P_f) / (1 + L P_f)
For P_f ≈ 0, SR & GBN are the same.
Stop-and-Wait:
η_SW = (1 − P_f) / (1 + n_a/n_f + 2(t_prop + t_proc)R/n_f) ≈ (1 − P_f) / (1 + L)
For P_f → 1, GBN & SW are the same.
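A sketch computing the three efficiencies under the simplified formulas above, using the earlier example's parameter values (the 100 ms reaction time and the choice Ws = L + 1 are assumptions consistent with the example):

```python
def arq_efficiencies(p, nf=10_000, na=200, no=200, rate=1e6, reaction=0.1):
    """Return (eta_SW, eta_GBN, eta_SR) for bit error rate p."""
    Pf = 1 - (1 - p) ** nf                  # frame loss probability
    L = reaction * rate / nf                # delay-bandwidth product in frames
    Ws = L + 1                              # window that just covers the round trip
    overhead = 1 - no / nf
    eta_sr = overhead * (1 - Pf)
    eta_gbn = overhead * (1 - Pf) / (1 + (Ws - 1) * Pf)
    eta_sw = overhead * (1 - Pf) / (1 + na / nf + L)
    return eta_sw, eta_gbn, eta_sr

for p in (0, 1e-6, 1e-5, 1e-4):
    sw, gbn, sr = arq_efficiencies(p)
    print(f"p={p:.0e}: SW {sw:.3f}  GBN {gbn:.3f}  SR {sr:.3f}")
```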
ARQ Efficiency Comparison
[Plot: efficiency vs. −log10(p): Selective Repeat is highest, then Go-Back-N, then Stop-and-Wait; all approach their error-free values as p decreases.]
Flow Control
[Figure: ON/OFF flow control: the receiver sends an "off" control frame to stop the transmitter and an "on" frame to resume; after "off" is sent, up to 2 Tprop of transmission is still in flight.]
Sliding-window ARQ method with Ws equal to the buffer space available
Transmitter can never send more than Ws frames
ACKs that slide the window forward can be viewed as permits to transmit more
Can also pace ACKs as shown above
Returning permits (ACKs) at the end of each cycle regulates the transmission rate
Problems using the sliding window for both error & flow control:
Choice of window size
Interplay between transmission rate & retransmissions
TCP separates error & flow control
Timing Recovery
Timing Recovery for Synchronous
Services
A synchronous source sends periodic information blocks, but the network output is not periodic.
Packet Arrivals
Tmax
• Delay first packet by maximum network delay
• All other packets arrive with less delay
• Playout packet uniformly thereafter
[Figure: the playout clock must be synchronized to the transmitter clock. A receiver clock that is too slow lets the buffer fill and overflow; one that is too fast produces many packets that arrive after their playout times.]
[Figure: counter-based timing recovery: the transmitter clock fs drives packets into the network; at the receiver, a counter driven by the buffer fill adjusts the playout clock fr.]
TCP ARQ Method
[Figure: transmitter send buffer and receiver receive buffer, with ACKs flowing back.]
• TCP uses Selective Repeat ARQ
• Transfers byte stream without preserving boundaries
• Operates over best effort service of IP
• Packets can arrive with errors or be lost
• Packets can arrive out-of-order
• Packets can arrive after very long delays
• Duplicate segments must be detected & discarded
• Must protect against segments from previous connections
• Sequence Numbers
• Seq. # is number of first byte in segment payload
• Very long Seq. #s (32 bits) to deal with long delays
• Initial sequence numbers negotiated during connection setup
(to deal with very old duplicates)
• Accept segments within a receive window
Three-Way Handshake and Graceful Close
[Figure: connection setup: the transmitter sends SYN, Seq_no = x; the receiver replies SYN, Seq_no = y, ACK, Ack_no = x+1; the transmitter completes with Seq_no = x+1, ACK, Ack_no = y+1. Data transfer follows. Graceful close: one side sends FIN, Seq_no = w and receives ACK, Ack_no = w+1; data transfer may continue in the other direction until the peer sends FIN, Seq_no = z, answered by ACK, Ack_no = z+1.]
1st Handshake: Client-Server Connection Request
[Packet trace: the SYN segments carry initial sequence numbers, and each side ACKs with Ack Seq. # = initial Seq. # + 1. Subsequent segments with 12 bytes of payload and the Push flag set carry telnet option negotiation.]
Graceful Close: Client-to-Server Connection
[Packet trace: the client sends FIN; the server ACKs with Ack Seq. # = previous Seq. # + 1, closing the client-to-server half of the connection.]
Flow Control
TCP receiver controls the rate at which the sender transmits, to prevent buffer overflow
TCP receiver advertises a window size specifying the number of bytes that can be accommodated by the receiver:
WA = WR − (Rnew − Rlast)
TCP sender is obliged to keep the number of outstanding bytes below WA:
(Srecent − Slast) ≤ WA
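A minimal sketch of the receiver-side bookkeeping behind the advertised window (the names WR, Rnew, Rlast follow the slide; the class itself is hypothetical):

```python
class TcpReceiveWindow:
    """Advertised-window bookkeeping: WA = WR - (Rnew - Rlast)."""

    def __init__(self, buffer_size):
        self.WR = buffer_size   # total receive buffer (bytes)
        self.Rlast = 0          # last byte read by the application
        self.Rnew = 0           # highest in-order byte received

    def advertised(self):
        return self.WR - (self.Rnew - self.Rlast)

    def segment_arrives(self, nbytes):
        assert nbytes <= self.advertised(), "sender violated WA"
        self.Rnew += nbytes

    def app_reads(self, nbytes):
        self.Rlast += min(nbytes, self.Rnew - self.Rlast)

rw = TcpReceiveWindow(2048)
rw.segment_arrives(1024)        # 1024 bytes buffered, not yet read
print(rw.advertised())          # advertised window shrinks to 1024
rw.app_reads(1024)              # application drains the buffer
print(rw.advertised())          # window reopens to 2048
```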
[Figure: flow control example.
t0: Seq_no = 1, Ack_no = 2000, Win = 2048, no data
t1: Seq_no = 2000, Ack_no = 1, Win = 1024, Data = 2000-3023
t2: Seq_no = 3024, Ack_no = 1, Win = 1024, Data = 3024-4047
t3: Seq_no = 1, Ack_no = 4048, Win = 512, Data = 1-128
t4: Seq_no = 4048, Ack_no = 129, Win = 1024, Data = 4048-4559]
TCP Retransmission Timeout
TCP retransmits a segment after timeout period
Timeout too short: excessive number of retransmissions
Timeout too long: recovery too slow
Timeout depends on RTT: time from when segment is sent to
when ACK is received
Round trip time (RTT) in Internet is highly variable
Routes vary and can change in mid-connection
Traffic fluctuates
TCP uses adaptive estimation of RTT
Measure RTT each time an ACK is received: τn
tRTT(new) = α tRTT(old) + (1 − α) τn
α = 7/8 typical
RTT Variability
Estimate the variability σRTT of the RTT
Estimate for the timeout:
tout = tRTT + k σRTT
If the RTT is highly variable, the timeout increases accordingly
If the RTT is nearly constant, the timeout stays close to the RTT estimate
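The two estimators above can be sketched as follows (α = 7/8 and k = 4 are conventional TCP choices, assumed here; the mean-deviation gain β = 1/4 is likewise an assumption in the spirit of the slide):

```python
class RttEstimator:
    """EWMA RTT estimate plus deviation-based timeout."""

    def __init__(self, alpha=7/8, beta=1/4, k=4):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.srtt = None    # smoothed RTT, t_RTT
        self.dev = 0.0      # smoothed deviation, sigma_RTT

    def sample(self, tau):
        if self.srtt is None:
            self.srtt = tau                     # first measurement seeds the estimate
        else:
            self.dev = (1 - self.beta) * self.dev + self.beta * abs(tau - self.srtt)
            self.srtt = self.alpha * self.srtt + (1 - self.alpha) * tau

    def timeout(self):
        return self.srtt + self.k * self.dev

est = RttEstimator()
for tau in (0.10, 0.10, 0.30, 0.10):   # a variable RTT inflates the timeout
    est.sample(tau)
print(round(est.timeout(), 3))
```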
Examples
Directly connected, wire-like links: PPP, HDLC — losses & errors, but no out-of-sequence frames
Ethernet LAN
IEEE 802.11 (WiFi) LAN
Applications: direct links; LANs; connections across WANs
Framing
Framing
Framing maps frames into the transmitted bit stream and maps the received physical-layer bit stream back into frames.
Frame boundaries can be determined using:
Character counts
Control characters
Flags
CRC checks
Character-Oriented Framing
Data to be sent
A DLE B ETX DLE STX E
After stuffing and framing
DLE STX A DLE DLE B ETX DLE DLE STX E DLE ETX
Frames consist of an integer number of bytes
Used in asynchronous transmission systems carrying ASCII printable characters
Octets with hex value < 0x20 are nonprintable
Special 8-bit patterns used as control characters:
STX (start of text) = 0x02; ETX (end of text) = 0x03
DLE (data link escape) = 0x10 allows non-printable characters to be carried in a frame
DLE STX (DLE ETX) used to indicate beginning (end) of frame
Insert an extra DLE in front of each occurrence of DLE STX (DLE ETX) inside the frame, so all DLEs occur in pairs except at the frame boundaries
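A sketch of DLE stuffing and destuffing as described above (doubling every DLE in the payload, then adding DLE STX / DLE ETX framing):

```python
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def frame(payload: bytes) -> bytes:
    """Double each DLE in the payload, then add DLE STX ... DLE ETX framing."""
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

def deframe(framed: bytes) -> bytes:
    """Strip the framing and collapse doubled DLEs."""
    assert framed.startswith(DLE + STX) and framed.endswith(DLE + ETX)
    return framed[2:-2].replace(DLE + DLE, DLE)

# the slide's example: A DLE B ETX DLE STX E
data = b"A" + DLE + b"B" + ETX + DLE + STX + b"E"
assert frame(data) == bytes.fromhex("10024110104203101002451003")
assert deframe(frame(data)) == data
```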
Framing & Bit Stuffing
HDLC frame data: 0110111111111100
After stuffing (a 0 is inserted after each run of five consecutive 1s) and framing with flags:
01111110 011011111011111000 01111110
Received frame:
01111110 0001110111110111110110 01111110
After destuffing (the 0 following each run of five 1s is removed) and deframing:
00011101111111111110
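A sketch of the stuffing and destuffing rules on bit strings (flag detection omitted):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # this bit is a stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

# the slide's examples
assert bit_stuff("0110111111111100") == "011011111011111000"
assert bit_destuff("0001110111110111110110") == "00011101111111111110"
```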
PPP Frame
Flag (01111110) | Address (11111111) | Control (00000011) | Protocol (1 or 2 bytes) | Information (variable) | CRC (2 or 4 bytes) | Flag (01111110)
The frame is an integer number of bytes.
Address 11111111: all stations are to accept the frame. Control 00000011: unnumbered frame. The Protocol field specifies what kind of packet is contained in the frame payload, e.g., LCP, NCP, IP, OSI CLNP, IPX.
Byte stuffing example. Data to be sent:
41 7D 42 7E 50 70 46
After stuffing and framing:
7E 41 7D 5D 42 7D 5E 50 70 46 7E
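PPP escapes 0x7E and 0x7D by prefixing 0x7D and XORing the byte with 0x20; a sketch reproducing the example above:

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_stuff(payload: bytes) -> bytes:
    """Escape flag/escape bytes (X -> 7D, X^0x20), then add flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def ppp_destuff(framed: bytes) -> bytes:
    """Strip the flags and undo the 7D escapes."""
    body, out, esc = framed[1:-1], bytearray(), False
    for b in body:
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

data = bytes.fromhex("417D427E507046")
assert ppp_stuff(data) == bytes.fromhex("7E417D5D427D5E5070467E")
assert ppp_destuff(ppp_stuff(data)) == data
```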
Generic Framing Procedure
GFP frame: PLI (2 bytes) | cHEC (2) | Type (2) | tHEC (2) | GEH (0-60) | GFP payload (variable)
[Figure: PPP connection phases, carried in HDLC unnumbered frames: LCP setup, then PAP authentication, then NCP setup for IP.]
High-Level Data Link Control
High-Level Data Link Control
(HDLC)
Bit-oriented data link control
Derived from IBM Synchronous Data Link
Control (SDLC)
Related to Link Access Procedure Balanced
(LAPB)
LAPD in ISDN
LAPM in cellular telephone signaling
[Figure: the network layers exchange NLPDUs ("packets"); the data link layers exchange DLPDUs ("frames") over the physical layers.]
HDLC Data Transfer Modes
Normal Response Mode: used in polling multidrop lines; the primary station issues commands and the secondaries return responses.
Frame formats: Supervisory frame: 1 0 S S P/F N(R). Unnumbered frame: 1 1 M M P/F M M M.
[Figure: connection setup and release with unnumbered frames: SABM → UA, data transfer, DISC → UA.]
Example: HDLC using NRM (polling)
Frame notation: address of secondary, type, N(S), N(R), with P/F bit where set.
A polls B: B, RR, 0, P
B sends 3 info frames: B, I, 0, 0; B, I, 1, 0 (lost); B, I, 2, 0, F
A rejects frame 1: B, SREJ, 1
A polls C: C, RR, 0, P; C has nothing to send: C, RR, 0, F
A polls B, requesting selective retransmission of frame 1: B, SREJ, 1, P
B resends frame 1, then frames 3 & 4: B, I, 1, 0; B, I, 3, 0; B, I, 4, 0, F
A sends info frame 0 to B, ACKing up to 4: B, I, 0, 5
Frame Exchange using Asynchronous Balanced Mode
Combined stations A and B exchange I-frames in both directions:
B sends 5 frames: B, I, 0, 0; B, I, 1, 0 (lost); B, I, 2, 1; B, I, 3, 2; B, I, 4, 3
A sends A, I, 0, 0; A, I, 1, 1 (ACKing B's frame 0); A, I, 2, 1; then rejects frame 1: B, REJ, 1; and sends A, I, 3, 1
B goes back to frame 1: B, I, 1, 3; B, I, 2, 4; B, I, 3, 4
A ACKs frame 1 with B, RR, 2 and frame 2 with B, RR, 3
Flow Control
Flow control is required to prevent transmitter from
overrunning receiver buffers
Receiver can control flow by delaying
acknowledgement messages
Receiver can also use supervisory frames to
explicitly control transmitter
Receive Not Ready (RNR) & Receive Ready (RR)
[Figure: after I-frames 3, 4, 5, the receiver returns RNR 5 to halt the transmitter, then RR 6 to resume with I-frame 6.]
Link Sharing Using Statistical
Multiplexing
Statistical Multiplexing
Multiplexing concentrates bursty traffic onto a shared line
Greater efficiency and lower cost
[Figure: input lines A, B, C feed a buffer that drains onto the shared output line.]
Tradeoff Delay for Efficiency
[Figure: (a) dedicated lines carry each user's packets (A1 A2, B1 B2, C1 C2) separately; (b) a shared line concentrates them.]
Dedicated lines involve no waiting for other users, but the lines are used inefficiently when user traffic is bursty
Shared lines concentrate packets onto one line; packets are buffered (delayed) when the line is not immediately available
Multiplexers Inherent in Packet Switches
[Figure: an N-input, N-output packet switch contains a multiplexer at each output port.]
Performance measures: delay distribution; packet loss probability; line utilization
Delay = Waiting + Service Times
[Figure: packets P1-P5 arrive at the queue, wait, begin transmission, and complete transmission.]
Packets arrive and wait for service
Waiting time: from the arrival instant to the beginning of service
Service time: time to transmit the packet
Delay: total time in system = waiting time + service time
Fluctuations in Packets in the System
[Figure: N(t), the number of packets in the system, fluctuates as packets arrive from the input lines and depart.]
Packet Lengths & Service Times
R bits per second transmission rate
L = # bits in a packet
X = L/R = time to transmit (“service”) a packet
Packet lengths are usually variable
Distribution of lengths → distribution of service times
Common models:
Constant packet length (all the same)
Exponential distribution
Internet measured distributions (see next chart)
Measured Internet Packet Distribution
Dominated by TCP traffic (85%)
~40% of packets are minimum-sized 40-byte packets for TCP ACKs
~15% of packets are maximum-sized 1500-byte Ethernet frames
~15% of packets are 552- & 576-byte packets from TCP implementations that do not use path MTU discovery
Mean = 413 bytes; standard deviation = 509 bytes
Source: caida.org
M/M/1/K Queueing Model
[Figure: Poisson arrivals at rate λ; K − 1 buffer spaces plus one server; exponential service time with rate μ.]
At most K customers are allowed in the system.
Poisson arrivals:
P[k arrivals in t seconds] = ((λt)^k / k!) e^(−λt)
Exponential service times:
P[X > t] = e^(−t/E[X]) = e^(−μt) for t > 0
[Figure: the exponential density μ e^(−μt) and the distribution function P[X < t] = 1 − e^(−μt).]
M/M/1/K Performance Results (From Appendix A)
Probability of overflow (ρ = λ/μ):
P_loss = (1 − ρ)ρ^K / (1 − ρ^(K+1))
Average delay:
E[T] = E[N] / (λ(1 − P_K))
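A quick numeric check of these formulas (a sketch; μ is normalized to 1, and the function name is hypothetical):

```python
def mm1k(rho, K):
    """M/M/1/K: return (loss probability, mean number in system)."""
    # state probabilities p_n = (1 - rho) rho^n / (1 - rho^(K+1)), rho != 1
    norm = 1 - rho ** (K + 1)
    p = [(1 - rho) * rho ** n / norm for n in range(K + 1)]
    p_loss = p[K]                       # arrivals blocked when system is full
    mean_n = sum(n * pn for n, pn in enumerate(p))
    return p_loss, mean_n

# M/M/1/10 at 70% load: normalized delay via E[T] = E[N] / (lam * (1 - P_K))
rho, K, mu = 0.7, 10, 1.0
p_loss, mean_n = mm1k(rho, K)
lam = rho * mu
norm_delay = mean_n / (lam * (1 - p_loss)) * mu   # E[T] / E[X]
print(f"Ploss = {p_loss:.4f}, E[T]/E[X] = {norm_delay:.2f}")
```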
M/M/1/10
[Plots: normalized average delay E[T]/E[X] and loss probability vs. load.]
Maximum 10 packets allowed in system
Minimum delay is 1 service time; maximum delay is 10 service times
At 70% load, delay & loss begin increasing
What if we add more buffers?
M/M/1 Queue
[Figure: Poisson arrivals at rate λ; infinite buffer; exponential service time with rate μ.]
Unlimited number of customers allowed in the system
Pb = 0 since customers are never blocked
Average time in system: E[T] = E[W] + E[X]
When λ << μ, customers arrive infrequently and delays are low
As λ approaches μ, customers start bunching up and average delays increase
When λ > μ, customers arrive faster than they can be processed and the queue grows without bound (unstable)
Avg. Delay in M/M/1 & M/D/1
[Plot: normalized average delay vs. load; the constant-service-time M/D/1 curve lies below the M/M/1 curve.]
E[T_M] = 1/(μ − λ) = (1/μ) · 1/(1 − ρ) for the M/M/1 model
E[T_D] = (1/μ) (1 + ρ/(2(1 − ρ))) = (1/μ) (2 − ρ)/(2(1 − ρ)) for the M/D/1 system
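A sketch comparing the two formulas (μ normalized to 1); note that at every load the M/D/1 delay is the M/M/1 delay scaled by (1 − ρ/2):

```python
def mm1_delay(rho, mu=1.0):
    """M/M/1 mean time in system: E[T] = 1 / (mu - lam)."""
    return 1.0 / (mu * (1.0 - rho))

def md1_delay(rho, mu=1.0):
    """M/D/1 mean time in system: E[T] = (1/mu) (2 - rho) / (2 (1 - rho))."""
    return (2.0 - rho) / (2.0 * mu * (1.0 - rho))

for rho in (0.5, 0.75, 0.9):
    print(f"rho={rho}: M/M/1 {mm1_delay(rho):.2f}  M/D/1 {md1_delay(rho):.2f}")
```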
Effect of Scale
C = 100,000 bps: exponentially distributed packet lengths, average 10,000 bits; service time X = 0.1 s; arrival rate 7.5 pkts/sec; load ρ = 0.75; E[T] = 0.1/(1 − 0.75) = 0.4 sec.
C = 10,000,000 bps: same packet length distribution; service time X = 0.001 s; arrival rate 750 pkts/sec; load ρ = 0.75; E[T] = 0.001/(1 − 0.75) = 0.004 sec.
At the same load, the faster line reduces delay by a factor of 100.
[Plot: average delay vs. goodput (bits/second) for packet lengths L = 200, 400, 800, 1200.]
Burst Multiplexing / Speech Interpolation
[Figure: many voice calls multiplexed onto fewer trunks.]
[Plot: speech loss vs. number of connections for 24, 32, 40, 48 trunks; the typical requirement is 1% loss.]
Fraction of speech lost when n speakers share m trunks, each speaker active with probability p:
speech loss = (1/(np)) Σ_{k=m+1}^{n} (k − m) C(n,k) p^k (1 − p)^(n−k), where C(n,k) = n!/(k!(n − k)!)
Effect of Scale
Larger flows lead to better performance
Multiplexing gain = # speakers / # trunks
Trunks required for 1% speech loss:

Speakers  Trunks  Multiplexing Gain  Utilization
24        13      1.85               0.74
32        16      2.00               0.80
40        20      2.00               0.80
48        23      2.09               0.83
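The table can be reproduced from the speech-loss formula; a sketch (the speaker activity p = 0.4 is an assumption consistent with the utilizations shown, and the function names are hypothetical):

```python
from math import comb

def speech_loss(n, m, p=0.4):
    """Fraction of speech lost: n speakers, m trunks, activity p."""
    lost = sum((k - m) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(m + 1, n + 1))
    return lost / (n * p)

def trunks_needed(n, target=0.01, p=0.4):
    """Smallest number of trunks m with speech loss <= target."""
    m = 0
    while speech_loss(n, m, p) > target:
        m += 1
    return m

for n in (24, 32, 40, 48):
    m = trunks_needed(n)
    print(n, m, round(n / m, 2), round(n * 0.4 / m, 2))
```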
Packet Speech Multiplexing
[Figure: many voice terminals generate voice packets into a shared buffer; the buffer can overflow (packet B2 is lost), and packets are received with variable delay relative to when they were sent.]
Stop-and-Wait Performance
Average total time per delivered frame (with time-out t_out ≈ t_0):
E[t_total] = t_0 + t_out P_f/(1 − P_f) = t_0/(1 − P_f)
Efficiency:
η_SW = ((n_f − n_o)/E[t_total]) / R = (1 − P_f)(1 − n_o/n_f) / (1 + n_a/n_f + 2(t_prop + t_proc)R/n_f)
Go-Back-N Performance
Average total time per delivered frame: 1 successful transmission plus i − 1 unsuccessful transmissions, each costing Ws t_f:
E[t_total] = t_f + Σ_{i=1}^{∞} (i − 1) Ws t_f P[n_t = i]
           = t_f + Ws t_f Σ_{i=1}^{∞} (i − 1)(1 − P_f) P_f^(i−1)
           = t_f + Ws t_f P_f/(1 − P_f) = t_f (1 + (Ws − 1) P_f)/(1 − P_f)
Efficiency:
η_GBN = ((n_f − n_o)/E[t_total]) / R = (1 − n_o/n_f)(1 − P_f) / (1 + (Ws − 1) P_f)