
Congestion control advancements in Linux

Ian McDonald and Dr Richard Nelson


Department of Computer Science
University of Waikato
Hamilton, New Zealand
Email: [email protected], [email protected]

Abstract

This paper describes the recent advancements in network congestion control in the Linux kernel. Specifically, the paper focuses on the TCP congestion control framework and the implementation of the DCCP protocol stack.

Linux has had multiple TCP congestion control methods added to it, and the subsequent growth of the codebase has made development difficult. As a result a congestion control framework has been introduced.

This paper also outlines how the congestion control framework was used to implement TCP–Nice. TCP–Nice is an experimental congestion control mechanism that uses less than its fair share of bandwidth when there is congestion, much like nice does for CPU usage by processes in the Unix operating system.

1 Introduction

Congestion became an issue in the 1980s in TCP/IP networks, as documented by Nagle [23]. During this period TCP/IP links on the Internet became increasingly congested, and Van Jacobson [13] in 1988 proposed that if "conservation of packets" was observed then TCP flows would be generally stable. The "conservation of packets" was implemented by a congestion window which would be dynamically resized until the connection reached an initial state of stability, and as conditions changed. More packets would not be added to the congestion window when it was full until another was removed after receiving an acknowledgment. These changes are widely credited with preventing ongoing TCP collapse.

At present there is an effort to abstract much of the TCP codebase and move it into a generic IP implementation. The reasoning behind this is to facilitate the implementation of new protocols like DCCP and to improve the implementation of existing protocols such as SCTP.

The rationale behind the Datagram Congestion Control Protocol (DCCP) and the current status of the protocol is also discussed. DCCP is a new IP based transport protocol which aims to replace TCP and UDP in some uses. The implementation of the DCCP protocol in the Linux kernel is outlined.

2 TCP Congestion

TCP congestion control, as described by Jacobson, works on a congestion window system which allows a window of unacknowledged packets. This is initially set to one packet and is then increased by one packet per acknowledgment, which has the effect of almost doubling the window size per round trip time (RTT).
This continues until either:

• the maximum window size is reached

• the slow start threshold is reached

• congestion occurs.

When the slow start threshold is exceeded the congestion window then increases by a maximum of one packet per RTT. If congestion occurs (detected through multiple duplicate ACKs or a timeout) then the congestion window and slow start threshold are altered. A more detailed explanation is provided by Stevens [28] and RFC 2581 [1].
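To make this behaviour concrete, the following simplified sketch shows how a Reno-style sender might update its window per acknowledgment and on a congestion event. It is illustrative only and is not taken from the kernel; the structure and variable names (cwnd, ssthresh, max_cwnd, acked) are ours, the window is counted in packets, and the congestion response shown is the fast-recovery style halving (a retransmission timeout would instead restart slow start from one packet).

struct reno_sketch {
        unsigned int cwnd;      /* congestion window, in packets */
        unsigned int ssthresh;  /* slow start threshold, in packets */
        unsigned int max_cwnd;  /* maximum window the receiver/kernel allows */
        unsigned int acked;     /* ACKs counted within the current window */
};

/* Called for every acknowledgment that advances the window. */
void reno_sketch_on_ack(struct reno_sketch *s)
{
        if (s->cwnd >= s->max_cwnd)
                return;                 /* maximum window size reached */

        if (s->cwnd < s->ssthresh) {
                s->cwnd++;              /* slow start: +1 per ACK, roughly
                                           doubling the window every RTT */
        } else if (++s->acked >= s->cwnd) {
                s->cwnd++;              /* congestion avoidance: at most
                                           +1 packet per RTT */
                s->acked = 0;
        }
}

/* Called when congestion is detected via duplicate ACKs. */
void reno_sketch_on_congestion(struct reno_sketch *s)
{
        s->ssthresh = s->cwnd / 2;      /* halve the slow start threshold */
        if (s->ssthresh < 2)
                s->ssthresh = 2;
        s->cwnd = s->ssthresh;          /* and back the window off */
        s->acked = 0;
}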
This section talks about the more recent TCP congestion changes to Linux. Sarolahti and Kuznetsov [26] describe the earlier TCP congestion implementation in Linux. Linux 2.4.x had TCP New Reno implemented by Alexey Dobriyan.

TCP researchers were working on the Net100 project for Linux TCP performance, which became the Web100 project [30]. Stephen Hemminger merged the parts of this code that met the needs of the community and had suitable licenses.

The current list of TCP implementations in the Linux kernel is:

• Reno is the implementation of Van Jacobson's research [13] and was the default congestion control scheme until recently.

• Binary Increase Congestion Control (BIC) [31] was implemented into the kernel in 2.6.6, and the default changed from Reno to BIC in 2.6.8 after Stephen Hemminger investigated data from Stanford Linear Accelerator tests [27]. The initial implementation of BIC had issues, which are described in Li and Leith [16]; these were resolved in the 2.6.11 kernel. BIC aims to address issues on high performance networks, particularly around RTT unfairness, and uses a combination of additive increase and binary search to alter the size of the congestion window.

• Vegas [2] is based on Reno and tries to track the sending rate through looking at variances in the RTT, along with other enhancements.

• Westwood [18] is an implementation that estimates the available bandwidth and is claimed to be suited to wireless use or other networks where loss may occur which does not mean congestion.

• TCP–Hybla [3] is a congestion control mechanism that works with links such as satellite which have high RTT but also high bandwidth, as some other congestion control mechanisms favour low RTT flows.

• H–TCP [7], Highspeed TCP [4], and Scalable TCP [14] all aim to improve congestion control on high speed networks.

As the number of implementations in the Linux kernel increased, the code became more complicated and there was not a consistent way to use each congestion control implementation. Stephen Hemminger has rewritten the TCP code to make it more modular. This has involved splitting each algorithm into a separate file in net/ipv4 and implementing a structure to register these in include/net/tcp.h — details are shown in Appendix A.

An implementation which demonstrates this simply is Scalable TCP, which is implemented in net/ipv4/tcp_scalable.c.

This new modular structure was implemented in Linux 2.6.13. The latest stable release of 2.6.13 should be used as a minimum though, as there were some critical bugs found in the implementation.
As part of this implementation the default TCP congestion control mechanism can be altered dynamically, either by a sysctl or on a per socket basis by using TCP socket options. This allows the use of different algorithms for different link characteristics if desired, or allows different applications to use different algorithms.
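As an illustration, the sketch below selects an algorithm for a single socket. It assumes a kernel with the framework described here and the TCP_CONGESTION socket option that accompanies it (the fallback definition covers C libraries whose headers do not yet know the constant); the system-wide default is the sysctl file /proc/sys/net/ipv4/tcp_congestion_control, and the algorithm name "reno" is chosen arbitrarily.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_CONGESTION
#define TCP_CONGESTION 13       /* per-socket congestion control selection */
#endif

int main(void)
{
        const char *algo = "reno";      /* any registered algorithm name */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }
        /* Select the congestion control algorithm for this socket only;
         * other sockets keep using the system-wide default. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                       algo, strlen(algo)) < 0)
                perror("setsockopt(TCP_CONGESTION)");
        return 0;
}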
It is of some concern that BIC became the default Linux TCP implementation while there were still issues with it, and there have been other regressions within the networking stack of Linux at other times also. A testing framework is necessary for the networking stack to be tested against for each release. Stephen Hemminger has carried out some preliminary work in this area [10].

3 TCP–Nice

Much of the focus of TCP congestion control research has been on transmitting the most data possible while maintaining fairness with other data flows and maintaining stability.

At the University of Waikato we are investigating a TCP congestion control variant which uses less than its fair share when faced with congestion. The reason for this is to allow large file transfers that are not time critical to occur at a lower priority and allow available bandwidth to be used for other applications.

It could be argued that the appropriate place to do this is on a middle box, in a similar manner to the QBone Scavenger Service [25], but often users do not have access to this or the equipment is not capable of it – for example domestic ADSL/cable routers.

The name TCP–Nice is drawn from the use of the nice command in Unix/Linux, which lowers a process' priority.

The following characteristics were desired:

• Use less than its fair share when competing flows cause congestion

• Make use of available network capacity when no competing traffic is causing congestion

Figure 1: cwnd for TCP–Nice vs Reno (congestion window in packets against round-trip times, with the timeout and loss events marked)

TCP–Nice would behave similarly to Reno during startup but be more conservative after a congestion event, as shown in Figure 1, where cwnd is the congestion window size.

3.1 Implementation

In this section the new TCP congestion framework for Linux is demonstrated using the implementation of TCP–Nice as an example. The complete code for TCP–Nice is in Appendix B.

To initialise a new TCP congestion control mechanism the tcp_congestion_ops structure must be initialised and then a call to tcp_register_congestion_control is made.
In this case it can be seen that init, rtt_sample, undo_cwnd and get_info are not implemented. It is also possible to use other congestion control functions here – for example Scalable TCP sets min_cwnd to tcp_reno_min_cwnd.

Data can be stored by allocating data referenced through the inet_csk_ca function, which points to an area of private data. In TCP–Nice this is used to record the last loss time through the tcp_nice_data structure.
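For illustration only, a hypothetical init hook (TCP–Nice itself leaves init unset) could prepare that private area when a connection enters the framework; the area is a small fixed-size buffer inside the socket, so only compact state such as a timestamp should be kept there.

/* Hypothetical init hook: not part of TCP-Nice, shown only to illustrate
 * how the inet_csk_ca() private area is typically set up. */
static void tcp_nice_init(struct sock *sk)
{
        struct tcp_nice_data *ca = inet_csk_ca(sk);

        ca->last_loss = 0;      /* no loss recorded yet for this connection */
}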
The following code is run when the TCP congestion state changes:

static void tcp_nice_state(struct sock *sk, u8 new_state)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct tcp_nice_data *ca = inet_csk_ca(sk);

        if (new_state == TCP_CA_Loss) {
                tp->snd_ssthresh = 2;
                tp->snd_cwnd = 2;
                ca->last_loss = jiffies;
        }
}

In this case the slow start threshold (snd_ssthresh) and congestion window (snd_cwnd) are reduced to two and the time of the loss is recorded.

The congestion avoidance function is implemented as follows:

#define LOSS_TIME_CLAMP 4

void tcp_nice_cong_avoid(struct sock *sk, u32 ack, u32 rtt,
                         u32 in_flight, int flag)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct tcp_nice_data *ca = inet_csk_ca(sk);

        if (in_flight < tp->snd_cwnd)
                return;

        if ((jiffies - ca->last_loss) < (LOSS_TIME_CLAMP * HZ)) {
                tp->snd_ssthresh = 2;
                tp->snd_cwnd = 2;
                return;
        }

        ... existing Reno code follows

In this case the code is modelled on TCP Reno but keeps snd_ssthresh and snd_cwnd at two if there has been loss in the last LOSS_TIME_CLAMP seconds.

Below are the functions for setting the slow start threshold and minimum congestion window:

/* Slow start threshold is quarter the congestion window (min 2) */
u32 tcp_nice_ssthresh(struct sock *sk)
{
        struct tcp_sock *tp = tcp_sk(sk);

        return max(tp->snd_cwnd >> 2U, 2U);
}

/* Lower bound on congestion window. */
u32 tcp_nice_min_cwnd(struct sock *sk)
{
        return 2;
}

In implementing TCP–Nice it was decided to use a slow start threshold of a quarter of the congestion window instead of a half as per Reno, and to set min_cwnd to two.

The implementation of TCP–Nice shows that it is relatively simple to implement a new congestion control mechanism in the kernel because of the framework that has been introduced into the Linux kernel.

3.2 Results

TCP–Nice was tested with competing TCP flows to see how the flow was reduced and the time taken to recover.
With the code implemented as per the previous section, or slight variations of it, all of the desired characteristics were not achieved.

To achieve all of the desired characteristics will require further experimentation and/or mathematical modelling.

4 DCCP

The Datagram Congestion Control Protocol (DCCP) [15] is a new transport protocol that is at draft RFC status. DCCP is an unreliable session based protocol. This means that it is session based like TCP but unreliable like UDP. The rationale behind the new protocol is that existing protocols do not handle the requirements of modern applications, such as multimedia, as well as desired. The use of UDP does not provide congestion control at the transport level and is not session based, so it will not traverse some firewalls or NAT devices. TCP does provide congestion control and a session, but is not as suitable due to retransmission and the use of an AIMD based congestion response which alters the transmission rate rapidly rather than smoothly. DCCP aims to provide a solution to these problems.

The DCCP protocol is defined in a modular method — there is a base protocol defined, while multiple congestion control mechanisms can be implemented through the use of Congestion Control IDs (CCIDs). At the time of writing there are two core CCIDs, with others proposed:

• CCID2 [5] is TCP-like congestion control and implements a congestion control mechanism based on TCP.

• CCID3 [6] is based on TCP Friendly Rate Control (TFRC) [8]. TFRC aims to provide a smoother response to congestion than TCP while still using a "fair" share of bandwidth compared to other flows. TFRC achieves this goal by estimating the sending rate available, rather than halving the window in response to congestion as TCP can do.

4.1 History

For the Linux 2.4.x kernel there were two main early releases of DCCP. There was an implementation by ICIR [11] to test the difficulty of implementing the spec, and an implementation by Patrick McManus [20] which implemented the base protocol and CCID2.

Waikato University took the implementation from Patrick McManus and incorporated CCID3 code from Lulea University of Technology [17], relicensed under the GPL. Earlier this year this code was tidied for release and is available for the 2.4.27 kernel [29].

4.2 Implementation

Arnaldo Carvalho de Melo started implementing a version of DCCP for the 2.6.12 kernel. This initially consisted of the base protocol without CCIDs. Parts of the Waikato University code base, which was updated to the 2.6.11.4 kernel, were then merged — mostly in CCID3 support. As part of the code being developed, TCP socket code was refactored so that it supported multiple protocols, which is discussed later in this paper. This code base has been accepted into the kernel tree by Linus Torvalds and was released in 2.6.14.

One of the challenges of implementing CCID3 was the implementation of the mathematical calculation of the rate. The challenge was twofold — the lack of 64 bit integer operations on 32 bit architectures and the inability to use floating point instructions in the kernel. This was resolved by converting the Lulea floating point lookup tables to integer based ones with some reasonably complex manipulations.
As part of this a large amount of testing was carried out, which showed that most implementations to date had implemented the rate calculation incorrectly.
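The general technique is shown in the sketch below: the values of the rate function are pre-multiplied by a scaling factor, stored in an integer table, and linearly interpolated using integer arithmetic only. The table contents, the SCALE factor and the function names here are made up for illustration, this is not the actual CCID3/TFRC table, and in the kernel the 64 bit divisions would go through helpers such as do_div() on 32 bit architectures rather than the plain division operator used here.

#define SCALE 1000000           /* fixed-point scaling factor (hypothetical) */

/* A few sample values of f(p) * SCALE; the numbers are made up. */
static const unsigned int rate_table[] = {
        1000000, 707107, 500000, 353553, 250000, 176777, 125000, 88388
};
#define TABLE_LEN (sizeof(rate_table) / sizeof(rate_table[0]))

/* index_scaled is a table index multiplied by SCALE, so the remainder
 * after dividing by SCALE provides the fractional part for interpolation. */
unsigned int rate_lookup(unsigned long long index_scaled)
{
        unsigned long long i = index_scaled / SCALE;
        long long frac = (long long)(index_scaled % SCALE);
        long long lo, hi;

        if (i >= TABLE_LEN - 1)
                return rate_table[TABLE_LEN - 1];

        lo = rate_table[i];
        hi = rate_table[i + 1];
        /* linear interpolation between neighbouring entries, integers only */
        return (unsigned int)(lo + ((hi - lo) * frac) / SCALE);
}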
It has proved relatively trivial to port user level applications to DCCP. Netcat, iperf, ttcp and ssh have all had DCCP support added relatively quickly. Programs that depend on the format of the packet, e.g. tcpdump and Ethereal, have taken more effort.
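For the simple cases, porting largely amounts to changing the arguments of the socket() call, as the minimal sketch below shows. The fallback definitions are included because C library headers of this era may not yet define the DCCP constants (on Linux SOCK_DCCP is 6 and IPPROTO_DCCP is 33); everything after socket creation follows the familiar connect/bind/listen/accept pattern.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef SOCK_DCCP
#define SOCK_DCCP 6
#endif
#ifndef IPPROTO_DCCP
#define IPPROTO_DCCP 33
#endif

int main(void)
{
        /* A TCP application would call
         *      socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
         * the DCCP equivalent only changes the type and protocol. */
        int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);

        if (fd < 0) {
                perror("socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP)");
                return 1;
        }
        /* connect()/bind()/listen()/accept() are then used as for TCP. */
        return 0;
}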
For applications to make full use of the features available in DCCP, such as rate and loss feedback, a new API will need to be developed. Further research is being carried out in this area by the author [19].

DCCP has been implemented for both IPv4 and IPv6. IP version specific code has been split into ipv4.c and ipv6.c.

The directory structure for the DCCP code is:

net/dccp            base protocol
net/dccp/ccids      CCID specific code
net/dccp/ccids/lib  CCID libraries

Further details on the implementation of DCCP can be found in Melo [22].

Tests have been conducted between Brazil and New Zealand by Arnaldo Carvalho de Melo and Ian McDonald using the public Internet, over a route of 18 hops. The tests consisted of using netcat and ttcp which had been modified to use DCCP. Initially the tests failed with checksum failures on the header. This was proven to be due to NAT, as the DCCP checksum covers the source and destination IP addresses amongst other fields. With the checksum tests temporarily removed, the tests proceeded successfully. It is extremely encouraging to see that DCCP was able to traverse the public Internet successfully. From this the conclusion could reasonably be drawn that ISPs are enabling all IP based protocols to travel over their networks.

There also remains work to be done for DCCP to achieve full compliance with the DCCP specification. The two major things that need to occur for this to happen are the CCID2 implementation and feature negotiation. Further interoperability testing needs to be carried out once other operating systems implement DCCP.

Devices that implement NAT will have to be modified to allow DCCP to traverse them. The implementation of this will be similar to the implementation of TCP or UDP NAT. To implement NAT, port mapping will need to be put in place and the checksum recalculated, as DCCP checksums the pseudo-header in the same way as TCP. This should be a priority to be implemented in future versions of the Linux kernel.
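The sketch below illustrates why: a plain ones' complement checksum over a simplified IPv4 pseudo-header plus the transport header and data. Because the source and destination addresses (and, via the port fields in the transport header, any rewritten ports) feed into the sum, a NAT device that rewrites them must recompute the checksum. This is an illustration of the principle rather than the kernel's implementation, and it ignores details such as DCCP's partial checksum coverage.

#include <stdint.h>
#include <stddef.h>

/* Simplified IPv4 pseudo-header, as covered by TCP and DCCP checksums. */
struct pseudo_hdr {
        uint32_t saddr;         /* source IPv4 address */
        uint32_t daddr;         /* destination IPv4 address */
        uint8_t  zero;
        uint8_t  protocol;      /* 33 for DCCP */
        uint16_t length;        /* transport header + data length */
};

/* Accumulate a 16 bit ones' complement sum over a buffer. */
static uint32_t sum16(const void *data, size_t len, uint32_t sum)
{
        const uint16_t *p = data;

        while (len > 1) {
                sum += *p++;
                len -= 2;
        }
        if (len)                        /* trailing odd byte */
                sum += *(const uint8_t *)p;
        return sum;
}

uint16_t transport_checksum(const struct pseudo_hdr *ph,
                            const void *hdr_and_data, size_t len)
{
        uint32_t sum = sum16(ph, sizeof(*ph), 0);

        sum = sum16(hdr_and_data, len, sum);
        while (sum >> 16)               /* fold carries back in */
                sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
}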

5 IP code restructuring

When Arnaldo Carvalho de Melo started on his implementation of DCCP he realised that there were many similarities between the current TCP code and what would be required for DCCP. The code was restructured so that it could be used for both TCP and DCCP, thus avoiding replication.

In particular, TCP sockets and the code using these sockets were examined to see if they could be used by multiple protocols rather than just TCP.

These changes are a continuation of Arnaldo's earlier work [21]. There still remains potential to rework existing protocols such as SCTP to use the restructured code, to extend the code shared between TCP and DCCP, and also to implement other protocols such as XCP.

A similar process has also been commenced by Arnaldo Carvalho de Melo for DCCP over IPv6.
As both the TCP and DCCP codebases use modular congestion control mechanisms, there is also scope to consider whether further code can be shared, or whether the individual congestion control mechanisms could be used for both protocols.

6 Testing

For congestion control to be effective, a variety of network conditions need to be simulated and measured. The following subsections detail the programs used in the testing of TCP and DCCP.

6.1 Tcpdump

DCCP support has been added to the development branch of tcpdump, which is software used to analyse traffic flows on a packet basis. Tcpdump was used extensively in the testing of DCCP to help resolve bugs. At the time of writing tcpdump is the main packet analysis tool used, although there are experimental patches for Ethereal available.

6.2 Netem

Netem [9] is a traffic shaping tool developed by Stephen Hemminger and included in the Linux kernel from 2.6.7. Netem can be used to drop packets, inject delay, duplicate packets and reorder packets. The distribution of the loss, duplication and delay can be uniform or user defined.

Netem only works on outbound queues, so it is typically necessary to enable it on a PC running as a router with at least two Ethernet cards. An example of the testing setup used in TCP and DCCP testing is shown in Figure 2.

Netem draws ideas from Dummynet, which is part of FreeBSD, and some code from NIST Net [24].

Figure 2: NetEm setup

6.3 Performance testing

The programs ttcp and iperf [12] were modified to enable the use of DCCP and the selection of TCP congestion control at run time. These were used to measure throughput and loss over a connection.

6.4 Ostra

Arnaldo Carvalho de Melo has developed a tool called Ostra which has proved extremely useful in the development of DCCP and has potential for wider deployment in Linux kernel development. Ostra wraps existing structures within the kernel to capture variables and parameters changing, program flow and timing information. This is implemented as C wrappers put around existing code. The data is captured at runtime and then Python programs are used to output the data, including an HTML front end.

7 Conclusion

In recent Linux kernel releases there have been substantial changes made to congestion control:

• BIC has become the standard TCP implementation

• TCP congestion control code has been rewritten in a more modular format
• TCP congestion control can be selected on a per socket basis

• DCCP has been implemented

• TCP code has been refactored in parts of the codebase to support multiple protocols

Linux now has a good base implemented for congestion control mechanisms, which gives scope for Linux to be used further for congestion control research. This should enable the networking community to continue to make improvements to IP networking through the use of Linux.

Acknowledgments

We are grateful to those that provided input, reviewed the grammar and critiqued our ideas. Stephen Hemminger of OSDL, Arnaldo Carvalho de Melo of Mandriva and Dr Alan Holt of the University of Waikato deserve special mention for their efforts.

A Structure of tcp_congestion_ops

struct tcp_congestion_ops {
        struct list_head        list;

        /* initialize private data (optional) */
        void (*init)(struct sock *sk);
        /* cleanup private data (optional) */
        void (*release)(struct sock *sk);

        /* return slow start threshold (required) */
        u32 (*ssthresh)(struct sock *sk);
        /* lower bound for congestion window (optional) */
        u32 (*min_cwnd)(struct sock *sk);
        /* do new cwnd calculation (required) */
        void (*cong_avoid)(struct sock *sk, u32 ack,
                           u32 rtt, u32 in_flight, int good_ack);
        /* round trip time sample per acked packet (optional) */
        void (*rtt_sample)(struct sock *sk, u32 usrtt);
        /* call before changing ca_state (optional) */
        void (*set_state)(struct sock *sk, u8 new_state);
        /* call when cwnd event occurs (optional) */
        void (*cwnd_event)(struct sock *sk, enum tcp_ca_event ev);
        /* new value of cwnd after loss (optional) */
        u32 (*undo_cwnd)(struct sock *sk);
        /* hook for packet ack accounting (optional) */
        void (*pkts_acked)(struct sock *sk, u32 num_acked);
        /* get info for inet_diag (optional) */
        void (*get_info)(struct sock *sk, u32 ext, struct sk_buff *skb);

        char            name[TCP_CA_NAME_MAX];
        struct module   *owner;
};

B TCP–Nice code

#include <linux/config.h>
#include <linux/module.h>
#include <net/tcp.h>

#define LOSS_TIME_CLAMP 4

struct tcp_nice_data {
        u32 last_loss;
};

void tcp_nice_cong_avoid(struct sock *sk, u32 ack, u32 rtt,
                         u32 in_flight, int flag)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct tcp_nice_data *ca = inet_csk_ca(sk);

        if (in_flight < tp->snd_cwnd)
                return;
        if (before(tcp_time_stamp,
                   ca->last_loss + LOSS_TIME_CLAMP * HZ)) {
                tp->snd_ssthresh = 2;
                tp->snd_cwnd = 2;
                return;
        }
        /* this will keep snd_cwnd and snd_ssthresh at 2
         * if loss within last x seconds */

        if (tp->snd_cwnd <= tp->snd_ssthresh) {
                /* In "safe" area, increase. */
                if (tp->snd_cwnd < tp->snd_cwnd_clamp)
                        tp->snd_cwnd++;
        } else {
                /* In dangerous area, increase slowly.
                 * In theory this is tp->snd_cwnd += 1 / tp->snd_cwnd
                 */
                if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
                        if (tp->snd_cwnd < tp->snd_cwnd_clamp)
                                tp->snd_cwnd++;
                        tp->snd_cwnd_cnt = 0;
                } else
                        tp->snd_cwnd_cnt++;
        }
}

/* Slow start threshold is quarter the congestion window (min 2) */
u32 tcp_nice_ssthresh(struct sock *sk)
{
        struct tcp_sock *tp = tcp_sk(sk);

        return max(tp->snd_cwnd >> 2U, 2U);
}

/* Lower bound on congestion window. */
u32 tcp_nice_min_cwnd(struct sock *sk)
{
        return 2;
}

static void tcp_nice_event(struct sock *sk, enum tcp_ca_event event)
{
        struct tcp_sock *tp = tcp_sk(sk);

        switch (event) {
        case CA_EVENT_FRTO:
                tp->snd_ssthresh = 2;
                break;
        default:
                /* don't care */
                break;
        }
}

static void tcp_nice_state(struct sock *sk, u8 new_state)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct tcp_nice_data *ca = inet_csk_ca(sk);

        if (new_state == TCP_CA_Loss) {
                tp->snd_ssthresh = 2;
                tp->snd_cwnd = 2;
                ca->last_loss = jiffies;
        }
}

static struct tcp_congestion_ops tcp_nice = {
        .ssthresh       = tcp_nice_ssthresh,
        .min_cwnd       = tcp_nice_min_cwnd,
        .cong_avoid     = tcp_nice_cong_avoid,
        .cwnd_event     = tcp_nice_event,
        .set_state      = tcp_nice_state,
        .owner          = THIS_MODULE,
        .name           = "tcp_nice"
};

static int __init tcp_nice_register(void)
{
        return tcp_register_congestion_control(&tcp_nice);
}

static void __exit tcp_nice_unregister(void)
{
        tcp_unregister_congestion_control(&tcp_nice);
}

module_init(tcp_nice_register);
module_exit(tcp_nice_unregister);

MODULE_AUTHOR("Ian McDonald");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("TCP Nice");

References

[1] M. Allman, V. Paxson, and W. Stevens. TCP congestion control. RFC 2581, 1999.

[2] Lawrence S. Brakmo and Larry L. Peterson. TCP Vegas: End to end congestion avoidance on a global Internet. IEEE Journal on Selected Areas in Communications, 13(8):1465–1480, 1995.

[3] Carlo Caini and Rosario Firrincieli. TCP Hybla: a TCP enhancement for heterogeneous networks. International Journal of Satellite Communications and Networking, 22(5):547–566, August 2004.

[4] S. Floyd. HighSpeed TCP for large congestion windows, 2002.

[5] S. Floyd and E. Kohler. Profile for DCCP congestion control ID 2: TCP-like congestion control. Accessed 2005.

[6] S. Floyd, E. Kohler, and J. Padhye. Profile for DCCP congestion control ID 3: TFRC congestion control. Accessed 2005.

[7] D. Leith. H-TCP: TCP for high-speed and long-distance networks. Hamilton Institute.

[8] M. Handley, S. Floyd, J. Padhye, and J. Widmer. TCP Friendly Rate Control (TFRC): Protocol specification, January 2003.

[9] S. Hemminger. Network emulation with NetEm, 2005.

[10] S. Hemminger. TCP probes. Accessed 2005.

[11] ICIR DCCP implementation. Accessed 2005.

[12] NLANR/DAST: Iperf, the TCP/UDP bandwidth measurement tool. Accessed 2005.

[13] Van Jacobson. Congestion avoidance and control. In ACM SIGCOMM '88, pages 314–329, Stanford, CA, August 1988.

[14] T. Kelly. Scalable TCP: Improving performance in highspeed wide area networks, 2003.

[15] E. Kohler, M. Handley, and S. Floyd. Datagram Congestion Control Protocol (DCCP), March 2005.

[16] Yee-Ting Li and Doug Leith. BicTCP implementation in Linux kernels. Technical report, Hamilton Institute, February 2004.

[17] DCCP projects. Accessed 2005.

[18] Saverio Mascolo, Claudio Casetti, Mario Gerla, M. Y. Sanadidi, and Ren Wang. TCP Westwood: Bandwidth estimation for enhanced transport over wireless links. In MobiCom '01: Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, pages 287–297, New York, NY, USA, 2001. ACM Press.

[19] I. McDonald. PhD research proposal: Congestion control for real time media applications, 2005.

[20] DCCP implementation by Patrick McManus. Accessed 2005.

[21] Arnaldo C. Melo. TCPfying the poor cousins. In Ottawa Linux Symposium, July 2004.

[22] Arnaldo C. Melo. DCCP on Linux. In Ottawa Linux Symposium, pages 305–311, 2005.

[23] John Nagle. Congestion control in IP/TCP internetworks. SIGCOMM Computer Communication Review, 14(4):11–17, October 1984.

[24] NIST Net home page. Accessed 2005.

[25] QBone Scavenger Service (QBSS). Accessed 2005.

[26] Pasi Sarolahti and Alexey Kuznetsov. Congestion control in Linux TCP. In Proceedings of the FREENIX Track: 2002 USENIX Annual Technical Conference, pages 49–62, Berkeley, CA, USA, 2002. USENIX Association.

[27] TCP stacks testbed. Accessed 2005.

[28] W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley Professional, December 1993.

[29] WAND implementation of DCCP. Accessed 2005.

[30] The Web100 project. Accessed 2005.

[31] Lisong Xu, Khaled Harfoush, and Injong Rhee. Binary Increase Congestion control (BIC) for fast long-distance networks. In IEEE INFOCOM 2004, 2004.
