2011 Right Sizing Cable Modem Buffers For Maximum Application Performance
Greg White (CableLabs®)
Richard Woundy, Yiu Lee, Carl Williams (Comcast)
The new feature is referred to as Buffer Control, and is defined as a Quality of Service (QoS) Parameter of a DOCSIS Service Flow. Specifically, the Buffer Control parameter is a set of Type-Length-Value (TLV) tuples that define a range of acceptable buffer sizes for the service flow, as well as a target size. This allows the CM/CMTS implementer some flexibility in the resolution of its buffer configuration. For example, a CM implementer might choose to manage buffering in blocks of 1024 bytes, if that suits its implementation.

As noted, each of the three parameters is optional. If all three parameters are omitted, the interpretation by the device is that there is no limitation (minimum or maximum) on allowed buffer size for this service flow, and that the device should select a buffer size via a vendor-specific algorithm. This is exactly the situation that existed prior to the introduction of this feature. As a result, if the operator does not change its modem configurations to include Buffer Control, the operator should see no difference in behavior between a modem that supports this new feature and one that does not.

In many configuration cases it will be most appropriate to omit the Minimum Buffer and Maximum Buffer limits, and to simply set the Target Buffer. The result is that the modem will not reject the service flow due to buffering configuration, and will provide a buffer as close as the implementation allows to the Target Buffer value.
In certain cases, however, the operator may wish to be assured that the buffer is within certain bounds, and so would prefer an explicit signal (i.e., a rejection of the configuration) if the modem cannot provide a buffer within those bounds. Hard limits are provided for these cases.

The cable modem is required to support buffer configurations of up to 24KiB per service flow, but is expected to support significantly more, particularly when a small number of service flows is in use. Put another way, if the Minimum Buffer parameter is set to a value less than or equal to 24576 bytes, the cable modem is guaranteed not to reject the configuration due to that parameter value. If the Minimum Buffer parameter is set to a value greater than 24576 bytes, then there is some risk that a modem implementation will reject the configuration.
Since they are part of the QoS Parameter Set, the Buffer Control TLVs can be set directly in the cable modem's configuration boot file; they can be set indirectly via a named Service Class defined at the CMTS; and they can be set and/or modified via the PacketCable™ Multimedia (PCMM) interface to the CMTS.

In order to use this new feature to control upstream buffering in DOCSIS 3.0 cable modems, it is necessary that the CM software be updated to a version that supports it. Furthermore, CMTS software updates are necessary in order to ensure that the CMTS properly sends the TLVs to the CM, regardless of which of the above methods is utilized.
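To make the provisioning mechanics concrete, the sketch below shows one way an operator's configuration tooling might emit the three Buffer Control sub-TLVs for inclusion in a CM configuration file. The sub-type codes (1 = Minimum Buffer, 2 = Target Buffer, 3 = Maximum Buffer) and the 4-byte value encoding are illustrative assumptions for this sketch; the authoritative type numbers and nesting are defined in the DOCSIS 3.0 MULPI specification.

    import struct

    # Illustrative sub-type codes; the normative values are defined in the
    # DOCSIS 3.0 MULPI annex on service flow encodings.
    MINIMUM_BUFFER = 1
    TARGET_BUFFER = 2
    MAXIMUM_BUFFER = 3

    def tlv(tlv_type, value_bytes):
        """Encode one Type-Length-Value tuple with a 1-byte type and length."""
        return bytes([tlv_type, len(value_bytes)]) + value_bytes

    def buffer_control(minimum=None, target=None, maximum=None):
        """Build Buffer Control sub-TLVs; omitted parameters are simply not
        encoded, leaving that aspect of buffer sizing to the modem (the
        pre-existing behavior described above)."""
        body = b""
        for sub_type, size in ((MINIMUM_BUFFER, minimum),
                               (TARGET_BUFFER, target),
                               (MAXIMUM_BUFFER, maximum)):
            if size is not None:
                body += tlv(sub_type, struct.pack("!I", size))  # size in bytes
        return body

    # Example: set only a 32 KiB Target Buffer, leaving min/max unbounded.
    print(buffer_control(target=32 * 1024).hex())

In practice these sub-TLVs would be nested inside the upstream service flow encoding of the configuration boot file, carried in a named Service Class, or signaled via PCMM, as described above.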
EXPERIMENTAL WORK

Methodology

The new Buffer Control parameters described in a previous section enable us to provision the buffer size in the cable modem, and thus provide the opportunity to investigate the application performance impact in various scenarios. For the purposes of this paper, we ran three sets of tests:

In the first set of tests, we correlate the buffer size and upstream speed. The purpose of the test is to understand the impact buffer size would have on the ability of the customer to utilize his/her provisioned upstream bandwidth. As an illustration, if the round-trip time for a TCP session is 40ms and the upstream service is provisioned for 5Mb/s, the rule-of-thumb for buffer size would indicate 5000Kb/s * 0.04s = 200Kb, or 24.4KiB (a worked version of this calculation appears at the end of this subsection). If the buffer size was then set to 16KiB, the expectation is that a single TCP session would not be able to utilize the 5Mb/s service. In this set of tests, we validate that expectation, and we additionally look at the effect when multiple simultaneous TCP sessions are sharing the upstream link.
In the second set of tests we correlate the buffer size and the QoS of real-time applications during upstream self-congestion (i.e. saturation of the user's allotted bandwidth by a mix of applications). During congestion, packets from real-time applications may queue up in the buffer. If the buffer size is too large, this will increase the latency and may severely impact real-time applications such as online games and Voice over IP (VoIP). This set of tests examines whether or not changing the buffer size can improve real-time applications' QoS.

The third set of tests examines the implications of buffer size when the token bucket feature defined in the DOCSIS 3.0 specification is utilized to provide the user a high burst transmit rate (as is currently the practice by several operators). In this test, the token bucket is set to 5MB and, after a [...] the transfer speed in this test are: a) the [...]

[...] gateway between the CM and the rest of the home network equipment. To eliminate any bias that might be introduced by the packet forwarding performance of a home gateway, we instead used a Gigabit switch to connect the home network equipment to the CM. We then configured the CM to allow multiple devices to connect to it.

Figure 4
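The rule-of-thumb sizing used in the first set of tests is simply the bandwidth-delay product: buffer size roughly equal to the provisioned rate multiplied by the round-trip time. A minimal sketch of that calculation, reproducing the 5Mb/s, 40ms example above (the function name and rounding are ours, not part of any specification):

    def rule_of_thumb_buffer_bytes(rate_bps, rtt_s):
        """Bandwidth-delay product in bytes: rate (bits/s) x RTT (s) / 8."""
        return rate_bps * rtt_s / 8.0

    size = rule_of_thumb_buffer_bytes(5_000_000, 0.040)
    print(size, "bytes")         # 25000.0 bytes, i.e. 200 kilobits
    print(size / 1024, "KiB")    # ~24.4 KiB

A 16KiB buffer is well below this value, which is why a single TCP session is not expected to fill the 5Mb/s service in that configuration.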
HTTP Upload Test

Figure 5: Data Rate (5 Mbps/10 Mbps)

For 1Mb/s and 2Mb/s services, a 16KiB buffer size was sufficient to utilize the upstream bandwidth with either 40ms or 100ms delay. For 5Mb/s, results showed that at least a 64KiB buffer was needed when delay was 100ms. For 10Mb/s, results showed that a 64KiB buffer was needed for 40ms delay and a 256KiB buffer was needed for 100ms delay.
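The buffer sizes found to be necessary above line up roughly with the bandwidth-delay products of the tested configurations. Extending the sketch from the methodology section over the test matrix (the rates and RTTs below are simply those used in the tests; no new measurements are implied):

    # Bandwidth-delay product for each tested upstream rate and RTT.
    for rate_mbps in (1, 2, 5, 10):
        for rtt_ms in (40, 100):
            bdp_kib = rate_mbps * 1e6 * (rtt_ms / 1000.0) / 8 / 1024
            print(f"{rate_mbps} Mb/s, {rtt_ms} ms RTT: ~{bdp_kib:.1f} KiB")

    # 1 Mb/s:   ~4.9 /  ~12.2 KiB -> 16 KiB is already sufficient
    # 2 Mb/s:   ~9.8 /  ~24.4 KiB -> 16 KiB is marginal but sufficed in these tests
    # 5 Mb/s:  ~24.4 /  ~61.0 KiB -> at least 64 KiB needed at 100 ms
    # 10 Mb/s: ~48.8 / ~122.1 KiB -> 64 KiB at 40 ms, 256 KiB at 100 ms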
From the previous section, the tests performed with 10Mb/s upstream service for 16KiB and 64KiB showed significant bandwidth under-utilization. The question becomes: is it possible to achieve utilization of the whole bandwidth with multiple parallel TCP sessions? In additional experiments we used 10 TCP sessions established in parallel, recording individual data rates, and we provide the aggregated results in Figure 6a and Figure 6b for 10Mb/s upstream service with buffer sizes of 16KiB and 64KiB.

Figure 6a shows that with a 16KiB buffer, a single TCP session with 40ms delay is unable to achieve the full upstream rate, with only 1.97Mb/s of the 10Mb/s upstream service being used. By aggregating 10 TCP sessions and keeping other factors fixed, channel utilization improved by roughly a factor of four, to 7.11Mb/s. Still, full utilization wasn't realized. When a 64KiB buffer size was used, we saw no difference in throughput between a single TCP session and 10 TCP sessions. Both were able to achieve the full 10Mb/s rate limit.

Figure 6a: 1 TCP vs 10 TCPs (40ms RTT), Data Rate (Mb/s) vs Buffer Size (KiB)

Figure 6b shows further results with a 100ms delay. Bandwidth utilization could be improved for both 16KiB and 64KiB buffers when aggregating 10 TCP sessions. The test using a 16KiB buffer size with a 100ms delay resulted in 4.95Mb/s upstream utilization, a three-times improvement over the test that was conducted using a single TCP session. However, 5Mb/s of capacity remains unused. When using a 64KiB buffer size with 10 TCP sessions, there was improvement, with a data rate of 8.77Mb/s compared to 6.05Mb/s for the single TCP flow test. Here again, roughly 1Mb/s of capacity was unused.

Figure 6b: 1 TCP vs 10 TCPs (100ms RTT), Data Rate (Mb/s) vs Buffer Size (KiB)
Figure 7 and Figure 8 show the results of packet latency when an upstream link was congested with a single TCP session. Figure 7 shows the latency of 1Mb/s and 2Mb/s upstream services; Figure 8 shows the latency of 5Mb/s and 10Mb/s upstream services.

Figure 7: PING Latency (1 Mbps/2 Mbps), Latency (ms) vs Buffer Size (KiB)

Figure 8: PING Latency (5 Mbps/10 Mbps)

For all upstream services, the pattern is that the bigger the buffer, the higher the PING latency. The test results show (as expected) that the latency is directly proportional to the buffer size: if the buffer size is increased to four times the size, the latency increases roughly four times. For example, when testing with a 2Mb/s upstream service, the PING latency was 52.04ms for a 16KiB buffer. The PING latency increased to 200ms for a 64KiB buffer and 1019.17ms for a 256KiB buffer. However, this pattern was not observed when using a 10Mb/s service with 16KiB and 64KiB buffer sizes. This is because a single TCP session was unable to utilize the full 10Mb/s service with 40ms and 100ms delay; therefore, the PING packets proceeded without congestion impact.
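The proportionality between buffer size and latency follows directly from the drain time of a full buffer, roughly the buffer size divided by the upstream rate. A small sketch of that estimate (assuming the buffer stays close to full and ignoring DOCSIS request-grant overhead) reproduces the shape of the 2Mb/s results:

    def full_buffer_delay_ms(buffer_kib, rate_mbps):
        """Time to drain a full buffer at the upstream rate, in milliseconds."""
        return buffer_kib * 1024 * 8 / (rate_mbps * 1e6) * 1000

    for buf_kib in (16, 64, 256):
        print(buf_kib, "KiB at 2 Mb/s:", round(full_buffer_delay_ms(buf_kib, 2)), "ms")
    # 16 KiB -> ~66 ms, 64 KiB -> ~262 ms, 256 KiB -> ~1049 ms.  The measured
    # values (52.04 ms, 200 ms, 1019.17 ms) fall in the same range, somewhat
    # lower, presumably because TCP does not keep the buffer full at all times.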
Figure 9a shows the results when the upstream was configured to 1Mb/s. In this case, the 16KiB buffer gave the best MOS result regardless of the TCP RTT. When the buffer size increased to 64KiB, the MOS started to drop. In fact, when the buffer size was set to 256KiB with 100ms delay, many tests couldn't be completed; the test set reported a timeout. The timeout could be caused by the long delay introduced by the large buffer.

Figure 9a: MOS vs Buffer Size (1Mbps)

Figure 9b: MOS vs Buffer Size (2Mbps)

Figure 9c: MOS vs Buffer Size (5Mbps)

Figure 9d: MOS vs Buffer Size (10Mbps)

When we increased the buffer size above 128KiB, the MOS started to deteriorate. In both of these cases, the best MOS results were recorded when the buffer size was set between 32KiB and 64KiB. What was interesting is that when the buffer was set below 64KiB, there should not be any congestion, because the buffer size was too small for the competing TCP session to utilize the full upstream bandwidth. Also, when the buffer size was set to 8KiB or 16KiB, the VoIP traffic should see very low buffering latency (between 6ms and 26ms). Both of these factors should result in high MOS scores. However, the MOS score was degraded when the buffer size was set to 8KiB or 16KiB, and this low MOS is likely to be caused by packet loss due to the periodic saturation of the small CM buffer by the competing TCP congestion window.
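The 6ms to 26ms buffering latency quoted above is consistent with the same drain-time estimate applied to the 5Mb/s and 10Mb/s services; the specific pairings of buffer size and rate below are our reading of the text rather than separately reported measurements.

    # Approximate buffering latency for the smallest tested buffers.
    for rate_mbps, buf_kib in ((10, 8), (10, 16), (5, 8), (5, 16)):
        delay_ms = buf_kib * 1024 * 8 / (rate_mbps * 1e6) * 1000
        print(f"{buf_kib} KiB at {rate_mbps} Mb/s: ~{delay_ms:.1f} ms")
    # 8 KiB at 10 Mb/s -> ~6.6 ms;  16 KiB at 5 Mb/s -> ~26.2 ms

Such small delays support the interpretation that the MOS degradation at 8KiB and 16KiB is driven by packet loss rather than by latency.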
In this test, the CM's rate-shaping token bucket depth was set to 5MB to mimic the configuration that some operators use to boost performance for bursty interactive traffic. The test set was then used to transmit a 5MB file in the upstream direction using FTP, and the average data rate was calculated. Due to the fact that the file size was equal to the token bucket depth, there was effectively no upstream rate-shaping in play. Figure 10 shows the result of Average Transfer Time for the scenario with 40ms RTT and the scenario with 100ms RTT.

Figure 10: Average Transfer Time vs Buffer Size (KiB), 40ms and 100ms RTT
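The observation that no rate-shaping was in play follows from the standard token-bucket conformance rule: over any interval t, the shaper admits at most (sustained rate x t) + bucket depth bytes. The sketch below illustrates the check; the 5Mb/s sustained rate is an assumed example value, and only the 5MB bucket depth and 5MB file size come from the test description.

    def max_burst_bytes(sustained_rate_bps, bucket_depth_bytes, interval_s):
        """Upper bound on bytes a token-bucket shaper admits in interval_s."""
        return sustained_rate_bps / 8 * interval_s + bucket_depth_bytes

    sustained = 5_000_000        # bits per second (assumed for illustration)
    depth = 5 * 10**6            # bytes, the configured token bucket depth
    file_size = 5 * 10**6        # bytes, the FTP upload used in the test
    print(file_size <= max_burst_bytes(sustained, depth, 0.0))   # True

Because the whole file fits within the burst allowance even over a zero-length interval, the shaper never throttles the transfer; throughput is limited only by TCP dynamics and the CM buffer, which is what Figure 10 and Figure 11 illustrate.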
Figure 11 shows the calculated average throughput based on the average transfer time and the file size. It is clear that the choice of buffer size can have a dramatic impact on achievable throughput during this performance boost period. It is important to note that the average throughput shown in Figure 11 is for the entire file transfer, and so includes the slow-start effect as well as the initial FTP hand-shaking.

Figure 11: Throughput (Mbps) vs Buffer Size (KiB), 40ms and 100ms RTT
FUTURE WORK

This new capability will allow cable operators to optimize network performance and user experience across a wide range of application and usage scenarios.

The experimental work described in this paper focused both on upstream bulk TCP performance, and on the impact that bulk TCP traffic would have on a VoIP session across a range of buffer sizes, in fairly isolated and controlled scenarios. Further work will seek to examine the effect of buffering on some more real-world application scenarios, such as a scenario in which TCP sessions are numerous and short-lived, such that many of them stay in the slow-start phase for their entire duration and never (or rarely) enter the congestion avoidance phase. Additionally, future work should assess the impact of buffer sizing on other latency- or loss-sensitive applications beyond VoIP.

CONCLUSIONS

We have shown that the size of the upstream service flow buffer in the cable modem can have significant effects on application performance. For applications that use TCP to perform large upstream transmissions (i.e. upstream file transfers such as uploading a large video clip), insufficient buffering capacity can limit throughput. Applications utilizing a single TCP session see the most pronounced effect. Applications that are latency sensitive, on the other hand, see degraded performance when too much buffer capacity is provided.

Cable modems deployed today appear to have significantly greater buffering than is needed to sustain TCP throughput, potentially causing high latency when the upstream is congested. The result is poorer than expected application performance for VoIP, gaming, Web surfing, and other applications that are latency-sensitive.

A new feature for DOCSIS 3.0 CMs allows operators to configure the upstream buffer size for each upstream service flow in order to optimize application performance and to improve user experience. In choosing the buffer size, the operator will need to consider the upstream QoS parameters for the service flow, the expected application usage for the flow, as well as the service goals for the flow.

Service features that utilize a large token bucket size (in order to provide high throughput for short bursts) complicate matters, since the buffer size cannot realistically be resized in real time. Thus a buffer configuration that is optimized to provide a good balance in performance between TCP uploads and real-time services at the configured sustained traffic rate may result in poorer than expected burst speeds.

Further work is needed to evaluate the performance impact that buffer size has on a wider range of application scenarios.