CCN Examples Chapter 1 and 2
For the following, assume that no data compression is done; this would in
practice almost never be the case. For (a)–(c), calculate the bandwidth necessary
for transmitting in real time:
(a) Video at a resolution of 640×480, 3 bytes/pixel, 30 frames/second.
(b) 160×120 video, 1 byte/pixel, 5 frames/second.
(c) CD-ROM music, assuming one CD holds 75 minutes’ worth and takes
650 MB.
(d) Assume a fax transmits an 8 × 10-inch black-and-white image at a resolution
of 72 pixels per inch. How long would this take over a 14.4-Kbps modem?
Answer:
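A minimal worked sketch of the arithmetic (in Python), assuming 1 bit per pixel for the
black-and-white fax and treating the 650 MB of CD audio as 650 × 2^20 bytes:

    # Hedged sketch of the bandwidth arithmetic for parts (a)-(d).
    # Assumptions not spelled out in the exercise: 1 MB = 2^20 bytes for the CD,
    # and 1 bit per pixel for the black-and-white fax image.
    a = 640 * 480 * 3 * 8 * 30            # (a) ~221 Mbps
    b = 160 * 120 * 1 * 8 * 5             # (b) 768 Kbps
    c = 650 * 2**20 * 8 / (75 * 60)       # (c) ~1.2 Mbps
    fax_bits = 8 * 72 * 10 * 72           # (d) 414,720 bits in the 8 x 10-inch image
    d = fax_bits / 14400                  #     ~28.8 seconds over a 14.4-Kbps modem
    print(f"(a) {a/1e6:.0f} Mbps  (b) {b/1e3:.0f} Kbps  (c) {c/1e6:.2f} Mbps  (d) {d:.1f} s")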
2. Hosts A and B are each connected to a switch S via 10-Mbps links as in Figure
1.24. The propagation delay on each link is 20 μs. S is a store-and-forward
device; it begins retransmitting a received packet 35 μs after it has finished
receiving it. Calculate the total time required to transmit 10,000 bits from
A to B.
(a) As a single packet.
(b) As two 5,000-bit packets sent one right after the other.
Answer:
(a) Per-link transmit delay is 10^4 bits / 10^7 bits/sec = 1000 μs. Total transmission
time = 2 × 1000 + 2 × 20 + 35 = 2075 μs.
(b) When sending as two packets, here is a table of times for various events:
T=0 start
T=500 A finishes sending packet 1, starts packet 2
T=520 packet 1 finishes arriving at S
T=555 packet 1 departs for B
T=1000 A finishes sending packet 2
T=1055 packet 2 departs for B
T=1075 bit 1 of packet 2 arrives at B
T=1575 last bit of packet 2 arrives at B
Expressed algebraically, we still have one switch delay and two propagation
delays, but now three transmit delays of 500 μs each (two back-to-back at A,
plus one more for the second packet leaving S):
3 × 500 + 2 × 20 + 1 × 35 = 1575 μs.
Splitting the data into smaller packets is faster here.
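A small sketch of the same arithmetic, under the stated assumptions (10-Mbps links,
20 μs propagation per link, 35 μs of switch processing), generalized to N equal
packets sent back to back through one store-and-forward switch:

    # Hedged sketch: total time for `total_bits` to cross one store-and-forward
    # switch as n_packets equal packets, assuming sender and switch stay busy
    # (fully pipelined) and the switch adds a fixed per-packet processing delay.
    def transfer_time_us(total_bits, n_packets, link_bps=10e6,
                         prop_us=20, switch_us=35):
        transmit_us = (total_bits / n_packets) / link_bps * 1e6   # per packet, per link
        # (n_packets + 1) transmit delays: n back to back at the sender,
        # plus one more for the last packet leaving the switch.
        return (n_packets + 1) * transmit_us + 2 * prop_us + switch_us

    print(transfer_time_us(10000, 1))   # 2075.0 us, as in (a)
    print(transfer_time_us(10000, 2))   # 1575.0 us, as in (b)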
3. Calculate the total time required to transfer a 1,000-KB file in the following
cases, assuming an RTT of 100 ms, a packet size of 1-KB data, and an initial
2×RTT of “handshaking” before data is sent.
(a) The bandwidth is 1.5 Mbps, and data packets can be sent continuously.
(b) The bandwidth is 1.5 Mbps, but after we finish sending each data packet
we must wait one RTT before sending the next.
(c) The bandwidth is “infinite,” meaning that we take transmit time to be
zero, and up to 20 packets can be sent per RTT.
(d) The bandwidth is infinite, and during the first RTT we can send one
packet (2^(1−1)), during the second RTT we can send two packets (2^(2−1)), during
the third we can send four (2^(3−1)), and so on.
Answer:
We will count the transfer as completed when the last data bit arrives at its destination.
An alternative interpretation would be to count until the last ACK arrives
back at the sender, in which case the time would be half an RTT (50ms) longer.
(a) 2 initial RTTs (200 ms) + 1000 KB/1.5 Mbps (transmit) + RTT/2 (propagation)
≈ 0.25 + 8 Mbit/1.5 Mbps = 0.25 + 5.33 sec = 5.58 sec. If we pay more
careful attention to when a mega is 10^6 versus 2^20, we get
8,192,000 bits / 1,500,000 bits/sec = 5.46 sec, for a total delay of 5.71 sec.
(b) To the above we add the time for 999 RTTs (the number of RTTs between
when packet 1 arrives and packet 1000 arrives), for a total of 5.71 + 99.9 =
105.61 sec.
(c) This is 49.5 RTTs, plus the initial 2, for 5.15 seconds.
(d) Right after the handshaking is done we send one packet. One RTT after the
handshaking we send two packets. At n RTTs past the initial handshaking
we have sent 1 + 2 + 4 + · · · + 2^n = 2^(n+1) − 1 packets. At n = 9 we have
thus been able to send all 1,000 packets; the last batch arrives 0.5 RTT later.
Total time is 2+9.5 RTTs, or 1.15 sec.
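A short sketch of parts (a)-(d) under the same assumptions (1000-KB file, 1-KB
packets, RTT = 100 ms, 2 RTTs of handshaking, transfer counted as done when the
last data bit arrives):

    # Hedged sketch of the four cases; uses 1 KB = 2^10 bytes, as the answer above does.
    RTT = 0.1                                    # seconds
    handshake = 2 * RTT
    bits = 1000 * 2**10 * 8                      # 8,192,000 bits
    a = handshake + bits / 1.5e6 + RTT / 2       # ~5.71 s
    b = a + 999 * RTT                            # ~105.61 s
    c = handshake + (1000 / 20 - 1) * RTT + RTT / 2   # 2 + 49.5 RTTs = 5.15 s
    n = 0                                        # (d) smallest n with 2^(n+1) - 1 >= 1000
    while 2 ** (n + 1) - 1 < 1000:
        n += 1                                   # n = 9
    d = handshake + n * RTT + RTT / 2            # 11.5 RTTs = 1.15 s
    print(a, b, c, d)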
(The following parts concern a 100-Mbps point-to-point link between Earth and
the moon, a distance of about 385,000 km, with signals propagating at
3 × 10^8 m/sec; a 25-MB image is to be downloaded over the link.)
(a) The minimum RTT is 2 × 385,000,000 m / (3 × 10^8 m/sec) = 2.57 sec.
(b) The delay×bandwidth product is 2.57 sec×100Mb/sec = 257Mb = 32MB.
(c) This represents the amount of data the sender can send before it would be
possible to receive a response.
(d) We require at least one RTT before the picture could begin arriving at the
ground (TCP would take two RTTs). Assuming bandwidth delay only, it
would then take 25MB/100Mbps = 200Mb/100Mbps = 2.0 sec to finish
sending, for a total time of 2.0 + 2.57 = 4.57 sec until the last picture bit
arrives on earth.
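A minimal check of the arithmetic, treating 1 MB as 10^6 bytes as the answer above does:

    # Hedged sketch of the Earth-to-moon link numbers assumed above:
    # distance ~385,000 km, 100-Mbps link, 25-MB image, c = 3e8 m/s.
    c = 3e8                                # propagation speed, m/s
    dist = 385_000_000                     # Earth-moon distance, m
    rtt = 2 * dist / c                     # (a) ~2.57 s
    dxb_bits = rtt * 100e6                 # (b) ~257 Mb, i.e. ~32 MB
    transmit = 25e6 * 8 / 100e6            # (d) 2.0 s to send the 25-MB image
    total = rtt + transmit                 # ~4.57 s until the last bit arrives
    print(rtt, dxb_bits / 8e6, transmit, total)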
6. Calculate the latency (from first bit sent to last bit received) for the following:
(a) A 10-Mbps Ethernet with a single store-and-forward switch in the path,
and a packet size of 5,000 bits. Assume that each link introduces a propagation
delay of 10 μs, and that the switch begins retransmitting immediately
after it has finished receiving the packet.
(b) Same as (a) but with three switches.
(c) Same as (a) but assume the switch implements “cut-through” switching: it
is able to begin retransmitting the packet after the first 200 bits have been
received.
Answer:
(a) One packet consists of 5000 bits, and so is delayed 500 μs by bandwidth (transmit
time) along each link. The packet is also delayed 10 μs on each of the two links
due to propagation delay, for a total of 2 × 500 + 2 × 10 = 1020 μs.
(b) With three switches and four links, the delay is
4 × 500μs + 4 × 10μs = 2.04ms
(c) With cut-through, the switch delays the packet by 200 bits = 20μs. There
is still one 500μs delay waiting for the last bit, and 20μs of propagation
delay, so the total is 540μs. To put it another way, the last bit still arrives
500μs after the first bit; the first bit now faces two link delays and one
switch delay but never has to wait for the last bit along the way. With three
cut-through switches, the total delay would be:
500 + 3 × 20 + 4 × 10 = 600 μs
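A compact sketch of the latency arithmetic used in (a)-(c), assuming a 5000-bit
packet, 10-Mbps links, and 10 μs of propagation per link:

    # Hedged sketch: end-to-end latency through n store-and-forward or
    # cut-through switches (n + 1 links).
    def store_and_forward_us(n_switches, packet_bits=5000, link_bps=10e6, prop_us=10):
        links = n_switches + 1
        transmit_us = packet_bits / link_bps * 1e6        # 500 us per link
        return links * transmit_us + links * prop_us

    def cut_through_us(n_switches, packet_bits=5000, link_bps=10e6,
                       prop_us=10, lookahead_bits=200):
        links = n_switches + 1
        transmit_us = packet_bits / link_bps * 1e6        # the one full-packet delay
        switch_us = lookahead_bits / link_bps * 1e6       # 20 us per cut-through switch
        return transmit_us + n_switches * switch_us + links * prop_us

    print(store_and_forward_us(1), store_and_forward_us(3))   # 1020.0, 2040.0
    print(cut_through_us(1), cut_through_us(3))               # 540.0, 600.0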
7. Calculate the effective bandwidth for the following cases. For (a) and (b) assume
there is a steady supply of data to send; for (c) simply calculate the average
over 12 hours.
(a) A 10-Mbps Ethernet through three store-and-forward switches as in Exercise
6(b). Switches can send on one link while receiving on the other.
(b) Same as (a) but with the sender having to wait for a 50-byte acknowledgment
packet after sending each 5,000-bit data packet.
(c) Overnight (12-hour) shipment of 100 compact discs (650 MB each).
Answer:
(a) The effective bandwidth is 10Mbps; the sender can send data steadily at
this rate and the switches simply stream it along the pipeline. We are assuming
here that no ACKs are sent, and that the switches can keep up and
can buffer at least one packet.
(b) The data packet takes 2.04 ms, as in 6(b) above, to be delivered; the 400-bit
ACKs take 40 μs/link for a total of 4 × 40 μs + 4 × 10 μs = 200 μs = 0.20
ms, for a total RTT of 2.24 ms. 5000 bits in 2.24 ms is about 2.2 Mbps, or
280 KB/sec.
(c) 100 × 6.5 × 10^8 bytes / 12 hours = 6.5 × 10^10 bytes / (12 × 3600 sec) ≈ 1.5 MByte/sec = 12 Mbit/sec.
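A brief sketch of the arithmetic behind (b) and (c):

    # Hedged sketch of the effective-bandwidth numbers above.
    # (b) stop-and-wait over the three-switch path of Exercise 6(b):
    data_delay_ms = 2.04                        # one-way data packet delay, from 6(b)
    ack_delay_ms = (4 * 40 + 4 * 10) / 1000     # 400-bit ACK: 40 us/link + 10 us prop/link
    rtt_ms = data_delay_ms + ack_delay_ms       # 2.24 ms
    eff_bps = 5000 / (rtt_ms / 1000)            # ~2.2 Mbps, ~280 KB/sec

    # (c) 100 CDs of 650 MB delivered over 12 hours:
    sneakernet_Bps = 100 * 650e6 / (12 * 3600)  # ~1.5 MB/sec, ~12 Mbit/sec
    print(eff_bps / 1e6, sneakernet_Bps / 1e6)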
8. Consider a network with a ring topology, link bandwidths of 100 Mbps, and
propagation speed 2 × 10^8 m/s. What would the circumference of the loop
be to exactly contain one 250-byte packet, assuming nodes do not introduce
delay? What would the circumference be if there was a node every 100 m, and
each node introduced 10 bits of delay?
Answer:
The time to send one 250-byte (2000-bit) packet is 2000 bits / 100 Mbps = 20 μs. The
length of cable needed to exactly contain such a packet is 20 μs × 2 × 10^8 m/sec =
4,000 meters.
250 bytes in 4000 meters is 2000 bits in 4000 meters, or 50 bits per 100m. With
an extra 10 bits/100m, we have a total of 60 bits/100m. A 2000-bit packet now
fills 2000/(.6 bits/m) = 3333 meters.
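The same arithmetic as a sketch:

    # Hedged sketch: ring circumference holding exactly one 250-byte packet,
    # with 100-Mbps links and propagation at 2e8 m/s; the second case adds a
    # node every 100 m contributing 10 bits of delay.
    packet_bits = 250 * 8
    transmit_s = packet_bits / 100e6              # 20 us
    plain_ring_m = transmit_s * 2e8               # 4000 m with no node delay
    bits_per_m = (50 + 10) / 100                  # 50 bits of cable + 10 bits of node per 100 m
    ring_with_nodes_m = packet_bits / bits_per_m  # ~3333 m
    print(plain_ring_m, ring_with_nodes_m)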
9. Suppose a host has a 1-MB file that is to be sent to another host. The file takes
1 second of CPU time to compress 50%, or 2 seconds to compress 60%.
(a) Calculate the bandwidth at which each compression option takes the same
total compression+transmission time.
(b) Explain why latency does not affect your answer.
Answer:
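A sketch of the algebra for (a), assuming 1 MB = 10^6 bytes (the reasoning is
identical with 2^20):

    # Option 1: 1 s of CPU, then send 0.5 MB; option 2: 2 s of CPU, then send 0.4 MB.
    # Setting 1 + 0.5*MB/B = 2 + 0.4*MB/B gives 0.1*MB/B = 1, so B = 0.1 MB/sec.
    MB = 10**6                      # assumption: 1 MB = 10^6 bytes
    B = 0.1 * MB                    # 100,000 bytes/sec = 800 Kbps
    t1 = 1 + 0.5 * MB / B           # 6.0 s
    t2 = 2 + 0.4 * MB / B           # 6.0 s, the same
    print(B * 8 / 1e3, t1, t2)      # 800.0 Kbps, 6.0, 6.0
    # (b) A fixed latency adds the same constant to both totals, so it cancels
    # when the two are set equal and does not affect the break-even bandwidth.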
10. Suppose, in the other direction, we abandon any pretense at all of DNS
hierarchy, and simply move all the .com entries to the root name server:
www.cisco.com would become www.cisco, or perhaps just cisco. How
would this affect root name server traffic in general? How would this affect
such traffic for the specific case of resolving a name like cisco into a web
server address?
Answer:
If we just move the .com entries to the root name server, things wouldn’t be much
different than they are now, in practice. In theory, the root nameservers now
could refer all queries about the .com zone to a set of .com-specific servers; in
practice the root name servers (x.root-servers.net for x from a to m) all do answer
.com queries directly. (They do not, however, answer .int queries directly.)
The proposal here simply makes this current practice mandatory, and thus shouldn't
affect current traffic at all, although it might leave other zones such as .org,
.net, and .edu with poorer service in the future.
The main problem with moving the host-level entries, such as for www.cisco,
to a single root name server entry such as cisco, is that this either limits organizations
to a single externally visible host, or else (if the change is interpreted
slightly differently) significantly increases root name server traffic as it returns
some kind of block of multiple host addresses. In effect this takes DNS back to
a single central server. Perhaps just as importantly, the updating of the IP addresses
corresponding to host names is now out of the hands of the organizations
owning the host names, leading to a considerable administrative bottleneck.
However, if we’re just browsing the web and need only one address for each
organization, the traffic would be roughly equivalent to the way DNS works
now. (We are assuming that local resolvers still exist and still maintain request
caches; the loss of local caches would put an intolerable burden on the root
name servers.)
11. Discuss how you might rewrite SMTP or HTTP to make use of a hypothetical
general-purpose request/reply protocol. Could an appropriate analog of persistent
connections be moved from the application layer into such a transport
protocol? What other application tasks might be moved into this protocol?
Answer:
Both SMTP and HTTP are already largely organized as a series of requests sent
by the client, and attendant server reply messages. Some attention would have
to be paid in the request/reply protocol, though, to the fact that SMTP and HTTP
data messages can be quite large (though not so large that we can’t determine the
size before beginning transmission).
We might also need a Message ID field with each message, to identify which
request/reply pairs are part of the same transaction. This would be particularly
an issue for SMTP.
It would be quite straightforward for the request/reply transport protocol to support
persistent connections: once one message was exchanged with another host,
the connection might persist until it was idle for some given interval of time.
Such a request/reply protocol might also include support for variable-sized messages
without relying on flag characters (CRLF), application-specific size headers, or
chunking into blocks. HTTP in particular currently handles this message framing at
the application layer.
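Purely as an illustration (the exercise does not specify a wire format), a
hypothetical fixed header for such a request/reply transport might carry a message
ID, a request/reply flag, and an explicit body length, so that large SMTP or HTTP
bodies need no delimiters or chunking:

    import struct

    # Hypothetical header layout (an assumption, not any real protocol):
    # 4-byte message ID | 1-byte type (0 = request, 1 = reply) | 4-byte body length
    HEADER = struct.Struct("!IBI")

    def pack_message(msg_id: int, is_reply: bool, body: bytes) -> bytes:
        # The explicit length lets the receiver read the body without CRLF
        # delimiters or application-level chunking.
        return HEADER.pack(msg_id, 1 if is_reply else 0, len(body)) + body

    def unpack_message(data: bytes):
        msg_id, kind, length = HEADER.unpack_from(data)
        body = data[HEADER.size:HEADER.size + length]
        return msg_id, bool(kind), body

    print(unpack_message(pack_message(7, False, b"GET /index.html")))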
12. Suppose a very large website wants a mechanism by which clients access
whichever of multiple HTTP servers is “closest” by some suitable measure.
(a) Discuss developing a mechanism within HTTP for doing this.
(b) Discuss developing a mechanism within DNS for doing this.
Compare the two. Can either approach be made to work without upgrading
the browser?
Answer:
(a) A mechanism within HTTP would of course require that the client browser
be aware of the mechanism. The client could ask the primary server if
there were alternate servers, and then choose one of them. Or the primary
server might tell the client what alternate to use. The parties involved might
measure “closeness” in terms of RTT, in terms of measured throughput, or
(less conveniently) in terms of preconfigured geographical information.
(b) Within DNS, one might add a WEB record that returned multiple server
addresses. The client resolver library call (e.g. gethostbyname()) would
choose the "closest", determined as above, and return the single closest
entry to the client application as if it were an A record. Because the
selection happens in the resolver library rather than in the browser, the
DNS approach could work without upgrading the browser itself (though
resolvers would need to understand the new record), whereas the HTTP
mechanism in (a) requires browser support.