Comp Cheat Sheet

The document discusses network concepts including packet switching, circuit switching, encapsulation, access control, authentication, firewalls, transport layer protocols, transmission delay calculations, and HTTP. Specifically: packet switching is used in the Internet, with resources used on demand, which allows congestion loss and variable delays, while circuit switching reserves resources in advance for calls; encapsulation involves adding header fields at each layer and placing the data in the payload of that layer's packet; transport layer protocols such as UDP provide best-effort delivery while TCP provides reliable in-order delivery, flow control, congestion control, and loss recovery; and transmission delay calculations show the time needed to transmit a packet over a link.

Chapter 1
-Hosts = end systems.
-Internet – a collection of billions of computing devices and packet switches, interconnected by links; a "network of networks"; a collection of hardware and software components executing protocols.
-Human protocol – one person asking for, and getting, the time from another person; two people introducing themselves to each other; a student raising her/his hand to ask a really insightful question, followed by the teacher acknowledging the student.
-4G cellular LTE – wireless, up to 10's of Mbps per device.
-802.11 WiFi – wireless, 10's to 100's of Mbps per device.
-Digital Subscriber Line (DSL) – wired, up to 10's of Mbps downstream per user.
-Ethernet – wired, up to 100's of Gbps per link.
-Cable access network – wired, 10's to 100's of Mbps downstream per user.
-Highest transmission rate and lowest bit error rate in practice – fiber optic cable.
-Forwarding is the local action of moving arriving packets from a router's input link to the appropriate router output link, while routing is the global action of determining the source-destination paths taken by packets.
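To make the forwarding/routing split concrete, here is a minimal illustrative sketch (not from the course materials; the prefixes and interface names are invented): routing is whatever process computes the forwarding table, and forwarding is the per-packet lookup in that table.

```python
# Hypothetical forwarding table, as produced by a routing algorithm (the "global" action).
# Forwarding (the "local" action) is just a longest-prefix lookup per arriving packet.
from ipaddress import ip_address, ip_network

forwarding_table = {
    ip_network("223.1.1.0/24"): "interface-1",
    ip_network("223.1.2.0/24"): "interface-2",
    ip_network("0.0.0.0/0"): "interface-3",   # default route
}

def forward(dest_ip: str) -> str:
    """Pick the matching entry with the longest prefix (most specific route)."""
    dest = ip_address(dest_ip)
    matches = [net for net in forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(forward("223.1.2.9"))     # -> interface-2
print(forward("198.51.100.7"))  # -> interface-3 (default route)
```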
-Associated with the technique of packet switching: resources are used on demand, not reserved in advance; congestion loss and variable end-end delays are possible with this technique; data may be queued before being transmitted because other users' data is also queued for transmission; this technique is used in the Internet.
-Associated with the technique of circuit switching: reserves the resources needed for a call from source to destination; Frequency Division Multiplexing (FDM) and Time Division Multiplexing (TDM) are two approaches for implementing this technique; this technique was the basis for telephone call switching during the 20th century and into the beginning of the current century.
-Consider the circuit-switched network shown in the figure below, with four circuit switches A, B, C, and D. Suppose there are 20 circuits between A and B, 19 circuits between B and C, 15 circuits between C and D, and 16 circuits between D and A. What is the maximum number of connections that can be ongoing in the network at any one time? 70 (= 20 + 19 + 15 + 16, achieved when every connection crosses only a single link).
-Each user generates traffic at an average rate of 0.21 Mbps, generating traffic at a rate of 15 Mbps when transmitting – packet switching.
-Each user generates traffic at an average rate of 2.1 Mbps, generating traffic at a rate of 15 Mbps when transmitting – neither works well in this overload scenario.
-Each user generates traffic at an average rate of 2 Mbps, generating traffic at a rate of 2 Mbps when transmitting – circuit switching.
-Time spent waiting in packet buffers for link transmission – queueing delay.
-Time spent transmitting a packet's bits into the link – transmission delay.
-Time needed for bits to physically propagate through the transmission medium from one end of a link to the other – propagation delay.
-Time needed to perform an integrity check, look up packet information in a local table, and move the packet from an input link to an output link in a router – processing delay.
-Suppose a packet is L = 1500 bytes long (one byte = 8 bits), and the link transmits at R = 1 Gbps (i.e., the link can transmit 1,000,000,000 bits per second). What is the transmission delay for this packet? 0.000012 sec.
-Suppose a packet is L = 1200 bytes long (one byte = 8 bits), and the link transmits at R = 100 Mbps (i.e., the link can transmit 100,000,000 bits per second). What is the transmission delay for this packet? 0.000096 sec.
-Consider the network shown in the figure below, with three links, each with the specified transmission rate and link length. Assume the length of a packet is 8000 bits. What is the transmission delay at link 2? 8 x 10^(-5) sec.
-Consider the network shown in the figure below, with three links, each with the specified transmission rate and link length. Assume the length of a packet is 8000 bits and the propagation speed on each link is 3x10^8 m/sec. What is the propagation delay at (along) link 2? 0.0033 sec.
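A small worked sketch of the delay formulas behind the answers above (transmission delay = L/R, propagation delay = d/s). The 100 Mbps rate and roughly 1,000 km length assumed for link 2 are inferred from the stated answers, since the original figure is not reproduced here.

```python
# Transmission delay: time to push all L bits onto the link at rate R.
# Propagation delay: time for a bit to travel distance d at propagation speed s.

def transmission_delay(L_bits: float, R_bps: float) -> float:
    return L_bits / R_bps

def propagation_delay(d_meters: float, s_mps: float = 3e8) -> float:
    return d_meters / s_mps

print(transmission_delay(1500 * 8, 1e9))    # 1.2e-05 s  (0.000012 sec)
print(transmission_delay(1200 * 8, 100e6))  # 9.6e-05 s  (0.000096 sec)
print(transmission_delay(8000, 100e6))      # 8e-05 s    (assumed: link 2 runs at 100 Mbps)
print(propagation_delay(1_000_000))         # ~0.0033 s  (assumed: link 2 is ~1,000 km long)
```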
-What is the maximum throughput achievable between sender and receiver in the scenario shown below? 1.5 Mbps.
-Consider the scenario shown below, with four different servers connected to four different clients over four three-hop paths. The four pairs share a common middle hop with a transmission capacity of R = 300 Mbps. The four links from the servers to the shared link have a transmission capacity of RS = 50 Mbps. Each of the four links from the shared middle link to a client has a transmission capacity of RC = 90 Mbps.
What is the maximum achievable end-end throughput (an integer value, in Mbps) for each of the four client-to-server pairs, assuming that the middle link is fairly shared (divides its transmission rate equally) and all servers are trying to send at their maximum rate? 50 Mbps.
Assuming that the servers are all sending at their maximum rate possible, what are the link utilizations for the server links (with transmission capacity RS)? Enter your answer in decimal form of 1.00 (if the utilization is 1) or 0.xx (if the utilization is less than 1, rounded to the closest xx). 1.00.
Assuming that the servers are all sending at their maximum rate possible, what is the link utilization of the shared link (with transmission capacity R)? Enter your answer in decimal form of 1.00 (if the utilization is 1) or 0.xx (if the utilization is less than 1, rounded to the closest xx). 0.67.
-Consider the scenario shown below, with 10 different servers (three shown) connected to 10 different clients over ten three-hop paths. The pairs share a common middle hop with a transmission capacity of R = 300 Mbps. The ten links from the servers to the shared link have a transmission capacity of RS = 90 Mbps. Each of the ten links from the shared middle link to a client has a transmission capacity of RC = 50 Mbps. Maximum achievable end-end throughput for each pair: 30 Mbps.
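The scenario answers above (50 Mbps, 1.00, 0.67, 30 Mbps) all come from the same bottleneck reasoning: each pair's throughput is the minimum of its server link, its fair share of the shared link, and its client link. A quick sketch, using only the numbers stated in the questions:

```python
def per_pair_throughput(R_shared: float, Rs: float, Rc: float, n_pairs: int) -> float:
    """Each pair is limited by its server link, its fair share of the middle link, and its client link."""
    return min(Rs, R_shared / n_pairs, Rc)

# Four pairs, R = 300 Mbps, RS = 50 Mbps, RC = 90 Mbps
t4 = per_pair_throughput(300, 50, 90, 4)
print(t4)                      # 50 Mbps per pair
print(round(t4 / 50, 2))       # 1.0  -> server-link utilization (the 1.00 above)
print(round(4 * t4 / 300, 2))  # 0.67 -> shared-link utilization

# Ten pairs, R = 300 Mbps, RS = 90 Mbps, RC = 50 Mbps
print(per_pair_throughput(300, 90, 50, 10))  # 30 Mbps per pair
```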
-Delivery of datagrams from a source host to a destination host (typically) – network layer.
-Transfer of data between neighboring network devices – link layer.
-Protocols that are part of a distributed network application – application layer.
-Transfer of data between one process and another process – transport layer.
-Transfer of a bit into and out of a transmission medium – physical layer.
-Packet names at each layer: application layer – message; transport layer – segment; network layer – datagram; link layer – frame; physical layer – bit.
-Headers (in the figure): H2 – network layer, H1 – link layer, H3 – transport layer.
-What is encapsulation? Taking data from the layer above, adding header fields appropriate for this layer, and then placing the data in the payload field of the "packet" for that layer.
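A minimal sketch of that idea (illustrative only; the field values are invented and real headers are packed binary fields, not dicts): each layer wraps the unit from the layer above as its payload and prepends its own header, and the receiver peels one header per layer.

```python
# Toy encapsulation: each layer wraps the unit from the layer above as its payload.
message = {"layer": "application", "data": "GET /index.html HTTP/1.1"}

segment = {"layer": "transport", "header": {"src_port": 52000, "dst_port": 80},
           "payload": message}

datagram = {"layer": "network", "header": {"src_ip": "198.51.100.7", "dst_ip": "203.0.113.9"},
            "payload": segment}

frame = {"layer": "link", "header": {"src_mac": "aa:bb:cc:00:00:01", "dst_mac": "aa:bb:cc:00:00:02"},
         "payload": datagram}

# De-encapsulation at the receiver: strip one header per layer until the message is reached.
unit = frame
while "payload" in unit:
    print(f"{unit['layer']} header: {unit['header']}")
    unit = unit["payload"]
print("application data:", unit["data"])
```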
-Limiting use of resources or capabilities to given users – access control.
-Proving you are who you say you are – authentication.
-Specialized "middleboxes" filtering or blocking traffic, inspecting packet contents – firewall.
-Used to detect tampering/changing of message contents, and to identify the originator of a message – digital signatures.
-Provides confidentiality by encoding contents – encryption.
Chapter 2
Client-server vs P2P paradigm
-Arbitrary end systems directly communicate with each other – P2P paradigm.
-There is a server with a well-known server IP address and port – client-server paradigm.
-There is a server that is always on – client-server paradigm.
-A process requests service from those it contacts and will provide service to processes that contact it – P2P paradigm.
UDP and TCP service
-Secure transmission of data – not provided by transport layer protocols.
-Best-effort service: the service will make a best effort to deliver data to the destination but makes no guarantees that any particular segment of data will actually get there – UDP.
-Real-time delivery: the service will guarantee that data will be delivered to the receiver within a specified time bound – not provided by transport layer protocols.
-Flow control: the provided service will ensure that the sender does not send so fast as to overflow receiver buffers – TCP.
-Loss-free data transfer: the service will reliably transfer all data to the receiver, recovering from packets dropped in the network due to router buffer overflow – TCP.
-Throughput guarantee: the socket can be configured to provide a minimum throughput guarantee between sender and receiver – not provided by transport layer protocols.
-Congestion control: the service will control senders so that the senders do not collectively send more data than links in the network can handle – TCP.
HTTP protocol
-HTTP 1.1 allows for multiple requests over a single TCP connection, where the server responds in order to GET requests, and a small object may have to wait for transmission behind large objects – head-of-line (HOL) blocking.
-In HTTP 1.1 and HTTP 2, browsers have an incentive to open multiple parallel TCP connections to reduce stalling and increase overall throughput – HTTP over multiple parallel TCP connections.
-An HTTP server does not remember anything about what happened during earlier steps in interacting with an HTTP client (assuming cookies are not used) – HTTP is stateless.
-HTTP 1.1 and later versions allow the client to send multiple requests without waiting for the reply – HTTP pipelining.
-HTTP 1.1 and later versions allow a single TCP connection to be used to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair – persistent HTTP.
-HTTP over UDP – true only for HTTP version 3.
-Which of the following are true about HTTP cookies? The cookie value itself doesn't mean anything; it is just a value that was returned by a web server to this client during an earlier interaction. A cookie is a code used by a server, carried on a client's HTTP request, to access information the server had earlier stored about an earlier interaction with this web browser.
-What is the purpose of the conditional HTTP GET request message? To allow a server to send the requested object to the client only if this object has changed since the server last sent this object to the client.
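A small sketch of what a conditional GET looks like in practice (illustrative only; the host, path, and timestamp are placeholders, and a real client would echo the Last-Modified or ETag value it saved from an earlier response). If the object has not changed, the server answers 304 Not Modified with no body:

```python
# Conditional GET sketch: fetch the object only if it changed since our cached copy.
import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/", headers={
    "If-Modified-Since": "Tue, 01 Oct 2024 08:00:00 GMT",  # placeholder cached timestamp
})
resp = conn.getresponse()

if resp.status == 304:
    print("304 Not Modified: serve the cached copy, no object sent")
else:
    body = resp.read()
    print(resp.status, "fresh copy received,", len(body), "bytes")
conn.close()
```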
-Which of the following are advantages of using a web cache? Caching uses less bandwidth coming into an institutional network where the client is located, if the cache is also located in that institutional network. Caching generally provides for a faster page load time at the client, if the web cache is in the client's institutional network, because the page is loaded from the nearby cache rather than from the distant server.
-HTTP vs SMTP: operates mostly as a "client push" protocol, uses CRLF.CRLF to indicate end of message, uses server port 25 – SMTP. Uses server port 80, uses a blank line (CRLF) to indicate end of request headers, operates mostly as a "client pull" protocol – HTTP. Uses the peer-to-peer approach for structuring the network application processes – neither.
-Pulls email to a mail client from a mail server – function of the IMAP email protocol.
-Pushes email from a mail client to a mail server – function of the SMTP email protocol.
-Record type and what is held in that resource record (RR) in the DNS database:
A hostname and an IP address – Type A.
A domain name and the name of the authoritative name server for that domain – Type NS.
A name and the name of the SMTP server associated with that name – Type MX.
An alias name and a true name for a server – Type CNAME.
-True properties of a local DNS server: it can decrease the name-to-IP-address resolution time experienced by a querying local host when it has a hostname-to-IP translation record with a valid TTL (not expired) in its cache; its record for a remote host may occasionally be different from that of the authoritative server for that host, but they will eventually synchronize.
-DNS server and its corresponding role:
It provides a list of top-level domain (TLD) servers that can be queried to find the IP address of the DNS server that can provide the definitive answer to this query – root server.
It provides the IP address of the DNS server that can provide the definitive answer to the query – top-level domain (TLD) name server.
It provides the definitive answer to the query with respect to a name in the authoritative name server's domain – authoritative name server.
It is a local (to the querying host) server that caches name-to-IP address translation pairs, so it can answer queries from its cache and can do so quickly – local DNS server.
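If you want to poke at the record types listed above yourself, a short sketch using the third-party dnspython package works (an assumption on my part, since it is not in the standard library; install it with pip install dnspython). The domain name is just an example:

```python
# Query the DNS record types discussed above for one example name.
import dns.resolver

for rtype in ("A", "NS", "MX", "CNAME"):
    try:
        answers = dns.resolver.resolve("example.com", rtype)
        for rr in answers:
            print(rtype, "->", rr.to_text())
    except dns.resolver.NoAnswer:
        print(rtype, "-> no record of this type for this name")
```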
Chapter 3
-Where is transport-layer functionality primarily implemented? Transport layer functions are implemented primarily at the hosts at the "edge" of the network.
-The transport layer provides for host-to-host delivery service? False – the transport layer provides process-to-process delivery; host-to-host delivery is the network layer's job.
-Check all of the services below that are provided by the TCP protocol: a flow-control service that ensures that a sender will not send at such a high rate as to overflow receiving host buffers; in-order data delivery; reliable data delivery; a byte-stream abstraction that does not preserve boundaries between message data sent in different socket send calls at the sender; a congestion-control service to ensure that multiple senders do not overload network links.
-Check all of the services below that are provided by the UDP protocol: a message abstraction that preserves boundaries between message data sent in different socket send calls at the sender.
-The transport layer sits on top of the network layer, and provides its services using the services provided to it by the network layer. Thus it's important that we know what is meant by the network layer's "best effort" delivery service. True.
-What is meant by transport-layer demultiplexing? Receiving a transport-layer segment from the network layer, extracting the payload (data), and delivering the data to the correct socket.
-What is meant by transport-layer multiplexing? Taking data from one socket (one of possibly many sockets), encapsulating a data chunk with header information – thereby creating a transport-layer segment – and eventually passing this segment to the network layer.
-When multiple UDP clients send UDP segments to the same destination port number at a receiving host, those segments (from different senders) will always be directed to the same socket at the receiving host. True (see the socket sketch below).
-When multiple TCP clients send TCP segments to the same destination port number at a receiving host, those segments (from different senders) will always be directed to the same socket at the receiving host. False.
-It is possible for two UDP segments to be sent from the same socket with source port 5723 at a server to two different clients. True.
-It is possible for two TCP segments with source port 80 to be sent by the sending host to different clients. True.
-On the sending side, the UDP sender will take each application-layer chunk of data written into a UDP socket and send it in a distinct UDP datagram; on the receiving side, UDP will deliver a segment's payload into the appropriate socket, preserving the application-defined message boundary. True.
-Which of the fields below are in a UDP segment header? Internet checksum, source port number, destination port number, length (of UDP header plus payload).
-Why is the UDP header length field needed? Because the payload section can be of variable length, and this lets UDP know where the segment ends.
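The UDP statements above (all senders land in the same bound socket, and one socket can reply to many clients from the same source port) can be seen directly with the socket API. A minimal localhost sketch; the port number simply mirrors the one used in the question above, and any free port works:

```python
# One bound UDP socket receives datagrams from any number of senders (demultiplexing
# is by destination port only), and replies to each of them from that same socket.
import socket

SERVER_PORT = 5723

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", SERVER_PORT))

client_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client_a.sendto(b"hello from A", ("127.0.0.1", SERVER_PORT))
client_b.sendto(b"hello from B", ("127.0.0.1", SERVER_PORT))

for _ in range(2):
    data, addr = server.recvfrom(2048)   # both datagrams arrive on the same socket
    print(data, "from", addr)
    server.sendto(b"reply", addr)        # same socket, source port 5723, two different clients

print(client_a.recvfrom(2048))
print(client_b.recvfrom(2048))
for s in (server, client_a, client_b):
    s.close()
```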
-Which of the following statements are true about a checksum? The receiver of a packet with a checksum field will add up the received bytes, just as the sender did, and compare this locally computed checksum with the checksum value in the packet header; if these two values are different, then the receiver knows that one of the bits in the received packet has been changed during transmission from sender to receiver. A checksum is computed at a sender by considering each byte within a packet as a number, and then adding these numbers (each number representing a byte) together to compute a sum (which is known as a checksum). The sender-computed checksum value is often included in a checksum field within a packet header.
-Over what set of bytes is the checksum field in the UDP header computed? The entire UDP segment, except the checksum field itself, plus the IP sender and receiver address fields.
-When computing the Internet checksum for two numbers, a single flipped bit (i.e., in just one of the two numbers) will always result in a changed checksum. True.
-When computing the Internet checksum for two numbers, a single flipped bit in each of the two numbers will always result in a changed checksum. False.
-For the UDP segment (B) sent in reply: the source port number is 3546; the source IP address of the IP datagram containing it is 128.119.40.186; the destination port number is 4829; the destination IP address of the IP datagram containing it is 60.54.75.24.
-Suppose a sender computes a checksum (Internet checksum or some other checksum, which is essentially a sum of the bytes in a segment), puts the checksum in the segment header, and sends the segment to the receiver. The receiver receives the segment (with the checksum in the header), computes the checksum itself (i.e., performs the same calculation as the sender, but over the received data), and compares the checksum it has computed to the checksum it received in the header. It finds that the two are identical. Which of the following statements is true? The receiver can't tell for certain whether errors (bit flips) have occurred in the received data in the segment, but it can be relatively confident that no errors have occurred.
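A compact sketch of the 16-bit Internet checksum (one's-complement sum of 16-bit words, then inverted) that also illustrates the two true/false statements above: one flipped bit always changes the sum, but one flipped bit in each of two words can cancel out and go undetected. The example words are arbitrary.

```python
# 16-bit Internet checksum over a list of 16-bit words.
def internet_checksum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return ~total & 0xFFFF

a, b = 0b0110011001100000, 0b0101010101010101      # arbitrary example words
print(hex(internet_checksum([a, b])))

# Flip one bit in just one word: the checksum changes.
print(internet_checksum([a ^ 0x0004, b]) != internet_checksum([a, b]))   # True

# Flip one bit in each word so the changes cancel (+4 in a, -4 in b): the checksum is
# unchanged, so this double error goes undetected.
print(internet_checksum([a + 4, b - 4]) == internet_checksum([a, b]))    # True
```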
-For the given purpose/goal/use, the RDT mechanism that implements it:
Used by sender or receiver to detect bits flipped during a packet's transmission – checksum.
Lets the sender know that a packet was NOT received correctly at the receiver – NAK.
Allows for duplicate detection at the receiver – sequence numbers.
Allows the receiver to eventually receive a packet that was corrupted or lost in an earlier transmission – retransmission.
Lets the sender know that a packet was received correctly at the receiver – ACK.
-What is meant by a cumulative acknowledgment, ACK(n)? A cumulative ACK(n) acknowledges all packets with a sequence number up to and including n as having been received.
-Suppose a packet is 10K bits long, the channel transmission rate connecting a sender and receiver is 10 Mbps, and the round-trip propagation delay is 10 ms. What is the maximum channel utilization of a stop-and-wait protocol for this channel? 0.1 (see the worked calculation below).
-With the same packet size, rate, and RTT, what is the channel utilization of a pipelined protocol with an arbitrarily high level of pipelining for this channel? 1.0.
-With the same packet size, rate, and RTT, how many packets can the sender transmit before it starts receiving acknowledgments back? 10.
-Which of the following statements about pipelining are true? A pipelined sender can have transmitted multiple packets for which it has yet to receive an ACK from the receiver. With a pipelined sender, there may be transmitted packets "in flight" – propagating through the channel – packets that the sender has sent but that the receiver has not yet received.
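A quick check of the stop-and-wait figure, assuming the usual utilization formula U = (L/R) / (RTT + L/R): transmitting one 10K-bit packet at 10 Mbps takes 1 ms, then the sender idles for the 10 ms RTT, which gives roughly 0.09, i.e. about the 0.1 quoted above. With enough packets in flight the sender never idles and utilization approaches 1.0.

```python
L = 10_000          # packet length, bits
R = 10_000_000      # channel transmission rate, bits/second
RTT = 0.010         # round-trip propagation delay, seconds

t_trans = L / R                               # 0.001 s to transmit one packet
print(round(t_trans / (RTT + t_trans), 2))    # ~0.09 -> stop-and-wait utilization (~0.1 above)
print(RTT / t_trans)                          # 10.0  -> packets that fit into one RTT of pipelining
```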
-What are some reasons for discarding received-but-out-of-sequence packets at the receiver in GBN? The sender will resend that packet in any case, and the implementation at the receiver is simpler.
-What are some reasons for not discarding received-but-out-of-sequence packets at the receiver in GBN? Even though that packet will be retransmitted, its next retransmission could be corrupted, so don't discard a perfectly well-received packet, silly!
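A tiny sketch of the Go-Back-N receiver side discussed above (illustrative logic only, not the course's code): the receiver keeps a single piece of state, the next expected sequence number, delivers in-order packets, discards everything else, and always re-ACKs the last in-order packet (a cumulative ACK).

```python
# Go-Back-N receiver sketch: accept only the next in-order packet, discard the rest,
# and send a cumulative ACK for the highest in-order sequence number received so far.
def gbn_receiver(packets):
    expected = 0
    delivered, acks = [], []
    for seq, data in packets:
        if seq == expected:
            delivered.append(data)        # in order: deliver up the stack
            expected += 1
        # out-of-order packets are simply discarded (simpler receiver; sender resends anyway)
        acks.append(expected - 1)         # cumulative ACK: everything up to expected-1 received
    return delivered, acks

# Packet 1 is lost in transit, so packets 2 and 3 arrive out of order and are discarded.
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b"), (2, "c"), (3, "d")]
print(gbn_receiver(arrivals))
# (['a', 'b', 'c', 'd'], [0, 0, 0, 1, 2, 3])
```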
-Which of the following statements about TCP's additive-increase, multiplicative-decrease (AIMD) algorithm are true? AIMD is an end-end approach to congestion control. AIMD cuts the congestion window size, cwnd, in half whenever loss is detected by a triple duplicate ACK. AIMD cuts the congestion window size, cwnd, to 1 whenever a timeout occurs.
-How is the sending rate typically regulated in a TCP implementation? By keeping a window of size cwnd over the sequence number space, and making sure that no more than cwnd bytes of data are outstanding (i.e., unACKnowledged). The size of cwnd is regulated by AIMD.
-Which of the following best completes this sentence: "In the absence of loss, TCP slow start increases the sending rate ..."? "... faster than AIMD. In fact, slow start increases the sending rate exponentially fast per RTT."
