TCP IP Notes
Network:-
-Network is a combination of hardware and software that sends data from one location to another.
OR
-Group of communication devices (PC, Router, and Switch) which are connected via media (Wired/Wireless) to
exchange information.
Protocol:-
-Protocol is a set of rules that governs how data is transmitted from one system to another anywhere in the network. It defines what
is communicated, how it is communicated, and when it is communicated.
OR
-Protocols are rules that govern how devices communicate and share information across a network.
Network Components:-
-Devices (PC, Server, Router, Switches, Firewall)
-Media
-Protocol
-Messages
LAN:-
-Local Area Network
-Group of communication devices running under common administration that covers a small geographic area, usually
a small building, campus, office, home, or school.
MAN:-
-Metropolitan Area Network.
-It is a network that connects LANs across a city-wide geographic area.
-Ex: Metro Ethernet connection within a city.
WAN:-
-Wide Area Network
-It is a network that covers a large geographic area, usually connecting multiple LANs across different cities.
Or
- A wide area network (WAN) is a network that connects LANs over large geographic distances.
Internetwork:
-It describes multiple networks connected together.
Intranetwork:
-Communication within a single network running under one common administration (compare with Intranet, defined later in these notes).
Topology:
-How the network is physically connected.
-The physical structure or connection of your network.
*2 type of topology:
A. Physical Topology
-What the network looks like: how all the cables and devices are connected to each other.
B. Logical Topology
-It defines how data takes a path through the physical topology.
*Types of Physical Topology:
1. Bus
-In a Bus topology, all hosts share a physical segment. At each end of the cable, a terminator is placed.
-A frame sent by one host is received by all other hosts on the bus. However, a host will only process a frame if it
matches the destination hardware address in the data-link header.
-The bus represents a single point of failure. A break in the bus will affect all hosts on the segment.
Or
Shared Bus
-A shared bus is a physical network topology or layout in which multiple devices are connected to the same physical
wire or cable. When one device transmits data, all other devices on the shared bus receive it.
2. Ring
-All computers and network devices are connected on a cable in such a way that they form a “ring”.
-If the cable breaks, your network is down.
3. Star
-All end devices are connected to a central device, creating a star model.
-This is what we use nowadays on a LAN, with a switch in the middle.
-When the switch goes down, your network is down as well.
4. Mesh
-Means multiple connections between your devices.
*2 types of Mesh topology:
A. Partial Mesh
-Only some of the devices are interconnected with redundant links; the remaining devices connect through those links.
B. Full Mesh
-Each and every device is connected to every other device via an individual link.
5. Tree
-Combination of bus and star topology.
6. Hybrid
-Combination of multiple topologies together.
Reference Models:
-The TCP/IP model is a protocol model because it describes the functions that occur at each layer of protocols
within the TCP/IP suite.
-The OSI is used for data network design, operation specifications, and troubleshooting.
-A networking model is only a representation of network operation. The model is not the actual network.
>The Benefits of using a Layered Model:
-There are benefits to using a layered model to describe network protocols and operations. Using a layered model:
-Assists in protocol design, because protocols that operate at a specific layer have defined information that they act
upon and a defined interface to the layers above and below.
-Products from different vendors can work together.
-Provides a common language to describe networking functions and capabilities.
-Individual parts of the system can be designed independently, but still work together seamlessly.
OSI Model (Open System Interconnection Model):-
-The OSI Model is a layered framework that allows communication between devices. Each layer defines a set of
functions.
-Provides guidelines on how computers communicate over a network.
-Does not define specific procedures or protocols.
-It is theoretical in nature; it does not define protocols.
- Network communication models are generally organized into layers. The OSI model specifically consists of seven
layers, with each layer representing a specific networking function. These functions are controlled by protocols,
which govern end-to-end communication between devices. As data is passed from the user application down the
virtual layers of the OSI model, each of the lower layers adds a header (and sometimes a trailer) containing
protocol information specific to that layer. The data unit at each layer, together with these headers, is called a
Protocol Data Unit (PDU), and the process of adding these headers is referred to as encapsulation.
or
The picture below is an example of a simple data transfer between 2 computers and shows how the data is
encapsulated and decapsulated:
EXPLANATION:
-The computer in the above picture needs to send some data to another computer. The Application layer is where the
user interface exists, here the user interacts with the application he or she is using, then this data is passed to the
Presentation layer and then to the Session layer. These three layers add some extra information to the original data
that came from the user and then pass it to the Transport layer. Here the data is broken into smaller pieces (one
piece at a time transmitted) and the TCP header is added. At this point, the data at the Transport layer is called
a segment.
-Each segment is sequenced so the data stream can be put back together on the receiving side exactly as transmitted.
Each segment is then handed to the Network layer for network addressing (logical addressing) and routing through
the internet network. At the Network layer, we call the data (which includes at this point the transport header and the
upper layer information) a packet.
-The Network layer adds its IP header and then sends it off to the Datalink layer. Here we call the data (which
includes the Network layer header, Transport layer header and upper layer information) a frame. The Datalink layer
is responsible for taking packets from the Network layer and placing them on the network medium (cable). The
Datalink layer encapsulates each packet in a frame which contains the hardware address (MAC) of the source and
destination computer (host) and the LLC information which identifies to which protocol in the previous layer
(Network layer) the packet should be passed when it arrives at its destination. Also, at the end, you will notice the
FCS field which is the Frame Check Sequence. This is used for error checking and is also added at the end by the
Datalink layer.
-If the destination computer is on a remote network, then the frame is sent to the router or gateway to be routed to
the destination. To put this frame on the network, it must be put into a digital signal. Since a frame is really a logical
group of 1's and 0's, the Physical layer is responsible for encapsulating these digits into a digital signal which is read
by devices on the same local network.
-There are also a few 1's and 0's put at the beginning of the frame, only so the receiving end can synchronize with the
digital signal it will be receiving.
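As a rough illustration of the layering described above, the sketch below (in Python) wraps a piece of application data in toy transport, network, and data-link headers. The field layouts are invented for illustration only and are much simpler than real TCP, IP, and Ethernet headers.

```python
import struct

# A toy sketch of encapsulation (NOT real header layouts; the field sets here
# are invented for illustration only).
app_data = b"GET /index.html"                                # Application layer data

# Transport layer: prepend a toy "TCP-like" header (src port, dst port, seq number)
segment = struct.pack("!HHI", 49200, 80, 2000) + app_data

# Network layer: prepend a toy "IP-like" header (source and destination addresses)
packet = struct.pack("!4s4s", bytes([192, 168, 1, 10]), bytes([10, 0, 0, 5])) + segment

# Data Link layer: prepend toy destination/source MAC addresses, append a 4-byte FCS placeholder
frame = bytes(6 * [0xAA]) + bytes(6 * [0xBB]) + packet + b"\x00\x00\x00\x00"

print(len(app_data), len(segment), len(packet), len(frame))  # the data grows at each layer
```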
*Below is a picture of what happens when the data is received at the destination computer.
EXPLANATION
-The receiving computer will firstly synchronize with the digital signal by reading the few extra 1's and 0's as
mentioned above. Once the synchronization is complete and it receives the whole frame, it passes it to the layer
above it, which is the Datalink layer.
-The Datalink layer will do a Cyclic Redundancy Check (CRC) on the frame. This is a computation which the
computer does and if the result it gets matches the value in the FCS field, then it assumes that the frame has been
received without any errors. Once that's out of the way, the Datalink layer will strip off any information or header
which was put on by the remote system's Datalink layer and pass the rest (now we are moving from the Datalink
layer to the Network layer, so we call the data a packet) to the above layer which is the Network layer.
-At the Network layer the IP address is checked and if it matches (with the machine's own IP address) then the
Network layer header, or IP header if you like, is stripped off from the packet and the rest is passed to the above
layer which is the Transport layer. Here the rest of the data is now called a segment.
-The segment is processed at the Transport layer, which rebuilds the data stream (at this level on the sender's
computer it was actually split into pieces so they can be transferred) and acknowledges to the transmitting computer
that it received each piece. It is obvious that since we are sending an ACK back to the sender from this layer that we
are using TCP and not UDP. Please refer to the Protocols section for more clarification. After all that, it then happily
hands the data stream to the upper-layer application.
-You will find that when analysing the way data travels from one computer to another most people never analyse in
detail any layers above the Transport layer. This is because the whole process of getting data from one computer to
another involves usually layers 1 to 4 (Physical to Transport) or layer 5 (Session) at the most, depending on the type
of data.
Header:
- Contains control information, such as addressing, and is located at the beginning of the PDU
Trailer:
- Contains control information added to the end of the PDU
Error Detection:
-Data Link layer protocols add a trailer to the end of each frame. The trailer is used to determine if the frame arrived
without error. This process is called error detection.
Or
-Error detection is the detection of errors caused by noise or other impairments during transmission from the
transmitter to the receiver.
Error Correction:
-Error correction is the detection of errors and reconstruction of the original, error-free data.
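As an illustration of error detection, the sketch below computes a 16-bit ones'-complement checksum of the style used in IP, TCP, and UDP headers (a simplified sketch: it covers only the raw bytes, with no pseudo-header handling).

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum of the kind carried in IP/TCP/UDP
    headers (simplified: raw bytes only, no pseudo-header)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

sent = b"some payload"
transmitted_checksum = internet_checksum(sent)

# Receiver side: recompute over the received bytes and compare with the value
# carried in the header; a mismatch indicates the data was corrupted in transit.
received = b"some payload"
print(internet_checksum(received) == transmitted_checksum)   # True -> accept the data
```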
PDU (Protocol Data Unit):
-The name given to data at different layers of the OSI models.
*Transport: Segment (TCP), Datagram (UDP)
*Network: Packet, Datagram
*Data-link: Frame
*Physical: Bits
Protocol Suites/Stacks:
-Multiple protocols often work together to facilitate end-to-end network communication, forming protocol suites or
stacks.
TCP/IP (Transmission Control Protocol/Internet Protocol):
-TCP/IP is the communication protocol suite that defines the rules computers must follow to communicate with each other
over the internet.
-Ex.: Browser and server, Email, Your Internet Address uses TCP/IP
Connection Oriented:
-A connection has to be established before data is sent.
-A connection-oriented protocol provides several services:
1. Segmentation and sequencing:
-Data is segmented into smaller pieces for transport.
-Each segment is assigned a sequence number so that the receiving device can reassemble the data on arrival.
2. Connection Establishment:
-Connections are established, maintained, and terminated between devices.
3. Acknowledgement:
-Receipt of data is confirmed through the use of acknowledgements; otherwise data is retransmitted, guaranteeing delivery.
4. Flow Control:
-Data transfer rate is negotiated to prevent congestion.
-Ex: TCP
Connection Less:
-Requires no connection before data is sent.
-Ex: UDP
TCP:
-TCP is connection-oriented, which means a connection has to be established before data is sent.
-TCP/IP transport layer or TCP is responsible for providing a logical connection between two hosts and can provide
these functions:
>Flow control (through the use of windowing)
>Reliable connections/Transmission (through the use of sequence numbers and acknowledgments)
>Session multiplexing (through the use of port numbers and IP addresses)
>Segmentation (through the use of segment protocol data units, or PDUs)
-TCP is full duplex. In other words, each TCP host has access to two logical channels: an incoming and an outgoing
channel.
-TCP discards duplicate packets and resequences any packets that arrive out of sequence.
-TCP supports flow control at both the source and the destination to avoid too much data being sent at a time and
overloading the network at the router. TCP flow control only allows the sender to gradually increase the data transmission
rate. To prevent the sender from transmitting data which the receiver cannot buffer, the receiver also has flow control,
indicating the size of the receiver's free buffer.
-TCP segments the data received from the application layer so that it fits into an IP datagram.
-TCP connections are logical point-to-point connections between two application layer protocols. This type of
communication is also referred to as a “direct transmission”.
-TCP segments are transmitted in IP datagrams. A TCP segment, consisting of the TCP header and payload, is
encapsulated within an IP header. The resulting IP datagram is then given a header and trailer by the Datalink layer
and passed to the physical layer.
*TCP provides a reliable, connection-oriented, logical service through the use of sequence and acknowledgment
numbers, windowing for flow control, error detection through checksums (and recovery through retransmission),
reordering of packets, and dropping of extra duplicated packets.
*A TCP connection goes through three phases:
1. Connection setup [Three-way Handshake]
2. Data Transfer [Established]
3. Connection Close [Modified Three-way Handshake]
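A minimal sketch of these three phases using Python sockets on the loopback interface (port 5050 is an arbitrary choice for the example): the operating system performs the three-way handshake inside connect()/accept(), send()/recv() carry data on the established connection, and close() triggers the connection-close exchange.

```python
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5050))        # 5050 is an arbitrary port chosen for the example
srv.listen(1)                        # passive open: wait for an incoming connection

def server():
    conn, addr = srv.accept()        # handshake completes, connection established
    data = conn.recv(1024)
    conn.sendall(data.upper())       # echo the data back
    conn.close()                     # begin connection close

threading.Thread(target=server, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5050))     # active open: the client sends the SYN
cli.sendall(b"hello tcp")            # data transfer phase
print(cli.recv(1024))                # b'HELLO TCP'
cli.close()                          # connection close phase (FIN/ACK exchange)
srv.close()
```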
IP:
-IP is a connectionless protocol. Connectionless means data is transmitted as independent packets; there is no logical
end-to-end connection between the two devices.
-It is responsible for addressing and routing of packets between hosts.
-IP is an unreliable protocol as it does not guarantee the delivery of packets.
-IP functions according to the principle of best effort. This means it will in any case do its best to deliver the packet
correctly. On its way to the receiver, a packet might get lost, delivered out of sequence, or duplicated.
-IP is also responsible for data fragmentation, that is, splitting larger data packets into smaller ones. The
process of putting the small packets back together at the receiver is called reassembly.
-No acknowledgement is sent back when a data packet reaches the destination; neither the sender nor the receiver is
informed if a packet gets lost or is transmitted out of sequence. This is the responsibility of a higher protocol like TCP.
-IP is a datagram switching protocol. This means that each packet is an unnumbered message, requiring no
acknowledgement, which is routed across the network based on the unique IP address.
OR
Internet Protocol (IP)
-The Internet Protocol (IP) is the most common Layer 3 protocol and is used within the Internet to route packets to
their final destination. IP provides connectionless, best-effort delivery of packets through a network and
fragmentation and reassembly of packets going across Layer 2 networks with different maximum transmission units
(MTUs). Each computer or host has at least one IP address that uniquely identifies it from all other computers on the
Internet. The IP addressing scheme is fundamental to the process of routing packets through a network.
Comparison chart: TCP vs UDP

*Function
-TCP: Used as a message makes its way across the internet from one computer to another. This is connection based.
-UDP: Also a protocol used in message transport or transfer. This is not connection based, which means that one program can send packets to another without first establishing a connection.

*Usage
-TCP: Suited for applications that require high reliability, where transmission time is relatively less critical.
-UDP: Suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients.

*Use by other protocols
-TCP: HTTP, HTTPS, FTP, SMTP, Telnet
-UDP: DNS, DHCP, TFTP, SNMP, RIP, VoIP

*Ordering of data packets
-TCP: Rearranges data packets in the order specified.
-UDP: Has no inherent order; all packets are independent of each other. If ordering is required, it has to be managed by the application layer.

*Speed of transfer
-TCP: Slower than UDP.
-UDP: Faster, because there is no error recovery (retransmission) for packets.

*Reliability
-TCP: There is an absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent.
-UDP: There is no guarantee that the messages or packets sent will arrive at all.

*Header size
-TCP: 20 bytes.
-UDP: 8 bytes.

*Common header fields
-TCP: Source port, Destination port, Checksum
-UDP: Source port, Destination port, Checksum

*Streaming of data
-TCP: Data is read as a byte stream; no distinguishing indications are transmitted to signal message (segment) boundaries.
-UDP: Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.

*Data flow control
-TCP: TCP does flow control. TCP requires three packets to set up a socket connection before any user data can be sent. TCP handles reliability and congestion control.
-UDP: UDP does not have an option for flow control.

*Error checking
-TCP: TCP does error checking and error recovery.
-UDP: UDP does error checking, but has no recovery options.
UDP
The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol.
It is defined by RFC 768 written by John Postel. It provides a best-effort datagram service to an End System (IP
host).
The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection
from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)). The simplicity of
UDP reduces the overhead from using the protocol and the services may be adequate in many cases.
UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer
protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not
establish end-to-end connections between communicating end systems. UDP communication consequently does not
incur connection establishment and teardown overheads and there is minimal associated end system state. Because
of these characteristics, UDP can offer a very efficient communication transport to some applications, but has no
inherent congestion control or reliability. On many platforms, applications can send UDP datagrams at the line rate
of the link interface, which is often much greater than the available path capacity; doing so would contribute to
congestion along the path, so applications need to be designed responsibly.
A computer may send UDP packets without first establishing a connection to the recipient. A UDP datagram is
carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes
for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases
communication reliability and efficiency and should therefore be avoided.
To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and forwards the
data together with the header for transmission by the IP network layer.
Or
- UDP assumes that the application will use its own reliability method; UDP itself doesn't use any, which obviously
makes transfers faster.
Or
-Application programs utilizing UDP accept full responsibility for packet reliability, including message loss,
duplication, delay, out-of-sequence delivery, multiplexing, and connectivity loss.
-Application designers are generally aware that UDP does not provide any reliability, e.g., it does not retransmit any
lost packets. Often, this is a main reason to consider UDP as a transport. Applications that do require reliable
message delivery therefore need to implement appropriate protocol mechanisms in their applications (e.g. tftp).
-UDP's best effort service does not protect against datagram duplication, i.e., an application may receive multiple
copies of the same UDP datagram. Application designers therefore need to verify that their application gracefully
handles datagram duplication and may need to implement mechanisms to detect duplicates.
-The Internet may also significantly delay some packets with respect to others, e.g., due to routing transients,
intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at the receiver in an
order different from the transmission order. Applications that require ordered delivery must restore datagram
ordering themselves.
Many applications that use UDP send small amounts of data that can fit in one segment. However, some
applications will send larger amounts of data that must be split into multiple segments. The UDP PDU is referred to
as a datagram, although the terms segment and datagram are sometimes used interchangeably to describe a
Transport layer PDU.
When multiple datagrams are sent to a destination, they may take different paths and arrive in the wrong order. UDP
does not keep track of sequence numbers the way TCP does. UDP has no way to reorder the datagrams into their
transmission order.
Therefore, UDP simply reassembles the data in the order that it was received and forwards it to the application. If
the sequence of the data is important to the application, the application will have to identify the proper sequence of
the data and determine how the data should be processed.
*UDP has no ability to do reassembly. It basically only has fields for source port, destination port, length and
checksum. Fragmentation and reassembly would be done in the application itself when UDP is used. Lower layer
fragmentation is still possible, for example with IP. For anything more intelligent at the transport layer, TCP would
typically be used.
Key Concept: UDP was developed for use by application protocols that do not require reliability,
acknowledgment, or flow control features at the transport layer. It is designed to be simple and fast, providing only
transport layer addressing in the form of UDP ports and an optional checksum capability, and little else.
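A minimal sketch of a connectionless UDP exchange over the loopback interface (port 6060 is an arbitrary choice): there is no handshake, and each sendto() call is an independent datagram.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 6060))                   # 6060 is an arbitrary example port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", ("127.0.0.1", 6060))    # sent without any connection setup

data, addr = receiver.recvfrom(1024)                 # one whole datagram per read
print(data, addr)                                    # b'datagram 1' ('127.0.0.1', <ephemeral port>)

# No acknowledgement is returned: if the datagram is lost, neither side is told.
sender.close()
receiver.close()
```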
3-Way Handshake:
Before the sending device and the receiving device start the exchange of data, both devices need to be synchronized.
During the TCP initialization process, the sending device and the receiving device exchange a few control packets
for synchronization purposes. This exchange is known as a 3-way handshake.
The 3-way handshake begins with the initiator sending a TCP segment with the SYN control bit flag set.
TCP allows one side to establish a connection. The other side may either accept the connection or refuse it. If we
consider this from the application layer point of view, the side that is establishing the connection is the client and the side
waiting for a connection is the server.
Active Open:
-In an Active Open call, a device (client process) using TCP takes the active role and initiates the connection by
sending a TCP SYN message to start the connection.
Passive Open:
-A Passive Open can specify that a device (server process) is waiting for an Active Open from a specific client. It
does not generate any TCP message segment. The server processes listening for clients are in passive open
mode.
Step 1:
Device A (Client) sends a TCP segment with SYN=1, ACK=0 flags set and ISN (Initial Sequence Number) = 2000. The
Active Open device (Device A) sends a segment with the SYN flag set to 1, the ACK flag set to 0, and an initial sequence
number of 2000 (for example), which marks the beginning of the sequence numbers for the data that Device A will transmit.
SYN is short for synchronization. The SYN flag announces an attempt to open a connection. The first byte transmitted to
Device B will have the sequence number ISN+1.
Step 2:
Device B (Server) receives Device A’s TCP segment and returns a TCP segment with SYN=1, ACK=1, ISN=5000
(Device B’s Initial Sequence Number), and Acknowledgement Number=2001 (2000+1, the next sequence number
Device B is expecting from Device A).
Step 3:
Device A sends a TCP segment to Device B that acknowledges receipt of Device B’s ISN, with flags set as
SYN=0, ACK=1, Sequence Number=2001, Acknowledgement Number=5001 (5000+1, the next sequence number
Device A is expecting from Device B).
This handshake technique is referred to as the 3-way handshake or SYN, SYN-ACK, ACK.
After the 3-way handshake, the connection is open and the participating computers start sending data using their
sequence and acknowledgment numbers.
-Here is a simple example of a three-way handshake with sequence and acknowledgment numbers:
- In this example, the destination’s acknowledgment (step 2) number is one greater than the source’s sequence
number, indicating to the source that the next segment expected is 2. In the third step, the source sends the second
segment, and, within the same segment in the Acknowledgment field, indicates the receipt of the destination’s
segment with an acknowledgment of 11—one greater than the sequence number in the destination’s SYN/ACK
segment.
“When acknowledging a received segment, the destination returns a segment with a number in the
acknowledgment field that is one number higher than the received sequence number.”
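A small Python model of the handshake arithmetic, using the example ISNs 2000 and 5000 from the steps above (this only models the numbers; it does not send any real packets):

```python
# Each side acknowledges by returning the received sequence number plus one.
client_isn, server_isn = 2000, 5000

syn     = {"flags": "SYN",     "seq": client_isn}                                   # step 1
syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}            # step 2
ack     = {"flags": "ACK",     "seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}    # step 3

print(syn_ack["ack"], ack["seq"], ack["ack"])   # 2001 2001 5001
```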
TCP session trace (the sender is station 160.221.172.250; the numbers below are explained step by step afterwards):
1. Sender -> Receiver: SYN, Seq=17768656, Ack=0, Window=8192, LEN=0 bytes
2. Receiver -> Sender: SYN-ACK, Seq=82980009, Ack=17768657, Window=8760, LEN=0 bytes
3. Sender -> Receiver: ACK, Seq=17768657, Ack=82980010, Window=8760, LEN=0 bytes
4. Sender -> Receiver: ACK, Seq=17768657, Ack=82980010, Window=8760, LEN=72 bytes of data
5. Receiver -> Sender: ACK, Seq=82980010, Ack=17768729, Window=8688, LEN=60 bytes of data
6. Sender -> Receiver: ACK, Seq=17768729, Ack=82980070, Window=8700, LEN=156 bytes of data
7. Receiver -> Sender: ACK, Seq=82980070, Ack=17768885, Window=8532, LEN=152 bytes of data
8. Sender -> Receiver: FIN, Seq=17768885, Ack=82980222, Window=8548, LEN=0 bytes
9. Receiver -> Sender: FIN-ACK, Seq=82980222, Ack=17768886, Window=8532, LEN=0 bytes
10. Sender -> Receiver: ACK, Seq=17768886, Ack=82980223, Window=8548, LEN=0 bytes
The value of LEN is the length of the TCP data, which is calculated by subtracting the IP and TCP header sizes from
the IP datagram size.
1. The session begins with station 160.221.172.250 initiating a SYN containing the sequence
number 17768656 which is the ISS. In addition, the first octet of data contains the next sequence
number 17768657. There are only zeros in the Acknowledgement number field as this is not used in
the SYN segment. The window size of the sender starts off at 8192 octets, which is assumed to be acceptable to
the receiver.
2. The receiving station sends both its own ISS (82980009) in the sequence number field and acknowledges
the sender's sequence number by incrementing it by 1 (17768657) expecting this to be the starting sequence
number of the data bytes that will be sent next by the sender. This is called the SYN-ACK segment. The receiver advertises a window size of 8760 octets.
3. Once the SYN-ACK has been received, the sender issues an ACK that acknowledges the receiver's ISS by
incrementing it by 1 and placing it in the acknowledgement field (82980010). The sender also sends the
same sequence number that it sent previously (17768657). This segment is empty of data, as we don't want
the session to keep ramping up the sequence numbers unnecessarily. A window size of 8760 octets is now advertised.
4. From now on ACKs are used until just before the end of the session. The sender now starts sending data by
stating the sequence number 17768657 again since this is the sequence number of the first byte of the data
that it is sending. Again the acknowledgement number 82980010 is sent which is the expected sequence
number of the first byte of data that the receiver will send. In the above scenario, the sender is initially
sending 72 bytes of data in one segment. The network analyser may indicate the next expected sequence
number in the trace; in this case this will be 17768657 + 72 = 17768729.
5. The receiver acknowledges the receipt of the data by sending back the number 17768729 in the
acknowledgement number field thereby acknowledging that the next byte of data to be sent will begin with
sequence number 17768729 (implicit in this is the understanding that sequence numbers up to and
including 17768728 have been successfully received). Notice that not every byte needs to be
acknowledged. The receiver also sends back the sequence number of the first byte of data in its own
segment (82980010) that is to be sent. The receiver is sending 60 bytes of data. The receiver subtracts 72
bytes from its previous window size of 8760 and sends 8688 as its new window size.
6. The sender acknowledges the receipt of the data with the number 82980070 (82980010 + 60) in the
acknowledgement number field, this being the sequence number of the next data byte expected to be
received from the receiver. The sender sends 156 bytes of data starting at sequence number 17768729. The
sender subtracts 60 bytes from its previous window size of 8760 and sends the new size of 8700.
7. The receiver acknowledges receipt of this data with the number 17768885 (17768729 + 156) since it was
expecting it, and sends 152 bytes of data beginning with the sequence number 82980070. The receiver
subtracts 156 bytes from the previous window size of 8688 and sends the new window size of 8532.
8. The sender acknowledges this with the next expected sequence number 82980070 + 152 = 82980222 and
sends the expected sequence number 17768885 in a FIN because at this point the application wants to close
the session. The sender subtracts 152 bytes from its previous window size of 8700 and sends the new size
of 8548.
9. The receiver sends an FIN-ACK acknowledging the FIN and increments the acknowledgement sequence
number by 1 to 17768886 which is the number it will expect on the final ACK. In addition the receiver
sends the expected sequence number 82980223. The window size remains at 8532 as no data was received.
10. The final ACK is sent by the sender confirming the sequence number 17768886 and acknowledging receipt
of 1 byte with the acknowledgement number 82980223. The window size finishes at 8548 and the TCP session is closed.
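The acknowledgement arithmetic in the trace above can be re-checked with a few lines of Python (purely the arithmetic from the explanation; no packets are involved):

```python
# An ACK equals the peer's starting sequence number plus the bytes received
# (plus 1 for the SYN, which consumes one sequence number).
sender_isn, receiver_isn = 17768656, 82980009

ack_of_syn     = sender_isn + 1          # 17768657 (carried in the SYN-ACK)
ack_of_syn_ack = receiver_isn + 1        # 82980010 (carried in the final handshake ACK)

ack_after_72   = ack_of_syn + 72         # 17768729, after the sender's 72 data bytes
ack_after_60   = ack_of_syn_ack + 60     # 82980070, after the receiver's 60 data bytes
ack_after_156  = ack_after_72 + 156      # 17768885
ack_after_152  = ack_after_60 + 152      # 82980222

print(ack_of_syn, ack_of_syn_ack, ack_after_72,
      ack_after_60, ack_after_156, ack_after_152)
```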
The Sequence and Acknowledgement fields are two of the many features that help us classify TCP as a connection
oriented protocol. As such, when data is sent through a TCP connection, they help the remote hosts keep track of the
connection and ensure that no packet has been lost on the way to its destination.
TCP utilizes positive acknowledgments, timeouts and retransmissions to ensure error-free, sequenced delivery of
user data. If the retransmission timer expires before an acknowledgment is received, data is retransmitted starting at
the byte after the last acknowledged byte in the stream.
A further point worth mentioning is the fact that sequence numbers are generated differently on each operating
system. Using special algorithms (and sometimes weak ones), an operating system will generate these
numbers, which are used to track the packets sent or received, and since both the Sequence and
Acknowledgement fields are 32-bit, there are 2^32 = 4,294,967,296 possibilities of generating a different
number!
Q: How do I know which application should receive this data?/How do I know if my data arrived
successfully?
-Data gets routed to the correct computer with the help of IP and other lower layer protocols such as Ethernet.
-One of the jobs of a transport layer protocol is getting the data from an application on one computer to the
correct application on another.
*Multiplexing:
-Multiplexing is where multiple sources of data, such as phone, fax, and computer data, combine into a single stream over a
single line.
OR
- Multiplexing is the ability of a single host to have multiple sessions open to one or many other hosts.
-TCP and UDP provide a way of combining, or multiplexing, data from many application layer protocols into a single stream
with the same IP address.
-TCP and UDP use software ports to route data to the appropriate application.
-Transport layer protocols (TCP and UDP) are responsible for supporting multiple network applications at the same
instance, and these applications can send and receive network data simultaneously. Transport layer protocols are
capable of doing this by making use of application-level addressing known as port numbers. The data from different
applications operating on a network device are multiplexed at the sending device using port numbers and demultiplexed at
the receiving device, again using port numbers.
The Source and Destination ports identify the port numbers on which the application is listening at the sending and receiving
device.
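A small sketch of multiplexing/demultiplexing by port number, using two UDP sockets on the same IP address (ports 7001 and 7002 are arbitrary example values):

```python
import socket

# Two "applications" on the same IP address are told apart purely by their
# destination port numbers.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 7001))
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 7002))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for application A", ("127.0.0.1", 7001))
sender.sendto(b"for application B", ("127.0.0.1", 7002))

print(app_a.recvfrom(1024)[0])   # b'for application A'
print(app_b.recvfrom(1024)[0])   # b'for application B'

for s in (app_a, app_b, sender):
    s.close()
```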
Port Number:
-The Transport layer uses an addressing scheme called a port number. Port numbers identify applications and
Application layer services that are the source and destination of data.
-Both TCP and UDP can send data from multiple upper-layer applications at the same time using port numbers.
-Port numbers keep track of different conversations crossing the network at any time.
*Example: FTP: 20-21, Telnet: 23, SMTP: 25, DNS: 53, TFTP: 69, SSH: 22, TACACS: 49, DHCP: 67-68, HTTP:
80, SNMP: 161-162, BGP: 179, RIP: 520
-A port is an internal address reserved for a specific application on a computer.
-A port can be either a TCP or UDP port depending upon whether it links to the TCP or UDP protocol at the transport
layer.
-A port can be any number between 0 and 65,535.
-Frequently used TCP/IP applications are assigned port numbers under 1024.
-These ports are also called Well-Known Ports: 0-1023.
#UDP and TCP Port Number:
Software Port Number
Port 0 – Reserved
Port 1-1023 – Well-Known Port
Port 1024- 49151 – Registered Port
Port 49152-65535 – Dynamic or Private Ports.
*Well-Known Ports: (1-1023)
-They are used only for the most common TCP/IP applications.
*Registered Ports (1024-49151)
-IANA manages and registers them. Less common TCP/IP applications use these port numbers.
*Dynamic or Private Ports (49152-65535)
-IANA does not manage them.
-Randomly chosen port numbers in this range are referred to as “ephemeral ports”.
-These ports are not permanently assigned to any publicly defined application and are commonly used as the
source port number for the client side of a connection. This allocation is temporary and is valid for the duration of
the connection opened by the application using the protocol.
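A small sketch of how a host obtains an ephemeral port and how well-known port numbers can be looked up (the exact ephemeral number and the services database contents depend on the operating system):

```python
import socket

# Binding to port 0 asks the OS for a free ephemeral port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
print(s.getsockname())                         # e.g. ('127.0.0.1', 52814)
s.close()

# Well-known port numbers can be looked up by service name from the system's
# services database (entries may vary between systems):
print(socket.getservbyname("http", "tcp"))     # 80
print(socket.getservbyname("domain", "udp"))   # 53 (DNS)
```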
*Software ports:
-Software ports are specific to the transport layer and are used to route data to the appropriate application layer
protocol and ultimately the correct application program.
*Hardware Ports:
-Also known as NICs (network interface cards).
-They exist only at Layer 1.
*IP interface:
-It exists at Layer 3.
*FTP (TCP Port: 20 and 21)
-File Transfer Protocol
-Transfers files with a remote host (typically requires authentication of the user's credentials).
Or
- File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another through a
network, such as over the Internet.
*SSH (TCP Port: 22)
-Secure Shell
-Securely connect to a remote host (typically via a terminal emulator)
*SFTP (TCP Port 22)
-Secure FTP
-Provides FTP file-transfer service over a SSH connection.
*SCP (TCP Port: 22)
-Secure Copy
-Provides a secure file transfer service over an SSH connection and offers a file's original date and time information,
which is not available with FTP.
*Telnet (TCP Port: 23)
-Used to connect to a remote host (typically via a terminal emulator)
*SMTP (TCP Port: 25)
-Simple Mail Transfer Protocol
-Used for sending Email.
*DNS (TCP Port: 53, UDP Port: 53)
-Domain Name System.
-Resolves domain names to corresponding IP addresses.
*TFTP (UDP Port: 69)
-Trivial File Transfer Protocol
-Transfers files with a remote host (does not require authentication of user credentials)
Or
-The Trivial File Transfer Protocol (TFTP) is a network protocol used to transfer data from one computer to another
through a network, such as over the Internet.
*DHCP (UDP Ports: 67-68)
-Dynamic Host Configuration Protocol
-Dynamically assigns IP address information (for example, IP address, subnet mask, DNS servers, and default
gateway's IP address) to network devices.
*HTTP (TCP Port: 80)
-Hypertext Transfer Protocol.
-Retrieves content from a Web Server.
Or
-It is a communications protocol for the transfer of information on the Internet and the World Wide Web.
*Hypertext Markup Language (HTML)
-Hypertext Markup Language (HTML) is the language the Web browser and Web server use to create and display
Web pages.
* URL
A Uniform Resource Locator (URL) is the address where a file can be accessed on the Internet, for example
http://www.juniper.net/training/index.html. In this example, the protocol is HTTP, the domain is juniper.net, and the
path to the file is training/index.html.
*POP3 (TCP Port: 110)
-Post Office Protocol version 3.
- Retrieves Email from an Email Server.
*NNTP (TCP Port: 119)
-Network News Transport Protocol
-Supports the posting and reading of articles on Usenet news servers.
*NTP (UDP Port: 123)
-Network Time Protocol
-Used by a network device to synchronize its clock with a time server (NTP Server).
*SNTP (UDP Port: 123)
-Simple Network Time Protocol
-Supports time synchronization among network devices, similar to Network Time Protocol (NTP), although SNTP
uses a less complex algorithm in its calculation and is slightly less accurate than NTP.
IMAP4 (TCP Port: 143)
-Internet Message Access Protocol version 4
-- Retrieves Email from an Email Server.
*LDAP (TCP Port: 389)
-Lightweight Directory Access Protocol
-Provides directory services (for example, a user directory – including username, password, e-mail, and phone
number information) to network clients.
*HTTPS (TCP Port: 443)
-Hypertext Transfer Protocol Secure
- Used to securely retrieve content from a Web Server.
*rsh (TCP Port: 514)
-Remote Shell
-Allows commands to be executed on a computer by a remote user.
*RTSP (TCP Port: 554, UDP Port: 554)
-Real Time Streaming Protocol
-Communicates with a media server (for example, a video server) and controls the playback of the server’s media
files.
*RDP (TCP Port: 3389)
-Remote Desktop Protocol
-A Microsoft Protocol that allows a user to view and control the desktop of a remote computer.
*SNMP (UDP Ports: 161-162)
-Simple Network Management Protocol
-SNMP is a network management protocol used in TCP/IP networks to monitor, configure, and
troubleshoot network resources from a centrally located SNMP management system.
*RIP (UDP Port 520)
*BGP (TCP Port 179)
Socket:
-A socket is an IP address combined with the respective TCP or UDP port [combination of IP address + TCP/UDP port].
-An application generates a socket by specifying the computer's IP address, the type of data transmission (TCP or UDP), and
the port controlled by the application.
-The IP address determines the destination computer and the port determines which application the data is delivered to.
-Data transfer from a client to a server is referred to as an upload and data from a server to a client as a download.
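A short sketch showing the socket pairs of a TCP connection over loopback: each end of the connection is identified by its own (IP address, port) combination.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# getsockname()/getpeername() expose the (IP, port) pair at each end.
print("client side:", cli.getsockname(), "->", cli.getpeername())
print("server side:", conn.getsockname(), "->", conn.getpeername())

for s in (cli, conn, srv):
    s.close()
```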
*Server:
-In a general networking context, any device that responds to requests from client applications is functioning as
a server. A server is usually a computer that contains information to be shared with many client systems. For
example, web pages, documents, databases, pictures, video, and audio files can all be stored on a server and
delivered to requesting clients. In other cases, such as a network printer, the print server delivers the client
print requests to the specified printer.
-Different types of server applications may have different requirements for client access. Some servers may require
authentication of user account information to verify if the user has permission to access the requested data or to use
a particular operation.
-When using an FTP client, for example, if you request to upload data to the FTP server, you may have permission
to write to your individual folder but not to read other files on the site.
-In a client/server network, the server runs a service, or process, sometimes called a server daemon. Like
most services, daemons typically run in the background and are not under an end user's direct control.
Daemons are described as "listening" for a request from a client, because they are programmed to respond
whenever the server receives a request for the service provided by the daemon. When a daemon "hears" a
request from a client, it exchanges appropriate messages with the client, as required by its protocol, and
proceeds to send the requested data to the client in the proper format.
*Peer-to-Peer Applications
-A peer-to-peer application (P2P), unlike a peer-to-peer network, allows a device to act as both a client and a
server within the same communication. In this model, every client is a server and every server a client. Both
can initiate a communication and are considered equal in the communication process. However, peer-to-peer
applications require that each end device provide a user interface and run a background service. When you launch a
specific peer-to-peer application it invokes the required user interface and background services. After that the
devices can communicate directly.
-Peer-to-peer applications can be used on peer-to-peer networks, client/server networks, and across the Internet.
-Example: instant messaging. Both clients can initiate a message and receive a message, and both clients can send and receive simultaneously.
End-to-End Communication:
-End to end is source to destination communication.
or
-An end-to-end connection refers to a connection between two systems across a switched network. For example, the
Internet is made up of a mesh of routers. Packets follow a hop-by-hop path from one router to the next to reach their
destinations. Each hop consists of a physical point-to-point link between routers. Therefore, a routed path consists
of multiple point-to-point links. In the ATM and frame relay environment, the end-to-end path is called a virtual
circuit that crosses a predefined set of point-to-point links.
Hop-to-Hop Communication:
-The communication between transit devices is hop-by-hop.
or
-With hop-by-hop transport, chunks of data are forwarded from node to node in a store-and-forward manner.
As hop-by-hop transport involves not only the source and destination node, but rather some or all of the intermediate
nodes as well, it allows data to be forwarded even if the path between source and destination is not permanently
connected during communication.
Flow Control:
-Allows a receiver to tell the sender to slow down its transmission rate.
-Data transfer rate is negotiated to prevent congestion.
Error Control:
-Allows a receiver to detect an error in a received frame and request the sender to retransmit the frame.
Mode of Transmission:
*Simplex
-One way communication
*Half Duplex
-Two-way communication, but not simultaneously
OR
- Half-duplex data transmission allows for communication in two directions, but only in one direction at a time. That
is, a device cannot receive and transmit data simultaneously. This functionality is similar to using a walkie-talkie:
if you are speaking, you cannot hear the person on the other end.
*Full Duplex
-Simultaneous two-way communication
OR
- Full-duplex data transmission allows for communication in two directions at the same time. That is, a device can
receive and transmit data simultaneously. This functionality is similar to using a telephone where you can talk and
listen at the same time.
Segmentation:
-Dividing the data stream into smaller pieces is called segmentation.
-Segmenting messages has two primary benefits:
1. First, by sending smaller individual pieces from source to destination, many different conversations can be
interleaved on the network.
2. Segmentation can increase the reliability of network communications. The separate pieces of each message need
not travel the same path across the network from source to destination. If a particular path becomes congested with
data or fails, individual pieces of the message can still be directed to the destination using an alternate path. If part of
the message fails to make it to the destination, only the missing parts need to be retransmitted.
Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks; data is fragmented when it must be transmitted
over a data link technology with a smaller MTU.
Maximum Transmission Unit (MTU):
-The Maximum Transmission Unit (MTU) is the fixed upper limit on the size of packet that can be sent in a single
frame.
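A minimal sketch of splitting a large message into MTU-sized pieces (1500 bytes is the common Ethernet MTU; real IP fragmentation also copies header information into every fragment, which this sketch ignores):

```python
def split_into_chunks(data: bytes, mtu: int):
    # Cut the byte stream into pieces no larger than the MTU.
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

message = b"x" * 4000                       # a 4000-byte message
pieces = split_into_chunks(message, 1500)
print([len(p) for p in pieces])             # [1500, 1500, 1000]
```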
Host:
-It refers to any device that is connected to a network.
Server:
-Servers are hosts that have software installed that enables them to provide information and services, like e-mail or
web pages, to other hosts on the network.
Client:
-Clients are hosts that have software installed that enables them to request and display the information obtained
from the server.
Network Media or Medium:
-Communication across a network is carried on a medium. The medium provides the channel over which the
message travels from source to destination.
Encoding/Decoding:
-The process of transforming data from one form to another form.
-On a wire, the data is encoded into electrical signals. On fiber optics, the data is encoded into pulses of light. In wireless
communication, the data is encoded into electromagnetic waves.
Or
-It is the process of converting binary data into signals based on the type of media.
-Ex:
*Copper media- Electrical Signal
*Wireless Media- Radio Frequency Waves
*Fiber Media – Light Pulses
Internet:
-The Internet is created by the interconnection of networks belonging to Internet Service Providers (ISPs). These ISP
networks connect to each other to provide access for millions of users all over the world.
Intranet:
-A system internal to an organization such as website that is explicitly used by internal employees or students. Can
be accessed internally or remotely.
OR
-It is often used to refer to a private connection of LANs and WANs that belongs to an organization, and is designed
to be accessible only by the organization's members, employees, or others with authorization.
OR
- An intranet is a private, internal network that uses the same IP-based protocols used in the Internet. Intranets often
use IP addresses from the private IP address space.
Channel:
-The medium used to transport information from a sender to a receiver.
Checksum/CRC
-A checksum, such as a Cyclic Redundancy Check (CRC), is a simple mathematical calculation performed
on each frame to ensure it hasn't been corrupted in transit.
Frame Check Sequence (FCS)
-Frame Check Sequence (FCS) is a 2-byte or 4-byte checksum computed over the frame to provide basic protection
against errors in transmission.
Or
- The Frame Check Sequence (FCS) field is used to determine if errors occurred in the transmission and reception of
the frame.
Or
- The Frame Check Sequence (FCS) field is used to determine if errors occurred in the transmission and
reception of the frame. Error detection is added at the Data Link layer because this is where data is transferred
across the media. The media is a potentially unsafe environment for data. The signals on the media could be subject
to interference, distortion, or loss that would substantially change the bit values that those signals represent. The
error detection mechanism provided by the use of the FCS field discovers most errors caused on the media.
To ensure that the content of the received frame at the destination matches that of the frame that left the
source node, a transmitting node creates a logical summary of the contents of the frame. This is known as the
cyclic redundancy check (CRC) value. This value is placed in the Frame Check Sequence (FCS) field of the
frame to represent the contents of the frame.
When the frame arrives at the destination node, the receiving node calculates its own logical summary, or
CRC, of the frame. The receiving node compares the two CRC values. If the two values are the same, the
frame is considered to have arrived as transmitted. If the CRC value in the FCS differs from the CRC
calculated at the receiving node, the frame is discarded.
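A small sketch of the FCS idea using CRC-32 (Ethernet uses a specific CRC-32 variant and bit ordering; this is only meant to show the compute-append-recompute-compare process):

```python
import zlib

# Sender: compute a CRC-32 over the frame contents and append it as the FCS.
frame_contents = b"dst-mac|src-mac|type|payload"      # simplified stand-in for a frame
fcs = zlib.crc32(frame_contents)
frame_on_wire = frame_contents + fcs.to_bytes(4, "big")

# Receiver: recompute the CRC over the received contents and compare it to the FCS.
received, received_fcs = frame_on_wire[:-4], int.from_bytes(frame_on_wire[-4:], "big")
if zlib.crc32(received) == received_fcs:
    print("frame accepted")
else:
    print("frame discarded")                          # mismatch -> assume corruption
```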
CSMA/CD:
-CSMA/CD stands for Carrier Sense Multiple Access with Collision Detection. CSMA/CD is the MAC protocol
used by Ethernet to control access to the physical cable segment. If a device has data to transmit, it listens on the
wire to see if any other device is transmitting. If the wire is idle, the device sends the data. All other devices on the
segment receive the transmission. CSMA/CD allows a network device to either transmit data or receive data, but not
both simultaneously. In some cases, two devices may begin transmitting at the same time, and a data collision may
occur. When a data collision occurs, CSMA/CD provides a way for devices to detect the collision and provides a
protocol for re-transmitting the data until the frame is successfully transmitted.
- If two hosts transmit a frame simultaneously, a collision will occur. This renders the collided frames unreadable.
Once a collision is detected, both hosts will send a 32-bit jam sequence to ensure all transmitting hosts are aware of
the collision. The collided frames are also discarded. Both devices will then wait a random amount of time before
resending their respective frames, to reduce the likelihood of another collision.
Or
-On a half-duplex connection, Ethernet utilizes Carrier Sense Multiple Access with Collision Detect (CSMA/CD)
to control media access. Carrier sense specifies that a host will monitor the physical link, to determine whether a
carrier (or signal) is currently being transmitted. The host will only transmit a frame if the link is idle, and the
Interframe Gap has expired. If two hosts transmit a frame simultaneously, a collision will occur. This renders the
collided frames unreadable. Once a collision is detected, both hosts will send a 32-bit jam sequence to ensure all
transmitting hosts are aware of the collision. The collided frames are also discarded.
-Both devices will then wait a random amount of time before resending their respective frames, to reduce the
likelihood of another collision. This is controlled by a backoff timer process.
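A toy simulation of the binary exponential backoff mentioned above: after the n-th collision a host waits a random number of slot times in the range 0 to 2^n - 1 (with the exponent commonly capped at 10):

```python
import random

SLOT_TIME_BITS = 512          # slot time for 10/100 Mbps Ethernet, in bit times

def backoff_slots(collision_count: int) -> int:
    # Pick a random wait from an interval that doubles with every collision.
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

for attempt in range(1, 5):
    slots = backoff_slots(attempt)
    print(f"collision {attempt}: wait {slots} slot times "
          f"({slots * SLOT_TIME_BITS} bit times)")
```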
*Slot time:
-Hosts must detect a collision before a frame is finished transmitting, otherwise CSMA/CD cannot function reliably.
This is accomplished using a consistent slot time, the time required to send a specific amount of data from one end
of the network and then back, measured in bits.
*Late Collision:
-A host must continue to transmit a frame for a minimum of the slot time. In a properly configured environment, a
collision should always occur within this slot time, as enough time has elapsed for the frame to have reached the far
end of the network and back, and thus all devices should be aware of the transmission. The slot time effectively
limits the physical length of the network – if a network segment is too long, a host may not detect a collision within
the slot time period. A collision that occurs after the slot time is referred to as a late collision.
-For 10 and 100Mbps Ethernet, the slot time was defined as 512 bits, or 64 bytes. Note that this is the equivalent of
the minimum Ethernet frame size of 64 bytes. The slot time actually defines this minimum. For Gigabit Ethernet,
the slot time was defined as 4096 bits.
CSMA/CA:
-In CSMA/Collision Avoidance (CSMA/CA), the device examines the media for the presence of a data signal. If the
media is free, the device sends a notification across the media of its intent to use it. The device then sends the data.
This method is used by 802.11 wireless networking technologies.
Router:
-A router is a Layer 3 device that allows communication between separate broadcast domains or networks. In order
to forward data from one network to another, routers must know how to reach other networks. A router stores
network location information in a routing table. Each entry in the routing table includes the destination network
number and indicates how the destination network may be reached by specifying which port or interface on the
router should be used and what “Next Hop” address should be used. When a router receives a packet, the router uses
the data's Layer 3 destination address and the routing table to make intelligent decisions on where to send the packet
next. Routers can read, but cannot modify, Layer 3 addresses. Routers change Layer 2 addresses in data whenever
they route data.
Routing Table:
-The routing table is where a router stores network location information including all possible destination network
numbers and how to reach them. Each entry in the routing table includes the destination network number, the next
hop along the way to the destination network, and which port or interface on the router should be used to reach the
next hop.
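A toy routing-table lookup using longest-prefix match (the networks, next hops, and interface names below are made up for the example):

```python
import ipaddress

# Each entry: destination network, next hop, egress interface.
routing_table = [
    ("0.0.0.0/0",    "203.0.113.1", "ge-0/0/0"),   # default route
    ("10.0.0.0/8",   "10.1.1.2",    "ge-0/0/1"),
    ("10.20.0.0/16", "10.1.2.2",    "ge-0/0/2"),
]

def lookup(destination_ip: str):
    dst = ipaddress.ip_address(destination_ip)
    matches = [entry for entry in routing_table
               if dst in ipaddress.ip_network(entry[0])]
    # The most specific (longest prefix) matching entry wins.
    return max(matches, key=lambda entry: ipaddress.ip_network(entry[0]).prefixlen)

print(lookup("10.20.5.9"))   # ('10.20.0.0/16', '10.1.2.2', 'ge-0/0/2')
print(lookup("8.8.8.8"))     # falls back to the default route
```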
Ethernet:
-Ethernet is the most common set of rules controlling network communications for local area networks. It is a set of
standards that define rules such as frame format as well as how computers communicate with each other over a
single wire shared by all devices on the network. These rules give any new device attached to the wire the ability to
communicate with any other attached device.
Or
* Ethernet is a LAN technology that provides data-link and physical specifications for controlling access to a
shared network medium.
-This allowed two or more hosts to use the same physical network medium.
OR
*Ethernet is a LAN technology that functions at the data link layer. Ethernet uses the Carrier Sense Multiple
Access/Collision Detection (CSMA/CD) mechanism to send information in a shared environment.
* CSMA/CD defines how the sending stations can recognize the collisions and retransmit the frame.
*Ethernet standards define both the Layer 2 protocols and the Layer 1 technologies.
*Ethernet operates in the lower two layers of the OSI model: the Data Link layer and the Physical layer.
-Ethernet at Layer 1 involves signals, bit streams that travel on the media, physical components that put signals on
media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between
devices.
-Ethernet at Layer 2 prepares the data for transmission over the media.
Media Access Control:
-Regulating the placement of data frames onto the media is known as media access control.
Forwarding Table/MAC Address Table:
-A forwarding table or MAC address table is where a switch stores address and location information for all devices
connected directly to its ports.
Frame:
-A frame is one unit of data encapsulated at Layer 2, or the Data Link Layer. Each frame is divided into three parts:
the header, the data, and the trailer. The frame header contains the data's destination and source Layer 2 addresses. It
also indicates which Layer 3 protocol should be used to process the data on the receiving computer. (In the examples
in this course, the IP protocol is used.) The frame trailer is a checksum, which is used to verify data integrity.
Packet:
-A packet is one unit of data encapsulated at Layer 3 (also known as the Network Layer in the OSI and Five-Layer
models, or the Internet Layer in the TCP/IP Model). Each packet contains a header followed by the data. The
packet's header specifies the data's source and destination IP addresses. Each packet header also specifies the IP
protocol number, which indicates whether the data should be processed with the UDP or TCP protocol on the
receiving computer.
Segment:
-A segment is one unit of data encapsulated at Layer 4, or the Transport Layer. Each segment is divided into two
parts, a header followed by data. The segment header contains the data's destination port number, which indicates
which application layer protocol should be used to process the data on the receiving computer. It also specifies a
source port number, which uniquely identifies the connection on the sending side, allowing the receiving computer
to carry on multiple sessions with the sending computer without intermixing the data.
Switch:
-A switch is a Layer 2 network device that enables full-duplex data transmission. Because switches dedicate a single
port to each end-user device, collision domains have only two devices—the end-user device and the switch. When
connected to a switch, an end-user device can send and receive data simultaneously. A switch builds a MAC address
table that it uses to manage traffic flow. Switches operate based on reading Layer 2 frame information only. They
cannot change Layer 2 addresses, and they do not have any access to Layer 3 data. In addition to basic Ethernet
connectivity, switches make possible virtual LANs.
Bridge:
-A bridge is a Layer 2 network device that connects two or more physical cable segments to create one larger
network. Each side of the bridge becomes a separate collision domain or network segment. So, a bridge can be used
to break up a large network into separate collision domains. A bridge builds a MAC address table that it uses to
manage traffic flow. When a bridge receives data from an unknown MAC address, it adds that address to its MAC
address table and notes the port associated with that address. Then, if a bridge later receives data for that address, it
will know on which port it should forward the data. If a bridge receives data for an unknown destination address, it
will forward the data on all ports, which is known as flooding. Bridges operate based on reading Layer 2 frame
information only. They cannot change Layer 2 addresses, and they do not have any access to Layer 3 data.
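Below is a minimal Python sketch (illustrative only, not any vendor's implementation) of the MAC learning and
flooding behaviour described above for a switch or bridge; the MAC addresses and port numbers are made up.

# Minimal sketch of how a switch/bridge learns source MAC addresses
# and forwards or floods frames (illustrative, not a real implementation).
class MacTable:
    def __init__(self):
        self.table = {}          # MAC address -> port number

    def learn(self, src_mac, in_port):
        # Record (or refresh) which port the source address was seen on.
        self.table[src_mac] = in_port

    def forward(self, dst_mac, in_port, all_ports):
        # Known unicast: send out the single learned port.
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return [] if out == in_port else [out]
        # Unknown destination (or broadcast): flood on all other ports.
        return [p for p in all_ports if p != in_port]

switch = MacTable()
switch.learn("aa:aa:aa:aa:aa:aa", in_port=1)                       # host A seen on port 1
print(switch.forward("bb:bb:bb:bb:bb:bb", 1, [1, 2, 3]))           # unknown -> flood [2, 3]
switch.learn("bb:bb:bb:bb:bb:bb", in_port=2)
print(switch.forward("bb:bb:bb:bb:bb:bb", 1, [1, 2, 3]))           # known -> [2]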
Hub:
-A hub is a Layer 1 device that takes a signal that it receives from one connected device and passes it along or
repeats it to all other connected devices. A hub allows each device to use its own twisted-pair cable to connect to a
port on the hub. If a cable fails, it will impact only one device, and if one device is causing trouble on the network,
that individual device can easily be unplugged. A hub is not an intelligent network device. It does not look at the
MAC addresses or data in the Ethernet frame and does not perform any type of filtering or routing of the data. It is
simply a junction that joins all the different devices together. Even though each device has its own cable connecting
it to the hub, access to the network still operates by CSMA/CD, and collisions can occur on the shared bus inside the
hub.
Repeater:
-A repeater is a physical layer device used to connect two or more separate physical cable segments together,
making it act like one long cable. A repeater is a simple hardware device that regenerates electrical signals, sending
all frames from one physical cable segment to another.
PSH/PUSH Flag:
-When you send data, your TCP buffers it. So if you send a single character it won't be sent immediately; TCP waits to
see if you've got more. But maybe you want it to go straight on the wire: this is where the PUSH function comes in. If
you PUSH data, your TCP will immediately create a segment (or a few segments) and push them.
But the story doesn't stop here. When the peer TCP receives the data, it will naturally buffer it; it won't disturb
the application for each and every byte. Here's where the PSH flag kicks in. If a receiving TCP sees the PSH flag,
it will immediately push the data to the application.
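A short sketch of a related knob: most socket APIs do not let an application set the PSH flag directly, so the closest
portable control is disabling Nagle's algorithm with TCP_NODELAY, which makes small writes go out immediately
instead of being coalesced. The host name and port below are placeholders.

import socket

# Sketch: applications normally cannot set PSH directly; TCP_NODELAY is the
# closest portable knob (small writes are sent right away instead of buffered).
# "example.com" / 80 are placeholder values.
s = socket.create_connection(("example.com", 80))
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle's algorithm
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(s.recv(200))
s.close()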
TCP State:
*To keep track of all the different events happening during connection establishment, connection termination, and
data transfer, TCP is specified as a finite state machine.
or
TCP State machine:
State - Description
CLOSE-WAIT - Waits for a connection termination request from the local user.
CLOSED - Represents no connection state at all.
CLOSING - Waits for a connection termination request acknowledgment from the remote host.
ESTABLISHED - Represents an open connection; data received can be delivered to the user. The normal state for the
data transfer phase of the connection.
FIN-WAIT-1 - Waits for a connection termination request from the remote host, or an acknowledgment of the
connection termination request previously sent.
FIN-WAIT-2 - Waits for a connection termination request from the remote host.
LAST-ACK - Waits for an acknowledgment of the connection termination request previously sent to the remote
host (which includes an acknowledgment of its connection termination request).
LISTEN - Waits for a connection request from any remote TCP and port.
SYN-RECEIVED - Waits for a confirming connection request acknowledgment after having both received and sent a
connection request.
SYN-SENT - Waits for a matching connection request after having sent a connection request.
TIME-WAIT - Waits for enough time to pass to be sure the remote host received the acknowledgment of its
connection termination request.
Or
LISTEN
-(server) represents waiting for a connection request from any remote TCP and port.
SYN-SENT
-(client) represents waiting for a matching connection request after having sent a connection request.
SYN-RECEIVED
-(server) represents waiting for a confirming connection request acknowledgment after having both received and
sent a connection request.
ESTABLISHED
(both server and client) represents an open connection, data received can be delivered to the user. The normal state
for the data transfer phase of the connection.
FIN-WAIT-1
-(both server and client) represents waiting for a connection termination request from the remote TCP, or an
acknowledgment of the connection termination request previously sent.
FIN-WAIT-2
-(both server and client) represents waiting for a connection termination request from the remote TCP.
CLOSE-WAIT
-(both server and client) represents waiting for a connection termination request from the local user.
CLOSING
-(both server and client) represents waiting for a connection termination request acknowledgment from the remote
TCP.
LAST-ACK
-(both server and client) represents waiting for an acknowledgment of the connection termination request previously
sent to the remote TCP (which includes an acknowledgment of its connection termination request).
TIME-WAIT
-(either server or client) represents waiting for enough time to pass to be sure the remote TCP received the
acknowledgment of its connection termination request. [According to RFC 793 a connection can stay in TIME-
WAIT for a maximum of four minutes, i.e. two maximum segment lifetimes (2 x MSL, where one MSL is two minutes).]
CLOSED
-(both server and client) represents no connection state at all.
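Below is a simplified Python sketch of a few of the transitions listed above (only the common client-side open and
close path). The event names ("active_open", "rcv_syn_ack", ...) are informal shorthand, not terms from RFC 793.

# Simplified sketch of a few TCP state transitions (client-side happy path only).
TRANSITIONS = {
    ("CLOSED",      "active_open"):   "SYN-SENT",      # send SYN
    ("SYN-SENT",    "rcv_syn_ack"):   "ESTABLISHED",   # send ACK
    ("ESTABLISHED", "close"):         "FIN-WAIT-1",    # send FIN
    ("FIN-WAIT-1",  "rcv_ack"):       "FIN-WAIT-2",
    ("FIN-WAIT-2",  "rcv_fin"):       "TIME-WAIT",     # send ACK
    ("TIME-WAIT",   "timeout_2msl"):  "CLOSED",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)      # ignore events that don't apply

state = "CLOSED"
for event in ["active_open", "rcv_syn_ack", "close", "rcv_ack", "rcv_fin", "timeout_2msl"]:
    state = step(state, event)
    print(event, "->", state)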
TCP Header:
*The TCP header varies in length. Its minimum length, when no options are used, is 20 bytes; options can add up to
40 bytes, for a maximum header length of 60 bytes.
1. Source Port [16 bits]:
- Identifies which application is sending the information.
2. Destination Port [16 bits]:
-Identifies which application is to receive the information.
3. Sequence Number [32 bits]:
-Maintains reliability and sequencing.
4. Acknowledgement Number [32 bits]:
-Used to acknowledge received information and identifies the sequence number the source next expects to receive
from the destination.
5. Header Length/Data Offset [4 bits]:
-Specifies the length of the TCP header in 32-bit words, which indicates where the data begins.
6. Reserved Field [3 bits]:
-These bits are always set to zero.
7. Code Bits [8 bits] / Flags:
- Eight 1-bit flags that are used for data flow and connection control.
-To understand the three-way handshake process, it is important to look at the various values that the two hosts
exchange. Within the TCP segment header, these 1-bit fields contain control information used to manage the TCP
processes. (The original specification defined six flags; the CWR and ECE bits were added later for congestion
notification.)
-These fields are referred to as flags, because each field is only 1 bit and, therefore, has only two values: 1 or 0.
When a bit value is set to 1, it indicates what control information is contained in the segment.
1. Congestion Window Reduced (CWR)
-The sender reduced its sending rate.
2. ECN-Echo (ECE)
-The sender received an earlier congestion notification.
3. Urgent (URG)
- The URG flag is used to inform a receiving station that certain data within a segment is urgent and should be
prioritized. If the URG flag is set, the receiving station evaluates the urgent pointer. This pointer indicates how much
of the data in the segment, counting from the first byte, is urgent.
4. Acknowledgment (ACK)
-When set to 1, indicates that this segment is carrying an Acknowledgment, and the value of the Acknowledgement
Number field is valid and carrying the next sequence expected from the destination of this segment.
5. Push (PSH)
-The receiver should pass the data to the application as soon as possible.
6. Reset (RST)
-The sender has encountered a problem and wants to reset the connection.
7. Synchronize (SYN)
-Synchronize sequence number to initiate a connection.
8. Final (FIN)
-The sender of the segment is requesting that the connection be closed.
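Below is a small Python sketch of how the flag bits listed above sit in the 14th byte of the TCP header and how they
can be tested with bit masks. The 20-byte header bytes are made up for illustration (SYN and ACK set).

import struct

# Sketch: testing TCP flag bits with masks (sample header is made up).
FLAG_MASKS = {"CWR": 0x80, "ECE": 0x40, "URG": 0x20, "ACK": 0x10,
              "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

header = struct.pack("!HHIIBBHHH",
                     12345, 80,          # source / destination port
                     1000, 2000,         # sequence / acknowledgment number
                     5 << 4,             # data offset = 5 words (20 bytes), reserved = 0
                     0x12,               # flags byte: ACK (0x10) + SYN (0x02)
                     65535, 0, 0)        # window, checksum, urgent pointer

flags_byte = header[13]
set_flags = [name for name, mask in FLAG_MASKS.items() if flags_byte & mask]
print(set_flags)                         # ['ACK', 'SYN']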
>For example (Option-Kind encoding used in the TCP Options field):
-Option-Kind byte of 0x01 indicates that this is a No-Op option used only for padding, and does not have an Option-
Length or Option-Data byte following it.
-An Option-Kind byte of 0 is the End Of Options option, and is also only one byte.
-An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size option, and will be followed by a
byte specifying the length of the MSS field (which should be 0x04). Note that this length is the total length of the
given option field, including the Option-Kind and Option-Length bytes. So while the MSS value itself occupies two
bytes, the length of the field will be 4 bytes (2 bytes of value plus the kind and length bytes). In short, an MSS option
field with a value of 0x05B4 will show up as (0x02 0x04 0x05B4) in the TCP options section.
-Some options may only be sent when SYN is set; they are indicated below as [SYN].
Kind 0 – 1 byte – End of options list.
Kind 1 – 1 byte – No operation (NOP, padding). This may be used to align option fields on 32-bit
boundaries for better performance.
Kind 2 – 4 bytes – Maximum segment size. [SYN]
Kind 3 – 3 bytes – Window scale. [SYN]
Kind 4 – 2 bytes – Selective Acknowledgement permitted. [SYN]
Kind 8 – 10 bytes – Timestamp and echo of previous timestamp.
Kind 14 – 3 bytes – TCP Alternate Checksum Request. [SYN]
Kind 15 – variable – TCP Alternate Checksum Data.
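Below is a minimal Python sketch of walking the options bytes using the Kind/Length rules described above (kinds 0
and 1 are single bytes; every other option carries a length that covers its kind, length, and data bytes). The sample
bytes are made up: an MSS of 1460, a NOP, a window scale of 7, and end-of-list.

# Sketch: walking TCP option bytes using the Kind/Length encoding above.
# Sample bytes: MSS=1460 (02 04 05 B4), NOP (01), Window Scale=7 (03 03 07), EOL (00).
options = bytes.fromhex("020405b401030307" + "00")

def parse_options(data):
    i, out = 0, []
    while i < len(data):
        kind = data[i]
        if kind == 0:                      # End of Option List
            out.append(("EOL",))
            break
        if kind == 1:                      # No-Operation (padding)
            out.append(("NOP",))
            i += 1
            continue
        length = data[i + 1]               # total length incl. kind and length bytes
        value = data[i + 2:i + length]
        out.append((kind, length, value))
        i += length
    return out

print(parse_options(options))
# [(2, 4, b'\x05\xb4'), ('NOP',), (3, 3, b'\x07'), ('EOL',)]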
Or
End of Option List (EOL) (1 Byte)
-It is used as padding and indicates the end of the options.
No Operation (NOP) (1 Byte)
-The no-operation (NOP) option is also a 1-byte option used as a filler. However, it normally comes before another
option to help align it in a four-word slot. For example, in Figure 15.43 it is used to align one 3-byte option such as
the window scale factor and one 10-byte option such as the timestamp.
-It is a byte added between options to align them on a 4-byte (32-bit) word boundary. Multiple NOPs can be used to
fill out a word.
#Padding bytes used in between options are called NOP bytes. Padding bytes used at the end of the options, where
the data begins, are called EOL (end-of-option-list) bytes.
-The value of the window scale factor can be determined only during connection establishment; it does not change
during the connection.
PAWS
-The timestamp option has another application, protection against wrapped sequence numbers (PAWS)
Options. 0 to 40 bytes.
Options occupy space at the end of the TCP header. All options are included in the checksum. An option may begin
on any byte boundary. The TCP header must be padded with zeros to make the header length a multiple of 32 bits.
When IPv6 is used as the network protocol, the MSS is calculated as the maximum packet size minus 60 bytes. An
MSS of 65535 should be interpreted as infinity.
SACK Permitted
The TCP SACK permitted option may be sent in a SYN by a TCP that has been extended to receive the SACK
option once the connection has opened. It MUST NOT be sent on non-SYN segments.
End of Option List
The TCP End of Option List option is used to indicate that the last option in the list has been reached.
No Operation
The TCP No Operation option may be used to align subsequent options on a word boundary.
Maximum Segment Size
The TCP Maximum Segment Size option can be used to specify the maximum segment size that the receiver should
use.
RFC 1323, pg 8:
The window scale extension expands the definition of the TCP window to 32 bits and then uses a scale factor to
carry this 32 bit value in the 16 bit Window field of the TCP header (SEG.WND in RFC-793). The scale factor is
carried in a new TCP option, Window Scale. This option is sent only in a SYN segment (a segment with the SYN bit
on), hence the window scale is fixed in each direction when a connection is opened. (Another design choice would
be to specify the window scale in every TCP segment. It would be incorrect to send a window scale option only
when the scale factor changed, since a TCP option in an acknowledgement segment will not be delivered reliably
(unless the ACK happens to be piggy-backed on data in the other direction). Fixing the scale when the connection is
opened has the advantage of lower overhead but the disadvantage that the scale factor cannot be changed during the
connection.)
RFC 1323, pg 9:
The three-byte Window Scale option may be sent in a SYN segment by a TCP. It has two purposes: (1) indicate that
the TCP is prepared to do both send and receive window scaling, and (2) communicate a scale factor to be applied to
its receive window. Thus, a TCP that is prepared to scale windows should send the option, even if its own scale
factor is 1. The scale factor is limited to a power of two and encoded logarithmically, so it may be implemented by
binary shift operations.
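A short sketch of that binary-shift interpretation: the effective receive window is the advertised 16-bit Window field
shifted left by the scale factor exchanged in the SYN options. The numbers below are illustrative.

# Sketch: effective window = advertised 16-bit window << scale factor (RFC 1323).
advertised_window = 65535        # 16-bit Window field from the TCP header
scale_factor = 7                 # shift count carried in the Window Scale option (SYN only)

effective_window = advertised_window << scale_factor
print(effective_window)          # 8388480 bytes (~8 MB)

# Maximum possible with the largest allowed shift (14): close to 1 GiB.
print(65535 << 14)               # 1073725440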
Timestamps
The TCP Timestamp option obsoletes the TCP Echo request and Echo reply options.
The timestamps are used for two distinct mechanisms: RTTM (Round Trip Time Measurement) and PAWS (Protect
Against Wrapped Sequences).
TCP Checksum:
*A 12-byte TCP pseudo header is created before checksum calculation. This pseudo header contains information
from both the TCP header and the IP header into which the TCP segment will be encapsulated. (The full procedure
is described below.)
To provide basic protection against errors in transmission, TCP includes a 16-bit Checksum field in its header.
The idea behind a checksum is very straight-forward: take a string of data bytes and add them all together.
Then send this sum with the data stream and have the receiver check the sum.
In TCP, a special algorithm is used to calculate this checksum by the device sending the segment; the same
algorithm is then employed by the recipient to check the data it received and ensure that there were no
errors.
Instead of computing the checksum over only the actual data fields of the TCP segment, a 12-byte TCP pseudo
header is created prior to checksum calculation. This header contains important information taken from
fields in both the TCP header and the IP datagram into which the TCP segment will be encapsulated. The
TCP pseudo header has the format shown
Once this 96-bit [12 byte] header has been formed, it is placed in a buffer, following which the TCP segment
itself is placed. Then, the checksum is computed over the entire set of data (pseudo header plus TCP
segment). The value of the checksum is placed into the Checksum field of the TCP header, and the pseudo
header is discarded—it is not an actual part of the TCP segment and is not transmitted.
[Note: To calculate the TCP segment header’s Checksum field, the TCP pseudo header is first constructed and
placed, logically, before the TCP segment. The checksum is then calculated over both the pseudo header and the
TCP segment. The pseudo header is then discarded.]
When the TCP segment arrives at its destination, the receiving TCP software performs the same calculation. It forms
the pseudo header, prepends it to the actual TCP segment, and then performs the checksum (setting
the Checksum field to zero for the calculation as before). If there is a mismatch between its calculation and the value
the source device put in the Checksum field, this indicates that an error of some sort occurred and the segment is
normally discarded.
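Below is a minimal Python sketch of the calculation described above: build the 12-byte pseudo header, place it in
front of the segment (with the Checksum field zeroed), and take the 16-bit one's-complement sum. The addresses
and segment bytes are made up.

import socket
import struct

# Sketch of the TCP checksum over pseudo header + segment (illustrative values).
def ones_complement_sum(data):
    if len(data) % 2:
        data += b"\x00"                          # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                           # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def tcp_checksum(src_ip, dst_ip, segment):
    pseudo = struct.pack("!4s4sBBH",
                         src_ip, dst_ip,         # source / destination IPv4 addresses
                         0, 6,                   # zero byte, protocol number 6 (TCP)
                         len(segment))           # TCP length (header + data)
    return ones_complement_sum(pseudo + segment)

src = socket.inet_aton("192.0.2.1")
dst = socket.inet_aton("192.0.2.2")
# 20-byte SYN segment with the Checksum field set to zero for the calculation.
segment = struct.pack("!HHIIBBHHH", 12345, 80, 0, 0, 5 << 4, 0x02, 65535, 0, 0)
print(hex(tcp_checksum(src, dst, segment)))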
*********************************************************************************************
**
Sequence Number:
-32 bit number used for byte level numbering of TCP segments. If you are using TCP, each byte of data is assigned
a sequence number. If SYN flag is set (during the initial 3 way handshake connection initiation) then this is the
initial sequence number. The sequence of the actual first data byte will then be this sequence number plus one.
-For example, suppose the first segment of data sent by a device carries the sequence number 50000 in this field. If
this segment has 500 bytes of data in it (bytes 50000 through 50499), then the next segment sent by this device will
have the sequence number 50000 + 500 = 50500.
Or
-The data bytes received from the upper layer are given sequence numbers. The number of the first byte becomes the
sequence number of the segment.
Or
-It identifies data within a segment rather than the segment itself.
-It allows the receiving host to reassemble the data from multiple segments in the correct order, upon arrival.
-It allows receipt of data in a segment to be acknowledged.
-When establishing a connection, a host will choose a 32-bit Initial Sequence Number (ISN).
-Receiver responds to this sequence number with acknowledgement number, set to sequence number +1.
Or
-One way TCP provides a reliable session between devices is by using sequence numbers and acknowledgments. Every TCP
segment sent has a sequence number in it. This not only helps the destination reorder any incoming segments that
arrived out of order, but it also provides a method of verifying whether all the sent segments were received. The
destination responds to the source with an acknowledgment indicating receipt of the sent segments . Before TCP can
provide a reliable session, it has to go through a synchronization phase—the three-way handshake.
Acknowledgement Number:
-32 bit number field which indicates the next sequence number that the sending device is expecting from the other
device.
Or
It indicates the next sequence number of the segment that the sending device is expecting from the other device.
-TCP keeps track of different information about each connection. TCP sets up a complex data structure known as the
Transmission Control Block (TCB) to do this, which maintains information about the local and remote socket
numbers, the send and receive buffers, security and priority values, and the current segment in the queue. The
Transmission Control Block also manages send and receive sequence numbers.
TCP Window:
A TCP window is the amount of unacknowledged data a sender can send on a particular connection before it gets an
acknowledgment back from the receiver that it has received some of the data.
- The larger the window size for a session, the fewer acknowledgments are sent, making the session more
efficient. Too small a window size can affect throughput, since a host has to send a small number of segments, wait
for an acknowledgment, send another batch of small segments, and wait again.
[Note: Reducing the window size increases reliability but reduces throughput.]
- TCP allows the regulation of the flow of segments, ensuring that one host doesn’t flood another host with too many
segments, overflowing its receiving buffer. TCP uses a sliding windowing mechanism to assist with flow control.
-For example, if the window size is 1, a host can send only one segment and must then wait for a corresponding
acknowledgment before sending the next segment. If the window size is 20, a host can send 20 segments and must
wait for the single acknowledgment of the sent 20 segments before sending 20 additional segments.
-The window size keeps increasing as long as the receiver can receive the total number of segments in the window.
The working of the TCP sliding window mechanism can be explained as below:
The sending device can send all segments within the TCP window size (as specified in the TCP header) without
receiving an ACK, and should start a timeout timer for each of them.
The receiving device should acknowledge each segment it receives, indicating the sequence number of the last well-
received segment. After receiving the ACK from the receiving device, the sending device slides the window to the
right.
Example:
In this case, the sending device can send up to 5 TCP segments without receiving an acknowledgement from the
receiving device. After receiving the acknowledgement for segment 1 from the receiving device, the sending device
can slide its window one TCP segment to the right and can then transmit segment 6 as well.
If any TCP segment is lost on its way to the destination, the receiving device cannot acknowledge it to the sender.
Suppose that, during transmission, all segments reach the destination except segment 3. The receiving device can
acknowledge only up to segment 2. At the sending device a timeout will occur and it will retransmit the lost segment
3. The receiving device has now received all the segments, since only segment 3 was lost, so it will send the ACK
for segment 5.
The acknowledgement (ACK) for segment 5 assures the sender that the receiver has successfully received all
segments up to 5.
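Below is a toy Python sketch of the behaviour in the example above, with segments numbered in whole units as in
the text: at most `window` segments may be unacknowledged at once, and each cumulative ACK slides the window
to the right. The counts are illustrative.

# Toy sketch of the sliding window described above (segment-numbered, not byte-numbered).
def sliding_window_send(total_segments, window):
    base = 1                                   # oldest unacknowledged segment
    next_seg = 1                               # next segment to send
    while base <= total_segments:
        # Send everything allowed by the current window.
        while next_seg < base + window and next_seg <= total_segments:
            print("send segment", next_seg)
            next_seg += 1
        # Receiver cumulatively acknowledges the oldest outstanding segment.
        print("  ACK for segment", base, "received -> window slides right")
        base += 1

sliding_window_send(total_segments=7, window=5)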
TCP uses a byte-level numbering system for communication. If the sequence number for a TCP segment at
any instance was 5000 and the segment carries 500 bytes, the sequence number for the next segment will be
5000 + 500 = 5500. That means a TCP segment carries only the sequence number of the first byte in the segment.
The Window size is expressed in number of bytes and is determined by the receiving device when the connection is
established; it can vary later. You might have noticed when transferring big files from one Windows machine to
another that initially the time-remaining calculation shows a large value and comes down later.
Windowing:
-TCP provide a Windowing system to regulate the flow of data between computers. With this system of flow
control, the receiving computer can notify the sending computer when to speed up or slow down the
transmission of data.
Or
Sliding Window:
-The window size keeps increasing as long as the receiver can receive the total number of segments in the window.
-In this case, the sending device can send up to 5 TCP segments without receiving an acknowledgement from the
receiving device. After receiving the acknowledgement for segment 1 from the receiving device, the sending device
can slide its window one TCP segment to the right and can then transmit segment 6 as well.
Or
-In the sliding window mechanism, the sending host keeps increasing its window size as long as the receiver is
capable of receiving the window. For example, if host A sends 3 segments in a window and the receiver sends a
cumulative acknowledgment indicating all three segments have been received, host A slides the window to the
right, that is, it increases the window size by one segment. In this way it keeps increasing the window size as long
as the receiver acknowledges all the segments in the window. If the receiver indicates that it has not received
particular segments, the sender reduces the window size, that is, it slides the window towards the left and sends with
the reduced window size.
Window Size:
-It is used for flow control and indicates the amount of data allowed to be sent before waiting for an
acknowledgement from the destination.
-Without the window scale option, the window size cannot exceed 65,535 bytes. (By contrast, the default MSS is
536 bytes.)
-The window size can be dynamically changed for flow control, preventing buffer congestion on the receiving host.
Or
This field is used by the receiver to indicate to the sender the amount of data that it is able to accept.
The Window size field is the key to efficient data transfers and flow control.
Bandwidth-Delay Product:
-The maximum number of bits that can be present on a network segment at any one time.
Example: We have a FastEthernet link = 100 Mbps / Latency = 100 ms = 0.1 sec
Formula:
Bandwidth-Delay Product= A Segment’s BW (in bits/sec) X Latency a packet experience on the segment (in
sec) [PING – Round trip time response gives delay/latency]
= 10,000,000 bits
-Maximum of 10,000,000 bit, 10 million bits can be on this network segment at any one time.
Window Scaling:
- The TCP window scale option is an option to increase the receive window size allowed in Transmission Control
Protocol above its former maximum value of 65,535 bytes.
- The window scaling option may be sent only once during a connection by each host, in its SYN packet.
Or
For more efficient use of high bandwidth networks, a larger TCP window size may be used. The TCP window size
field controls the flow of data and its value is limited to between 2 and 65,535 bytes.
The TCP window scale option is an option used to increase the maximum window size from 65,535 bytes to 1
gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake.
Lastly, for those who deal with Cisco routers, you might be interested to know that you are able to configure the
Window size on Cisco routers running the Cisco IOS v9 and greater. Routers with versions 12.2(8)T and above
support Window Scaling, a feature that's automatically enabled for Window sizes above 65,535, with a maximum
value of 1,073,741,823 bytes!
*Sequence numbers are used to:
-Identify data within a segment rather than the segment itself.
-Allow the receiving host to reassemble the data from multiple segments in the correct order upon arrival, putting
segments back in the correct order if they arrive out of order.
-Allow receipt of data in a segment to be acknowledged.
Acknowledgment Number:
-It indicates the next sequence number that the receiver expects to receive.
Direct Transmission:
TCP connections are logical point-to-point connections between two application layer protocols. This type of
communication is also referred to as a direct transmission.
-If we get to the point where we are sending too aggressively and a packet does get dropped, the receiver's ACKs
will keep asking for the same segment; the sender can then conclude that a segment it already sent must have been
dropped because it was sending too aggressively. That causes the sender to reduce its window size. This ramp-up-
and-back-off behaviour is part of TCP's congestion control, commonly associated with TCP slow start.
Or
Retransmission Timer:
-Every time a segment is sent, the sending host starts a retransmission timer, dynamically determined (and adjusted)
based on the round-trip time between the two hosts.
-If an ACK is not received before the retransmission timer expires, the segment is resent, ensuring guaranteed
delivery even when segments are lost.
Or
-If the sending host doesn’t receive an ACK for the remaining packets within the interval set by the retransmission
timer, the packets are retransmitted. But retransmission increases the network load.
-TCP employs a positive acknowledgment with retransmission (PAR) mechanism to recover from lost segments.
The same segment will be repeatedly re-sent, with a delay between each segment, until an acknowledgment is
received from the destination. The acknowledgment contains the sequence number of the segment received
and verifies receipt of all segments sent prior to the retransmission process. This eliminates the need for
multiple acknowledgments and resending acknowledgments.
Or
-For reasons of efficiency, ACKs are sent only for correctly received sequences and not for each individual packet.
Or
-TCP utilizes PAR to control data flow and confirm data delivery.
-If timer expires before source receives ACK, source retransmits the packet and restarts the timer.
Positive Acknowledgement:
-The receiver explicitly notifies the sender which segments were received correctly. Positive Acknowledgement
therefore also implicitly informs the sender which packets were not received and provides detail on packets which
need to be retransmitted.
-Positive Acknowledgment with Re-Transmission (PAR), is a method used by TCP to verify receipt of transmitted
data. PAR operates by re-transmitting data at an established period of time until the receiving host acknowledges
reception of the data.
Negative Acknowledgement:
-The receiver explicitly notifies the sender which segments were received incorrectly and thus may need to be
retransmitted
Selective Acknowledgment:
-TCP may experience poor performance when multiple packets are lost from one window of data, because a
cumulative acknowledgement gives the sender limited information: the TCP sender can learn about only a single
lost packet per round-trip time. In such cases the sender could choose to retransmit packets early, before it receives
acknowledgments for the remaining packets; however, the retransmitted segments may have already been successfully
received.
To resolve this problem, with selective acknowledgments, the data receiver can inform the sender about all segments
that have arrived successfully, so the sender need retransmit only the segments that have actually been lost.
For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during
transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to
9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to
resend all 10,000 bytes.
To solve this problem TCP employs the selective acknowledgment (SACK) option which allows the receiver to
acknowledge discontinuous blocks of packets which were received correctly, in addition to the sequence number
of the last contiguous byte received successfully, as in the basic TCP acknowledgment. The acknowledgement can
specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence
numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send
SACK with sequence numbers 1000 and 9999. The sender thus retransmits only the first packet, bytes 0 to 999.
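Below is a small Python sketch of how a sender could combine the cumulative ACK with SACK blocks to work out
which byte ranges still need retransmission. The numbers reuse the 10,000-byte example above (the first 1,000-byte
packet was lost); the function and parameter names are my own.

# Sketch: given a cumulative ACK and SACK blocks (start, end) of bytes received
# out of order, compute the byte ranges that still need retransmission.
def missing_ranges(ack, sack_blocks, send_limit):
    blocks = sorted(sack_blocks)
    gaps, cursor = [], ack                     # everything below `ack` is acknowledged
    for start, end in blocks:
        if start > cursor:
            gaps.append((cursor, start - 1))   # hole between acknowledged data and this block
        cursor = max(cursor, end + 1)
    if cursor <= send_limit:
        gaps.append((cursor, send_limit))      # tail beyond the last SACK block
    return gaps

print(missing_ranges(ack=0, sack_blocks=[(1000, 9999)], send_limit=9999))
# [(0, 999)]  -> only the first packet has to be resent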
-The SACK-permitted option of two bytes is used only during connection establishment. The host that sends the
SYN segment adds this option to show that it can support the SACK option. If the other end, in its SYN + ACK
segment, also includes this option, then the two ends can use the SACK option during data transfer. Note that the
SACK-permitted option is not allowed during the data transfer phase.
- The SACK option, of variable length, is used during data transfer only if both ends agree (if they have exchanged
SACK-permitted options during connection establishment). The option includes a list for blocks arriving out of
order.
Example 1:
-Let us see how the SACK option is used to list out-of-order blocks. In Figure 15.49 an end has received five
segments of data.
-The first and second segments are in consecutive order. An accumulative acknowledgment can be sent to report the
reception of these two segments.
-Segments 3, 4, and 5, however, are out of order with a gap between the second and third and a gap between the
fourth and the fifth.
-An ACK and a SACK together can easily clear the situation for the sender.
-The value of ACK is 2001, which means that the sender need not worry about bytes 1 to 2000.
-The SACK has two blocks. The first block announces that bytes 4001 to 6000 have arrived out of order. -The
second block shows that bytes 8001 to 9000 have also arrived out of order.
-This means that bytes 2001 to 4000 and bytes 6001 to 8000 are lost or discarded. The sender can resend only these
bytes.
Example 2:
-Figure shows how a duplicate segment can be detected with a combination of ACK and SACK. In this case, we
have some out-of-order segments (in one block) and one duplicate segment.
-To show both out-of-order and duplicate data, SACK uses the first block, in this case, to show the duplicate data
and other blocks to show out-of-order data.
-Note that only the first block can be used for duplicate data. The natural question is how the sender, when it
receives these ACK and SACK values, knows that the first block is for duplicate data (compare this example with
the previous example).
-The answer is that the bytes in the first block are already acknowledged in the ACK field; therefore, this block must
be a duplicate.
Example 3:
Figure 15.51 shows what happens if one of the segments in the out-of-order section is also duplicated.
In this example, one of the segments (4001:5000) is duplicated.
-The SACK option announces this duplicate data first and then the out-of-order block. This time, however, the
duplicated block is not yet acknowledged by ACK, but because it is part of the out-of-order block (4001:5000 is part
of 4001:6000), it is understood by the sender that it defines the duplicate data.
Cumulative Acknowledgement:
-The receiver acknowledges that it correctly received segments in a stream, which implicitly informs the sender that
the previous packets were received correctly. TCP uses cumulative acknowledgment with its TCP sliding window.
Or
- TCP was originally designed to acknowledge receipt of segments cumulatively. The receiver advertises the next
byte it expects to receive, ignoring all segments received and stored out of order. This is sometimes referred to as
positive cumulative acknowledgment or ACK.
-The word “positive” indicates that no feedback is provided for discarded, lost, or duplicate segments. The 32-bit
ACK field in the TCP header is used for cumulative acknowledgments and its value is valid only when the ACK
flag bit is set to 1.
Or
Delayed Acknowledgement:
-This means that when a segment arrives, it is not acknowledged immediately. The receiver waits until there is a
decent amount of space in its incoming buffer before acknowledging the arrived segments. The delayed
acknowledgment prevents the sending TCP from sliding its window. After the sending TCP has sent the data in the
window, it stops. This kills the (silly window) syndrome.
-Delayed acknowledgment also has another advantage: it reduces traffic. The receiver does not have to acknowledge
each segment. However, there also is a disadvantage in that the delayed acknowledgment may result in the sender
unnecessarily retransmitting the unacknowledged segments. TCP balances the advantages and disadvantages. It now
defines that the acknowledgment should not be delayed by more than 500 ms.
Or
-The receiver need not ACK a received segment immediately. It can wait for further segments as long as there is
space in its receive buffer. This is called delayed acknowledgement. A so-called delayed acknowledgement timer is
running; when it reaches 0, all segments in the receive buffer must be acknowledged.
TCP Zero Window:
-TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.
This means that a client is not able to receive further information at the moment, and the TCP transmission is halted
until it can process the information in its receive buffer.
TCP Window size is the amount of information that a machine can receive during a TCP session and still be able to
process the data. Think if it like a TCP receive buffer. When a machine initiates a TCP connection to a server, it will
let the server know how much data it can receive by the Window Size.
In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins
sending data, the client will decrement its Window Size as this buffer fills. At the same time, the client is processing
the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames, the client informs
the server of how much room is in this buffer. If the TCP Window Size goes down to 0, the client will not be able to
receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero
Window" in Expert View.
Troubleshooting a Zero Window
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host.
Reason: It could be that the machine is running too many processes at that moment,
and its processor is maxed. Or it could be that there is an error in the TCP receiver, like a Windows registry
misconfiguration. Try to determine what the client was doing when the TCP Zero Window happened.
*A TCP Reset (RST) is typically sent:
-When a host receives a TCP segment from a host that it does not have a connection with.
-When a host receives a segment with an incorrect Sequence Number or Acknowledgement Number.
-A TCP connection can become half-open, indicating that one host is in an established state while the other is not.
Half-open connections can result from interruption by an intermediary device (such as a firewall), or from a software
or hardware issue.
TCP Half Close:
-In TCP, one end can stop sending data while still receiving data. This is called a Half-Close.
TCP timestamps
TCP timestamps can help TCP determine in which order packets were sent.
A. To measure the delay, we can do a PING from one side of the segment to the other side of the segment, take the
average, and divide by 2, because PING reports a Round Trip Time (RTT).
Media MTU
Ethernet 1500
Ethernet Jumbo Frame 1500-9000
802.11 2272
802.5 4464
FDDI 4500
Segmentation:
-Dividing the data stream into smaller pieces is called segmentation.
-Segmenting messages has two primary benefits:
1. First, by sending smaller individual pieces from source to destination, many different conversations can be
interleaved on the network.
2. Segmentation can increase the reliability of network communications. The separate pieces of each message need
not travel the same path across the network from source to destination. If a particular path becomes congested with
data or fails, individual pieces of the message can still be directed to the destination using alternate path. If part of
the message fails to make it to the destination, only the missing parts need to be retransmitted.
Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks. Data is fragmented when transmitted
over a data link technology with a smaller MTU.
Or
-MSS decides how much data we can put in a TCP segment; MTU decides how much data we can put in a frame.
- The TCP Maximum Segment Size (MSS) defines the maximum amount of data that a host is willing to accept in a
single TCP/IP datagram. This TCP/IP datagram may be fragmented at the IP layer. The MSS value is sent as a TCP
header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side.
Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the
size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.
- The way MSS now works is that each host will first compare its outgoing interface MTU with its own buffer and
choose the lowest value as the MSS to send. The hosts will then compare the MSS size received against their own
interface MTU and again choose the lower of the two values.
-Example: illustrates this additional step taken by the sender to avoid fragmentation on the local and remote wires.
Notice how the MTU of the outgoing interface is taken into account by each host (before the hosts send each other
their MSS values) and how this helps to avoid fragmentation.
Scenario 2
1. Host A compares its MSS buffer (16K) and its MTU (1500 - 40 = 1460) and uses the lower value as the
MSS (1460) to send to Host B.
2. Host B receives Host A's send MSS (1460) and compares it to the value of its outbound interface MTU - 40
(4422).
3. Host B sets the lower value (1460) as the MSS for sending IP datagrams to Host A.
4. Host B compares its MSS buffer (8K) and its MTU (4462-40 = 4422) and uses 4422 as the MSS to send to
Host A.
5. Host A receives Host B's send MSS (4422) and compares it to the value of its outbound interface MTU -40
(1460).
6. Host A sets the lower value (1460) as the MSS for sending IP datagrams to Host B.
1460 is the value chosen by both hosts as the send MSS for each other. Often the send MSS value will be the same
on each end of a TCP connection.
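Below is a small Python sketch of the comparison described in Scenario 2: each host advertises min(buffer,
outgoing MTU - 40), and then uses the lower of the peer's advertised MSS and its own outgoing MTU - 40 as the
send MSS. The 40 bytes stand for the IP plus TCP headers; the buffer and MTU values reuse the scenario's numbers.

# Sketch of the MSS selection in Scenario 2 above (40 bytes = IP + TCP headers).
def advertised_mss(buffer_size, local_mtu):
    return min(buffer_size, local_mtu - 40)

def send_mss(peer_advertised_mss, local_mtu):
    return min(peer_advertised_mss, local_mtu - 40)

# Host A: 16K buffer, MTU 1500;  Host B: 8K buffer, MTU 4462
a_adv = advertised_mss(16 * 1024, 1500)          # 1460
b_adv = advertised_mss(8 * 1024, 4462)           # 4422
print("A advertises MSS", a_adv, "| B advertises MSS", b_adv)
print("A's send MSS:", send_mss(b_adv, 1500))    # min(4422, 1460) = 1460
print("B's send MSS:", send_mss(a_adv, 4462))    # min(1460, 4422) = 1460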
-The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive
in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which
can lead to packet loss and excessive retransmissions. To try to accomplish this, typically the MSS is announced by
each side using the MSS option when the TCP connection is established, in which case it is derived from
the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver
are directly attached.
-Furthermore, TCP senders can use path MTU discovery to infer the minimum MTU along the network path
between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the
network.
MTU Path Discovery: [PMTUD]
-To find the lowest MTU along the network path so that frames won’t be fragmented, or dropped in case the DF
bit is set; in other words, to avoid fragmentation.
-When a datagram is sent that is too large to be forwarded by a router over a physical link and it has the DF (Don’t
Fragment) bit set to prevent fragmentation, a Destination Unreachable message is sent back and the packet is
discarded.
Or
- One of the message types defined in ICMPv4 is the Destination Unreachable message, which is returned under
various conditions where an IP datagram cannot be delivered. One of these situations is when a datagram is sent that
is too large to be forwarded by a router over a physical link but which has its Don’t Fragment (DF) flag set to
prevent fragmentation. In this case, the datagram must be discarded and a Destination Unreachable message sent
back to the source. A device can exploit this capability by testing the path with datagrams of different sizes, to see
how large they must be before they are rejected.
-The source node typically sends a datagram that has the MTU of its local physical link, since that represents an
upper bound on the MTU of any path to or from that device. If this goes through without any errors, it knows it can
use that value for future datagrams to that destination. If it gets back any Destination Unreachable - Fragmentation
Needed and DF Set messages, this means some other link between it and the destination has a smaller MTU. It tries
again using a smaller datagram size, and continues until it finds the largest MTU that can be used on the path.
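Below is a purely conceptual Python sketch of that probing loop: try a datagram size with DF set and shrink it
whenever a "Fragmentation Needed" error would come back. No real packets are sent; `path_mtus`, the candidate
sizes, and the function names are stand-ins for the real network and are my own.

# Conceptual sketch of Path MTU Discovery (no actual sockets are used).
def send_with_df(size, path_mtus):
    # True if a datagram of `size` bytes would traverse every link without fragmentation.
    return size <= min(path_mtus)

def discover_path_mtu(local_mtu, path_mtus, candidates=(9000, 4352, 1500, 1492, 1280, 576)):
    size = local_mtu                      # start from the local link MTU (upper bound)
    while not send_with_df(size, path_mtus):
        # A router replied "Destination Unreachable - Fragmentation Needed and DF Set":
        # try the next smaller common MTU value.
        smaller = [c for c in candidates if c < size]
        if not smaller:
            return None
        size = max(smaller)
    return size

print(discover_path_mtu(local_mtu=9000, path_mtus=[9000, 1500, 1500]))   # -> 1500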
UDP Header:
* UDP provides an unreliable, connectionless service. UDP doesn’t go through a three-way handshake to set up a connection—it
simply begins sending the data. Likewise, UDP doesn’t check to see whether sent segments were received by a
destination; in other words, it doesn’t use an acknowledgment process.
* UDP Header has fixed length of only 8 bytes and consisting of 4 fields.
Header Details:
1. Source Port [16 bits]:
- Identifies the sending application.
2. Destination Port [16 bits]:
-Identifies the receiving application.
3. Length [16 bits]:
-Defines the size of the UDP segment.
4. Checksum [16 bits]:
-Used for error checking; it is a checksum computed over the complete UDP segment (plus a pseudo header).
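Since the UDP header is a fixed 8 bytes with four 16-bit fields, it can be built and read with a single struct format.
Below is a small Python sketch; the ports and payload are placeholders, and the checksum is left at 0 (which in IPv4
means "not computed").

import struct

# Sketch: pack and unpack the fixed 8-byte UDP header (placeholder values).
payload = b"hello"
header = struct.pack("!HHHH",
                     50000,                 # source port
                     53,                    # destination port
                     8 + len(payload),      # length = header + data
                     0)                     # checksum (0 = not computed, IPv4 only)

datagram = header + payload
src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
print(src, dst, length, checksum, datagram[8:])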
IP Header
* The IP header is between 20 and 60 bytes long. The last 40 bytes can be filled with IP options. These are not vital;
they are sometimes required for control processing and can provide functions not normally contained in the IP
header.
1. Version (4 bits):
-IPv4: 0100
-IPv6: 0110
5. Identification (2 bytes):
-It is used to reassemble and identify the fragments belonging to a particular IP datagram.
-When a datagram is fragmented (possibly by a router along the path), the value of this field, set by the sending host,
is copied into each fragment. -This field is used by the receiver to reassemble fragments without accidentally mixing
fragments from different datagrams. This is needed because fragments may arrive from multiple datagrams mixed
together, since IP datagrams can be received out of order from any device.
Or
-While the function of the Fragment Offset Field is to identify the relative position of each fragment, it is the
Identification Field that serves to allow the receiving device to sort out which fragments comprise what block of
data
6. Flags (3 bits):
-Three control flags, two of which are used to manage fragmentation.
1. Reserved: Not used.
2. DF [Don’t Fragment]
-When set to 1, specifies that the datagram should not be fragmented. Since the fragmentation process is generally
“Invisible” to higher layers, most protocols don’t care about this and don’t set this flag. It is, however, used for
testing the maximum transmission unit (MTU) of a link.
3. MF [More Fragments]:
-When set to 0, indicates the last fragment in a datagram; when set to 1, indicates that more fragments are yet to
come. If no fragmentation is used for a datagram, then of course there is only one “fragment” (the whole datagram),
and this flag is 0. If fragmentation is used, all fragments but the last set this flag to 1 so the receiver knows when all
fragments have been sent.
Or
-3 bit field contains the flags that specify the function of the frame in terms of whether fragmentation has been
employed, additional fragments are coming, or this is the final fragment.
0xx - Reserved (must be zero)
x0x - May Fragment / x1x - Don’t Fragment
xx0 - Last Fragment / xx1 - More Fragments
9. Protocol (1 byte):
-Identifies the higher-layer protocol contained in the payload (e.g., TCP or UDP).
10. Header Checksum (2 byte):
-A checksum computed over the header to provide basic protection against corruption in transmission. -It is
calculated by dividing the header bytes into words (a word is two bytes) and then adding them together.
-The data is not check summed, only the header. At each hop the device receiving the datagram does the same
checksum calculation and on a mismatch, discards the datagram as damaged.
Or
-Can have fields used for special purposes like loose source routing, strict source routing, record route, timestamping
etc. But IP options are rarely used these days.
or
-One or more of several types of options may be included after the standard headers in certain IP datagrams.
IP Options
There may, or may not be an option field. If there is one, it can vary in length.
The option field contains an Option-Type octet, an Option-Length octet and a variable number of Option-
data octets.
Option-Type
o Copied Flag - 0 indicates that the option is NOT to be copied to each fragment if the datagram is
fragmented; 1 indicates that it is copied.
o Option Class - 0 is used for Control (used normally) and 2 is used for debugging and measurement.
o Option Number
0 - Special case indicating the end of the option list; in this case the option field is just one octet
with no length or data fields.
1 - No Operation, again the option field is just one octet with no length or data fields.
2 - Security: the length is 11 octets and the various security codes can be found in RFC 791.
3 - Loose Source Routing: which is IP routing based on information supplied by the source station,
where the routers can forward the datagram to any number of intermediate routers on the way to the
next hop indicated in the source route path.
4 - Internet Timestamp.
9 - Strict Source Routing: which is IP routing based on information supplied by the source station,
where the routers can only forward the datagram to a directly connected router in order to get to the
next hop indicated in the source route path.
Option-Length - variable and not present for the NOP and the End of Option List options.
Option-Data - variable and not present for the NOP and the End of Option List options. See RFC 791 for the details.
IP Options are not often used today, you may come across IP source-routing (loose or strict) on Unix machines and
the like, perhaps for load balancing traffic where modern routing protocols are not being used.
Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks. Data is fragmented when transmitted
over data link technology with a smaller MTU.
Example: Since some physical networks on the path between devices may have a smaller MTU than others, it may
be necessary to fragment more than once. For example, suppose the source device wants to send an IP message
12,000 bytes long. Its local connection has an MTU of 3,300 bytes. It will have to divide this message into four
fragments for transmission: three that are about 3,300 bytes long and a fourth remnant about 2,100 bytes long.
-In this simple example, Device A is sending to Device B over a small internetwork consisting of one router and two
physical links. The link from A to the router has an MTU of 3,300 bytes, but from the router to B it is only 1,300
bytes. Thus, any IP datagrams over 1,300 bytes will need to be fragmented.
Fragmentation is necessary to implement a network-layer internet that is independent of lower layer details, but
introduces significant complexity to IP.
*Remember that IP is an unreliable, connectionless protocol. IP datagrams can take any of several routes on their
way from the source to the destination, and some may not even make it to the destination at all. When we fragment a
message we make a single datagram into many, which introduces several new issues to be concerned with:
o Sequencing and Placement: The fragments will typically be sent in sequential order from the beginning of
the message to the end, but they won't necessarily show up in the order in which they were sent. The
receiving device must be able to determine the sequence of the fragments to reassemble them in the correct
order. In fact, some implementations send the last fragment first, so the receiving device will immediately
know the full size of the original complete datagram. This makes keeping track of the order of segments
even more essential.
o Separation of Fragmented Messages: A source device may need to send more than one fragmented
message at a time; or, it may send multiple datagrams that are fragmented en route. This means the
destination may be receiving multiple sets of fragments that must be put back together. Imagine a box into
which the pieces from two, three or more jigsaw puzzles have been mixed and you understand this issue.
o Completion: The destination device has to be able to tell when it has received all of the fragments so it
knows when to start reassembly (or when to give up if it didn't get all the pieces).
To address these concerns and allow the proper reassembly of the fragmented message, IP includes several fields in
the IP format header that convey information from the source to the destination about the fragments.
IP Fragmentation
Regardless of what situation occurs that requires IP Fragmentation, the procedure followed by the device performing
the fragmentation must be as follows:
1. The device attempting to transmit the block of data will first examine the Flag field to see if the field is set
to the value of (x0x or x1x) (May Fragment or Do not Fragment). If the value is equal to (x1x) this
indicates that the data may not be fragmented, forcing the transmitting device to discard that data.
Depending on the specific configuration of the device, an Internet Control Message Protocol (ICMP)
Destination Unreachable -> Fragmentation required and Do Not Fragment Bit Set message may be
generated.
2. Assuming the flag field is set to (x0x), the device computes the number of fragments required to transmit
the amount of data by dividing the amount of data by the MTU. This will result in "X" number of frames,
with all but the final frame being equal to the MTU for that network.
3. It will then create the required number of IP packets and copies the IP header into each of these packets so
that each packet will have the same identifying information, including the Identification Field.
4. The Flag field in the first packet, and all subsequent packets except the final packet, will be set to "More
Fragments." The final packet's Flag field will instead be set to "Last Fragment."
5. The Fragment Offset will be set for each packet to record the relative position of the data contained within
that packet.
6. The packets will then be transmitted according to the rules for that network architecture.
IP Fragment Reassembly
If a receiving device detects that IP Fragmentation has been employed, the procedure followed by the device
performing the Reassembly must be as follows:
1. The device receiving the data detects the Flag Field set to "More Fragments."
2. It will then examine all incoming packets for the same Identification number contained in the packet.
3. It will store all of these identified fragments in a buffer in the sequence specified by the Fragment Offset
Field.
4. Once the final fragment, indicated by the Flag field set to "Last Fragment," has been received, the device
will attempt to reassemble the data in offset order.
5. If reassembly is successful, the packet is then sent to the ULP in accordance with the rules for that device.
6. If reassembly is unsuccessful, perhaps due to one or more lost fragments, the device will eventually time
out and all of the fragments will be discarded.
7. The transmitting device will then have to attempt to retransmit the data in accordance with its own
procedures.
******************************************************************************************
Fragmentation-Related IP Datagram Header Fields
-When a sending device or router fragments a datagram, it must provide information that will allow the receiving
device to be able to identify the fragments and reassemble them into the datagram that was originally sent. This
information is recorded by the fragmenting device in a number of fields in the IP datagram header.
Total Length
After fragmenting, this field indicates the length of each fragment, not the length of the overall message.
Identification
A unique identifier is assigned to each message being fragmented.
More Fragments
This flag is set to a 1 for all fragments except the last one, which has it set to 0. When the fragment with a value of 0
in the More Fragments flag is seen, the destination knows it has received the last fragment of the message.
Fragment Offset
This field solves the problem of sequencing fragments by indicating to the recipient device where in the overall
message each particular fragment should be placed.
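Below is a small Python sketch of the bookkeeping these fields describe: the payload is cut into pieces whose sizes
are multiples of 8 bytes (except possibly the last), every fragment carries the same Identification value, the Fragment
Offset is expressed in 8-byte units, and More Fragments is 1 on all but the last piece. The sizes reuse the earlier
Device A example (12,000-byte message, 3,300-byte MTU, 20-byte IP header); the identification value is made up.

# Sketch of IP fragmentation bookkeeping (offsets in 8-byte units, MF flag on all but the last).
def fragment(payload_len, mtu, identification, ip_header=20):
    max_data = ((mtu - ip_header) // 8) * 8      # data per fragment, a multiple of 8 bytes
    fragments, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = (offset + size) < payload_len
        fragments.append({"id": identification,
                          "offset_units": offset // 8,   # value of the Fragment Offset field
                          "data_len": size,
                          "MF": int(more)})
        offset += size
    return fragments

for f in fragment(payload_len=12000, mtu=3300, identification=0x1234):
    print(f)        # three ~3,280-byte fragments with MF=1 and a final ~2,160-byte fragment with MF=0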
*********************************************************************************************
***
Preamble:
-56 bits of alternating 1’s and 0’s that synchronize communication on an Ethernet network.
Or
-The Preamble (7 bytes) and Start Frame Delimiter (SFD) (1 byte) are used for synchronization between the
sending and receiving devices. The first 8 bytes of the frame are used to get the attention of the receiving nodes.
Essentially, the first few bytes tell the receivers to get ready to receive a new frame.
*The preamble and the start of frame are not considered part of the actual frame, or calculated as part of the
total frame size.
Type / Length: (2 bytes)
-Value to indicate which upper-layer protocol will receive the data after the Ethernet process is complete.
Or
-It is used to indicate which protocol is encapsulated in the payload of an Ethernet frame. (Values of 0x0600/1536
and above are interpreted as an EtherType; smaller values indicate the payload length.)
Ethertype Protocol
0x0800 IPv4
0x0806 ARP
0x86DD IPv6
0x8808 Ethernet Flow Control
0x8847 MPLS Unicast
0x8848 MPLS Multicast
0x88CC Link Layer Discovery Protocol (LLDP)
0x8906 Fibre Channel over Ethernet (FCoE)
0x9100 Q-in-Q
Data / Payload:
-This is the PDU, typically an IPv4 packet, that is to be transported over the media.
Or
-The absolute minimum frame size for Ethernet is 64 bytes (or 512 bits)(46 Data + 18 layer 2 header) including
headers.
*Runt:
-A frame that is smaller than 64 bytes will be discarded as a runt.
-The required fields in an Ethernet header add up to 18 bytes – thus, the frame payload must be a minimum of 46
bytes, to equal the minimum 64-byte frame size. If the payload does not meet this minimum, the payload is padded
with 0 bits until the minimum is met.
Note: If the optional 4-byte 802.1Q tag is used, the Ethernet header size will total 22 bytes, requiring a minimum
payload of 42 bytes.
-By default, the maximum frame size for Ethernet is 1518 bytes – 18 bytes of header fields, and 1500 bytes of
payload or 1522 bytes with the 802.1Q tag.
*Giant:
-A frame that is larger than the maximum will be discarded as a giant.
-With both runts and giants, the receiving host will not notify the sender that the frame was dropped. Ethernet relies
on higher-layer protocols, such as TCP, to provide retransmission of discarded frames.
-Some Ethernet devices support jumbo frames of 9216 bytes, which provide less overhead due to fewer frames.
Jumbo frames must be explicitly enabled on all devices in the traffic path to prevent the frames from being dropped.
-The 32-bit Cyclic Redundancy Check (CRC) field is used for error detection. A frame with an invalid CRC will be
discarded by the receiving device. This field is a trailer, and not a header, as it follows the payload.
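As a hedged illustration, the Ethernet FCS uses the same CRC-32 polynomial that Python's zlib implements, so appending a trailer can be sketched roughly like this (treat the byte ordering as an assumption to verify against a real capture):
import zlib
def append_fcs(frame_without_fcs: bytes) -> bytes:
    # CRC-32 over destination MAC through the end of the (padded) payload,
    # appended least-significant byte first as the 4-byte FCS trailer
    fcs = zlib.crc32(frame_without_fcs) & 0xFFFFFFFF
    return frame_without_fcs + fcs.to_bytes(4, 'little')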
-The 96-bit Interframe Gap is a required idle period between frame transmissions, allowing hosts time to prepare
for the next frame.
Ethernet Frame:
The minimum size of an Ethernet frame is 64 bytes. The breakup of this size between the fields is: Destination
Address (6 bytes) + Source Address (6 bytes) + Frame Type (2 bytes) + Data (46 bytes) + CRC Checksum (4 bytes).
The minimum number of bytes passed as data in a frame must be 46 bytes. If the size of the data to be passed is less
than this, then padding bytes are added.
-The maximum size of an Ethernet frame is 1518 bytes. The breakup of this size between the fields is:
Destination Address (6 bytes) + Source Address (6 bytes) + Frame Type (2 bytes) + Data (1500 bytes) +
CRC Checksum (4 bytes).
The maximum number of bytes of data that can be passed in a single frame is 1500 bytes.
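A small Python sketch of the padding and size rules above (sizes taken from these notes; the FCS trailer is omitted, and the MAC/EtherType arguments are placeholders):
MIN_PAYLOAD = 46          # pad up to this if the data is shorter
MAX_PAYLOAD = 1500        # anything larger would make the frame a giant
def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, data: bytes) -> bytes:
    if len(data) > MAX_PAYLOAD:
        raise ValueError('payload too large - the frame would be a giant')
    if len(data) < MIN_PAYLOAD:
        data = data + b'\x00' * (MIN_PAYLOAD - len(data))     # pad with 0 bits
    return dst_mac + src_mac + ethertype.to_bytes(2, 'big') + data
# 6 + 6 + 2 + 46 = 60 bytes here; the 4-byte FCS trailer brings it to the 64-byte minimum.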
ARP
ARP Header:
* ARP Packet Size/Header Length: 28 bytes
- The broadcast address means that all devices on the data link will receive the frame and examine the
encapsulated packet. All devices except the target will recognize that the packet is not for them and will drop the
packet. The target will send an ARP Reply to the source address, supplying its MAC address.
* Devices on a LAN need a way to discover their neighbors so that frames might be transmitted to the correct
destination.
Q. What is the target IP address in ARP request and ARP reply packet?
A. Target IP Address in ARP Request= Destination IP address of the device.
- Target IP Address in ARP Reply = IP address of the device who has generated the ARP request
Q. What is the target MAC address in ARP request and ARP reply packet?
A. Target MAC address in ARP Request = 00:00:00:00:00:00
- Target MAC Address in ARP Reply = MAC of the device who has generated the ARP request
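A hedged Python sketch of the 28-byte ARP Request payload matching the Q&A above (field layout per RFC 826; the MAC and IP values passed in are arbitrary examples):
import socket, struct
def build_arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    # hardware type 1 (Ethernet), protocol type 0x0800 (IPv4),
    # hardware/protocol address lengths 6 and 4, opcode 1 (Request)
    header = struct.pack('!HHBBH', 1, 0x0800, 6, 4, 1)
    return (header + sender_mac + socket.inet_aton(sender_ip)
            + b'\x00' * 6                      # target MAC is all zeros in a Request
            + socket.inet_aton(target_ip))
# The Ethernet frame carrying this payload is sent to ff:ff:ff:ff:ff:ff (broadcast),
# and the ARP Reply comes back as a unicast with the target fields filled in.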
IP Protocol Numbers (Protocol field in the IPv4 header):
1 ICMP 0x01
2 IGMP 0x02
4 IPv4 0x04
6 TCP 0x06
8 EGP 0x08
9 IGP 0x09
17 UDP 0x11
88 EIGRP 0x58
89 OSPF 0x59
Hardware Address Length:
-Specifies the length of the data-link identifiers; for MAC addresses this is 6.
Opcode:
-Specifies whether the packet is an ARP Request (1) or an ARP Reply (2). Other values might also be found here, indicating other uses for the ARP packet. Examples are Reverse ARP Request (3), Reverse ARP Reply (4), Inverse ARP Request (8), and Inverse ARP Reply (9).
Sender Protocol Address:
-The protocol address (IPv4 address) of the device sending the message.
ARP:
- Address Resolution Protocol (ARP) is to resolve an IPv4 address (32 bit Logical Address) to the physical address
(48 bit MAC Address).
- The purpose of Address Resolution Protocol (ARP) is to find out the MAC address of a device in your Local
Area Network (LAN), for the corresponding IPv4 address, which network application is trying to
communicate.
-ARP uses a local broadcast (255.255.255.255) at layer 3 and FF:FF:FF:FF:FF:FF at layer 2 to discover
neighboring devices.
- Basically stated, you have the IP address you want to reach, but you need a physical (MAC) address to send the
frame to the destination at layer 2. ARP resolves an IP address of a destination to the MAC address of the
destination on the same data link layer medium, such as Ethernet.
-Remember that for two devices to talk to each other in Ethernet (as with most layer 2 technologies), the data
link layer uses a physical address (MAC) to differentiate the machines on the segment. When Ethernet
devices talk to each other at the data link layer, they need to know each other’s MAC addresses.
*Why ARP?
-Internetworked devices communicate logically using layer 3 addresses, but the actual transmissions between devices take place using layer 2 (hardware) addresses. For this, address resolution is required.
-Two types of address mapping: 1. Static 2. Dynamic.
-An IP packet may pass through different physical networks; that is why we need both IP and MAC addresses.
-Static mapping entries have some drawbacks, such as a machine could change its NIC or the machine can be moved
from one physical network to other which needs static mapping table to be updated periodically.
-ARP is combination of Request/Reply.
-In a Wireshark capture we see that there is no IP header after the Ethernet header; the ARP field follows directly.
-ARP packet is encapsulated in Ethernet frame.
-ARP Request – is broadcast & ARP Reply – is unicast.
Or
-ARP is a layer 2 protocol. Why? Because it does not define any layer 3 properties; for example, an ARP packet is never routed across internetwork nodes.
-If you check a wireshark trace, we can see after an Ethernet frame there is directly an ARP packet.
Proxy ARP:
-When two devices are in the same layer 3 network/subnet but are separated into two physical networks by a router, both devices will think they are on the same local network.
-So when device A wants to send an IP datagram to device B, it will send an ARP broadcast, thinking B is in the same network. However, the router stops the broadcast, so B will not receive A's request.
[In normal case, when both devices are in different network they first check the IP address, they found the
destination device is in different network. So they will send IP datagram to its default gateway/MAC address of
default gateway]
-To overcome this problem, a router is configured as a “Proxy ARP” device, which will respond to device A on behalf of device B with its own interface MAC address, and vice versa for device B.
-Enabled by default.
[In case of static route with exit interface, next router will do Proxy ARP whenever it receives an ARP
request for other end interface network]
Or
-Proxy ARP allows the router to respond with its own MAC address in an ARP reply for a device on a different
network segment. Proxy ARP is used when you need to move a device from one segment to another but cannot
change its current IP addressing information.
Advantage/Disadvantage:
-The main advantage of proxying is that it is transparent to the hosts on the different physical network segment.
-There is a serious downside to using Proxy ARP. Using Proxy ARP will definitely increase the amount of traffic on
your network segment, and hosts will have a larger ARP table than usual in order to handle all the IP-to-MAC-
address mappings. Proxy ARP is enabled on all Cisco routers by default; you should disable it if you don't think you are going to use it.
Or
The main advantage of proxy ARP is that it can be added to a single router on a network and does not disturb the
routing tables of the other routers on the network.
Proxy ARP must be used on the network where IP hosts are not configured with a default gateway or do not have
any routing intelligence.
Disadvantages of Proxy ARP
Hosts need larger ARP tables in order to handle IP-to-MAC address mappings.
It does not work for networks that do not use ARP for address resolution.
Or
*The ARP table for three devices connected to the same network: a Cisco router, a Microsoft Windows host,
and a Linux host.
Martha#show arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.158.43.34 2 0002.6779.0f4c ARPA Ethernet0
Internet 10.158.43.1 - 0000.0c0a.2aa9 ARPA Ethernet0
Internet 10.158.43.25 18 00a0.24a8.a1a5 ARPA Ethernet0
Internet 10.158.43.100 6 0000.0c0a.2c51 ARPA Ethernet0
Martha#
AGE:
-Notice the Age column. As this column would indicate, ARP information is removed from the table after a certain
time to prevent the table from becoming congested with old information. Cisco routers hold ARP entries for four
hours (14,400 seconds); this default can be changed per interface with the arp timeout command (for example, arp timeout 1800 sets it to 30 minutes / 1800 seconds).
_________________________________________________________________________
Windows host (arp -a output):
Interface: 148.158.43.25
Internet Address Physical Address Type
10.158.43.1 00-00-0c-0a-2a-a9 dynamic
10.158.43.34 00-02-67-79-0f-4c dynamic
10.158.43.100 00-00-0c-0a-2c-51 dynamic
_________________________________________________________________________
Linux:~# arp -a
Address HW type HW address Flags Mask
10.158.43.1 10Mbps Ethernet 00:00:0C:0A:2A:A9 C *
10.158.43.100 10Mbps Ethernet 00:00:0C:0A:2C:51 C *
10.158.43.25 10Mbps Ethernet 00:A0:24:A8:A1:A5 C *
Linux:~#
ARP entries might also be permanently placed in the table. To statically map 172.21.5.131 to hardware address 0000.00a4.b74c, with a SNAP (Subnetwork Access Protocol) encapsulation type, use the arp global configuration command: arp 172.21.5.131 0000.00a4.b74c snap
The command clear arp-cache forces a deletion of all dynamic entries from the ARP table. It also clears the fast-
switching cache and the IP route cache.
Gratuitous ARP:
--A device can generate what is called a gratuitous ARP. A gratuitous ARP is an ARP reply that is generated
without a corresponding ARP request. This is commonly used when a device might change its IP address or MAC
address and wants to notify all other devices on the segment about the change so that the other devices have the
correct information in their local ARP tables.
-It is an ARP request with the same source and destination IP, used (among other things) to find IP conflicts in a network.
-Gratuitous ARP is useful for:
1. It helps detect IP conflicts:
-When a device receives an “ARP Request” containing a source IP that matches its own, it knows there is an IP conflict.
2. It updates neighbors after an IP moves to a new MAC:
-When an IP is moved from one machine (or NIC) to another, other machines still hold an old ARP table entry mapping that IP to the old MAC. After the move, the new owner broadcasts a gratuitous “ARP Reply” to inform neighboring machines about the MAC address change for the IP.
3. It informs switches that a particular MAC is now reachable on a particular switchport.
-A host might occasionally issue an ARP Request with its own IPv4 address as the target address. These ARP
Requests, known as gratuitous ARPs, have several uses:
A gratuitous ARP might be used for duplicate address checks. A device that issues an ARP Request
with its own IPv4 address as the target and receives an ARP Reply from another device will know
that the address is a duplicate.
A router running Hot Standby Router Protocol (HSRP) that has just taken over as the active router
from another router on a subnet issues a gratuitous ARP to update the ARP caches of the subnet's
hosts.
A gratuitous ARP might be used to advertise a new data-link identifier. This use takes advantage of the fact
that when a device receives an ARP Request for an IPv4 address that is already in its ARP cache, the cache
will be updated with the sender's new hardware address.
- It is disabled by default in IOS but can be enabled with the command ip gratuitous-arps.
Or
In more advanced networking situations you may run across something known as
Gratuitous ARP (GARP). A gratuitous ARP is something that is often performed by a computer when it is first booted up. When a NIC is first powered on, it will do what's known as a gratuitous ARP and automatically ARP out its MAC address to the entire network. This allows any switches to know the location of the physical device, and DHCP servers to know where to send an IP address if needed and requested.
Gratuitous ARP is also used by many high availability routing and load balancing devices.
Routers or load balancers are often configured in an HA (high availability) pair to provide
optimum reliability and maximum uptime. Usually these devices will be configured in an
Active/Standby pair. One device will be active while the second will be sleeping waiting for
the active device to fail. Think of it as an understudy for the lead role in a movie: if the leading lady gets sick, the understudy will gladly and quickly take her place in the limelight.
When a failure occurs, the standby device will assert itself as the new active device and issue a gratuitous ARP out to the network instructing all other devices to send traffic to its MAC address instead of the failed device's.
As you can see, ARP and its variants play a vital role in helping the network run smoothly and packets find their way across the network.
Or
They can help detect IP conflicts. When a machine receives an ARP request containing a source
IP that matches its own, then it knows there is an IP conflict.
They assist in the updating of other machines' ARP tables. Clustering solutions utilize this when
they move an IP from one NIC to another, or from one machine to another. Other machines maintain an
ARP table that contains the MAC associated with an IP. When the cluster needs to move the IP to a
different NIC, be it on the same machine or a different one, it reconfigures the NICs appropriately then
broadcasts a gratuitous ARP reply to inform the neighboring machines about the change in MAC for the
IP. Machines receiving the ARP packet then update their ARP tables with the new MAC.
They inform switches of the MAC address of the machine on a given switch port, so that the
switch knows that it should transmit packets sent to that MAC address on that switch port.
Every time an IP interface or link goes up, the driver for that interface will typically send a
gratuitous ARP to preload the ARP tables of all other local hosts. Thus, a gratuitous ARP will tell us that
that host just has had a link up event, such as a link bounce, a machine just being rebooted or the
user/sysadmin on that host just configuring the interface up. If we see multiple gratuitous ARPs from the
same host frequently, it can be an indication of bad Ethernet hardware/cabling resulting in frequent link
bounces.
Two nodes in a cluster are configured to share a common IP address 192.168.1.1. Node A has a
hardware address of 01:01:01:01:01:01 and node B has a hardware address
of 02:02:02:02:02:02.
Assume that node A currently has IP address 192.168.1.1 already configured on its NIC. At this
point, neighboring devices know to contact 192.168.1.1 using the MAC 01:01:01:01:01:01.
Using the heartbeat protocol, node B determines that node A has died.
Node B configures a secondary IP on an interface with ifconfig eth0:1 192.168.1.1.
Node B issues a gratuitous ARP
with send_arp eth0 192.168.1.1 02:02:02:02:02:02 192.168.1.255. All devices receiving
this ARP update their table to point to 02:02:02:02:02:02 for the IP address 192.168.1.1.
In conclusion, GARP is mainly used to avoid IP conflicts and to keep ARP cache entries pointing at the correct MAC address, by sending an ARP announcement whenever an interface, server, or port is replaced with new hardware but keeps the same IP address.
Or
The following scenarios describe the manner in which GARP packets are generated, based on the
default configuration settings for transmission of GARP packets and the network topology:
Three GARP packets are sent when you configure a new primary or secondary IP address on an IP interface.
Three GARP packets are transmitted when an IP interface state transitions from the down state to the up state.
Three GARP packets are sent for each IP address of the numbered interface when a new unnumbered interface associated with the numbered interface is
created.
Three GARP packets are sent for all the unnumbered interfaces whenever any secondary IP address on the numbered interface that it is associated with is
modified.
Three GARP packets are sent for all the unnumbered interfaces for all the IP addresses whenever the primary IP address of the numbered interface that it is
associated with is modified.
In all of these scenarios, you can modify the number of GARP packets to be transmitted to be
less than three by using the ip gratuitous-arps command.
The following two scenarios describe the method of transmission of GARP packets, regardless of
whether the sending of GARP packets is disabled. In such cases, even if you configure the no ip
gratuitous-arps command to disable sending GARPs, these packets are sent to denote the changes
in system and interface conditions.
One GARP packet is always sent for each virtual address of a VRRP interface. If you configure VRRP on a virtual router and associate the IP address with
the VRRP instance ID (VRID) using the ip vrrp command in Interface Configuration mode, one GARP packet is always transmitted for each virtual
address of the interface enabled for VRRP.
Three GARP packets are always sent when a failover occurs to the secondary link of the redundant port on GE-2 and GE-HDE line modules that are paired
with GE-2 SFP I/O modules, 2xGE APS I/O SFP modules, and GE-2 APS I/O SFP modules, with physical link redundancy.
Or
Gratuitous ARP is a sort of "advance notification", it updates the ARP cache of other systems before
they ask for it (no ARP request) or to update outdated information.
When talking about gratuitous ARP, the packets are actually special ARP request packets, not ARP
reply packets as one would perhaps expect. Some reasons for this are explained in RFC 5227.
The gratuitous ARP packet has the following characteristics:
Both source and destination IP in the packet are the IP of the host issuing the gratuitous ARP
No reply is expected
Gratuitous ARP is used for some reasons:
Update ARP tables after a MAC address for an IP changes (failover, new NIC, etc.)
Update MAC address tables on L2 devices (switches) that a MAC address is now on a different
port
Send gratuitous ARP when interface goes up to notify other hosts about new MAC/IP bindings
in advance so that they don't have to use ARP requests to find out
When a reply to a gratuitous ARP request is received you know that you have an IP address
conflict in your network
HSRP, VRRP, etc. use gratuitous ARP to update the MAC
address tables on L2 devices (switches). Also there is the option to use the burned-in MAC address
for HSRP instead of the "virtual" one. In that case the gratuitous ARP would also update the ARP
tables on L3 devices/hosts.
Or
-Most hosts on a network will send out a Gratuitous ARP when they are initialising their IP stack. This Gratuitous
ARP is an ARP request for their own IP address and is used to check for a duplicate IP address. If there is a
duplicate address then the stack does not complete initialisation.
Or
Gratuitous ARP
-The '02' byte at the start of the MAC indicates that this is a 'locally administered
address' which has been set by the local user or system. Most normal ethernet
devices are allocated a MAC with 00 as the most significant byte.
-Note that there is a distinction between a gratuitous ARP request and a gratuitous ARP reply: some devices will respond to the gratuitous request and some will respond to the gratuitous reply. If one is trying to write software for moving IP addresses around that works with all routers, switches and IP stacks, it is best to send both the request and the reply. These are documented by RFC 2002 and RFC 826. Software implementing the gratuitous ARP function can be found in the Linux-HA source tree. A request may be preceded by a probe to avoid polluting the address space. For an ARP Probe the Sender IP address field is 0.0.0.0. ARP probes were not considered by the original ARP RFC.
-Does the target MAC address ever matter in requests? Solaris reportedly uses ff:ff:ff:ff:ff:ff in its standard ARP requests and most other OSes use 00:00:00:00:00:00 instead. The value placed in the target MAC field of a request is not significant; what is critical is that the Ethernet destination address is ff:ff:ff:ff:ff:ff.
-How can the source Ethernet MAC address be different from the sender's MAC address in a GARP packet? The value in the ARP packet is for the ARP layer, while the Ethernet value is for the Ethernet layer. Originally, they were intended to be redundant information, targeted at different layers. It is possible to consider a hypothetical network appliance that routes ARP packets, where the source Ethernet MAC address changes as the packet is routed, but normally ARP packets are not routed.
Q. What is the target IP address in GARP request and GARP reply packet?
A. Target IP address in GARP Request: IP of the machine issuing the packet
- Target IP address in GARP Reply: IP of the machine issuing the packet (the same as the sender IP).
*A gratuitous ARP request is an Address Resolution Protocol request packet where the source and destination IP are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast address ff:ff:ff:ff:ff:ff. Ordinarily, no reply packet will occur. A gratuitous ARP reply is a reply to which no request has been made.
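A minimal Python sketch of such a gratuitous ARP request (illustrative; note that some stacks put 00:00:00:00:00:00 rather than ff:ff:ff:ff:ff:ff in the target MAC field):
import socket, struct
def build_gratuitous_arp(my_mac: bytes, my_ip: str) -> bytes:
    # Sender IP and target IP are both our own address; opcode 1 (Request)
    ip = socket.inet_aton(my_ip)
    return (struct.pack('!HHBBH', 1, 0x0800, 6, 4, 1)
            + my_mac + ip
            + b'\xff' * 6 + ip)    # target MAC; the carrying Ethernet frame is also broadcast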
Or
Gratuitous ARP could indicate either a gratuitous ARP request or gratuitous ARP (GARP) reply. A gratuitous ARP
request is an ARP request packet, in which the source and destination IP are both set to the IP of the machine, which
is issuing the packet and the destination MAC is the ff:ff:ff:ff:ff:ff broadcast address. Ordinarily, the reply packet will
not occur.
Example of GARP request traffic:
Frame 1: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
Ethernet II, Src: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5), Dst: Broadcast
(ff:ff:ff:ff:ff:ff)
Address Resolution Protocol (request/gratuitous ARP)
Hardware type: Ethernet (1)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: request (1)
[Is gratuitous: True]
Sender MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Sender IP address: 192.168.10.2 (192.168.10.2)
Target MAC address: Broadcast (ff:ff:ff:ff:ff:ff)
Target IP address: 192.168.10.2 (192.168.10.2)
0000 ff ff ff ff ff ff 00 c0 9f 38 a3 d5 08 06 00 01 .........8......
0010 08 00 06 04 00 01 00 c0 9f 38 a3 d5 c0 a8 0a 02 .........8......
0020 ff ff ff ff ff ff c0 a8 0a 02 ..........
A gratuitous ARP reply is an ARP reply packet, in which the source and destination IP are both set to the IP of the
machine, which is issuing the packet and the target MAC is the sender MAC. A gratuitous ARP reply is a reply, to
which no request has been made.
Example of GARP reply traffic:
Frame 1: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
Ethernet II, Src: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5), Dst: Broadcast
(ff:ff:ff:ff:ff:ff)
Address Resolution Protocol (reply/gratuitous ARP)
Hardware type: Ethernet (1)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: reply (2)
[Is gratuitous: True]
Sender MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Sender IP address: 192.168.10.2 (192.168.10.2)
Target MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Target IP address: 192.168.10.2 (192.168.10.2)
0000 ff ff ff ff ff ff 00 c0 9f 38 a3 d5 08 06 00 01 .........8......
0010 08 00 06 04 00 02 00 c0 9f 38 a3 d5 c0 a8 0a 02 .........8......
0020 00 c0 9f 38 a3 d5 c0 a8 0a 02 ...8......
Gratuitous ARPs are useful for the following reasons:
They can help to detect IP conflicts.
They assist in the updating of ARP tables of other machines. Clustering solutions utilize this when they move an IP
from one NIC to another or from one machine to another. Other machines maintain an ARP table, which contains the
MAC address associated with an IP address.
When the cluster needs to move the IP to a different NIC, either on the same machine or a different one, it re-
configures the NICs appropriately and then broadcasts a gratuitous ARP reply to inform the neighboring machines
about the change in the MAC address for the IP address. Machines that receive the ARP packet then update their
ARP tables with the new MAC address.
They inform the switches of the MAC address of the machine on a given switch port; so that the switch knows that it
should transmit packets that are sent to the MAC address on the switch port.
Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous ARP to preload
the ARP tables of all the other local hosts.
CAUSE:
By default, the received gratuitous ARP reply on SRX devices will not update the ARP cache.
SOLUTION:
To enable the updating of the ARP cache for received gratuitous ARP replies, configure gratuitous-arp-
reply under the interfaces hierarchy level. For example:
[edit]
root@FW_GL_QH_SRX1400# show interfaces
ge-0/0/0 {
gratuitous-arp-reply;
unit 0 {
family inet {
address 192.168.10.1/24;
}
}
}
Or
ARP
Address Resolution Protocol (ARP) is used to map a known IP address to an unknown data-link identifier (for example, a MAC address). In the ARP Request, the destination data-link identifier (MAC address in our example) will be set to 00:00:00:00:00:00.
RARP
RARP is the opposite of ARP: it resolves an IPv4 address from a known MAC address. For example, old workstations (dumb terminals) could have their firmware programmed to send a RARP request as soon as they were powered up, and a RARP server would answer this RARP request with the workstation's IP address (airline companies used it a lot in the past). It may look like DHCP, but it is not.
In the RARP Request, the source and destination data-link identifiers (MAC address in this example) will both be the local host's MAC address.
Proxy ARP
A Proxy ARP enabled Router answers ARP requests intended for another machine, it does that by making the
local host believe that the Router is the "owner" of that IP Address, local host will forward the traffic to the
Router and the Router will be responsible to "route" the packets to the real destination.
For example, a Host in Subnet A wants to send traffic to Host in Subnet B, Host A and Host B are in the same
subnet, but in different broadcast domains. Host A will send an ARP Request with Host B IP Address, the
Router connected to both subnets will answer Host A's request using its own MAC Address instead of Host B's
MAC Address.
Now when Host A wants to transmit traffic to Host B, it will send it to the Router's MAC address, and the Router will forward the traffic on to Host B.
It is used on networks where the hosts are not configured with a default gateway.
It is enabled by default in Cisco IOS, and you can disable it on a per-interface basis with the no ip proxy-arp command.
Gratuitous ARP
In some circumstances a host (router, switch, computer, etc.) might send an ARP Request with its own address as the target address. Why would a host do that? Gratuitous ARP is used:
-To update other devices' ARP tables (when a device receives an ARP Request for an IP that is already in its cache, the cache will be updated with the new information);
-By HSRP routers that take over as the active router, which send a Gratuitous ARP out to the network to update the ARP caches of the hosts on the subnet;
-To check for duplicate addresses (if the host receives a response, it will know that somebody else is using the same IP address).
This Gratuitous ARP traffic can be examined in a Wireshark capture (see the GARP capture examples earlier in these notes).
IP Redirect:
IP Redirect is used by routers to notify hosts of another router on the data link that should be used for a
particular destination.
For example, Router A and Router B are connected to the same Ethernet segment, as is Host C. Host C has Router A set as its default gateway, so Host C sends its packets to Router A. Router A sees that the destination address of the packet is reachable via Router B, so Router A must forward the packets to Router B out of the same interface on which it received them. Router A does that, and it also sends an ICMP Redirect to Host C informing it that Router B is the better next hop for that destination.
IP Redirect is enabled by default in IOS routers and can be disabled on a per-interface basis with the
command: no ip redirects.
RARP:
-It resolves a MAC address to an IP address.
-RARP is sort of the reverse of an ARP. In an ARP, the device knows the layer 3 address, but not the data link layer
address. With a RARP, the device doesn’t have an IP address and wants to acquire one. The only address that this
device has is a MAC address. Protocols associated with this function are BOOTP and DHCP; DHCP is a replacement for RARP.
-In this example, PC-D doesn’t have an IP address and wants to acquire one. It generates a data link layer broadcast
(FF:FF:FF:FF:FF:FF) with an encapsulated RARP request. This example assumes that the RARP is associated with
BOOTP. If there is a BOOTP server on the segment, and if it has an IP address for this machine, it will respond. In
this example, the BOOTP server, 10.1.1.5, has an address (10.1.1.4) and assigns this to PC-D, sending this address
as a response to PC-D.
Inverse-ARP:
-It resolves an IP address from a DLCI used in Frame Relay.
-Inverse ARP allows a router to send a Frame Relay frame across a VC with its layer 3 addressing information. The
destination can then use this, along with the incoming DLCI number, to reach the advertiser.
OR
-Inverse ARP obtains the layer 3 address of another station from a layer 2 address, such as the DLCI in a Frame Relay network. It is primarily used in Frame Relay and ATM networks. Whereas ARP translates layer 3 addresses to layer 2 addresses, Inverse ARP does the opposite.
Dynamic Mapping:
-Dynamic address mapping relies on inverse ARP to resolve a next hop network protocol address to local DLCI
value. The Frame Relay router sends out Inverse ARP requests on its PVC to discover the protocol address of the
remote device connected to the Frame Relay network.
or
You might be wondering how the responding machine knows the IP addresses of all the machines on the network. The reason is that you need a RARP client program to make a request and a RARP server program (which holds the MAC-to-IP mappings) to respond to the requests.
Note that RARP is now almost obsolete. It is still useful to know about, because it helps us understand what DHCP (one of the protocols replacing RARP) does even better.
ICMP Header:
All ICMP packets have an 8-byte header and variable-sized data section. The first 4 bytes of the
header have fixed format, while the last 4 bytes depend on the type/code of that ICMP packet.
Header Details:
*Destination Unreachable (Type 3) Code values:
0 Network Unreachable
1 Host Unreachable
2 Protocol Unreachable
3 Port Unreachable
4 Fragmentation Needed and Don't Fragment Flag Set
5 Source Route Failed
Table 1-6. ICMP packet types and code fields.
*Analyzer captures of two of the most well-known ICMP messages, Echo Request and Echo Reply, which are used by the ping function.
Example 1-11. ICMP Echo message, shown with its IPv4 header.
Internet Protocol, Src Addr: 172.16.1.21 (172.16.1.21),
Dst Addr: 198.133.219.25 (198.133.219.25)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 84
Identification: 0xabc3 (43971)
Flags: 0x00
Fragment offset: 0
Time to live: 64
Protocol: ICMP (0x01)
Header checksum: 0x8021 (correct)
Source: 172.16.1.21 (172.16.1.21)
Destination: 198.133.219.25 (198.133.219.25)
Internet Control Message Protocol
Type: 8 (Echo (ping) request)
Code: 0
Checksum: 0xa297 (correct)
Identifier: 0x0a40
Sequence number: 0x0000
Data (56 bytes)
Example 1-12. ICMP Echo Reply.
Internet Protocol, Src Addr: 198.133.219.25 (198.133.219.25),
Dst Addr: 172.16.1.21 (172.16.1.21)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 84
Identification: 0xabc3 (43971)
Flags: 0x00
Fragment offset: 0
Time to live: 242
Protocol: ICMP (0x01)
Header checksum: 0xce20 (correct)
Source: 198.133.219.25 (198.133.219.25)
Destination: 172.16.1.21 (172.16.1.21)
Internet Control Message Protocol
Type: 0 (Echo (ping) reply)
Code: 0
Checksum: 0xaa97 (correct)
Identifier: 0x0a40
Sequence number: 0x0000
Data (56 bytes)
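A small Python sketch of building the 8-byte Echo Request header shown in the capture above, including the standard Internet checksum (a one's-complement sum of 16-bit words); the identifier, sequence number and 56-byte payload are example values:
import struct
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)    # fold carries back in
    return ~total & 0xFFFF
def build_echo_request(identifier=0x0A40, sequence=0, payload=b'\x00' * 56) -> bytes:
    header = struct.pack('!BBHHH', 8, 0, 0, identifier, sequence)   # Type 8, Code 0, checksum 0
    csum = internet_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, csum, identifier, sequence) + payload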
Or
The Type field is used to identify the type of message and each type uses the Code field differently. The Variable
field may contain an Identification and a Sequence number plus information such as subnet masks, IP addresses etc.
again depending on the type of message.
Message Types
All the ICMP messages are listed below (notice the gaps, this does not mean that some are missing!) along with any
additions within the Variable field:
Type 0 - Echo Reply - this is the Echo reply from the end station which is sent as a result of the Type 8
Echo. The Variable field is made up of a 2-octet Identifier and a 2-octet Sequence Number. The Identifier
matches the Echo with the Echo Reply and the sequence number normally increments by one for each Echo
sent. These two numbers are sent back to the Echo issuer in the Echo Reply.
Type 3 - Destination Unreachable - the source is told that a problem has occurred when delivering a datagram. The codes include:
o Code 0 - Net Unreachable - sent by a router to a host if the router does not know a route to a requested network.
o Code 1 - Host Unreachable - sent by a router to a host if the router can see the requested network but cannot reach the requested host on it.
o Code 2 - Protocol Unreachable - this would only occur if the destination host was reached but the transport protocol carried in the datagram was not running on it.
o Code 3 - Port Unreachable - the protocol was running but a particular service, such as a web server that uses a specific port, was not running.
o Code 4 - Cannot Fragment - sent by a router if the router needed to fragment a packet but the Don't Fragment flag was set.
Type 4 - Source Quench - the source is sending data too fast for the receiver (Code 0); the receiver's buffer has filled up and the source is asked to slow down.
Type 5 - Redirect - the source is told that there is another router with a better route for a particular packet, i.e. this gateway checks its routing table and sees that another router exists on the same network with a better route to the destination. The codes include:
o Code 0 - Redirect datagrams for the network
o Code 1 - Redirect datagrams for the host
o Code 2 - Redirect datagrams for the Type of Service and the network
o Code 3 - Redirect datagrams for the Type of Service and the host
All 4 octets of the Variable field are used for the gateway IP address where this better router resides.
Type 8 - Echo Request - this is sent by Ping (Packet Internet Groper) to a destination in order to check
connectivity. The Variable field is made up of a 2 octetIdentifier and a 2 octet Sequence Number. The
Identifier matches the Echo with the Echo Reply and the sequence number normally increments by one for
each Echo sent. These two numbers are sent back to the Echo issuer in the Echo Reply.
Type 11 - Time Exceeded - the packet has been discarded as it has taken too long to be delivered. This examines the TTL field in the IP header, and the TTL Exceeded code is one of the two codes used for this type. Traceroute, under UDP, uses the TTL field to good effect. A Code value of 0 means that the Time to Live was exceeded whilst the datagram was in transit. A value of 1 means that the Fragment Reassembly Time was exceeded.
Type 12 - Parameter Problem - identifies an incorrect parameter on the datagram (Code 0). There is then
a 1 octet Pointer field created in the Variable part of the ICMP packet. This pointer indicates the octet
within the IP header where an error occurred. The numbering starts at 1 for the TOS field.
Type 13 - Timestamp request - this gives the round trip time to a particular destination. The Variable field contains three timestamps:
o Originate Timestamp - Time in milliseconds since midnight within the request as it was sent out.
o Receive Timestamp - Time in milliseconds since midnight as the receiver receives the message.
o Transmit Timestamp - Time in milliseconds since midnight within the reply as it was sent out.
The Identifier and Sequence Number field are used to match timestamp requests with replies.
Type 14 - Timestamp reply - this gives the round trip time to a particular destination.
Type 15 - Information Request - this allows a host to learn the network part of an IP address on its subnet
by sending a message with the source address in the IP header filled and all zeros in the destination address
field. Uses the two 16-bit Identifier and Sequence Number fields.
Type 16 - Information Reply - this is the reply containing the network portion. These two are an
alternative to RARP. Uses the two 16-bit Identifier and Sequence Number fields.
Type 17 - Address mask request - request for the correct subnet mask to be used.
Type 18 - Address mask response - reply with the correct subnet mask to be used.
You can ping an IP broadcast address e.g. for the 10.1.1.0/24 subnet the broadcast address would be 10.1.1.255. You
will then receive replies from any stations that are live on that subnet.
ICMP:
-ICMP is used to send error and control information between TCP/IP devices at the Internet layer.
-ICMP includes many different messages that devices can generate or respond to. Here is a brief list of these
messages: Address Reply, Address Request, Destination Unreachable, Echo, Echo Reply, Information Reply,
Information Request, Parameter Problem, Redirect, Subnet Mask Request, Time Exceeded, Timestamp, and
Timestamp Reply.
-Two common applications that use ICMP are ping and traceroute.
-Ping uses an ICMP echo message to test connectivity to a remote device.
ICMP messages are used to allow the communication of different types of information between IP devices on an
internetwork. The messages themselves are used for a wide variety of purposes.
ICMP messages are divided into two classes:
1. Error Messages
-Error messages that are used to report problem conditions.
2. Informational Messages
-Informational messages that are used for diagnostics, testing and other purposes.
#
1. Error Messages
-Destination Unreachable Messages
-Source Quench Messages
-Time Exceeded Messages
-Redirect Messages
-Parameter Problem Messages
2. Informational Messages
-Echo (Request) and Echo Reply Messages
-Timestamp (Request) and Timestamp Reply Messages
-Router Advertisement and Router Solicitation Messages
-Address Mask Request and Reply Messages
-Traceroute Messages
Host Confirmation
An ICMP Echo Message can be used to determine if a host is operational. The local host sends an ICMP Echo
Request to a host. The host receiving the echo message replies with the ICMP Echo Reply, as shown in the figure.
This use of the ICMP Echo messages is the basis of the ping utility.
Unreachable Destination or Service
The ICMP Destination Unreachable message can be used to notify a host that the destination or service is unreachable. When a
host or gateway receives a packet that it cannot deliver, it may send an ICMP Destination Unreachable packet to the
host originating the packet. The Destination Unreachable packet will contain codes that indicate why the packet
could not be delivered.
Among the Destination Unreachable codes are:
0 = net unreachable
1 = host unreachable
2 = protocol unreachable
3 = port unreachable
>Codes for net unreachable and host unreachable are responses from a router when it cannot forward a packet.
Network Unreachable:
-If a router receives a packet for which it does not have a route, it may respond with an ICMP Destination
Unreachable with a code = 0, indicating net unreachable.
Host Unreachable:
- If a router receives a packet for which it has an attached route but is unable to deliver the packet to the host on the
attached network, the router may respond with an ICMP Destination Unreachable with a code = 1, indicating that
the network is known but the host is unreachable.
Route Redirection
-A router may use the ICMP Redirect Message to notify the hosts on a network that a better route is available for a
particular destination. This message may only be used when the source host is on the same physical network as both
gateways. If a router receives a packet for which it has a route and for which the next hop is attached to the same
interface as the packet arrived, the router may send an ICMP Redirect Message to the source host. This message will
inform the source host of the next hop contained in a route in the routing table.
Source Quench
The ICMP Source Quench message can be used to tell the source to temporarily stop sending packets. If a router
does not have enough buffer space to receive incoming packets, a router will discard the packets. If the router has to
do so, it may also send an ICMP Source Quench message to source hosts for every message that it discards.
A destination host may also send a source quench message if datagrams arrive too fast to be processed.
When a host receives an ICMP Source Quench message, it reports it to the Transport layer. The source host can then
use the TCP flow control mechanisms to adjust the transmission.
Q. When do we get a Destination Unreachable message, and when do we get a Request Timed Out?
1. Request Timed Out when the path to the destination is known but the destination host does not respond to the echo packets.
2. Destination Unreachable when a router does not have a route for the particular network.
*Scenario 2: R2 does not have route to 70.1.1.0/24 network
-ping 70.1.1.2 >> UUUUU >> host unreachable generated by R2 (type 3, code 1) if route to the destination host on
directly connected network is not available (as well as when the host is not there and therefore does not respond to ARP requests).
3. Destination Unreachable when Access list is configured
*Scenario 3: ICMP ping to 70.1.1.2 is denied on R3 interface fa1/0 outbound.
#access-list 107 deny icmp host 70.1.1.1 host 70.1.1.2
Ping 70.1.1.2 >> UUUUU >> (Type 3, Code 13) administratively prohibited unreachable from 60.1.1.1 or 2
Or
-One of the most common applications that uses ICMP is ping. Ping uses a few ICMP messages, including echo request, echo reply, destination unreachable, and others.
-Ping is used to test whether or not a destination is available. A source generates an ICMP echo packet.
-If the destination is available, it will respond with an echo reply packet.
-If an intermediate router doesn’t know how to reach the destination, it will respond with a destination
unreachable message.
-However, if the router knows how to reach the destination, but the destination host doesn’t respond to the
echo packets, you’ll see a request timed out message.
Or
* Ping uses multiple sets of Echo and Echo Reply messages, along with considerable internal logic, to allow an
administrator to determine all of the following, and more:
- Whether or not the two devices can communicate;
- Whether congestion or other problems exist that might allow communication to succeed sometimes but
cause it to fail in others, seen as packet loss—if so, how bad the loss is;
- How much time it takes to send a simple ICMP message between devices, which gives an indication of the
overall latency between the hosts, and also indicates if there are certain types of problems.
When the utility is invoked with no additional options, default values are used for parameters such as what
size message to send, how many messages to be sent, how long to wait for a reply, and so on. The utility will
transmit a series of Echo messages to the host and report back whether or not a reply was received for each;
if a reply is seen, it will also indicate how long it took for the response to be received. When the program is
done, it will provide a statistical summary showing what percentage of the Echo messages received a reply,
and the average amount of time for them to be received.
-Below shows an example using the ping command on a Windows XP computer (mine!), which by default sends
four 32-byte Echo messages and allows four seconds (4 sec) before considering an Echo message lost.
Traceroute:
-Traceroute, sometimes called trace, is an application that will list the IP addresses of the routers along the
way to the destination, displaying the path the packet took to reach the destination.
-Some traceroute applications (Windows OS) use ICMP messages, while others (Linux) use UDP to transport their
messages.
Or
Ping is used to indicate the connectivity between two hosts. Traceroute (tracert) is a utility that allows us to observe
the path between these hosts. The trace generates a list of hops that were successfully reached along the path.
*This list can provide us with important verification and troubleshooting information.
- If the data reaches the destination, then the trace lists the interface on every router in the path.
-If the data fails at some hop along the way, we have the address of the last router that responded to the trace. This is
an indication of where the problem or security restrictions are.
-Traceroute discovers the routes that packets actually take when travelling to their destination by sending a sequence of UDP datagrams to an invalid port (from 33434 to 33534) on the remote host.
-The first 3 datagrams are sent with TTL=1; when they hit the first router, the TTL is decremented to 0, and the router then responds with an ICMP Time Exceeded message (Type 11, Code 0).
-In this way the source keeps sending 3 datagrams, each time with the TTL value increased to 2, 3, 4 and so on.
-Since these datagrams are sent to an invalid port, when a packet finally reaches the destination, an ICMP Port Unreachable message is returned (Type 3, Code 3).
Or
Note:
-Not all traceroute utility implementations use the technique described above.
- Microsoft's tracert works not by sending UDP packets but rather ICMP Echo messages with increasing TTL
values. It knows it has reached the final host when it gets back an Echo Reply message.
-What traceroute does is to force each router in a route to report back to it by intentionally setting the TTL value in
test datagrams to a value too low to allow them to reach their destination.
-Suppose we have device A and device B, which are separated by routers R1 and R2—three hops total. If you
do a traceroute from device A to device B, here’s what happens:
1. The traceroute utility sends a dummy UDP message (sometimes called a probe) to a port number
(from 33434 to 33534) that is intentionally selected to be invalid. The TTL field of the IP datagram is
set to 1. When R1 receives the message, it decrements the field, which will make its value 0. That
router discards the probe and sends an ICMP Time Exceeded message back to device A.
2. Device A then sends a second UDP message with the TTL field set to 2. This time, R1 reduces
the TTL value to 1 and sends it to R2, which reduces the TTL field to 0 and sends a Time
Exceeded message back to A.
3. Device A sends a third UDP message, with the TTL field set to 3. This time, the message will pass
through both routers and be received by device B. However, since the port number was invalid, the
message is rejected by device B, which sends back a Destination Unreachable/Port Unreachable
message to device A.
- Traceroute sends out three packets per TTL increment. Each column corresponds to the time it took to get one packet back (the round-trip time).
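A simplified Python sketch of the probing logic described above (Linux-oriented, needs root for the raw ICMP socket, one probe per TTL, no RTT measurement; the hop count and port range are the example values used in these notes):
import socket
def traceroute(dest_ip, max_hops=30, base_port=33434):
    recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    recv.settimeout(2.0)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    for ttl in range(1, max_hops + 1):
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)     # probe TTL = 1, 2, 3, ...
        send.sendto(b'', (dest_ip, base_port + ttl))               # UDP probe to an unlikely port
        try:
            packet, (hop_ip, _) = recv.recvfrom(512)
        except socket.timeout:
            print(ttl, '*')                                        # no answer for this TTL
            continue
        print(ttl, hop_ip)
        if packet[20] == 3:      # ICMP type follows the 20-byte IP header; 3 = Port Unreachable
            break                # the destination itself answered, so we are done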
Or
In Windows, the traceroute tool will give you the hop number, three columns showing the
network latency between you and the hop (so you can average them if you like), as well as the
IP address (or hostname if it has a reverse DNS entry) of the hop.
Hop number: The specific hop number in the path from the sender to the destination.
Round Trip Time (RTT): The time it takes for a packet to get to a hop and back, displayed in milliseconds
(ms). By default, tracert sends three packets to each hop, so the output lists three roundtrip times per hop.
RTT is sometimes also referred to as latency. An important factor that may impact RTT is the physical
distance between hops.
Name: The fully qualified domain name (FQDN) of the system. Many times the FQDN may provide an
indication of where the hop is physically located. If the Name doesn’t appear in the output, the FQDN wasn’t
found. It isn’t necessarily indicative of a problem, if an FQDN isn’t found.
IP Address: The Internet Protocol (IP) address of that specific router or host associated with the Name.
Three asterisks followed by the “Request timed out” message may appear for several reasons:
•The destination’s firewall or other security device is blocking the request.
•There could be a problem on the return path from the target system. Remember the round trip time
measures the time it takes for a packet to travel from your system to a destination system and back.
The forward route and the return route often follow different paths. If there is a problem on the
return route, it may not be evident in the command output.
•There may be a connection problem at that particular system or the next system.
Traceroute results that show increased latency on a middle hop, which remains similar all the way through
to the destination, do not indicate a network problem.
A traceroute that shows dramatically increased latency on a middle hop, which then increases steadily
through to the destination, can indicate a potential network issue. Packet loss or asterisks (*) on many of
the middle hops may also indicate a possible network level issue.
A steady trend of increasing latency is typically an indication of congestion or a problem between two points
in the network and it requires one or more parties to correct the problem.
How is a DHCP Discover message forwarded by a router when it is a broadcast message?
FTP (Active & Passive): What is the difference between Active and Passive FTP?
What is the importance of the PORT command?
Explain MIB?