Week 5 TCP and SSL

This document discusses TCP flags and congestion control in TCP connections. It describes:
• The main TCP flags (SYN, ACK, FIN, RST, PSH, URG) and how they are used to establish and terminate connections and to control data flow.
• How TCP congestion control uses a congestion window and the slow start and congestion avoidance phases to increase the transmission rate gradually without overwhelming the network.
• How, when packet loss is detected, the congestion window is reduced to decrease the transmission rate and relieve network congestion.


TCP – Flags, Flow Control

• In a TCP connection, flags are used to indicate a particular state of the connection, to provide additional useful information (for example, for troubleshooting), or to control how a particular connection is handled. The most commonly used flags are “SYN”, “ACK” and “FIN”. Each flag corresponds to a single bit in the TCP header.
TCP – Flags
• 1st Flag - URG
• 2nd Flag - ACK
• 3rd Flag - PSH (Push)
• 4th Flag - RST
• 5th Flag - SYN
• 6th Flag - FIN
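Each of these flags occupies one bit in the TCP header's flags field. As a minimal illustrative sketch (not taken from the slides; the bit values follow the standard header layout), the following Python snippet decodes which of the six classic flags are set in a flags byte:

    # Bit values of the six classic TCP flags in the header's flags byte.
    TCP_FLAGS = {
        0x20: "URG",
        0x10: "ACK",
        0x08: "PSH",
        0x04: "RST",
        0x02: "SYN",
        0x01: "FIN",
    }

    def decode_flags(flags_byte):
        """Return the names of the flags that are set in flags_byte."""
        return [name for bit, name in TCP_FLAGS.items() if flags_byte & bit]

    print(decode_flags(0x12))  # SYN-ACK segment -> ['ACK', 'SYN']
    print(decode_flags(0x11))  # FIN + ACK       -> ['ACK', 'FIN']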
TCP Flags - Urgent
• Urgent
• This flag is used to identify incoming data as 'urgent'. Such incoming segments do
not have to wait until the previous segments are consumed by the receiving end
but are sent directly and processed immediately.
• An Urgent Pointer could be used during a stream of data transfer where a host is
sending data to an application running on a remote machine. If a problem
appears, the host machine needs to abort the data transfer and stop the data
processing on the other end. Under normal circumstances, the abort signal will
be sent and queued at the remote machine until all previously sent data is
processed, however, in this case, we need the abort signal to be processed
immediately.
• By setting the URG flag to '1' in the segment that carries the abort signal, the remote machine will not wait until all queued data has been processed before executing the abort. Instead, it will give that specific segment priority, processing it immediately and stopping all further data processing.
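In the sockets API, TCP urgent data appears as 'out-of-band' data. A minimal sketch (not from the slides; conn and peer are assumed to be an already-connected pair of TCP sockets): sending a byte with MSG_OOB makes the sending TCP set the URG flag and the urgent pointer, and the receiver can pull that byte ahead of the normal stream:

    import socket

    # Sender side: mark one byte as urgent (out-of-band).
    conn.send(b"normal data")
    conn.send(b"!", socket.MSG_OOB)        # segment carries URG and an urgent pointer

    # Receiver side: read the urgent byte ahead of the normal stream.
    urgent = peer.recv(1, socket.MSG_OOB)  # -> b"!"
    normal = peer.recv(1024)               # -> b"normal data"

In practice most modern applications avoid urgent data, but this is the mechanism the URG flag and Urgent Pointer field were designed for.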
TCP Flags – Urgent (Example)
• At your local post office, hundreds of trucks are unloading bags of letters from all over the world. Because so many trucks are entering the post office building, they line up one behind the other, waiting for their turn to unload their bags.
• As a result, the queue ends up being quite long. However, a truck with a big red flag suddenly joins the queue, and the security officer, whose job it is to make sure no truck skips the queue, sees the red flag and knows it is carrying very important letters that need to reach their destination urgently. Setting the normal procedure aside, the security officer signals the truck to skip the queue and go all the way to the front, giving it priority over the other trucks.
• In this example, the trucks represent the segments that arrive at their destination and
are queued in the buffer waiting to be processed, while the truck with the red flag is
the segment with the Urgent Pointer flag set.
• A further point to note is the existence of the Urgent Pointer field. When the URG flag is set to '1' (the case we are analysing here), the Urgent Pointer field specifies the position in the segment where the urgent data ends.
TCP Flags – Push
• Push (PSH) – By default, the transport layer waits for some time for the application layer to provide enough data to fill a maximum-size segment, so that the number of packets transmitted on the network is minimized. This buffering is undesirable for some applications, such as interactive applications (chatting). Similarly, the transport layer at the receiving end buffers incoming data and hands it to the application layer only when certain criteria are met. The PSH flag solves this problem: the sending transport layer sets PSH = 1 and passes the segment to the network layer as soon as it receives the signal from the application layer, and the receiving transport layer, on seeing PSH = 1, immediately forwards the data to the application layer.
In general, PSH tells the receiver to process these packets as they are received instead of buffering them.
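Applications cannot set the PSH flag directly through the sockets API; the TCP stack sets it when it flushes its send buffer. The closely related knob that interactive applications do control is disabling Nagle's algorithm with TCP_NODELAY, so that small writes go out immediately instead of being coalesced into larger segments. A minimal sketch (illustrative; the address is a placeholder):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Send small writes (e.g. keystrokes in a chat) right away instead of
    # waiting to coalesce them into a full-size segment.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect(("192.0.2.10", 5000))
    s.sendall(b"hi")    # transmitted immediately; the stack typically sets PSH on such flushes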
TCP Flags - SYN
• Synchronization (SYN) – It is used in the first step of the connection establishment phase, the 3-way handshake between the two hosts. Only the first packet from the sender and the first packet from the receiver should have this flag set. It is used for synchronizing sequence numbers, i.e. to tell the other end which sequence number to expect.
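A hedged sketch of hand-crafting step 1 of the handshake with the scapy packet library (assumed to be installed; sending raw packets needs root/administrator privileges, and 192.0.2.10 is a placeholder address):

    from scapy.all import IP, TCP, sr1

    # Step 1 of the 3-way handshake: a segment with only the SYN flag set.
    syn = IP(dst="192.0.2.10") / TCP(dport=80, flags="S", seq=1000)
    reply = sr1(syn, timeout=2)          # wait for the server's answer

    if reply is not None and reply.haslayer(TCP):
        # A listening server answers with SYN+ACK ("SA"); a closed port answers with RST.
        print(reply[TCP].flags)          # e.g. SA
        print(reply.summary())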
TCP Flags - ACK
• Acknowledgement (ACK) – It is used to acknowledge packets which are
successfully received by the host. The flag is set if the acknowledgement
number field contains a valid acknowledgement number.
• The ACKnowledgement flag is used to acknowledge the successful
receipt of packets.
• If you run a packet sniffer while transferring data over TCP, you will notice that, in most cases, an acknowledgement follows every packet you send or receive. So if you received a packet from a remote host, your workstation will most probably send one back with the ACK field set to "1".
• In some cases, for example when cumulative acknowledgements are in use, the receiver may acknowledge several segments at once (say, one ACK for every 3 packets received) instead of acknowledging each one individually.
TCP Flags - FIN
• Finish (FIN) – It is used to request connection termination: when the sender has no more data to send, it requests that the connection be terminated. This is the last packet sent by the sender. It frees the reserved resources and gracefully terminates the connection.
• This flag is used to tear down the virtual connections created using the previous flag (SYN), which is why the FIN flag appears when the last packets are exchanged on a connection.
TCP Flags - RST
• Reset (RST) – It is used to terminate the connection when the sender feels that something is wrong with the TCP connection or that the conversation should not exist. It can be sent by the receiving side when a packet arrives at a host that was not expecting it.
• The reset flag is used when a segment arrives that is not intended for the current connection. In other words, if you send a packet to a host in order to establish a connection and there is no such service waiting to answer at the remote host, the host will automatically reject your request and reply with the RST flag set. This indicates that the remote host has reset the connection.
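A small illustrative sketch (not from the slides): connecting to a port where nothing is listening typically makes the remote host answer the SYN with an RST, which the sockets API reports as 'connection refused' (the address and port are placeholders):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect(("203.0.113.7", 9))   # nothing is listening on this port
    except ConnectionRefusedError:
        # The SYN was answered with a segment carrying the RST flag.
        print("connection reset by the remote host (RST)")
    finally:
        s.close()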
TCP Congestion Control
Congestion in Network
Congestion refers to a network state where the message traffic becomes so heavy that it slows down the network response time.
• Congestion is an important issue that can arise in a packet-switched network.
• Congestion leads to the loss of packets in transit.
• So, it is necessary to control congestion in the network.
• It is not possible to completely avoid congestion.
TCP Congestion Control
Congestion Control-
Congestion control refers to techniques and mechanisms that can:
• Either prevent congestion before it happens
• Or remove congestion after it has happened
TCP Congestion Control
TCP reacts to congestion by reducing the sender window size.
The size of the sender window is determined by the following two factors-
• Receiver window size
• Congestion window size
The sender window is taken as the minimum of the two.
TCP Congestion Control
1. Receiver Window Size
Receiver window size is an advertisement of –
How much data (in bytes) the receiver can
receive without acknowledgement.
• Sender should not send data greater than receiver window size.
• Otherwise, it leads to dropping the TCP segments which causes 
TCP Retransmission.
• So, sender should always send data less than or equal to
receiver window size.
• Receiver dictates its window size to the sender through 
TCP Header.
TCP Congestion Control
2. Congestion Window-
• Sender should not send data greater than congestion window size.
• Otherwise, it leads to dropping the TCP segments which causes
TCP Retransmission.
• So, sender should always send data less than or equal to
congestion window size.
• Different variants of TCP use different approaches to calculate the
size of congestion window.
• Congestion window is known only to the sender and is not sent
over the links.
TCP Congestion Control
1. Slow Start Phase-
•  Initially, sender sets congestion window size =
Maximum Segment Size (1 MSS).
• After receiving each acknowledgment, sender
increases the congestion window size by 1
MSS.
• In this phase, the size of congestion window
increases exponentially.
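For example (a standard illustration of slow start, not from the slides): starting from 1 MSS, the congestion window grows to 2 MSS after the first round trip, then 4, then 8, doubling every RTT, so after n round trips it is 2^n MSS. This exponential growth continues until the window reaches the slow-start threshold, at which point the congestion avoidance phase described next takes over.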
TCP Congestion Control
2. Congestion Avoidance Phase-
• After reaching the slow-start threshold, the sender increases the congestion window size linearly to avoid congestion.
• For each full window of acknowledgements (roughly once per round trip), the sender increments the congestion window size by 1 MSS.
• The formula followed is-
Congestion window size = Congestion window size + 1 MSS (per round trip)
TCP Congestion Control
3. Congestion Detection Phase-
When the sender detects the loss of segments, it reacts in different ways depending on how the loss is detected-
• Case 1: the time-out timer expires before the acknowledgement for a segment is received. This suggests a stronger possibility of congestion in the network; there is a good chance that a segment has been dropped, so the congestion window is cut back sharply before transmission resumes.
• Case 2: the sender receives three duplicate acknowledgements for the same segment. This suggests a weaker possibility of congestion, since later segments are still getting through, so the congestion window is reduced less aggressively.
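A hedged sketch of how the congestion window evolves round trip by round trip, using a simplified Tahoe-style model (the 1 MSS units, the threshold of 8 MSS and the timeout at round 10 are made-up values for illustration, not from the slides):

    def simulate_cwnd(rounds=15, ssthresh=8, loss_at=10):
        """Toy model: slow start doubles cwnd each RTT, congestion avoidance adds
        1 MSS each RTT, and a timeout halves the threshold and restarts at 1 MSS."""
        cwnd, history = 1, []
        for rtt in range(1, rounds + 1):
            history.append(cwnd)
            if rtt == loss_at:                  # pretend a timeout fires here
                ssthresh = max(cwnd // 2, 2)    # new threshold = half the window
                cwnd = 1                        # back to slow start
            elif cwnd < ssthresh:               # slow start: exponential growth
                cwnd = min(cwnd * 2, ssthresh)
            else:                               # congestion avoidance: linear growth
                cwnd += 1
        return history

    print(simulate_cwnd())
    # [1, 2, 4, 8, 9, 10, 11, 12, 13, 14, 1, 2, 4, 7, 8]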
States of TCP
A TCP connection progresses through a series of states during its lifetime. 
• CLOSED: There is no connection.
• LISTEN: Waiting for a connection request from any remote TCP and port.
• SYN-SENT: Waiting for an acknowledgement from the remote endpoint after having sent a connection request. Results after step 1 of the 3-way handshake.
• SYN-RECEIVED: This endpoint has received a connection request and sent an acknowledgement. It is waiting for the final acknowledgement. Results after step 2 of the 3-way handshake.
• ESTABLISHED: The third step of the three-way connection handshake was performed.
The connection is open for data transfer.
• FIN-WAIT-1: The first step of an active close (four-way handshake) was performed.
The local end-point has sent a connection termination request to the remote end-
point.
• FIN-WAIT-2: Waiting for a connection termination request from the remote TCP after
the endpoint has sent its connection termination request.
States of TCP (Contd)
• CLOSE-WAIT: The local end-point has received a connection termination request and acknowledged it; it is now waiting for a close from the local application and must perform an active close of its own to leave this state.
• LAST-ACK: The local end-point, having performed a passive close, has sent its own connection termination request and is waiting for the final acknowledgement from the remote end-point.
• CLOSING: Waiting for a connection termination request acknowledgment from the remote TCP. This state is entered when this endpoint receives a close request from the local application, sends a termination request to the remote endpoint, and receives a termination request before it receives the acknowledgement from the remote endpoint.
• TIME-WAIT: Waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.
TCP Windowing
• TCP (Transmission Control Protocol) is a connection-oriented protocol, which means that we keep track of how much data has been transmitted. The sender will transmit some data and the receiver has to acknowledge it. When the acknowledgment does not arrive in time, the sender re-transmits the data.
TCP Windowing
• TCP uses “windowing” which means that a
sender will send one or more data segments
and the receiver will acknowledge one or all
segments. When we start a TCP connection,
the hosts will use a receive buffer where we
temporarily store data before the application
can process it.
TCP Windowing
• When the receiver sends an acknowledgment,
it will tell the sender how much data it can
transmit before the receiver will send an
acknowledgment. We call this the window
size. Basically, the window size indicates the
size of the receive buffer.
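Since the advertised window is essentially the free space in the receive buffer, one practical knob is the socket receive buffer size. A minimal sketch (illustrative; the kernel may round or cap the requested value):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("default receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

    # Ask for a larger receive buffer, which allows a larger advertised window.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
    print("new receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))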
How to fix TCP windowing
• The TCP window size is controlled by the end devices, not by the routers, switches, or firewalls that
happen to be in the middle. The devices actively and dynamically negotiate the window size
throughout the session.
• But as mentioned earlier, the TCP mechanism was designed for network bandwidth that's orders of magnitude slower than what we have today. So some implementations still enforce a maximum window size of 64KB. You can get around this by enabling TCP window scaling, which allows windows of up to 1GB.
• Window scaling was introduced in RFC 1323 to solve the problem of TCP windowing on fast, reliable networks. It's available as an option in any modern TCP implementation. The only question is whether it's been enabled properly.
• In all recent Microsoft Windows implementations, window scaling is enabled by default. You'll find places on the Internet telling you to change registry values to increase your window size, but depending on the Windows version you're using, these changes will have no effect. The values may no longer even exist. Bottom line, you don't need to fix TCP windowing in Windows, either clients or servers.
• On Linux systems, you can check that full TCP window scaling is enabled by looking at the value in
/proc/sys/net/ipv4/tcp_window_scaling.
• On Cisco devices, you can adjust the window size using the global configuration command, “ip tcp
window-size”. This command only affects sessions to the Cisco device itself. Network devices generally
won’t change the parameters for sessions that merely pass through them.
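A small sketch of the Linux check mentioned above (requires a Linux system; a value of 1 means window scaling is enabled):

    # Read the Linux window-scaling switch referred to above.
    with open("/proc/sys/net/ipv4/tcp_window_scaling") as f:
        enabled = f.read().strip() == "1"
    print("TCP window scaling enabled:", enabled)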
TCP Timers
• The 4 important timers used by a TCP
implementation are-
• Retransmission Time Out
• Time Wait
• Keep Alive
• Persistence
TCP Timers
Retransmission Time Out Timer
• TCP uses a time out timer for retransmission of lost segments.
• Sender starts a time out timer after transmitting a TCP
segment to the receiver.
• If the sender receives an acknowledgement before the timer goes off, it stops the timer.
• If the sender does not receive any acknowledgement and the timer goes off, then TCP retransmission occurs.
• The sender retransmits the same segment and resets the timer.
• The value of the time out timer is dynamic and changes with the amount of traffic in the network.
• The time out timer is also called the Retransmission Timer.
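As a hedged sketch of why the value is dynamic: the standard scheme (RFC 6298) keeps a smoothed round-trip time and an RTT variance and derives the retransmission timeout from them. The RTT samples below are invented for illustration:

    # RFC 6298-style retransmission timeout, updated from round-trip samples.
    ALPHA, BETA = 1/8, 1/4

    def update_rto(srtt, rttvar, sample):
        if srtt is None:                      # first RTT measurement
            srtt, rttvar = sample, sample / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
            srtt = (1 - ALPHA) * srtt + ALPHA * sample
        rto = max(srtt + 4 * rttvar, 1.0)     # clamp to at least 1 second
        return srtt, rttvar, rto

    srtt = rttvar = None
    for sample in [0.120, 0.150, 0.300, 0.110]:   # made-up RTT samples in seconds
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(f"sample={sample:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")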
TCP Timers
Time Wait Timer
• TCP uses a time wait timer during connection termination.
• The endpoint that performs the close starts the time wait timer after sending the ACK for the second FIN segment.
• It allows the final acknowledgement to be resent if it gets lost.
• It prevents the just-closed port from being reused quickly by some other application.
• It ensures that all segments still heading towards the just-closed port are discarded.
• The value of the time wait timer is usually set to twice the maximum lifetime of a TCP segment (2 MSL).
TCP Timers
Keep Alive Timer
• TCP uses a keep alive timer to prevent long idle TCP connections.
• Each time the server hears from the client, it resets the keep alive timer to 2 hours.
• If the server does not hear from the client for 2 hours, it sends probe segments to the client.
• These probe segments are sent at intervals of 75 seconds.
• If the server receives no response after sending 10 probe segments, it assumes that the client is down.
• Then, the server terminates the connection automatically.
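A hedged sketch of turning this on for a single socket with Python (the per-socket options TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific; the numbers mirror the values described above):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable keepalive
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)  # idle time before probing: 2 hours
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)   # gap between probes: 75 seconds
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # give up after 10 unanswered probes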
TCP Timers
• Persistent Timer
• Why a TCP persist timer?
• When the receiver advertises a zero window, the sending TCP stops transmitting data. Later, when the receiver has space available, it sends a window update indicating to the sender that it will accept data again. Such window updates are carried in plain ACK segments, and plain ACKs are not delivered reliably (they are not themselves acknowledged or retransmitted). TCP therefore needs a mechanism to handle the case where a window update is lost.
• If the acknowledgment containing the window update is lost, both sides can end up stuck: the receiver waits to receive data, since it has already sent the window update, while the sender waits for a signal from the receiver before sending. The TCP persist timer breaks this type of deadlock.
TCP Timers
Persistent Timer
• TCP uses a persistent timer to deal with a zero-window-size deadlock situation.
• It keeps the window size information flowing even if the other end closes its
receiver window.
• Sender starts the persistent timer on receiving an ACK from the receiver with a
zero window size.
• When the persistent timer goes off, the sender sends a special segment to the receiver.
• This special segment is called a probe segment and contains only 1 byte of new data.
• The response sent by the receiver to the probe segment gives the updated window size.
• If the updated window size is non-zero, it means data can be sent now.
• If the updated window size is still zero, the persistent timer is set again and the cycle repeats.
SSL Certificates
What is an SSL certificate?
SSL is the backbone of our secure Internet, and it protects your sensitive information as it travels across the world's computer networks. SSL is essential for protecting your website. It provides privacy, critical security and data integrity for both your websites and your users' personal information.
SSL Certificates
SSL Encrypts Sensitive Information
• The primary reason why SSL is used is to keep sensitive
information sent across the Internet encrypted so that only the
intended recipient can access it. This is important because the
information you send on the Internet is passed from computer
to computer to get to the destination server. Any computer in
between you and the server can see your credit card numbers,
usernames and passwords, and other sensitive information if it
is not encrypted with an SSL certificate. When an SSL certificate
is used, the information becomes unreadable to everyone
except for the server you are sending the information to. This
protects it from hackers and thieves.
SSL Certificates
SSL Provides Authentication
• In addition to encryption, a proper SSL certificate also provides
authentication. This means you can be sure that you are sending
information to the right server and not to an imposter trying to steal
your information. Why is this important? The nature of the Internet
means that your customers will often be sending information through
several computers. Any of these computers could pretend to be your
website and trick your users into sending them personal information. 
It is only possible to avoid this by getting an SSL Certificate from a
trusted SSL provider.
• Why are SSL providers important? Trusted SSL providers will only
issue an SSL certificate to a verified company that has gone through
several identity checks.
SSL Certificates
SSL Provides Trust
• Web browsers give visual cues, such as a lock
icon or a green bar, to make sure visitors know
when their connection is secured. This means
that they will trust your website more when
they see these cues and will be more likely to
buy from you. SSL providers will also give you
a trust seal that instills more trust in your
customers.
SSL Certificates
How does a website obtain an SSL certificate?
• For an SSL certificate to be valid, domains need to obtain it from
a certificate authority (CA). A CA is an outside organization, a
trusted third party, that generates and gives out SSL certificates.
The CA will also digitally sign the certificate with their own
private key, allowing client devices to verify it.
• Once the certificate is issued, it needs to be installed and
activated on the website's origin server. Web hosting services
can usually handle this for website operators. Once it's activated
on the origin server, the website will be able to load over HTTPS
and all traffic to and from the website will be encrypted and
secure.
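In practice, obtaining a certificate starts with generating a key pair and a certificate signing request (CSR) that is submitted to the CA. A hedged sketch using the third-party Python 'cryptography' package (assumed to be installed; www.example.com is a placeholder domain):

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # 1. Generate the site's private key (this never leaves the server).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # 2. Build a CSR naming the domain; the CA verifies ownership and signs a certificate.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
        .sign(key, hashes.SHA256())
    )

    # 3. This PEM blob is what gets submitted to the certificate authority.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())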
SSL Certificates
SSL - Handshake mechanism
• If you have ever browsed an HTTPS URL through a browser, you have experienced the SSL handshake. Even though you might not notice it, the browser and the website are creating an HTTPS connection using a one-way SSL handshake.
• The main purpose of an SSL handshake is to provide privacy and data integrity for communication between a server and a client. During the handshake, the server and the client exchange the important information required to establish a secure connection.
SSL - Handshake mechanism
• There are two types of SSL handshakes, described as one-way SSL and two-way SSL (mutual SSL). The difference between the two is that in one-way SSL, only the client validates the identity of the server, whereas in two-way SSL, both server and client validate the identity of each other. Usually, when we browse an HTTPS website, one-way SSL is used: only our browser (client) validates the identity of the website (server). Two-way SSL is mostly used in server-to-server communication where both parties need to validate the identity of each other.
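A hedged sketch of the server side of two-way (mutual) SSL using Python's ssl module: in addition to presenting its own certificate, the server demands and verifies a client certificate (the file names are placeholders):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")  # server identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # refuse clients that present no certificate
    ctx.load_verify_locations("client-ca.pem")    # CA(s) trusted to sign client certificates
    # ctx.wrap_socket(listening_socket, server_side=True) would then run the
    # two-way handshake on each accepted connection.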
SSL - Handshake mechanism
• TLS handshakes are a series of datagrams, or
messages, exchanged by a client and a server. A TLS
handshake involves multiple steps, as the client and
server exchange the information necessary for
completing the handshake and making further
conversation possible.
• The exact steps within a TLS handshake will vary depending upon the kind of key exchange algorithm used and the cipher suites supported by both sides. The steps below describe the classic RSA key exchange; newer deployments typically use (elliptic-curve) Diffie-Hellman key exchange instead, but the overall flow is similar.
SSL - Handshake mechanism
During an SSL handshake, the server and the client
follow the below set of steps.
1. Client Hello
• The client initiates the handshake by sending a
"hello" message to the server. The message will
include which TLS version the client supports, the
cipher suites supported, and a string of random
bytes known as the "client random."
SSL - Handshake mechanism
2. Server Hello
• The server selects the TLS version and the cipher suite to use from those offered by the client.
• In reply to the client hello message, the server sends a message containing the server's SSL certificate, the server's chosen cipher suite, and the "server random," another random string of bytes that is generated by the server.
SSL - Handshake mechanism
3. Authentication
• The client verifies the server's SSL certificate
with the certificate authority that issued it.
This confirms that the server is who it says it
is, and that the client is interacting with the
actual owner of the domain.
SSL - Handshake mechanism
4. The premaster secret
• The client sends one more random string of
bytes, the "premaster secret." The premaster
secret is encrypted with the public key and
can only be decrypted with the private key by
the server. (The client gets the public key from
the server's SSL certificate.)
SSL - Handshake mechanism
5. Private key used
• The server decrypts the premaster secret.
SSL - Handshake mechanism
6. Session keys created
• Both client and server generate session keys
from the client random, the server random,
and the premaster secret. They should arrive
at the same results.
SSL - Handshake mechanism
7. Client is ready
• The client sends a "finished" message that is
encrypted with a session key.
SSL - Handshake mechanism
8. Server is ready
• The server sends a "finished" message
encrypted with a session key.
SSL - Handshake mechanism
Secure symmetric encryption achieved
• The handshake is completed, and
communication continues using the session
keys.
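A hedged sketch of driving this handshake from a client with Python's ssl module: wrap_socket performs the hello, certificate, key-exchange and finished steps described above, after which we can inspect what was negotiated (www.example.com is a placeholder host):

    import socket, ssl

    hostname = "www.example.com"
    ctx = ssl.create_default_context()        # verifies the server certificate (step 3)

    with socket.create_connection((hostname, 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=hostname) as tls:
            # The full handshake has completed by the time wrap_socket returns.
            print("negotiated protocol:", tls.version())   # e.g. TLSv1.3
            print("negotiated cipher:  ", tls.cipher())
            print("server certificate subject:", tls.getpeercert()["subject"])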
SNI
• Server Name Indication (SNI) allows the server to safely host multiple TLS
Certificates for multiple sites, all under a single IP address. It adds the hostname
of the server (website) in the TLS handshake as an extension in the CLIENT HELLO
message. This way the server knows which website to present when using shared
IPs. Typically, companies that use shared IPs under one server are hosting
companies and they face the everyday problem: ‘how do I get my server to select
and present the correct certificate?’
• Here’s a closer look at the problem. On an HTTP site, a server uses HTTP HOST
headers to determine which HTTP website it should present. However, when
using TLS (the protocol behind HTTPS), the secure session needs to be created
before the HTTP session can be established and until then, no host header is
available.
• So here’s the dilemma: with HTTPS, that TLS handshake in the browser can’t be
completed without a TLS Certificate in the first place, but the server doesn’t know
which certificate to present, because it can’t access the host header.
SNI
• When SNI is used, the hostname of the server is included in the TLS handshake, which
enables HTTPS websites to have unique TLS Certificates, even if they are on a shared IP
address.
• You may wonder why this is so important. Why do we need a way to support unique
certificates for shared IPs?
• The Dilemma of IPv4 Shortage
• With SNI, the server can safely host multiple TLS Certificates for multiple sites, all under one
single IP address.
• SNI also makes it a lot easier to configure and manage multiple websites, as they can all
point to the same IP address. Besides that, it has the advantage that simply scanning an IP
address doesn’t reveal the certificate, which adds to the general security.
• SNI is not supported by some legacy browsers and web servers. Modern browsers (generally less than 6 years old) can all handle SNI well; the best-known legacy browser that struggles with SNI is Internet Explorer on Windows XP.
• One way to get around this issue is to use a Multi-Domain TLS certificate as the default certificate, so that you can list all the domains on the shared IP in one certificate.
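From the client side, SNI is simply the hostname supplied during the handshake. A short sketch with Python's ssl module (the host names are placeholders for several sites served from one shared IP): the server_hostname argument is what travels in the CLIENT HELLO as the SNI extension, letting the server pick the matching certificate:

    import socket, ssl

    ctx = ssl.create_default_context()
    raw = socket.create_connection(("shared-host.example.net", 443))
    # server_hostname is carried in the ClientHello as the SNI extension.
    tls = ctx.wrap_socket(raw, server_hostname="site-a.example.net")
    print(tls.getpeercert()["subject"])   # the certificate chosen for site-a.example.net
    tls.close()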
SAN (Subject Alternative Names)
• A Multi-Domain Certificate (sometimes referred to as a SAN Certificate) is another
way to deal with the IPv4 exhaustion. Multi-Domain TLS Certificates don’t have
the disadvantages of compatibility with browsers or servers like SNI does. They
work like any other TLS Certificate and cover up to 200 domains in a single
certificate. SNI, on the other hand, can easily host millions of domains on the
same IP address.
• Why don’t we just use Multi-Domain Certificates then?
• On a Multi-Domain Certificate all domains need to be added to the one certificate. These domains are listed as Subject Alternative Names, or SANs. If a SAN needs to be added or removed, or a certificate needs to be revoked or renewed, the certificate needs to be replaced and redeployed for all domains. It is also visible who shares the same certificate. A company that gets its certificate from a hosting provider that uses Multi-Domain Certificates for its customers, for example, might not be best pleased to share a certificate with a competitor.
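A small sketch of inspecting the SAN list a server presents, again with Python's ssl module (www.example.com is a placeholder): every hostname the certificate covers appears under 'subjectAltName':

    import socket, ssl

    hostname = "www.example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            sans = [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]
            print("domains covered by this certificate:", sans)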
SNI vs SAN
• Overall, while SNI and Multi-Domain Certificates
achieve the same thing – namely reducing the
amount of IPv4 addresses needed – they achieve
this in an opposite kind of way:
• SNI means you can have unique certificates for each
domain (i.e. many certificates) while those domains
share the same IP. Multi-Domain Certificates, on the
other hand, simply use one certificate for many
domains, which in return also means one IP for
many domains.
Clientless SSLVPN Configuration
Clientless VPN
conf t
aaa new-model
aaa authentication login default local
username SSLUSER password CISCO
!
! SSL VPN gateway: listens on 200.200.200.10:443 and redirects plain HTTP to HTTPS
webvpn gateway SSLGW
 ip address 200.200.200.10 port 443
 http-redirect port 80
 ssl encryption rc4-md5
 inservice
!
! WebVPN context: bookmark list published to clientless users
webvpn context SSLCTX
 gateway SSLGW
 url-list INTERNALWEB
  heading WEBSERVERS
  url-text R1WEB url-value http://192.168.100.1
 policy-group SSLGRP
  url-list INTERNALWEB
 default-group-policy SSLGRP
 inservice
SVC Client Configuration
! SVC (SSL VPN client) full-tunnel configuration
webvpn install svc flash:/sslclient-win-1.1.4.176.pkg
ip local pool SVCPOOL 10.10.10.1 10.10.10.50
!
webvpn context SSLCTX
 policy-group SSLGRP
  function svc-enabled
  svc address-pool SVCPOOL
  svc default-domain ipsol.net
  svc keep-client-installed
  svc split include 192.168.100.0 255.255.255.0

• Go to the Command Prompt on your test PC and ping R1 at 192.168.100.1

• show webvpn session context SSLCTX
