ENT-08 Actions
ACTE Training (Enterprise Track)
Taking Actions
Network service performance and user satisfaction are key to business success and
profitability. To deliver optimal QoE you must be able to identify and classify every
packet on your network. You can then prioritize critical applications over those that
hog bandwidth, extracting more value from your infrastructure and deferring the
need for costly upgrades.
Data consumption continues to grow exponentially while revenue per bit keeps
dropping. After investing in network and cloud applications critical to the success of
your business, nothing is more frustrating than recurring complaints about apps that
don’t perform as expected. More and more applications used by businesses and
employees consume greater bandwidth, including many that may not be business-
critical, such as social media, Webmail, video, and games. Unchecked recreational
and personal traffic can have a significant impact on business application
performance over the WAN and the Internet, impairing quality of experience for
customers, as well as reducing productivity.
In this module we will focus on the enforcement actions in the policy table:
• Access Control
o Accept
o Bypass
o Reject
o Drop
Once traffic has been classified into Lines, Pipes and Virtual Channels, we can take
actions on that specific traffic. This represents the “A” in Allot’s Dynamic Actionable
Recognition (DART) Technology.
In this module we will look at the different actions which can be taken on specific
traffic flows. We begin by looking at the access control mechanism, which consists of
four pre-defined values (Accept, Bypass, Reject and Drop), before examining the
Quality of Service engine which is used to shape traffic running through the Service Gateway.
Access Control
The Access Control mechanism is applied to the received traffic before any other action is taken:
• Accept – let the traffic pass through the DART engine
• Bypass – let the traffic through, but without entering the DART engine
• Reject – drop the traffic and close the connection by sending a TCP FIN packet
The access control mechanism is applied to traffic before the actions which have
been assigned from the action catalogs.
The Service Gateway first identifies traffic that should be accepted. By default, it
accepts the traffic and allows it to pass through. Three other options exist:
• It may be bypassed, in which case the traffic will pass through the system but
without undergoing DPI and QoS. In the Service Gateway, this means that traffic
will use minimal processing resources. Traffic meeting the Bypass condition does not
appear in any monitoring graphs.
• It can be rejected. In this case, the Service Gateway will attempt to close the
connection by sending a TCP FIN. It should be used only for TCP traffic.
• It may be dropped, in which case the Service Gateway simply ignores the
sender and throws the packets away, much like a firewall. The Drop option is
provided for environments such as UDP, in which a client does not expect
acknowledgements (ACKs). You can also drop a TCP connection - it will just take
longer for the client to realize that the conversation with its desired server can’t
be established.
Access Control is used for specific policy rules, based on the condition catalogs
defined for a rule.
There are two ways to associate an access control action with a policy rule:
1. Open the policy table and double-click the access control tab. This will open a
radio button list. Choose the desired access control option.
2. Double-click a policy rule name. This will open the rule properties update window.
The first action will be the access control. Click the drop-down menu and choose
the desired option. Click OK.
Remember to Save the policy so that the change will take effect.
Quality of Service
Before we see how to define QoS catalogs, let's first look at why we need to enforce
quality of service at all.
Data consumption continues to grow exponentially. Much of the data is video and
most of the video is encrypted. Application control is about aligning network
performance to business priorities and ensuring that your network serves the
business first.
There are several tools to optimize the bandwidth for each policy element output.
You can assign High Priority to the more important output and Low Priority to the
least important one.
You can Expedite real-time-sensitive traffic or Drop traffic that needs to be blocked.
You can set a Maximum limit for traffic that should not exceed a specific bandwidth
value, or assure a Minimum bandwidth allocation where needed.
On the next slides we will discuss these and other QoS techniques.
QoS Engine
Based on user definitions, the QoS engine makes a decision for each frame whether to:
• Transmit the frame to the network
• Store the frame in the buffer
• Drop the frame
At the heart of Allot’s Quality of Service mechanism lies the QoS Engine. Based on
user definitions, the QoS engine decides for each frame whether to transmit it to the
network, store it in the buffer or drop it.
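As a rough illustration (not Allot's actual implementation), the per-frame decision can be sketched as follows, assuming a configured maximum rate and a bounded buffer:

```python
from collections import deque

class QoSEngineSketch:
    """Illustrative per-frame decision: transmit, buffer, or drop."""

    def __init__(self, max_rate_bps, buffer_limit):
        self.max_rate_bps = max_rate_bps   # configured maximum for this rule (assumption)
        self.buffer_limit = buffer_limit   # how many frames we are willing to hold
        self.buffer = deque()              # frames waiting for free bandwidth

    def handle_frame(self, frame_bits, current_rate_bps):
        if current_rate_bps + frame_bits <= self.max_rate_bps:
            return "transmit"              # fits within the allowed rate
        if len(self.buffer) < self.buffer_limit:
            self.buffer.append(frame_bits) # hold the frame until bandwidth frees up
            return "buffer"
        return "drop"                      # no bandwidth and no buffer space left
```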
One can think of the QoS mechanism as the opening and closing of taps on a pipeline.
There is a limited amount of water that can flow through the pipeline. The QoS
mechanism helps determine which of the pipes at the end will emit a large amount of
water, and in which the flow will be very weak.
With each tap (rule) at each level (Line/Pipe/VC), you can control the flow of water
(bandwidth). The total water (bandwidth) at each level (e.g. VC) cannot be greater
than the total flowing from the tap at the level above (e.g. Pipe). The amount of
water flowing through one tap may affect the amount flowing through another tap at
the same level.
[Diagram: bandwidth over time, bounded above by the Maximum (limit) and below by the Minimum (guarantee). The area between them is where priority takes effect; unused guaranteed bandwidth can be used by others temporarily.]
Let’s examine the concepts of minimum and maximum bandwidth in more detail.
Applying a minimum to certain traffic guarantees that the traffic receives at least the
amount of bandwidth requested. For example, a QoS entry with a minimum of
64kbps specifies that the traffic receives a guaranteed bandwidth of at least 64kbps.
It may very well receive more than 64kbps, depending on the amount of available
bandwidth and other QoS settings. Minimum bandwidth guarantees are only
enforced in cases of congestion. If the minimum guaranteed bandwidth is not in use
at a particular moment, it can be used temporarily by another Line, Pipe or Virtual
Channel.
Applying a maximum simply limits the amount of bandwidth. For example, a QoS
entry with a maximum of 64kbps specifies that the traffic is limited to, and receives
no more than, 64kbps. Just as a minimum definition may receive more than the
defined value, a maximum definition may receive less. Traffic to which a maximum
value is assigned is smoothed over the course of each second, while small portions of
the traffic are allowed to be transmitted ahead of their time.
Bandwidth between the guaranteed minimum and the applied maximum is allocated
in competition with the other rules, based on the priority levels set.
Quality of Service can be assigned per Line, Pipe and Virtual Channel. These
assignments can be combined to provide a more granular quality of service.
Minimum and Maximum Bandwidth can be configured for any Line, Pipe or VC.
Neither of these fields is mandatory.
• In the Minimum Bandwidth (Kbits/sec) field, enter the minimum bandwidth that
will be assigned to the channel. As long as there is traffic requiring bandwidth in
this channel, the bandwidth allocated will never be lower than this limit. Getting
bandwidth above the minimum, however, depends on the traffic priority, should
there be competition for the bandwidth.
• In the Maximum Bandwidth field, you may opt to assign this channel the maximum
Bandwidth allowed or to enter the maximum bandwidth that will be assigned in
Kbits/sec. The total bandwidth of all traffic allocated in this channel will not exceed
this limit.
To define different policies for each direction, select Each Direction Defined
Separately from the Line-Based QoS Coverage field in the Quality of Service Catalog
editor. Two dialog boxes are then displayed: one for inbound traffic and one for
outbound traffic.
Inbound and Outbound Defined the Same: Define QoS for both the inbound and
outbound traffic together. This option is normally used in a symmetric environment
where inbound and outbound traffic requirements are identical.
Each Direction Defined Separately: Define QoS for the inbound and outbound traffic
individually (instead of the General tab, the Inbound tab and the Outbound tab
appear).
The priority mechanism becomes relevant where there is not enough available
bandwidth to meet the demand. Take for example a case where a line is limited to a
maximum of 1Mbps, but the incoming bandwidth reaches 100Mbps. In this case, how
is bandwidth divided between the different channels?
There are two mechanisms for traffic prioritization in this scenario. The default
setting is “best effort” priority. The administrator can however set the priority level
on any entity to one of 4 levels.
Note that if there is enough room for all bandwidth demand, there is no competition
for bandwidth between the different policy elements, and each gets the portion of the
bandwidth it demands. Priority comes into play only if there is congestion, i.e.
bandwidth demand is greater than the configured limits.
Best Effort Priority is used for Bandwidth division based on the Ingress Traffic of a
specific entity. The higher the ingress traffic for a specific policy entity, the more
bandwidth will be allocated to it, subject to the amount of free bandwidth available.
If all objects in the same Enforcement Policy level are set to “Best Effort” there will be
no prioritization between objects.
By default, each policy entity has “Normal Line/Pipe/VC QoS” assigned. The
“Normal QoS” catalogs have “Best Effort” priority pre-configured.
When using the Best Effort prioritization method, the egress bandwidth for a specific
policy element is proportional to its ingress bandwidth: the traffic with higher
bandwidth at the input of the system gets higher bandwidth at the output. The
division is proportional across all entities at that policy level. For example, an entity
contributing 100Mbps out of a total of 300Mbps of ingress at its level would receive
(100/300) x 120 = 40Mbps of a 120Mbps output.
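As a simple sketch (the entity names below are hypothetical), the Best Effort division can be expressed as a proportional split of the available output bandwidth:

```python
def best_effort_allocation(ingress_bps, available_bps):
    """Divide the available output bandwidth in proportion to each entity's ingress rate."""
    total_ingress = sum(ingress_bps.values())
    return {name: available_bps * rate / total_ingress
            for name, rate in ingress_bps.items()}

# Matches the example above: an entity with 100 of 300 Mbps ingress
# gets (100/300) x 120 = 40 Mbps of a 120 Mbps output.
print(best_effort_allocation({"VC-A": 100e6, "VC-B": 200e6}, 120e6))
```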
Priorities (P1-P4)
• Available for all policy levels
• The priority mechanism only starts working if there is not enough bandwidth available
Priorities are used for prioritizing traffic according to its importance relative to other
types of traffic. Four levels of priority are available: P1 is the lowest priority and P4 is
the highest.
When using the Priority Level prioritization method, the ingress bandwidth is not
important. The egress bandwidth is allocated according to the priority set on a
specific entity relative to the other priority entities. The higher the priority, the higher
the output bandwidth for that entity.
In the example here we can see how we can reach a state of bandwidth starvation
when best effort and priorities are defined for entities at the same hierarchy level.
Let’s consider we have 3 VCs for streaming applications. The total bandwidth of the
Streaming Applications pipe is 60Mbps.
In this case we define Apple TV+ and Netflix with priority levels, and YouTube Live
with Best Effort. Traffic will be divided according to the priorities, leaving nothing for
the entity with Best Effort.
Setting some elements to a priority and others to “best effort” at the same policy
level can cause a situation known as “bandwidth starvation”. When the traffic
matched by each rule competes for the spare bandwidth, the rules set to “best
effort” are last in line to receive any bandwidth. This can result in zero allocated
bandwidth, meaning that no bandwidth will be allocated to those rules at all.
If you try to save such a policy, where priorities and best effort QoS are combined at
the same policy level, the NetXplorer will alert you. Click Yes to save the policy as it
is, or No to go back and adjust the policy.
Important Note: Default QoS values (Normal Line QoS, Normal Pipe QoS and Normal
Virtual Channel QoS) are all set to Best Effort. Make sure not to mix them with
priority QoS on the same level.
If both a Minimum and a Priority are defined on the same entity, the bandwidth
division is as follows (see the sketch below):
1. First, bandwidth is assigned to satisfy the guaranteed Minimum of each element
with a configured Minimum.
2. The rest of the available bandwidth is then divided according to the configured
Priorities.
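A minimal sketch of this two-step division, assuming illustrative priority weights (e.g. P4 = 4, P1 = 1) and that each element's demand exceeds its minimum:

```python
def allocate(total_bps, rules):
    """Step 1: satisfy configured Minimums. Step 2: split the rest by priority weight."""
    # Step 1: guaranteed minimums (capped by what each rule actually demands)
    alloc = {name: min(r["min"], r["demand"]) for name, r in rules.items()}
    remaining = total_bps - sum(alloc.values())

    # Step 2: divide what is left in proportion to the priority weight
    total_weight = sum(r["weight"] for r in rules.values())
    for name, r in rules.items():
        alloc[name] += remaining * r["weight"] / total_weight
    return alloc

# Reproduces the HTTP/FTP example below: a 150 Mbps Pipe, minimums of 80 and 10 Mbps,
# priorities P4 and P1 -> 128 Mbps for HTTP and 22 Mbps for FTP.
print(allocate(150e6, {
    "HTTP": {"min": 80e6, "weight": 4, "demand": 200e6},
    "FTP":  {"min": 10e6, "weight": 1, "demand": 100e6},
}))
```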
Let's now see another example which includes minimum bandwidth guarantees. In this
case we have 200Mbps of HTTP and 100Mbps of FTP traffic which must be shaped into
a Pipe with a maximum of 150Mbps. The operator wishes to ensure that HTTP receives
a guaranteed minimum of 80Mbps, while FTP receives a guaranteed minimum of
10Mbps.
• For VCs with the default “best effort” priority alongside the minimum, bandwidth
is assigned first to satisfy the minimum requirements. The remaining Pipe
bandwidth is then divided in proportion to the ingress traffic. The combined
minimum for both VCs is 90Mbps (80M+10M), which leaves 60Mbps of the Pipe
maximum to assign proportionally to the ingress of the “Best Effort” entities. Each
VC receives its Minimum plus its proportion of the remaining bandwidth.
HTTP: Total Bandwidth = 80+(200/300)*60=120Mbps
FTP: Total Bandwidth = 10+(100/300)*60=30Mbps
• For VCs with a priority level in addition to the minimum, bandwidth is assigned
first to satisfy the minimum requirements. The remaining Pipe bandwidth is then
divided in proportion to the priority weighting. The combined minimum for both
VCs is 90Mbps (80M+10M), leaving 60Mbps of the Pipe maximum
(150-(80+10)=60Mbps) to assign in proportion to the priority weighting, here 4:1.
HTTP: Total Bandwidth = 80+(4/5)*60=128Mbps
FTP: Total Bandwidth = 10+(1/5)*60=22Mbps
Let’s now see a final example which demonstrates what happens when there is free
bandwidth in one of the pipes.
In this case we have 10Mbps of Apple TV+, 50Mbps of YouTube traffic and 40Mbps of
Netflix traffic which must be shaped into a Pipe with a maximum of 90Mbps.
The NetXplorer operator assigns a priority of 2 to all VCs. With equal priorities each
VC is nominally entitled to 30Mbps, but Apple TV+ only requires 10Mbps, so its
unused share is redistributed: the Apple TV+ VC will be assigned 10Mbps and the
other two VCs will be assigned 40Mbps each.
The following example demonstrates the QoS principles in action. We have defined a
policy with a line called “Min QoS Demo” which is limited to 4Mbps. All pipes inside
this line will have Priority 4, but 2 specific pipes “Web” and “Streaming” will also have
a Minimum of 4Mbps.
The output graph below shows the bandwidth shaping within the line. As you can
see, the graph is always limited to 4Mbps, as this is the Maximum QoS for the whole
line. The bandwidth within the line varies between the pipes. When Streaming
requires 4Mbps, the system allocates the whole 4Mbps to it, but when Streaming
does not require 4Mbps, the remaining bandwidth is allocated to the other pipes
according to their priorities.
Expedited Forwarding
• Expedited Forwarding configures the exact bandwidth that will be allocated to this VC
• The allocation is strict – no priorities are defined
• May be treated as: Minimum = Maximum
• Spare traffic (if it exists) will not be made available to other VCs, but will be dropped
Expedited forwarding enables first-class service level for real time traffic which is loss-
sensitive, delay-sensitive and jitter-sensitive. The mechanism can also handle traffic
bursts efficiently. With this feature, users can ensure QoE for real-time applications.
Allot’s implementation of this feature is in accordance with RFC2598.
Expedited forwarding traffic (unlike traffic for which a regular “maximum” is defined)
is not smoothed. The defined Expedited Forwarding rate can be allocated entirely in
the first millisecond (as a burst). When transmission starts in the middle of a second,
the traffic is allowed to breach the maximum of the hierarchy object above. In order
to provide the specified Expedited rate, the QoS engine supplies this rate at the
expense of other VCs in the subsequent second.
Here we see how to define Expedited forwarding. This is a feature supported at the
VC level only and is intended for services such as VoIP and IPTV which are sensitive to
loss, delay and jitter. When traffic is allocated to the “EF” quality of service, no
buffering is used in order to minimize jitter and delay. Therefore, minimum and
maximum are defined with the same value. Traffic that cannot be assigned the
required EF bandwidth will be dropped.
Note: Expedited forwarding should be used for UDP traffic only. When used with TCP
this can lead to performance degradation.
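Because Expedited Forwarding behaves as Minimum = Maximum with no buffering, its effect can be summarized in a very small sketch (illustrative only, not Allot's implementation):

```python
def expedited_forwarding(ef_rate_bps, offered_bps):
    """EF sketch: exactly ef_rate_bps is reserved; excess traffic is dropped, not buffered."""
    transmitted = min(offered_bps, ef_rate_bps)
    dropped = max(0.0, offered_bps - ef_rate_bps)
    return transmitted, dropped

# A VoIP VC with EF set to 2 Mbps, offered 2.5 Mbps: 2 Mbps goes through, 0.5 Mbps is dropped.
print(expedited_forwarding(2e6, 2.5e6))
```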
Drop Precedence
• A packet that is not transmitted to the network will be dropped or buffered
• Drop Precedence = the importance of the packet
o High = discard these packets first
o Low = discard other packets first, and only if there is no other choice, discard packets from this VC
• Application Based (default) = Drop Precedence is predefined by the software according to the Application Type
• No Buffering = do not buffer packets from this policy level at all; drop them immediately
If a packet is not transmitted to the network, it will be dropped or buffered. The drop
precedence value determines the importance of the packet before making the
decision to buffer or not. Packets with higher drop precedence values are discarded
before packets with lower drop precedence values. The Allot QoS engine uses the
standard WRED algorithm to make this decision. The feature is available at the
Enhanced VC QoS level only.
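A rough sketch of a WRED-style decision is shown below. The thresholds and probabilities are illustrative assumptions, not Allot's actual values:

```python
import random

# Illustrative per-precedence WRED profiles (assumed values, not Allot's)
WRED_PROFILES = {
    "high":   {"min_th": 20, "max_th": 40, "max_p": 0.5},  # discarded first
    "medium": {"min_th": 30, "max_th": 50, "max_p": 0.3},
    "low":    {"min_th": 40, "max_th": 60, "max_p": 0.1},  # discarded last
}

def wred_drop(avg_queue_len, precedence):
    """Return True if the packet should be dropped instead of buffered."""
    p = WRED_PROFILES[precedence]
    if avg_queue_len < p["min_th"]:
        return False                 # queue is short: always buffer
    if avg_queue_len >= p["max_th"]:
        return True                  # queue is long: always drop
    # In between: drop probability grows with the average queue length
    prob = p["max_p"] * (avg_queue_len - p["min_th"]) / (p["max_th"] - p["min_th"])
    return random.random() < prob
```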
The default drop precedence value is “Application Based”, whereby high, medium
and low values are pre-defined in the software code per application type. This
ensures for example that buffering is used for TCP traffic which is “adaptive” by
nature, while no-buffering is used for UDP traffic which is “non-adaptive”. This setting
is designed for real network environments.
When set to “No Buffering”, no packets will be buffered at all for the policy rule. This
should be selected, for example, if you are working in a lab environment using traffic
from a traffic generator, which is non-adaptive by nature. If you were to buffer such
non-adaptive traffic, the buffers would quickly fill, leading to packet loss.
Problem: Over-Subscription
[Diagram: each pipe is configured with a Minimum of 256 Kbps; a new pipe with a minimum bandwidth guarantee cannot be connected.]
The use of “minimum” bandwidth may sometimes lead to a situation where more
bandwidth has been guaranteed than is available.
This may even be done on purpose, with the network administrator working on the
assumption that not all users are connected at all times and that the bandwidth will
usually be sufficient to accommodate everyone who is connected.
Each channel is assigned a Pipe with guaranteed minimum bandwidth. But what
happens when the minimum guaranteed bandwidth cannot be allocated? When
more users than expected try to connect, the total bandwidth allocated for the Pipes
may exceed the maximum bandwidth defined for the line. If this happens,
connections for new users cannot be established.
In such a case, you can use conditional admission to determine what to do. You can
instruct the Service Gateway to drop or accept according to the priority value. If you
choose to Admit by Priority, the SG will accept the new connection, but will not
assign the minimum bandwidth. The new connection gets bandwidth per priority. If
you choose to Drop, all new packets will be dropped. The user is disconnected and
may see the message Connection Timed-Out.
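The conditional admission choice can be summarized with a small sketch (the function and mode names below are assumptions for illustration):

```python
def admit_connection(requested_min_bps, free_guaranteed_bps, mode):
    """Decide what happens to a new connection when the guaranteed pool is oversubscribed."""
    if requested_min_bps <= free_guaranteed_bps:
        return "admit with minimum bandwidth guarantee"
    if mode == "admit_by_priority":
        return "admit without a minimum; bandwidth is assigned per priority"
    return "drop: new packets are discarded and the client eventually times out"

print(admit_connection(256_000, 0, "admit_by_priority"))
```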
Ignore QoS
• The Ignore QoS option skips the QoS engine of the Service Gateway: it enforces no limitation and no QoS policy is applied.
In some circumstances, traffic that does not require any shaping must pass through
the Service Gateway. It is recommended to use Ignore QoS for traffic going from the
LAN to the DMZ and vice versa, or other LAN traffic passing through the Service
Gateway that does not require shaping.
Note: The Allot default policy applies ‘Ignore QoS’ on the Network Operation VC. This
VC includes network protocols that are mostly used for network operation and
interconnection, rather than by users. For example, DNS, RADIUS, ARP, SNMP and
routing protocols should pass through the Service Gateway without any QoS shaping
applied to them.
Monitoring Entity | Normal QoS | Ignore QoS | Bypass (Access Control)
Policy is enforced | Yes | No | No
Statistics are collected | Yes | Yes | No
Appears in traffic measurements (acmon) | Yes | Yes | Yes
Appears in connections listings (acstat -ix) | Yes | Yes | Yes
Appears in policy listings (acstat -lvc) | Yes | Yes | No
The table compares three techniques: Normal QoS (best effort), Ignore QoS, and the
Bypass option (defined in Access Control).
As you can see, with Normal QoS a policy action is enforced (best-effort QoS) and the
traffic is fully monitored. When using the Ignore QoS option, the connection is listed
and its statistics are collected, meaning that it can be presented in reports. Bypassed
traffic, however, is not listed per policy and its statistics are not collected. This means
that bypassed traffic cannot be presented in Allot analytic tools.
[Flow diagram: for each frame, the QoS Engine checks whether the remaining bandwidth can be allocated. If yes, the frame is transmitted; if no, the user-defined policy (e.g. Best Effort) determines whether the frame is buffered or dropped.]
This flow diagram summarizes the allocation of bandwidth in the Allot Enhanced QoS
engine. The total bandwidth available is first allocated to policy elements with a
“Minimum” or “Expedited Forwarding” setting. The spare bandwidth will be divided
among the rest of the policy elements as required. If there is insufficient bandwidth
available, the bandwidth will be divided based on priority settings, unless the user
has chosen to “drop” all bandwidth not allocated by minimum.
The QoS catalog is the heart of all Action Catalogs. This is the catalog which allows
you to apply all the different principles we have reviewed throughout this module.
With the various options of this catalog you can tailor your bandwidth to maximize
every single bit of traffic.
For example:
• Set a minimum bandwidth level in order to ensure available bandwidth for
important traffic
• Set maximum Bandwidth in order to limit P2P traffic
• Set expedited forwarding to ensure VoIP quality
• Set priority to ensure desired precedence in times of congestion
Can you think of other uses for QoS control? Share your ideas with your trainer and
the training class.
• Access Control
• Quality of Service (QoS)
• More Actions…
The actions discussed so far in this module are the most commonly used and the most important.
In this section, we will see what other actions are possible with the Allot system.
More Actions
What additional actions can be applied?
• HTTP/Captive Portal Redirection
• ToS
• WebSafe
• HTTP Header Enrichment
• Steering and Mirroring
In addition to the actions described in this module, more actions are available.
See additional ACTE course materials to learn about those features.
Steering and Mirroring
• Find out more about Steering and Mirroring in ACTE Course Appendices
It is possible for an Allot Service Gateway to steer traffic to an internal or third party
service. This traffic then usually returns to the Service Gateway and continues to its
destination. In some cases, traffic is “mirrored” or duplicated, with the duplicated
traffic being steered to an internal or third party service while the original traffic is
routed to its destination.
WebSafe
WebSafe is Allot’s carrier-grade service for HTTP URL/domain and HTTPS domain
filtering, supported on Service Gateway platforms.
WebSafe provides the capability to block specific web content identified by HTTP
URL/domain and HTTPS domain, as required by regulatory agencies, via Filters
that utilize Blacklists and Whitelists. This provides the flexibility to block access,
slow down access or redirect access to a third-party portal. Multiple Filters may be
created so that different Blacklists may redirect traffic to a variety of portals.
WebSafe supports the filtering of HTTPS domains based on the SNI (Server Name
Indication) value, which contains the server_name exchanged between client and
server as part of the initial hello messages.
URL filtering supports the following actions:
• Traffic blocking – blocks user access to illicit web content by discarding the
web traffic. This can be done with or without a notification.
• Traffic redirecting – provides the users with a substitute web page instead of
the illicit content. The substitute web page can be the main page of the same
domain, a warning message or any other web page. Redirection to warning
page is supported for HTTP URL or Domain only.
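As an illustration of domain-based filtering (the blacklist entries and helper below are hypothetical), matching an HTTPS SNI server_name against a blacklist might look like this:

```python
BLACKLIST = {"badsite.example", "illicit.example"}   # hypothetical blacklist entries

def sni_blocked(server_name):
    """Return True if the SNI server_name or any of its parent domains is blacklisted."""
    labels = server_name.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLACKLIST for i in range(len(labels)))

print(sni_blocked("video.badsite.example"))  # True - parent domain is blacklisted
print(sni_blocked("news.example"))           # False
```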
The Type of Service (ToS) field is an 8-bit field in the IPv4 header. It can be used to
differentiate traffic flows from each other. It was originally designed by the designers
of the IP protocol to support classification of different services.
Assured Forwarding, which was defined in RFC 2597, was designed to provide
different levels of forwarding assurances for customer traffic. There are 4 classes
defined – Class 1, 2, 3 and 4 and each class has specific resources allocated such as
buffers or bandwidth. Within each class, packets are marked with 3 levels of drop
precedence – low, medium and high. These 4-class levels and 3 drop precedence
levels offer 12 possible values for assured forwarding (AF). Layer 4 network elements
can allocate different resources to each level. The NetXplorer ToS catalog comes with
these 12 values pre-defined. In addition, you can use the NetXplorer to define any
value you wish, based on any combination of the 8 bits including the last two.
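For reference, the 12 standard Assured Forwarding code points of RFC 2597 can be computed from the class and drop precedence; this is standard DiffServ arithmetic, independent of the NetXplorer catalog:

```python
def af_dscp(af_class, drop_precedence):
    """DSCP value for Assured Forwarding class 1-4 and drop precedence 1-3 (RFC 2597)."""
    return af_class * 8 + drop_precedence * 2

# The 12 pre-defined AF values, e.g. AF11 = 10, AF23 = 22, AF41 = 34
print({f"AF{c}{d}": af_dscp(c, d) for c in range(1, 5) for d in range(1, 4)})
```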
• Find out more about DoS Action Catalog in ACTE Course Appendices
The DoS (Denial of Service) action catalog enables the user to control the number of
connections and the rate of connections established (CER) per policy.
Connections above that limit will simply be dropped.
This feature can be used as a very simple way to control DDoS attacks.
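A minimal sketch of such a per-policy limit on concurrent connections and connection establishment rate (the class and parameter names are assumptions for illustration):

```python
import time

class ConnectionLimiter:
    """Illustrative limit on concurrent connections and connections established per second."""

    def __init__(self, max_connections, max_per_second):
        self.max_connections = max_connections
        self.max_per_second = max_per_second
        self.active = 0
        self.window_start = time.time()
        self.opened_this_second = 0

    def allow_new_connection(self):
        now = time.time()
        if now - self.window_start >= 1.0:        # start a new one-second window
            self.window_start, self.opened_this_second = now, 0
        if self.active >= self.max_connections:
            return False                          # too many concurrent connections
        if self.opened_this_second >= self.max_per_second:
            return False                          # connection establishment rate exceeded
        self.active += 1
        self.opened_this_second += 1
        return True
```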
• Find out more about HTTP Header Enrichment in ACTE Course Appendices
HTTP Header Enrichment (HHE) refers to the ability of the Service Gateway to change
supplementary content of HTTP transactions.
It enables modification of an HTTP request (GET, POST, HEAD).
HHE enables new headers to be added to the initial HTTP request or for a specific
string of the HTTP request to be replaced by another predefined string value.
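A toy illustration of header enrichment (the header name and value are hypothetical, not an Allot-defined field):

```python
def enrich(raw_request, header_name, header_value):
    """Insert an extra header line at the end of the HTTP request header block."""
    head, _, body = raw_request.partition("\r\n\r\n")
    return f"{head}\r\n{header_name}: {header_value}\r\n\r\n{body}"

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)
print(enrich(request, "X-Subscriber-ID", "abc123"))
```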
Captive Portal Redirection
• Find out more about Captive Portal Redirection in ACTE Course Appendices
The Captive Portal action is used to divert specific HTTP traffic to a specific URL
(portal).
This is usually done where the Enterprise administrator wishes to restrict a user’s
activities or make a notification.
For example, when trying to connect to an airport or hotel network, the user may be
redirected to a company portal.
For security reasons, the portal itself may use a secured connection (SSL/HTTPS).
In such scenarios, the administrator should set the Redirection Protocol to HTTPS.
Note that this means the portal itself will be served over HTTPS; HTTPS traffic cannot
be redirected, only HTTP. Up to 64 Captive Portals may be created.
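Conceptually, the redirection amounts to answering the intercepted HTTP request with a redirect to the portal, as in this simplified sketch (the portal URL is illustrative):

```python
def portal_redirect(portal_url):
    """Build a minimal HTTP 302 response that sends the client to the captive portal."""
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {portal_url}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

print(portal_redirect("https://www.portal.com/welcome"))
```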
Review Question
Connect the access control values on the left with their definitions on the right.
Review Question
Look at the Policy Table.
1. Is P2P important for this company?
2. What will happen to the Fallback VC in the EMEA Office Pipe in case of congestion?
To summarize this section of the module, have a look at this Policy table, which
implements the example from the previous slide.
Please discuss the following:
1. Is P2P Important for this company?
2. What will happen to Fallback VC in EMEA Office Pipe in case of congestion?
Review Question
Fill in the Output BW in the table. Setup: the Pipe maximum is 15Mbps; HTTP traffic is received at 20Mbps and FTP traffic at 10Mbps.
# | HTTP VC QoS | FTP VC QoS | HTTP Output BW | FTP Output BW
1 | Best Effort | Best Effort | 10Mbps | 5Mbps
2 | Priority 1 | Priority 1 | 7.5Mbps | 7.5Mbps
3 | Priority 4 | Priority 1 | 12Mbps | 3Mbps
4 | Priority 4, Min 5Mbps | Priority 1 | 13Mbps | 2Mbps
For each of the four different enhanced QoS settings, what bandwidth would you expect
to be allocated to the HTTP VC and the FTP VC in the Service Gateway in the picture?
Bear in mind that the Pipe in which these VCs reside is set to a maximum of 15Mbps,
HTTP traffic is being received at 20Mbps and FTP at 10Mbps.
Exercise
Actions