CP R81 ClusterXL AdminGuide
CLUSTERXL
R81
Administration Guide
[Classification: Protected]
Check Point Copyright Notice
© 2020 Check Point Software Technologies Ltd.
All rights reserved. This product and related documentation are protected by copyright and distributed
under licensing restricting their use, copying, distribution, and decompilation. No part of this product or
related documentation may be reproduced in any form or by any means without prior written authorization
of Check Point. While every precaution has been taken in the preparation of this book, Check Point
assumes no responsibility for errors or omissions. This publication and features described herein are
subject to change without notice.
TRADEMARKS:
Refer to the Copyright page for a list of our trademarks.
Refer to the Third Party copyright notices for a list of relevant copyrights and third-party licenses.
Important Information
Latest Software
We recommend that you install the most recent software release to stay up-to-date with the
latest functional improvements, stability fixes, security enhancements and protection
against new and evolving attacks.
Certifications
For third party independent certification of Check Point products, see the Check Point
Certifications page.
Feedback
Check Point is engaged in a continuous effort to improve its documentation.
Please help us by sending your comments.
Revision History
Date Description
Table of Contents
Glossary
Introduction to ClusterXL
The Need for Clusters
ClusterXL Solution
How ClusterXL Works
The Cluster Control Protocol
ClusterXL Requirements and Compatibility
Check Point Appliances and Open Servers
Supported Number of Cluster Members
Hardware Requirements for Cluster Members
Software Requirements for Cluster Members
VMAC Mode
Supported Topologies for Synchronization Network
Clock Synchronization in ClusterXL
IPv6 Support for ClusterXL
Synchronized Cluster Restrictions
High Availability and Load Sharing Modes in ClusterXL
Introduction to High Availability and Load Sharing modes
High Availability
Load Sharing
Example ClusterXL Topology
Example Diagram
Defining the Cluster Member IP Addresses
Defining the Cluster Virtual IP Addresses
Defining the Synchronization Network
Configuring Cluster Addresses on Different Subnets
ClusterXL Mode Considerations
Choosing High Availability, Load Sharing, or Active-Active mode
Considerations for the Load Sharing Mode
IP Address Migration
ClusterXL Modes
High Availability Mode
Load Sharing Modes
Load Sharing Multicast Mode
Load Sharing Unicast Mode
ClusterXL Mode Comparison
Cluster Failover
What is Failover?
When Does a Failover Occur?
What Happens When a Cluster Member Recovers?
How a Recovered Cluster Member Obtains the Security Policy
General Failover Limitations
Active-Active Mode in ClusterXL
Introduction
Configuring Active-Active mode
Dynamic Routing Failover
Limitations
Synchronizing Connections in the Cluster
The Synchronization Network
How State Synchronization Works
Configuring Services to Synchronize After a Delay
Configuring Services not to Synchronize
Sticky Connections
Introduction to Sticky Connections
VPN Tunnels with 3rd Party Peers and Load Sharing
Third-Party Gateways in Hub and Spoke VPN Deployments
Configuring a Third-Party Gateway in a Hub and Spoke VPN Deployment
Non-Sticky Connections
Non-Sticky Connection Example: TCP 3-Way Handshake
Synchronizing Non-Sticky Connections
Synchronizing Clusters on a Wide Area Network
Synchronized Cluster Restrictions
Configuring ClusterXL
Glossary
Active
State of a Cluster Member that is fully operational: (1) In ClusterXL, this applies to the
state of the Security Gateway component (2) In 3rd party / OPSEC cluster, this applies
to the state of the cluster State Synchronization mechanism.
Active-Active
A cluster mode (in R80.40 and higher versions), where cluster members are located in
different geographical areas (different sites, different cloud availability zones). This
mode supports the configuration of IP addresses from different subnets on all cluster
interfaces, including the Sync interfaces. Each cluster member inspects all traffic routed
to it and synchronizes the recorded connections to its peer cluster members. The traffic
is not balanced between the cluster members.
Active Up
ClusterXL in High Availability mode that was configured as Maintain current active
Cluster Member in the cluster object in SmartConsole: (1) If the current Active member
fails for some reason, or is rebooted (for example, Member_A), then failover occurs
between Cluster Members - another Standby member will be promoted to be Active (for
example, Member_B). (2) When the former Active member (Member_A) recovers from a
failure, or boots, the former Standby member (Member_B) remains in the Active state
(and Member_A assumes the Standby state).
Active(!)
In ClusterXL, state of the Active Cluster Member that suffers from a failure. A problem
was detected, but the Cluster Member still forwards packets, because it is the only
member in the cluster, or because there are no other Active members in the cluster. In
any other situation, the state of the member is Down. Possible states: ACTIVE(!),
ACTIVE(!F) - Cluster Member is in the freeze state, ACTIVE(!P) - This is the Pivot
Cluster Member in Load Sharing Unicast mode, ACTIVE(!FP) - This is the Pivot Cluster
Member in Load Sharing Unicast mode and it is in the freeze state.
Active/Active
See "Load Sharing".
Active/Standby
See "High Availability".
Administrator
A user with permissions to manage Check Point security products and the network
environment.
API
In computer programming, an application programming interface (API) is a set of
subroutine definitions, protocols, and tools for building application software. In general
terms, it is a set of clearly defined methods of communication between various software
components.
Appliance
A physical computer manufactured and distributed by Check Point.
ARP Forwarding
Forwarding of ARP Request and ARP Reply packets between Cluster Members by
encapsulating them in Cluster Control Protocol (CCP) packets. Introduced in R80.10
version. For details, see sk111956.
Backup
(1) In VRRP Cluster on Gaia OS - State of a Cluster Member that is ready to be
promoted to Master state (if Master member fails). (2) In VSX Cluster configured in
Virtual System Load Sharing mode with three or more Cluster Members - State of a
Virtual System on a third (and later) VSX Cluster Member. (3) A Cluster Member or
Virtual System in this state does not process any traffic passing through the cluster.
Blocking Mode
Cluster operation mode, in which Cluster Member does not forward any traffic (for
example, caused by a failure).
Bond
A virtual interface that contains (enslaves) two or more physical interfaces for
redundancy and load sharing. The physical interfaces share one IP address and one
MAC address. See "Link Aggregation".
Bonding
See "Link Aggregation".
Bridge Mode
A Security Gateway or Virtual System that works as a Layer 2 bridge device for easy
deployment in an existing topology.
CA
Certificate Authority. Issues certificates to gateways, users, or computers, which identify
themselves to connecting entities with a Distinguished Name, public key, and sometimes an IP
address. After certificate validation, entities can send encrypted data using the public
keys in the certificates.
CCP
See "Cluster Control Protocol".
Certificate
An electronic document that uses a digital signature to bind a cryptographic public key
to a specific identity. The identity can be an individual, organization, or software entity.
The certificate is used to authenticate one identity to another.
CGNAT
Carrier Grade NAT. Extending the traditional Hide NAT solution, CGNAT uses
improved port allocation techniques and a more efficient method for logging. A CGNAT
rule defines a range of original source IP addresses and a range of translated IP
addresses. Each IP address in the original range is automatically allocated a range of
translated source ports, based on the number of original IP addresses and the size of
the translated range. CGNAT port allocation is Stateless and is performed during policy
installation. For details, see sk120296.
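The stateless allocation described above can be illustrated with a short Python sketch. This is not Check Point code, and the exact allocation algorithm is internal; the sketch only shows the principle of splitting a translated port range into one contiguous block per original source IP:

```python
def cgnat_port_ranges(num_original_ips, port_low=1024, port_high=65535):
    """Illustrative stateless allocation: split the translated port space
    evenly, one contiguous block per original source IP address."""
    total = port_high - port_low + 1
    block = total // num_original_ips
    ranges = []
    for i in range(num_original_ips):
        start = port_low + i * block
        ranges.append((start, start + block - 1))
    return ranges

# Example: 16 original source IPs sharing one translated IP address.
# Each original IP gets a fixed, non-overlapping block of source ports.
ranges = cgnat_port_ranges(16)
```

Because the mapping depends only on the rule parameters, it can be computed once at policy installation, which is what makes the allocation stateless.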
Cluster
Two or more Security Gateways that work together in a redundant configuration - High
Availability, or Load Sharing.
Cluster Interface
An interface on a Cluster Member, whose Network Type was set as Cluster in
SmartConsole, in the cluster object. This interface is monitored by the cluster, and a
failure on this interface causes a cluster failover.
Cluster Member
A Security Gateway that is part of a cluster.
Cluster Mode
Configuration of Cluster Members to work in these redundant modes: (1) One Cluster
Member processes all the traffic - High Availability or VRRP mode (2) All traffic is
processed in parallel by all Cluster Members - Load Sharing.
Cluster Topology
Set of interfaces on all members of a cluster and their settings (Network Objective, IP
address/Net Mask, Topology, Anti-Spoofing, and so on).
ClusterXL
Cluster of Check Point Security Gateways that work together in a redundant
configuration. The ClusterXL both handles the traffic and performs State
Synchronization. These Check Point Security Gateways are installed on Gaia OS: (1)
ClusterXL supports up to 5 Cluster Members, (2) VRRP Cluster supports up to 2 Cluster
Members, (3) VSX VSLS cluster supports up to 13 Cluster Members. Note: In ClusterXL
Load Sharing mode, configuring more than 4 Cluster Members significantly decreases
the cluster performance due to the amount of Delta Sync traffic.
CoreXL
A performance-enhancing technology for Security Gateways on multi-core processing
platforms. Multiple Check Point Firewall instances are running in parallel on multiple
CPU cores.
CoreXL SND
Secure Network Distributer. Part of CoreXL that is responsible for: Processing incoming
traffic from the network interfaces; Securely accelerating authorized packets (if
SecureXL is enabled); Distributing non-accelerated packets between Firewall kernel
instances (SND maintains global dispatching table, which maps connections that were
assigned to CoreXL Firewall instances). Traffic distribution between CoreXL Firewall
instances is statically based on Source IP addresses, Destination IP addresses, and the
IP 'Protocol' type. The CoreXL SND does not really "touch" packets. The decision to
assign a connection to a particular FWK daemon is made on the first packet of the
connection, at a very high level, before anything else. Depending on the SecureXL
settings, in most cases SecureXL offloads the decryption calculations. However, in
some other cases, such as with Route-Based VPN, the FWK daemon performs them.
CPHA
General term in Check Point Cluster that stands for Check Point High Availability
(historic fact: the first release of ClusterXL supported only High Availability) that is used
only for internal references (for example, inside kernel debug) to designate ClusterXL
infrastructure.
CPUSE
Check Point Upgrade Service Engine for Gaia Operating System. With CPUSE, you
can automatically update Check Point products for the Gaia OS, and the Gaia OS itself.
For details, see sk92449.
Critical Device
Also known as a Problem Notification, or pnote. A special software device on each
Cluster Member, through which the critical aspects for cluster operation are monitored.
When the critical monitored component on a Cluster Member fails to report its state on
time, or when its state is reported as problematic, the state of that member is
immediately changed to Down. The complete list of the configured critical devices
(pnotes) is printed by the 'cphaprob -ia list' command or 'show cluster members pnotes
all' command.
DAIP Gateway
A Dynamically Assigned IP (DAIP) Security Gateway is a Security Gateway where the
IP address of the external interface is assigned dynamically by the ISP.
Data Type
A classification of data. The Firewall classifies incoming and outgoing traffic according
to Data Types, and enforces the Policy accordingly.
Database
The Check Point database includes all objects, including network objects, users,
services, servers, and protection profiles.
Dead
State reported by a Cluster Member when it goes out of the cluster (due to 'cphastop'
command (which is a part of 'cpstop'), or reboot).
Decision Function
A special cluster algorithm applied by each Cluster Member on the incoming traffic in
order to decide which Cluster Member should process the received packet. Each
Cluster Member maintains a table of hash values generated from the connection tuple
(source and destination IP addresses/ports, and protocol number).
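As an illustration only (the real ClusterXL decision function and its hash are internal to the product), the idea can be sketched in Python: hash the connection tuple and map the result deterministically to a member index, so that every member computes the same answer for the same connection:

```python
import hashlib

def decision_function(src_ip, dst_ip, src_port, dst_port, proto, num_members):
    """Sketch of a decision function: hash the connection tuple
    (source/destination IP addresses and ports, protocol number) and
    map it onto one of the cluster members. Illustrative only."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    # Reduce the first 4 bytes of the digest to a member index.
    return int.from_bytes(digest[:4], "big") % num_members
```

Because the function is deterministic, all members agree on which of them owns a given connection without exchanging any per-packet messages.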
Delta Sync
Synchronization of kernel tables between all working Cluster Members - exchange of
CCP packets that carry pieces of information about different connections and operations
that should be performed on these connections in relevant kernel tables. This Delta
Sync process is performed directly by the Check Point kernel. While Full Sync is being
performed, the Delta Sync updates are not processed, but are saved in kernel memory.
After Full Sync completes, the Delta Sync packets stored during the Full Sync phase
are applied in order of arrival.
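The interaction between Full Sync and Delta Sync described above can be modeled schematically in Python. This is a conceptual model, not the kernel implementation: Delta Sync updates that arrive while Full Sync is still running are buffered, then applied in order of arrival once the snapshot is in place:

```python
from collections import deque

class SyncState:
    """Conceptual model of a joining member's kernel table:
    buffer Delta Sync updates until Full Sync completes."""

    def __init__(self):
        self.table = {}            # the synchronized kernel table
        self.full_sync_done = False
        self.pending = deque()     # Delta Sync updates held in memory

    def on_delta_update(self, conn_id, record):
        if not self.full_sync_done:
            self.pending.append((conn_id, record))  # store, do not process
        else:
            self.table[conn_id] = record

    def on_full_sync_complete(self, snapshot):
        self.table = dict(snapshot)                 # apply the snapshot
        self.full_sync_done = True
        while self.pending:                         # replay by order of arrival
            conn_id, record = self.pending.popleft()
            self.table[conn_id] = record
```

Replaying the buffered updates in arrival order is what keeps the joining member's table consistent with the members that kept processing traffic during the Full Sync.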
Distributed Deployment
The Check Point Security Gateway and Security Management Server products are
deployed on different computers.
Domain
A network or a collection of networks related to an entity, such as a company, business
unit or geographical location.
Down
State of a Cluster Member during a failure when one of the Critical Devices reports its
state as "problem": In ClusterXL, applies to the state of the Security Gateway
component; in 3rd party / OPSEC cluster, applies to the state of the State
Synchronization mechanism. A Cluster Member in this state does not process any traffic
passing through the cluster.
Dying
State of a Cluster Member as assumed by peer members, if it did not report its state for
0.7 seconds.
Expert Mode
The name of the full command line shell that gives full system root permissions in the
Check Point Gaia operating system.
External Network
Computers and networks that are outside of the protected network.
External Users
Users defined on external servers. External users are not defined in the Security
Management Server database or on an LDAP server. External user profiles tell the
system how to identify and authenticate externally defined users.
Failback in Cluster
Also, Fallback. Recovery of a Cluster Member that suffered from a failure. The state of a
recovered Cluster Member is changed from Down to either Active, or Standby
(depending on Cluster Mode).
Failed Member
A Cluster Member that cannot send or accept traffic because of a hardware or software
problem.
Failover
Also, Fail-over. Transferring of a control over traffic (packet filtering) from a Cluster
Member that suffered a failure to another Cluster Member (based on internal cluster
algorithms).
Failure
A hardware or software problem that causes a Security Gateway to be unable to serve
as a Cluster Member (for example, one of the cluster interfaces has failed, or one of the
monitored daemons has crashed). A Cluster Member that suffered from a failure is
declared as failed, and its state is changed to Down (a physical interface is considered
Down only if all configured VLANs on that physical interface are Down).
Firewall
The software and hardware that protects a computer network by analyzing the incoming
and outgoing network traffic (packets).
Flapping
Consecutive changes in the state of either cluster interfaces (cluster interface flapping),
or Cluster Members (Cluster Member flapping). Such consecutive state changes
appear in 'Logs & Monitor' > 'Logs' (if, in SmartConsole > cluster object, the cluster
administrator set 'Track changes in the status of cluster members' to 'Log').
Forwarding
Process of transferring incoming traffic from one Cluster Member to another
Cluster Member for processing. There are two types of forwarding the incoming traffic
between Cluster Members - Packet forwarding and Chain forwarding. Also see
"Forwarding Layer in Cluster" and "ARP Forwarding in Cluster".
Forwarding Layer
The Forwarding Layer is a ClusterXL mechanism that allows a Cluster Member to pass
packets to peer Cluster Members, after they have been locally inspected by the firewall.
This feature allows connections to be opened from a Cluster Member to an external
host. Packets originated by Cluster Members are hidden behind the Cluster Virtual IP
address. Thus, a reply from an external host is sent to the cluster, and not directly to the
source Cluster Member. This can pose problems in the following situations: (1) The
cluster is working in High Availability mode, and the connection is opened from the
Standby Cluster Member. All packets from the external host are handled by the Active
Cluster Member, instead. (2) The cluster is working in a Load Sharing mode, and the
decision function has selected another Cluster Member to handle this connection. This
can happen since packets directed at a Cluster IP address are distributed between
Cluster Members as with any other connection. If a Cluster Member decides, upon the
completion of the firewall inspection process, that a packet is intended for another
Cluster Member, it can use the Forwarding Layer to hand the packet over to that Cluster
Member. In High Availability mode, packets are forwarded over a Synchronization
network directly to peer Cluster Members. It is important to use secured networks only,
as encrypted packets are decrypted during the inspection process, and are forwarded
as clear-text (unencrypted) data. In Load Sharing mode, packets are forwarded over a
regular traffic network. Packets that are sent on the Forwarding Layer use a special
source MAC address to inform the receiving Cluster Member that they have already
been inspected by another Cluster Member. Thus, the receiving Cluster Member can
safely hand over these packets to the local Operating System, without further inspection.
Full Sync
Process of full synchronization of applicable kernel tables by a Cluster Member from the
working Cluster Member(s) when it tries to join the existing cluster. This process is
meant to fetch a "snapshot" of the applicable kernel tables of the already Active Cluster
Member(s). Full Sync is performed during the initialization of Check Point software
(during boot process, the first time the Cluster Member runs policy installation, during
'cpstart', during 'cphastart'). Until the Full Sync process completes successfully, this
Cluster Member remains in the Down state, because until it is fully synchronized with
other Cluster Members, it cannot function as a Cluster Member. Meanwhile, the Delta
Sync packets continue to arrive, and the Cluster Member that tries to join the existing
cluster, stores them in the kernel memory until the Full Sync completes. The whole Full
Sync process is performed by fwd daemons on TCP port 256 over the Sync network (if it
fails over the Sync network, it tries the other cluster interfaces). The information is sent
by fwd daemons in chunks; each chunk must be acknowledged by the receiver before
the next chunk is sent. Also see "Delta Sync".
Gaia
Check Point security operating system that combines the strengths of both
SecurePlatform and IPSO operating systems.
Gaia Clish
The name of the default command line shell in Check Point Gaia operating system. This
is a restrictive shell (role-based administration controls the number of commands
available in the shell).
Gaia Portal
Web interface for Check Point Gaia operating system.
Geo Cluster
A High Availability cluster mode (in R81 and higher versions), where cluster members
are located in different geographical areas (different sites, different cloud availability
zones). This mode supports the configuration of IP addresses from different subnets on
all cluster interfaces, including the Sync interfaces. The Active cluster member inspects
all traffic routed to the cluster and synchronizes the recorded connections to its peer
cluster members. The traffic is not balanced between the cluster members. See "High
Availability".
HA not started
Output of the 'cphaprob <flag>' command or 'show cluster <option>' command on the
Cluster Member. This output means that Check Point clustering software is not started
on this Security Gateway (for example, this machine is not a part of a cluster, or
'cphastop' command was run, or some failure occurred that prevented the ClusterXL
product from starting correctly).
High Availability
A redundant cluster mode, where only one Cluster Member (Active member) processes
all the traffic, while other Cluster Members (Standby members) are ready to be promoted
to Active state if the current Active member fails. In the High Availability mode, the
Cluster Virtual IP address (that represents the cluster on that network) is associated: (1)
With physical MAC Address of Active member (2) With virtual MAC Address (see
sk50840). Acronym: HA.
Hotfix
A piece of software installed on top of the current software in order to fix some wrong or
undesired behavior.
HTU
Stands for "HA Time Unit". All internal time in ClusterXL is measured in HTUs (the
times in cluster debug also appear in HTUs). Formula in the Check Point software: 1
HTU = 10 x fwha_timer_base_res = 10 x 10 milliseconds = 100 ms.
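A small Python helper makes the conversion explicit. The constants follow the formula above; the 0.7-second example corresponds to the "Dying" timeout described elsewhere in this glossary:

```python
# Per the formula above: 1 HTU = 10 x fwha_timer_base_res = 10 x 10 ms = 100 ms
FWHA_TIMER_BASE_RES_MS = 10
HTU_MS = 10 * FWHA_TIMER_BASE_RES_MS  # 100 ms per HTU

def htu_to_ms(htu):
    """Convert HA Time Units to milliseconds."""
    return htu * HTU_MS

def ms_to_htu(ms):
    """Convert milliseconds to whole HA Time Units."""
    return ms // HTU_MS

# Example: the 0.7-second "Dying" timeout equals 7 HTUs.
```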
Hybrid
Starting in R80.20, on Security Gateways with 40 or more CPU cores, Software Blades
run in the user space (as 'fwk' processes). The Hybrid Mode refers to the state when you
upgrade Cluster Members from R80.10 (or below) to R80.20 (or above). The Hybrid
Mode is the state, in which the upgraded Cluster Members already run their Software
Blades in the user space (as fwk processes), while other Cluster Members still run their
Software Blades in the kernel space (represented by the fw_worker processes). In the
Hybrid Mode, Cluster Members are able to synchronize the required information.
ICA
Internal Certificate Authority. A component on Check Point Management Server that
issues certificates for authentication.
Init
State of a Cluster Member in the phase after the boot and until the Full Sync completes.
A Cluster Member in this state does not process any traffic passing through the cluster.
Internal Network
Computers and resources protected by the Firewall and accessed by authenticated
users.
IP Tracking
Collecting and saving of Source IP addresses and Source MAC addresses from
incoming IP packets during probing. IP tracking is useful for Cluster Members to
determine whether the network connectivity of the Cluster Member is acceptable.
IP Tracking Policy
Internal setting that controls which IP addresses should be tracked during IP tracking:
(1) Only IP addresses from the subnet of cluster VIP, or from subnet of physical cluster
interface (this is the default) (2) All IP addresses, also outside the cluster subnet.
IPv4
Internet Protocol Version 4 (see RFC 791). A 32-bit number - 4 sets of numbers, each
set can be from 0 - 255. For example, 192.168.2.1.
IPv6
Internet Protocol Version 6 (see RFC 2460 and RFC 3513). 128-bit number - 8 sets of
hexadecimal numbers, each set can be from 0 - ffff. For example,
FEDC:BA98:7654:3210:FEDC:BA98:7654:3210.
Link Aggregation
Technology that joins (aggregates) multiple physical interfaces together into one virtual
interface, known as a bond interface. Also known as Interface Bonding, or Interface
Teaming. This increases throughput beyond what a single connection could sustain,
and provides redundancy in case one of the links fails.
Load Sharing
Also, Load Balancing mode. A redundant cluster mode, where all Cluster Members
process all incoming traffic in parallel. See "Load Sharing Multicast Mode" and "Load
Sharing Unicast Mode". Acronym: LS.
Log
A record of an action that is done by a Software Blade.
Log Server
A dedicated Check Point computer that runs Check Point software to store and process
logs in Security Management Server or Multi-Domain Security Management
environment.
Management Interface
An interface on a Gaia computer, through which users connect to the Gaia Portal or CLI.
Also, an interface on a Gaia Security Gateway or Cluster Member, through which the
Management Server connects to the Security Gateway or Cluster Member.
Management Server
A Check Point Security Management Server or a Multi-Domain Server.
Master
State of a Cluster Member that processes all traffic in a cluster configured in VRRP mode.
Multi-Domain Server
A computer that runs Check Point software to host virtual Security Management Servers
called Domain Management Servers. Acronym: MDS.
Multi-Version Cluster
The Multi-Version Cluster (MVC) mechanism lets you synchronize connections
between cluster members that run different versions. This lets you upgrade to a newer
version without a loss in connectivity and lets you test the new version on some of the
cluster members before you decide to upgrade the rest of the cluster members.
MVC
See "Multi-Version Cluster".
Network Object
Logical representation of every part of corporate topology (physical machine, software
component, IP Address range, service, and so on).
Network Objective
Defines how the cluster will configure and monitor an interface - Cluster, Sync,
Cluster+Sync, Monitored Private, Non-Monitored Private. Configured in SmartConsole >
cluster object > 'Topology' pane > 'Network Objective'.
Non-Blocking Mode
Cluster operation mode, in which Cluster Member keeps forwarding all traffic.
Non-Monitored Interface
An interface on a Cluster Member, whose Network Type was set as Private in
SmartConsole, in cluster object.
Non-Pivot
A Cluster Member in the Unicast Load Sharing cluster that receives all packets from the
Pivot Cluster Member.
Non-Sticky Connection
A connection is called non-sticky if the reply packet returns through a different Cluster
Member than the original packet (for example, if the network administrator has configured
asymmetric routing). In Load Sharing mode, all Cluster Members are Active, and in
Static NAT and encrypted connections, the Source and Destination IP addresses
change. Therefore, Static NAT and encrypted connections through a Load Sharing
cluster may be non-sticky.
Open Server
A physical computer manufactured and distributed by a company, other than Check
Point.
Packet Selection
Distinguishing between different kinds of packets coming from the network, and
selecting which member should handle a specific packet (the Decision Function
mechanism): CCP packet from another member of this cluster; CCP packet from another
cluster, or from a Cluster Member with another version (usually an older version of CCP);
Packet destined directly to this member; Packet destined to another member of this
cluster; Packet intended to pass through this Cluster Member; ARP packets.
Pingable Host
Some host (that is, some IP address) that Cluster Members can ping during probing
mechanism. Pinging hosts in an interface's subnet is one of the health checks that
ClusterXL mechanism performs. This pingable host will allow the Cluster Members to
determine with more precision what has failed (which interface on which member). On
Sync network, usually, there are no hosts. In such case, if switch supports this, an IP
address should be assigned on the switch (for example, in the relevant VLAN). The IP
address of such a pingable host should be assigned per this formula: IP_of_pingable_
host = IP_of_physical_interface_on_member + ~10. Assigning the pingable host an IP
address that is higher than the IP addresses of the physical interfaces on the Cluster
Members gives the Cluster Members some time to perform the default health checks.
Example: the IP address of the physical interface on a given subnet on Member_A is
10.20.30.41; the IP address of the physical interface on that subnet on Member_B is
10.20.30.42; the IP address of the pingable host should be at least 10.20.30.52.
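The formula above can be expressed as a small Python helper. This is illustrative only; `pingable_host_ip` and its `offset` parameter are assumptions for the sketch, not Check Point tools:

```python
import ipaddress

def pingable_host_ip(member_interface_ips, offset=10):
    """Per the formula above: pingable host IP = member interface IP + ~10.
    Take the highest member interface IP so the result lies above all of
    the Cluster Members' physical interface addresses."""
    highest = max(ipaddress.IPv4Address(ip) for ip in member_interface_ips)
    return str(highest + offset)

# Example from the text: members at 10.20.30.41 and 10.20.30.42
suggested = pingable_host_ip(["10.20.30.41", "10.20.30.42"])
```

The suggested address must, of course, also be free on the subnet and reachable by all Cluster Members.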
Pivot
A Cluster Member in the Unicast Load Sharing cluster that receives all packets. Cluster
Virtual IP addresses are associated with Physical MAC Addresses of this Cluster
Member. This Pivot Cluster Member distributes the traffic between other Non-Pivot
Cluster Members.
Pnote
See "Critical Device".
Preconfigured Mode
Cluster Mode, where cluster membership is enabled on all prospective Cluster
Members, but no policy has yet been installed on any of them - none of them is actually
configured to be primary, secondary, and so on. The cluster cannot function if one
Cluster Member fails. In this scenario, the "preconfigured mode" takes effect. The
preconfigured mode also comes into effect when no policy is yet installed, right after the
Cluster Members come up after boot, or when running the 'cphaconf init' command.
Primary Up
ClusterXL in High Availability mode that was configured as Switch to higher priority
Cluster Member in the cluster object in SmartConsole: (1) Each Cluster Member is
given a priority (SmartConsole > cluster object > 'Cluster Members' pane). Cluster
Member with the highest priority appears at the top of the table, and Cluster Member
with the lowest priority appears at the bottom of the table. (2) The Cluster Member with
the highest priority will assume the Active state. (3) If the current Active Cluster Member
with the highest priority (for example, Member_A), fails for some reason, or is rebooted,
then failover occurs between Cluster Members. The Cluster Member with the next
highest priority will be promoted to be Active (for example, Member_B). (4) When the
Cluster Member with the highest priority (Member_A) recovers from a failure, or boots,
then additional failover occurs between Cluster Members. The Cluster Member with the
highest priority (Member_A) will be promoted to Active state (and Member_B will return
to Standby state).
Private Interface
An interface on a Cluster Member, whose Network Type was set as 'Private' in
SmartConsole, in the cluster object. This interface is not monitored by the cluster, and a
failure on this interface will not cause any changes in the Cluster Member's state.
Probing
If a Cluster Member fails to receive the status of another member (does not receive CCP
packets from that member) on a given segment, the Cluster Member probes that segment
in an attempt to elicit a response. The purpose of such probes is to detect the nature of
possible interface failures, and to determine which module has the problem. The
outcome of this probe determines what action is taken next (changing the state of an
interface, or of a Cluster Member).
Problem Notification
See "Critical Device".
Ready
State of a Cluster Member after initialization and before promotion to the next
required state - Active / Standby / VRRP Master / VRRP Backup (depending on the
Cluster Mode). A Cluster Member in this state does not process any traffic passing
through the cluster. A member can be stuck in this state due to several reasons - see sk42096.
Rule
A set of traffic parameters and other conditions in a Rule Base that cause specified
actions to be taken for a communication session.
Rule Base
Also Rulebase. All rules configured in a given Security Policy.
SecureXL
Check Point product that accelerates IPv4 and IPv6 traffic. Installed on Security
Gateways for significant performance improvements.
Security Gateway
A computer that runs Check Point software to inspect traffic and enforce Security
Policies for connected network resources.
Security Policy
A collection of rules that control network traffic and enforce organization guidelines for
data protection and access to resources with packet inspection.
Selection
The packet selection mechanism is one of the central and most important components
of the ClusterXL product and of the State Synchronization infrastructure for 3rd party
clustering solutions. Its main purpose is to decide (select) correctly what must be done
with the incoming and outgoing traffic on the Cluster Member. (1) In ClusterXL, the
packet is selected by Cluster Member(s) depending on the cluster mode: in HA modes -
by the Active member; in LS Unicast mode - by the Pivot member; in LS Multicast mode -
by all members. The Cluster Member then applies the Decision Function (and the
Cluster Correction Layer). (2) In a 3rd party / OPSEC cluster, the 3rd party software
selects the packet, and the Check Point software just inspects it (and performs State
Synchronization).
SIC
Secure Internal Communication. The Check Point proprietary mechanism with which
Check Point computers that run Check Point software authenticate each other over
SSL, for secure communication. This authentication is based on the certificates issued
by the ICA on a Check Point Management Server.
Single Sign-On
A property of access control of multiple related, yet independent, software systems. With
this property, a user logs in with a single ID and password to gain access to a
connected system or systems without using different usernames or passwords, or in
some configurations seamlessly sign on at each system. This is typically accomplished
using the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases
on (directory) servers. Acronym: SSO.
SmartConsole
A Check Point GUI application used to manage Security Policies, monitor products and
events, install updates, provision new devices and appliances, and manage a multi-
domain environment and each domain.
SmartDashboard
A legacy Check Point GUI client used to create and manage the security settings in
R77.30 and lower versions.
SmartUpdate
A legacy Check Point GUI client used to manage licenses and contracts.
Software Blade
A software blade is a security solution based on specific business needs. Each blade is
independent, modular and centrally managed. To extend security, additional blades can
be quickly added.
SSO
See "Single Sign-On".
Standalone
A Check Point computer, on which both the Security Gateway and Security
Management Server products are installed and configured.
Standby
State of a Cluster Member that is ready to be promoted to Active state (if the current
Active Cluster Member fails). Applies only to ClusterXL High Availability Mode.
State Synchronization
Technology that synchronizes the relevant information about the current connections
(stored in various kernel tables on Check Point Security Gateways) among all Cluster
Members over Synchronization Network. Due to State Synchronization, the current
connections are not cut off during cluster failover.
Sticky Connection
A connection is called sticky if a single Cluster Member handles all of its packets (in
High Availability mode, all packets reach the Active Cluster Member, so all connections
are sticky).
Subscribers
User Space processes that are made aware of the current state of the ClusterXL state
machine and other clustering configuration parameters. A list of such subscribers can
be obtained by running the 'cphaconf debug_data' command (see sk31499).
Sync Interface
Also, Secured Interface, Trusted Interface. An interface on a Cluster Member, whose
Network Type is set to Sync or Cluster+Sync in the cluster object in SmartConsole. This
interface is monitored by the cluster, and a failure on this interface causes cluster
failover. This interface is used for State Synchronization between Cluster Members.
The use of more than one Sync Interface for redundancy is not supported, because the
CPU load would increase significantly due to duplicate tasks performed by all
configured Synchronization Networks. See sk92804.
Synchronization Network
Also, Sync Network, Secured Network, Trusted Network. A set of interfaces on Cluster
Members that were configured as interfaces over which State Synchronization
information is passed (as Delta Sync packets). The use of more than one
Synchronization Network for redundancy is not supported, because the CPU load would
increase significantly due to duplicate tasks performed by all configured
Synchronization Networks. See sk92804.
Traffic
Flow of data between network devices.
Users
Personnel authorized to use network resources and applications.
VLAN
Virtual Local Area Network. A virtual network to which open servers or appliances are
logically connected, even when they are not physically connected to the same network.
VLAN Trunk
A connection between two switches that contains multiple VLANs.
VMAC
Virtual MAC address. When this feature is enabled on Cluster Members, all Cluster
Members in High Availability mode and Load Sharing Unicast mode associate the
same Virtual MAC address with the Virtual IP address. This avoids issues that occur
when Gratuitous ARP packets sent by the cluster during failover are not integrated into
the ARP cache tables on switches surrounding the cluster. See sk50840.
VSX
Virtual System Extension. Check Point virtual networking solution, hosted on a
computer or cluster with virtual abstractions of Check Point Security Gateways and
other network devices. These Virtual Devices provide the same functionality as their
physical counterparts.
VSX Gateway
Physical server that hosts VSX virtual networks, including all Virtual Devices that
provide the functionality of physical network devices. It holds at least one Virtual
System, which is called VS0.
Introduction to ClusterXL
The Need for Clusters
Security Gateways and VPN connections are business critical devices. The failure of a Security Gateway or
VPN connection can result in the loss of active connections and access to critical data. The Security
Gateway between the organization and the world must remain open under all circumstances.
ClusterXL Solution
ClusterXL is a Check Point software-based cluster solution for Security Gateway redundancy and Load
Sharing. A ClusterXL Security Cluster contains identical Check Point Security Gateways.
n A High Availability Security Cluster ensures Security Gateway and VPN connection redundancy by
providing transparent failover to a backup Security Gateway in the event of failure.
n A Load Sharing Security Cluster provides reliability and also increases performance, as all members
are active.
(Diagram legend: 1 - Internal network; 5 - Internet)
ClusterXL uses virtual IP addresses for the cluster itself and unique physical IP and MAC addresses for the
Cluster Members. Virtual IP addresses do not belong to physical interfaces.
Note - This guide contains information only for Security Gateway clusters. For
additional information about the use of ClusterXL with VSX, see the R81 VSX
Administration Guide.
Important - There is no need to add an explicit rule to the Security Policy Rule Base that
accepts CCP packets.
Best Practice - To avoid unexpected failovers due to issues with CCP packets on
cluster interfaces, we strongly recommend pairing only identical physical interfaces as
cluster interfaces - even when connecting the Cluster Members via a switch.
For example:
n Intel 82598EB on Member_A with Intel 82598EB on Member_B
n Broadcom NeXtreme on Member_A with Broadcom NeXtreme on Member_B
n SecureXL status on all Cluster Members must be the same (either enabled, or disabled)
n Number of CoreXL Firewall instances on all Cluster Members must be the same
Notes:
l A Cluster Member with a greater number of CoreXL Firewall instances changes
its state to Down.
VMAC Mode
When ClusterXL is configured in High Availability mode or Load Sharing Unicast mode (not Multicast), a
single Cluster Member is associated with the Cluster Virtual IP address. In a High Availability environment,
the single member is the Active member. In a Load Sharing environment, the single member is the Pivot.
After failover, the new Active member (or the new Pivot member) broadcasts a series of
Gratuitous ARP Requests (GARPs). The GARPs associate the Virtual IP address of the
cluster with the physical MAC address of the new Active member or the new Pivot.
When this happens:
n A member with a large number of Static NAT entries can transmit too many GARPs
Switches may not integrate these GARP updates quickly enough into their ARP tables. Switches
continue to send traffic to the physical MAC address of the member that failed. This results in traffic
outage until the switches have fully updated ARP cache tables.
n Network components, such as VoIP phones, ignore GARPs
These components continue to send traffic to the MAC address of the failed member.
To minimize possible traffic outage during a fail-over, configure the cluster to use a virtual MAC address
(VMAC).
By enabling Virtual MAC in ClusterXL High Availability mode or Load Sharing Unicast mode, all
Cluster Members associate the same Virtual MAC address with all Cluster Virtual Interfaces and
the Virtual IP address. In Virtual MAC mode, each Cluster Member keeps its real MAC address,
and the cluster advertises the Virtual MAC address (through GARP Requests) in addition to it.
For local connections and sync connections, the real physical MAC address of each Cluster Member is still
associated with its real IP address.
Note - Traffic originating from the Cluster Members will be sent with the VMAC Source address.
You can enable VMAC in SmartConsole, or on the command line. See sk50840.
Failover time in a cluster with VMAC mode enabled is shorter than failover in a cluster that uses
physical MAC addresses.
On each Cluster Member, set the same value for the global kernel parameter fwha_vmac_global_
param_enabled.
1. Connect to the command line on each Cluster Member.
2. Log in to the Expert mode.
3. Get the current value of this kernel parameter. Run:
4. Set the new value for this kernel parameter temporarily (does not survive reboot). Run:
n To enable VMAC mode:
5. Make sure the state of the VMAC mode was changed. Run on each Cluster Member:
n In Gaia Clish:
cphaprob -a if
When VMAC mode is enabled, output of this command shows the VMAC address of each virtual
cluster interface.
6. To set the new value for this kernel parameter permanently:
Follow the instructions in sk26202 to add this line to the
$FWDIR/boot/modules/fwkern.conf file:
fwha_vmac_global_param_enabled=<value>
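As an illustration of why VMAC mode shortens failover, the following Python sketch (not Check Point code; the member MAC and VMAC values are invented for the example) models the ARP entry a neighboring switch holds for the cluster Virtual IP address:

```python
# Sketch: why VMAC mode shortens failover.
# Model a switch's ARP cache entry that maps the cluster VIP to a MAC address.
# All MAC values below are illustrative assumptions, not real addresses.

VIP = "192.168.10.100"
MEMBER_MACS = {"Member1": "00:1c:7f:00:00:01", "Member2": "00:1c:7f:00:00:02"}
VMAC = "00:1c:7f:0a:0a:0a"  # one Virtual MAC shared by all members

def arp_entry(active_member, vmac_enabled):
    """MAC address the cluster advertises (via GARP) for the VIP."""
    return VMAC if vmac_enabled else MEMBER_MACS[active_member]

# Without VMAC: failover changes the advertised MAC, so surrounding
# switches must refresh their ARP caches before traffic reaches the
# new Active member.
print(arp_entry("Member1", False) != arp_entry("Member2", False))  # True

# With VMAC: the advertised MAC is identical on both members, so stale
# ARP entries on surrounding switches cannot black-hole the traffic.
print(arp_entry("Member1", True) == arp_entry("Member2", True))  # True
```

The sketch shows the core point: with VMAC enabled, the VIP-to-MAC binding never changes across a failover, so no ARP cache update is needed.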
Topology 2
Additional information
If a Cluster Member does not receive CCP packets from other Cluster Members (for
example, a cable is disconnected between switches), it sends a dedicated CCP packet
to all other Cluster Members.
This dedicated CCP packet includes a map of all available bond slave interfaces on that
Cluster Member.
When other Cluster Members receive this dedicated CCP packet, they:
a. Compare the received map of available bond slave interfaces to the state of their
own bond slave interfaces.
b. Perform bond internal failover accordingly.
n Fail over to a specific selected slave interface instead of iterating over all available slave
interfaces.
Limitations:
n IPv6 is not supported for Load Sharing clusters.
n You cannot define IPv6 address for synchronization interfaces.
Note - ClusterXL failover event detection is based on IPv4 probing. During state
transition, the IPv4 driver instructs the IPv6 driver to reestablish IPv6 network
connectivity to the HA cluster.
High Availability
In a High Availability cluster, only one member is active (Active/Standby operation). If the
Active Cluster Member becomes unavailable, all connections are redirected to a designated standby
without interruption. In a synchronized cluster, the Standby Cluster Members are updated with the state of
the connections of the Active Cluster Member.
In a High Availability cluster, each member is assigned a priority. The highest priority member serves as
the Security Gateway in normal circumstances. If this member fails, control is passed to the next highest
priority member. If that member fails, control is passed to the next member, and so on.
Upon Security Gateway recovery, you can maintain the current Active Security Gateway (Active Up), or
change to the highest priority Security Gateway (Primary Up).
ClusterXL High Availability mode supports both IPv4 and IPv6.
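The priority-based promotion and the Active Up / Primary Up recovery settings described above can be sketched as follows (an illustrative model, not the actual ClusterXL implementation; member names and priority values are assumptions):

```python
# Sketch of High Availability member selection (illustrative only).

def choose_active(priorities, up, current_active, primary_up):
    """priorities: member -> priority (lower number = higher priority).
    up: set of members that are currently operational.
    current_active: the member that was Active before this evaluation.
    primary_up: True  = always prefer the highest-priority member;
                False = "Active Up": keep the current Active if it is up."""
    if not up:
        return None
    if not primary_up and current_active in up:
        return current_active              # Active Up: no switch-back on recovery
    return min(up, key=lambda m: priorities[m])  # highest priority wins

prio = {"Member1": 1, "Member2": 2}

# Member1 fails: Member2 is promoted in either mode.
print(choose_active(prio, {"Member2"}, "Member1", True))              # Member1 down -> Member2

# Member1 recovers: Primary Up switches back, Active Up does not.
print(choose_active(prio, {"Member1", "Member2"}, "Member2", True))   # Member1
print(choose_active(prio, {"Member1", "Member2"}, "Member2", False))  # Member2
```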
Load Sharing
ClusterXL Load Sharing distributes traffic within a cluster so that the total throughput of multiple members
is increased. In Load Sharing configurations, all functioning members in the cluster are active, and handle
network traffic (Active/Active operation).
If any member in a cluster becomes unreachable, transparent failover occurs to the remaining operational
members in the cluster, thus providing High Availability. All connections are shared between the remaining
Security Gateways without interruption.
ClusterXL Load Sharing modes do not support IPv6.
Example Diagram
The following diagram illustrates a two-member ClusterXL cluster, showing the cluster Virtual IP addresses
and the members' physical IP addresses.
This sample deployment is used in many of the examples presented in this chapter.
(Diagram legend: 1 - Internal network; 6 - Internet)
Each Cluster Member has three interfaces: one external interface, one internal interface, and one for
synchronization. Cluster Member interfaces facing in each direction are connected via a hub or switch.
All Cluster Member interfaces facing the same direction must be in the same network. For example, there
must not be a router between Cluster Members.
The Management Server can be located anywhere, and connection should be established to either the
internal or external cluster IP addresses.
These sections present ClusterXL configuration concepts shown in the example.
Note - In these examples, RFC 1918 private addresses in the range 192.168.0.0 to
192.168.255.255 are treated as public IP addresses.
Note - This release presents an option to use only two interfaces per member, one
external and one internal, and to run synchronization over the internal interface. We do
not recommend this configuration. It should be used for backup only. (See
"Synchronizing Connections in the Cluster" on page 70.)
IP Address Migration
If you wish to provide High Availability or Load Sharing to an existing Security Gateway configuration, we
recommend taking the existing IP addresses from the Active Security Gateway and making these the
Cluster Virtual IP addresses, when feasible. Doing so lets you avoid altering the current IPsec endpoint
identities, and in many cases keeps Hide NAT configurations the same.
ClusterXL Modes
ClusterXL has several working modes.
This section briefly describes each mode and its relative advantages and disadvantages.
n High Availability Mode
n Load Sharing Multicast Mode
n Load Sharing Unicast Mode
n Active-Active Mode (see "Active-Active Mode in ClusterXL" on page 62)
Note - Many examples in this section refer to the sample deployment shown in the
"Example ClusterXL Topology" on page 48.
Example
This scenario describes a connection from a Client computer on the external network to a Web server
behind the Cluster (on the internal network).
The cluster of two members is configured in High Availability mode.
Example topology:
[Client on]
[external network]
{IP 192.168.10.78/24}
{DG 192.168.10.100}
|
|
{VIP 192.168.10.100/24}
/ \
| |
{IP 192.168.10.1/24} {IP 192.168.10.2/24}
| |
{Active} {Standby}
[Member1]-----sync-----[Member2]
| |
{IP 10.10.0.1/24} {IP 10.10.0.2/24}
| |
\ /
{VIP 10.10.0.100/24}
|
|
{DG 10.10.0.100}
{IP 10.10.0.34/24}
[Web server on]
[internal network]
Chain of events:
1. The user tries to connect from his Client computer 192.168.10.78 to the Web server 10.10.0.34.
2. The Default Gateway on Client computer is 192.168.10.100 (the cluster Virtual IP address).
3. The Client computer issues an ARP Request for IP 192.168.10.100.
4. The Active Cluster Member (Member1) handles the ARP Request for IP 192.168.10.100.
5. The Active Cluster Member sends the ARP Reply with the MAC address of the external interface,
on which the IP address 192.168.10.1 is configured.
6. The Client computer sends the HTTP request packet to the Active Cluster Member - to the VIP
address 192.168.10.100 and MAC address of the corresponding external interface.
7. The Active Cluster Member handles the HTTP request packet.
8. The Active Cluster Member sends the HTTP request packet to the Web server 10.10.0.34.
9. The Web server handles the HTTP request packet.
10. The Web server generates the HTTP response packet.
11. The Default Gateway on Web server computer is 10.10.0.100 (the cluster Virtual IP address).
12. The Web server issues an ARP Request for IP 10.10.0.100.
13. The Active Cluster Member handles the ARP Request for IP 10.10.0.100.
14. The Active Cluster Member sends the ARP Reply with the MAC address of the internal interface,
on which the IP address 10.10.0.1 is configured.
15. The Web server sends the HTTP response packet to the Active Cluster Member - to VIP address
10.10.0.100 and MAC address of the corresponding internal interface.
16. The Active Cluster Member handles the HTTP response packet.
17. The Active Cluster Member sends the HTTP response packet to the Client computer
192.168.10.78.
18. From now on, all traffic between the Client computer and the Web server is routed through the
Active Cluster Member (Member1).
19. If a failure occurs on the current Active Cluster Member (Member1), the cluster fails over.
20. The Standby Cluster Member (Member2) assumes the role of the Active Cluster Member.
21. The Standby Cluster Member sends Gratuitous ARP Requests to both the 192.168.10.x and the
10.10.0.x networks.
These GARP Requests associate the Cluster Virtual IP addresses with the MAC addresses of the
physical interfaces on the new Active Cluster Member (former Standby Cluster Member):
n Cluster VIP address 192.168.10.100 and MAC address of the corresponding external
interface, on which the IP address 192.168.10.2 is configured.
n Cluster VIP address 10.10.0.100 and MAC address of the corresponding internal
interface, on which the IP address 10.10.0.2 is configured.
22. From now on, all traffic between the Client computer and the Web server is routed through the new
Active Cluster Member (Member2).
23. The former Active member (Member1) is now considered to be "down". Upon the recovery of a
former Active Cluster Member, the role of the Active Cluster Member may or may not be switched
back to that Cluster Member, depending on the cluster object configuration.
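Steps 20-22 above can be modeled as a small simulation (illustrative only; the MAC values are invented) showing how the Gratuitous ARP Requests rebind each cluster Virtual IP address in neighboring ARP caches:

```python
# Sketch of failover GARP behavior (illustrative, not Check Point code).
# Model the ARP cache seen by neighbors while Member1 is Active.

arp_cache = {
    "192.168.10.100": "aa:aa:aa:aa:aa:01",  # external VIP -> Member1 ext MAC
    "10.10.0.100":    "bb:bb:bb:bb:bb:01",  # internal VIP -> Member1 int MAC
}

member2_macs = {
    "192.168.10.100": "aa:aa:aa:aa:aa:02",  # Member2 external interface
    "10.10.0.100":    "bb:bb:bb:bb:bb:02",  # Member2 internal interface
}

def send_garps(cache, vip_to_mac):
    """New Active member advertises its own MAC for every cluster VIP;
    neighbors overwrite their stale entries."""
    for vip, mac in vip_to_mac.items():
        cache[vip] = mac

# Failover: Member2 becomes Active and sends GARPs on both networks.
send_garps(arp_cache, member2_macs)
print(arp_cache["192.168.10.100"])  # aa:aa:aa:aa:aa:02 - traffic now reaches Member2
```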
Example
This scenario describes a user connecting from the Internet to a Web server behind the cluster that is
configured in Load Sharing Multicast mode.
1. The user requests a connection from 192.168.10.78 (his computer) to 10.10.0.34 (the Web
server).
2. A router on the 192.168.10.x network recognizes 192.168.10.100 (the cluster Virtual IP address)
as the default gateway to the 10.10.0.x network.
3. The router issues an ARP Request for IP 192.168.10.100.
4. One of the Active Cluster Members processes the ARP Request, and responds with the Multicast
MAC assigned to the cluster Virtual IP address of 192.168.10.100.
5. When the Web server responds to the user requests, it recognizes 10.10.0.100 as its default
gateway to the Internet.
6. The Web server issues an ARP Request for IP 10.10.0.100.
7. One of the Active members processes the ARP Request, and responds with the Multicast MAC
address assigned to the cluster Virtual IP address of 10.10.0.100.
8. All packets sent between the user and the Web server reach every Cluster Member, and each
member decides whether to process each packet.
9. When a cluster failover occurs, one of the Cluster Members goes down. However, traffic still
reaches all of the Active Cluster Members. Therefore, there is no need to make changes in the
network ARP routing. The only thing that changes is the cluster decision function, which takes
into account the new state of the Cluster Members.
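Step 8 above relies on a decision function that every member evaluates identically. A minimal sketch of such a function follows (a hash over the connection tuple; this is an illustration under assumed member names, not the actual ClusterXL algorithm):

```python
# Sketch of a Load Sharing Multicast decision function (illustrative only).
import hashlib

MEMBERS = ["Member1", "Member2"]

def owner(conn, active_members):
    """Hash the connection 5-tuple so all members agree on a single owner."""
    digest = hashlib.sha256(repr(conn).encode()).digest()
    return active_members[digest[0] % len(active_members)]

conn = ("192.168.10.78", 33000, "10.10.0.34", 80, "tcp")

# Every member evaluates the same function and reaches the same verdict,
# so exactly one member processes the packet.
decisions = [owner(conn, MEMBERS) == m for m in MEMBERS]
print(decisions.count(True))  # 1

# After failover, the function is re-evaluated over the surviving members.
print(owner(conn, ["Member2"]))  # Member2
```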
Example
In this scenario, we use a Load Sharing Unicast cluster as the Security Gateway between the end user
computer and the Web server.
1. The user requests a connection from 192.168.10.78 (his computer) to 10.10.0.34 (the Web
server).
2. A router on the 192.168.10.x network recognizes 192.168.10.100 (the cluster Virtual IP address)
as the default gateway to the 10.10.0.x network.
3. The router issues an ARP Request for IP 192.168.10.100.
4. The Pivot Cluster Member handles the ARP Request, and responds with the MAC address that
corresponds to its own unique IP address of 192.168.10.1.
5. When the Web server responds to the user requests, it recognizes 10.10.0.100 as its default
gateway to the Internet.
6. The Web server issues an ARP Request for IP 10.10.0.100.
7. The Pivot Cluster Member handles the ARP Request, and responds with the MAC address that
corresponds to its own unique IP address of 10.10.0.1.
8. The user request packet reaches the Pivot Cluster Member on interface 192.168.10.1.
9. The Pivot Cluster Member decides that the second non-Pivot Cluster Member should handle this
packet, and forwards it to 192.168.10.2.
10. The second Cluster Member recognizes the packet as a forwarded packet, and handles it.
11. Further packets are processed by either the Pivot Cluster Member, or forwarded and processed
by the non-Pivot Cluster Member.
12. When a failover occurs from the current Pivot Cluster Member, the second Cluster Member
assumes the role of Pivot.
13. The new Pivot member sends Gratuitous ARP Requests to both the 192.168.10.x and the
10.10.0.x networks. These GARP Requests associate the cluster Virtual IP address of
192.168.10.100 with the MAC address that corresponds to the unique IP address of
192.168.10.2, and the cluster Virtual IP address of 10.10.0.100 with the MAC address that
corresponds to the unique IP address of 10.10.0.2.
14. Traffic sent to the cluster is now received by the new Pivot Cluster Member, and processed by it
(as it is currently the only Active Cluster Member).
15. When the former Pivot Cluster Member recovers, it re-assumes the role of Pivot, by associating
the cluster Virtual IP addresses with its own unique MAC addresses.
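The Pivot's distribution decision in steps 9-11 can be sketched as follows (illustrative only; the hash-based assignment is an assumption for the sketch, not the documented Pivot algorithm):

```python
# Sketch of Load Sharing Unicast: the Pivot receives all traffic and
# either handles a packet itself or forwards it to a non-Pivot member,
# keeping each connection on a single member. Illustrative only.
import hashlib

def assign_member(conn, members):
    """Pivot's distribution decision: pick one member per connection."""
    digest = hashlib.sha256(repr(conn).encode()).digest()
    return members[digest[0] % len(members)]

members = ["Pivot", "NonPivot"]
conn = ("192.168.10.78", 33000, "10.10.0.34", 80, "tcp")

handler = assign_member(conn, members)
if handler == "Pivot":
    print("Pivot processes the packet itself")
else:
    print("Pivot forwards the packet to", handler)

# The same connection always maps to the same member (per-flow stickiness),
# and if the Pivot fails, the surviving member handles everything.
assert assign_member(conn, members) == handler
assert assign_member(conn, ["NonPivot"]) == "NonPivot"
```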
Feature                          High          Load Sharing      Load Sharing   Active-
                                 Availability  Multicast         Unicast        Active
Hardware Support                 All routers   Not all routers   All routers    All routers
                                               are supported
How cluster answers ARP          Unicast       Unicast           Unicast        N/A
requests for a MAC address
Cluster Failover
What is Failover?
Failover is a cluster redundancy operation that automatically occurs if a Cluster Member is not functional.
When this occurs, other Cluster Members take over for the failed Cluster Member.
In the High Availability mode:
n If the Active Cluster Member detects that it cannot function as a Cluster Member, it notifies the peer
Standby Cluster Members that it must go down. One of the Standby Cluster Members (with the next
highest priority) will promote itself to the Active state.
n If one of the Standby Cluster Members stops receiving Cluster Control Protocol (CCP) packets from
the current Active Cluster Member, that Standby Cluster Member can assume that the current
Active Cluster Member failed. As a result, one of the Standby Cluster Members (with the next
highest priority) will promote itself to the Active state.
n If you do not use State Synchronization in the cluster, existing connections are interrupted when
cluster failover occurs.
In Load Sharing modes:
n If a Cluster Member detects that it cannot function as a Cluster Member, it notifies the peer Cluster
Members that it must go down. Traffic load will be redistributed between the working Cluster
Members.
n If the Cluster Members stop receiving Cluster Control Protocol (CCP) packets from one of their peer
Cluster Members, those working Cluster Members can assume that their peer Cluster Member failed.
As a result, traffic load will be redistributed between the working Cluster Members.
n Because by design, all Cluster Members are always synchronized, current connections are not
interrupted when cluster failover occurs.
To tell each Cluster Member that the other Cluster Members are alive and functioning, the ClusterXL
Cluster Control Protocol (CCP) maintains a heartbeat between Cluster Members. If no CCP packets are
received from a Cluster Member after a predefined time, it is assumed that the Cluster Member is down.
As a result, cluster failover can occur.
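The heartbeat timeout described above can be sketched as follows (the dead-interval value is an assumption for illustration, not the real ClusterXL timer):

```python
# Sketch of CCP heartbeat dead-peer detection (illustrative only).

DEAD_INTERVAL = 3.0  # seconds without CCP before a peer is presumed down
                     # (an assumed value for this sketch, not the real timer)

def peers_presumed_down(last_ccp_seen, now):
    """last_ccp_seen: member -> timestamp of its most recent CCP packet.
    Returns the set of peers whose heartbeat has expired."""
    return {m for m, t in last_ccp_seen.items() if now - t > DEAD_INTERVAL}

last_seen = {"Member1": 10.0, "Member2": 12.5}
print(peers_presumed_down(last_seen, now=13.0))  # set() - both alive
print(peers_presumed_down(last_seen, now=14.0))  # {'Member1'} - failover starts
```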
Note that more than one Cluster Member may encounter a problem that results in a cluster failover
event. In cases where all Cluster Members encounter such problems, ClusterXL tries to choose a single
Cluster Member to continue operating. The state of the chosen member is reported as Active(!). This
situation lasts until another Cluster Member fully recovers. For example, if a cross cable connecting the
sync interfaces on Cluster Members malfunctions, both Cluster Members detect an interface problem.
One of them changes to the Down state, and the other to the Active(!) state.
Important:
n You can configure the Active-Active mode only on Management Server R80.40
(and higher) and only on ClusterXL R80.40 (and higher).
n The Active-Active mode does not provide Load Sharing of the traffic.
The administrator must monitor the load on each Cluster Member (see "Monitoring
and Troubleshooting Clusters" on page 197).
n Cluster Control Protocol (CCP) Encryption must be enabled, which is the default
(see "Configuring the Cluster Control Protocol (CCP) Settings" on page 188).
n It is possible to configure the Cluster Control Protocol (CCP) to work on Layer 2.
n It is possible to enable Dynamic Routing on each Cluster Member, so they
become routers in the applicable Area or Autonomous System on the site.
In this case, you must enable the Bidirectional Forwarding Detection (BFD - ip-
reachability-detection) in the dynamic routing protocol on each cluster interface
and on the cluster sync interface (Known Limitation PMTR-41292).
2 Optional: On each Cluster Member, enable Dynamic Routing, so they become routers in the
applicable Area or Autonomous System on the site.
Important - In this case, you must enable the Bidirectional Forwarding Detection (BFD
- ip-reachability-detection) in the dynamic routing protocol on each cluster interface
and on the cluster sync interface (Known Limitation PMTR-41292).
See the R81 Gaia Advanced Routing Administration Guide.
n From the top toolbar, click New > Cluster > Cluster.
n In the top left corner, click Objects menu > More object types > Network Object >
Gateways and Servers > Cluster > New Cluster.
n In the top right corner, click Objects Pane > New > More > Network Object >
Gateways and Servers > Cluster > Cluster.
In the Check Point Security Gateway Cluster Creation window, you must click Classic
Mode.
a. In the IPv4 Address field, you must enter the 0.0.0.0 address.
b. On the Network Security tab, you must clear the IPsec VPN.
Step Instructions
a. In the Select the cluster mode and configuration section, select Active-Active.
b. In the Tracking section, select the applicable option.
c. Optional: In the Advanced Settings section, select Use State Synchronization.
Best Practice - Enable this setting, so Cluster Members can synchronize the
inspected connections.
8 Click OK to update the cluster object properties with the new cluster mode.
a. From the top, click Get Interfaces > Get Interfaces With Topology.
Notes:
l Check Point cluster supports only one synchronization network. If
11 Click OK.
Step Instructions
Member1>
Note - If it is necessary that Cluster Members change their cluster state because of
other Critical Devices, you must manually configure this behavior.
Procedure
11. Make sure the Cluster Member applied the new configuration:
fw ctl get str fwha_disable_member_state_pnotelist
Limitations
Number of Cluster Members in cluster
Up to five Cluster Members are supported in a cluster (see "ClusterXL Requirements
and Compatibility" on page 37).

Names of interfaces on Cluster Members
On all Cluster Members in Active-Active mode, names of interfaces that belong to the
same "side" must be identical.
Examples:
n If you connected the interface eth1 to Switch #A on one Cluster Member, then you
must connect the interface eth1 to Switch #A on all other Cluster Members.
n If you configured the interface eth2 as a Sync interface on one Cluster Member,
then you must configure the interface eth2 as a Sync interface on all other Cluster
Members.

Multi Portal
No multi-portals are supported (Mobile Access Portal, Identity Awareness Captive
Portal, Data Loss Prevention Portal, and so on).

Network Type configuration of cluster interfaces in SmartConsole
In the cluster object properties, go to the Network Management page, select a cluster
interface, and click Edit. In the Network Type field, it is not supported to select
"Cluster+Sync" when you deploy a cluster in a cloud (for example: AWS, Azure).

Topology configuration of cluster interfaces in SmartConsole
In the cluster object properties, go to the Network Management page, select a cluster
interface, and click Edit. In the Topology section, only these options are supported for
cluster interfaces:
n Override > Network defined by routes (this is the default).
n Override > Specific > select the applicable Network object or Network Group object.
Notes:
n See "Supported Topologies for Synchronization Network" on page 41.
n You can synchronize members across a WAN. See "Synchronizing Clusters on
a Wide Area Network" on page 82.
n In ClusterXL, the synchronization network is supported on the lowest VLAN tag
of a VLAN interface.
For example, if three VLANs with tags 10, 20 and 30 are configured on interface
eth1, only interface eth1.10 may be used for synchronization.
Procedure:
1. In SmartConsole, click Objects > Object Explorer.
2. In the left tree, click the small arrow to the left of Services to expand this category.
3. In the left tree, select TCP.
4. Search for the applicable TCP service.
5. Double-click the applicable TCP service.
6. In the TCP service properties window, click the Advanced page.
7. At the top, select Override default settings.
On a Domain Management Server: select Override global domain settings.
8. At the bottom, in the Cluster and synchronization section, select Start synchronizing and enter
the applicable value.
Important - This change applies to all policies that use this service.
9. Click OK.
10. Close the Object Explorer.
11. Publish the SmartConsole session.
12. Install the Access Control Policy on the cluster object.
Note - The Delayed Notifications setting in the service object is ignored, if Connection
Templates are not offloaded by the Firewall to SecureXL. For additional information
about the Connection Templates, see the R81 Performance Tuning Administration
Guide.
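The effect of the Start synchronizing setting configured above can be sketched as follows (illustrative only; the function and parameter names are invented):

```python
# Sketch of delayed synchronization (illustrative, not Check Point code).
# Short-lived sessions are never synchronized, which avoids needless
# Delta Sync traffic for connections that end before the delay expires.

def should_sync(conn_age_seconds, sync_delay_seconds):
    """Sync a connection only once it has outlived the configured delay."""
    return conn_age_seconds >= sync_delay_seconds

# With an assumed 3-second delay, a quick HTTP fetch is never synced,
# while a long-lived session is synced and so survives failover.
print(should_sync(conn_age_seconds=1.2, sync_delay_seconds=3))   # False
print(should_sync(conn_age_seconds=45.0, sync_delay_seconds=3))  # True
```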
Sticky Connections
In This Section:
(Diagram legend: 1 - Internal network; 5 - Internet)
In this scenario:
n A third-party peer (gateway or client) attempts to create a VPN tunnel.
n Cluster Members "A" and "B" belong to a ClusterXL in Load Sharing mode.
The third-party peers, lacking the ability to store more than one set of SAs, cannot negotiate a VPN tunnel
with multiple Cluster Members, and therefore the Cluster Member cannot complete the routing
transaction.
This issue is resolved for certain third-party peers or gateways that can save only one set of SAs by making
the connection sticky. The Cluster Correction Layer (CCL) makes sure that a single Cluster Member
processes all VPN sessions, initiated by the same third-party gateway.
(Diagram legend: 4 - Internet; 5 - Gateway - Spoke A; 6 - Gateway - Spoke B)
2. Other VPN peers can be added to the Tunnel Group by including their IP addresses in the same
format as shown above. To continue with the example above, adding Spoke C would look like this:
The Tunnel Group Identifier ;1 stays the same, which means that the listed peers will always connect
through the same Cluster Member.
This procedure turns off Load Sharing for the affected connections. If the implementation is to
connect multiple sets of third-party gateways one to another, a form of Load Sharing can be
accomplished by setting Security Gateway pairs to work in tandem with specific Cluster Members.
For instance, to set up a connection between two other spokes (C and D), simply add their IP
addresses to the line and replace the Tunnel Group Identifier ;1 with ;2. The line would then look
something like this:
Note that there are now two peer identifiers: ;1 and ;2. Spokes A and B will now connect through one
Cluster Member, and Spokes C and D through another.
Note - The tunnel groups are shared between active Cluster Members. In case
of a change in cluster state (for example, failover or Cluster Member
attach/detach), the reassignment is performed according to the new state.
Non-Sticky Connections
A connection is called sticky if a single Cluster Member handles all packets of that connection. In a non-
sticky connection, the response packet of a connection returns through a different Cluster Member than
the original request packet.
The cluster synchronization mechanism knows how to handle non-sticky connections properly. In a non-sticky connection, a Cluster Member can receive an out-of-state packet, which the Firewall normally drops because it poses a security risk.
In Load Sharing configurations, all Cluster Members are active. In Static NAT and encrypted connections,
the source and destination IP addresses change. Therefore, Static NAT and encrypted connections
through a Load Sharing cluster may be non-sticky. Non-stickiness may also occur with Hide NAT, but
ClusterXL has a mechanism to make it sticky.
In High Availability configurations, all packets reach the Active member, so all connections are sticky. If
failover occurs during connection establishment, the connection is lost, but synchronization can be
performed later.
If the other members do not know about a non-sticky connection, the packet will be out-of-state, and the
connection will be dropped for security reasons. However, the Synchronization mechanism knows how to
inform other members of the connection. The Synchronization mechanism thereby prevents out-of-state
packets in valid, but non-sticky connections, so that these non-sticky connections are allowed.
Non-sticky connections will also occur if the network Administrator has configured asymmetric routing,
where a reply packet returns through a different Security Gateway than the original packet.
Figure legend (partial): 1 - Client; 4 - Server.
The client initiates a connection by sending a SYN packet to the server. The SYN passes through Cluster
Member A, but the SYN-ACK reply returns through Cluster Member B. This is a non-sticky connection,
because the reply packet returns through a different Security Gateway than the original packet.
The synchronization network notifies Cluster Member B. If Cluster Member B is updated before the SYN-
ACK packet sent by the server reaches it, the connection is handled normally. If, however, synchronization
is delayed, and the SYN-ACK packet is received on Cluster Member B before the SYN flag has been
updated, then the Security Gateway treats the SYN-ACK packet as out-of-state, and drops the connection.
You can configure enhanced 3-Way TCP Handshake enforcement to address this issue (see "Enhanced
3-Way TCP Handshake Enforcement" on page 146).
Note - The Active-Active mode also supports Layer 3 (see "Active-Active Mode
in ClusterXL" on page 62).
You can monitor and troubleshoot geographically distributed clusters using the command line interface.
n Cluster Members cannot synchronize connections that use system resources. The
reason is the same as for user-authenticated connections.
n Accounting information for connections is accumulated on each Cluster Member, sent to the
Management Server, and aggregated.
In the event of a cluster failover, the accounting information that is not yet sent to the Management
Server, is lost.
To minimize this risk, you can reduce the time interval when accounting information is sent.
To do this, in the cluster object > Logs > Additional Logging pane, set a lower value for the Update
Account Log every attribute.
Configuring ClusterXL
This procedure describes how to configure the Load Sharing Multicast, Load Sharing Unicast, and High
Availability modes from scratch.
Their configuration is identical, apart from the mode selection in SmartConsole Cluster object or Cluster
creation wizard.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
Cluster Members configure the Cluster Control Protocol (CCP) mode automatically.
You can configure the Cluster Control Protocol (CCP) Encryption on the Cluster Members.
See "Viewing the Cluster Control Protocol (CCP) Settings" on page 240.
The Cluster Wizard is recommended for all Check Point Appliances (except models lower than the
3000 series) and for Open Server platforms.
Step Instructions
1 In SmartConsole, click Objects menu > More object types > Network Object > Gateways
and Servers > Cluster > New Cluster.
2 In Check Point Security Gateway Cluster Creation window, click Wizard Mode.
a. In the Cluster Name field, enter a unique name for the cluster object.
b. In the Cluster IPv4 Address field, enter the unique Cluster Virtual IPv4 address for this
cluster.
This is the main IPv4 address of the cluster object.
c. In the Cluster IPv6 Address field, enter the unique Cluster Virtual IPv6 address for this
cluster.
This is the main IPv6 address of the cluster object.
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
d. In the Choose the Cluster's Solution field, select the applicable option and click Next:
n Check Point ClusterXL and then select High Availability or Load Sharing
n Gaia VRRP
4 In the Cluster Members Properties window, perform these steps for each Cluster Member,
and click Next.
We assume you create a new cluster object from scratch.
a. Click Add > New Cluster Member to configure each Cluster Member.
b. In the Name field, enter a unique name for the Cluster Member object.
c. In the IPv4 Address field, enter the unique physical IPv4 address of this Cluster
Member.
This is the main IPv4 address of the Cluster Member object.
d. In the IPv6 Address field, enter the unique physical IPv6 address of this Cluster
Member.
This is the main IPv6 address of the Cluster Member object.
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
e. In the Activation Key and Confirm Activation Key fields, enter the same one-time
password that you entered in the First Time Configuration Wizard during the installation
of this Cluster Member.
f. Click Initialize.
The Management Server tries to establish SIC with each Cluster Member.
The Trust State field should show Trust established.
g. Click OK.
5 In the Cluster Topology window, define a network type (network role) for each cluster
interface and define the Cluster Virtual IP addresses. Click Next.
The wizard automatically calculates the subnet for each cluster network and assigns it to the
applicable interface on each Cluster Member. The calculated subnet shows in the upper
section of the window.
The available network objectives are: Cluster, Sync, Cluster + Sync, and Private. These
network types are described later in this guide.
After you complete the wizard, we recommend that you open the cluster object and complete the
configuration:
n Define Anti-Spoofing properties for each interface
n Change the Topology settings for each interface, if necessary
n Define the Network Type
n Configure other Software Blades, features and properties as necessary
The Small Office Cluster wizard is recommended for Centrally Managed Check Point appliances -
models lower than 3000 series.
Step Instructions
1 In SmartConsole, click Objects menu > More object types > Network Object > Gateways
and Servers > Cluster > New Small Office Cluster.
2 In Check Point Security Gateway Cluster Creation window, click Wizard Mode.
a. Enter the member name and IPv4 addresses for each Cluster Member.
b. Enter the one-time password for SIC trust.
c. Click Next.
d. The Management Server tries to establish SIC with the Primary Cluster Member.
5 In the Configure WAN Interface page, configure the Cluster Virtual IPv4 address.
6 Define the Cluster Virtual IPv4 addresses for the other cluster interfaces.
After you complete the wizard, we recommend that you open the cluster object and complete the
configuration:
n Define Anti-Spoofing properties for each interface
n Change the Topology settings for each interface, if necessary
n Define the Network Type
n Configure other Software Blades, features and properties as necessary
n From the top toolbar, click the New ( ) > Cluster > Cluster.
n In the top left corner, click Objects menu > More object types > Network Object >
Gateways and Servers > Cluster > New Cluster.
n In the top right corner, click Objects Pane > New > More > Network Object >
Gateways and Servers > Cluster > Cluster.
4 In the Check Point Security Gateway Creation window, click Classic Mode.
The Gateway Cluster Properties window opens.
a. In the Name field, make sure you see the applicable name configured for this ClusterXL
object.
b. In the IPv4 Address and IPv6 Address fields, configure the same IPv4 and IPv6
addresses that you configured on the Management Connection page of the Cluster
Member's First Time Configuration Wizard.
Make sure the Security Management Server or Multi-Domain Server can connect to
these IP addresses.
6 On the General Properties page > Platform section, select the correct options:
a. On the Network Security tab, make sure the ClusterXL Software Blade is selected.
b. Enable the additional applicable Software Blades on the Network Security tab and on
the Threat Prevention tab.
Step Description
If the Trust State field does not show Trust established, perform these steps:
a. Connect to the command line on the Cluster Member.
b. Make sure there is physical connectivity between the Cluster Member and the
Management Server (for example, pings can pass).
c. Run:
cpconfig
d. Enter the number of this option:
Secure Internal Communication
e. Follow the instructions on the screen to change the Activation Key.
f. In SmartConsole, click Reset.
g. Enter the same Activation Key you entered in the cpconfig menu.
h. In SmartConsole, click Initialize.
a. In the Select the cluster mode and configuration section, select the applicable mode:
n High Availability and ClusterXL
n Load Sharing and Multicast or Unicast
b. In the Tracking section, select the applicable option.
c. In the Advanced Settings section:
For more information, click the (?) button in the top right corner.
ii. Optional: Select Use Virtual MAC.
For more information, see sk50840.
iii. Select the Cluster Member recovery method.
For more information, click the (?) button in the top right corner.
a. Select each interface and click Edit. The Network: <Name of Interface> window opens.
b. From the left tree, click the General page.
c. In the General section, in the Network Type field, select the applicable type:
n For cluster traffic interfaces, select Cluster.
Make sure the Cluster Virtual IPv4 address and its Net Mask are correct.
n For cluster synchronization interfaces, select Sync or Cluster+Sync.
Notes:
l We do not recommend the Cluster+Sync configuration.
11 Click OK.
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
IPv6 Considerations
To activate IPv6 functionality for an interface, define an IPv6 address for the applicable interface on each
Cluster Member and in the cluster object. All interfaces configured with an IPv6 address must also have a
corresponding IPv4 address. If an interface does not require IPv6, only the IPv4 address definition is
necessary.
Note - You must configure synchronization interfaces with IPv4 addresses only. This is
because the synchronization mechanism works using IPv4 only. All IPv6 information
and states are synchronized using this interface.
1. Connect with SmartConsole to the Security Management Server or Domain Management Server
that manages this cluster.
2. From the left navigation panel, click Gateways & Servers .
3. Open the cluster object.
4. From the left tree, click the Network Management page.
5. Select a cluster interface and click Edit.
6. From the left navigation tree, click General page:
a. In the General section, configure these settings for Cluster Virtual Interface:
n Network Type - one of these: Cluster, Sync, Cluster + Sync, Private
The available network types (network objectives) are:
Network Type - Description
Private - An interface that is not part of the cluster. ClusterXL does not monitor the
state of this interface. As a result, there is no cluster failover if a fault occurs with this
interface. This option is recommended for the management interface.
n Virtual IPv4 - Virtual IPv4 address assigned to this Cluster Virtual Interface
n Virtual IPv6 - Virtual IPv6 address assigned to this Cluster Virtual Interface
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support the configuration of only IPv6
addresses.
7. In the Member IPs section, click Modify and configure these settings:
n Physical IPv4 address and Mask Length assigned to the applicable physical interface on each
Cluster Member
n Physical IPv6 address and Mask Length assigned to the applicable physical interface on each
Cluster Member
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support the configuration of only IPv6
addresses.
If you wish to use Office Mode for Remote Access, select Offer Manual Office Mode and
define the IP pool allocated to each Cluster Member.
n In the Certificate List with keys stored on the Security Gateway section:
If your Cluster Member supports hardware storage for IKE certificates, define the certificate
properties.
In that case, the Management Server directs the Cluster Member to create the keys and
supply only the required material for creation of the certificate request.
The certificate is downloaded to the Cluster Member during policy installation.
5. Click OK to close the Cluster Member Properties window.
6. In the left navigation tree, go to ClusterXL and VRRP page.
7. Make sure to select Use State Synchronization.
This is required to synchronize IKE keys.
8. In the left navigation tree, go to Network Management > VPN Domain page.
9. Define the encryption domain of the cluster.
Select one of the two possible settings:
n All IP addresses behind Cluster Members based on Topology information. This is the
default option.
n Manually defined. Use this option if the cluster IP address is not on the member network, in
other words, if the cluster virtual IP address is on a different subnet than the Cluster Member
interfaces. In that case, select a network or group of networks, which must include the virtual
IP address of the cluster, and the network or group of networks behind the cluster.
10. Click OK to close the Gateway Cluster Properties window.
11. Install the Access Control Policy on the cluster.
5. Click OK.
6. Install the Access Control Policy on the cluster.
Performing this NAT means that when a packet originates behind or on the non-Cluster interface of the
Cluster Member, and is sent to a host on the other side of the internal Security Gateway, the source
address of the packet will be translated.
Interface type - Monitoring in ClusterXL (non-VSX) - Monitoring in VSX Cluster

Only the lowest VLAN ID:
n In ClusterXL (non-VSX): Enabled by default.
n In VSX Cluster: Must disable the monitoring of all VLAN IDs - set the value of the kernel
parameter fwha_monitor_all_vlan to 0. See sk92826.

All VLAN IDs:
n In ClusterXL (non-VSX): Disabled by default. Controlled by the kernel parameter
fwha_monitor_all_vlan. See sk92826.
n In VSX Cluster: Virtual System Load Sharing: Disabled by default. Controlled by the kernel
parameter fwha_monitor_all_vlan. See sk92826.
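Per the table above, disabling the monitoring of all VLAN IDs can be sketched as follows (the parameter name is from the table; verify the applicable values and scope in sk92826 before use):

```shell
# Sketch: disable monitoring of all VLAN IDs on a Cluster Member
# (per the table above and sk92826; run on every Cluster Member):
fw ctl set int fwha_monitor_all_vlan 0

# Persist the value across reboot in $FWDIR/boot/modules/fwkern.conf
# (spaces and comments are not allowed in this file):
echo 'fwha_monitor_all_vlan=0' >> $FWDIR/boot/modules/fwkern.conf
```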
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This procedure lets you configure the Cluster Member to monitor only the physical link on the cluster
interfaces (instead of monitoring the Cluster Control Protocol (CCP) packets):
n If a link disappears on the configured interface, the Cluster Member changes the interface's state to
DOWN.
This causes the Cluster Member to change its state to DOWN.
n If a link appears again on the configured interface, the Cluster Member changes the interface's state
back to UP.
This causes the Cluster Member to change its state back to ACTIVE or STANDBY.
See "Viewing Cluster State" on page 203.
Procedure
Step Instructions
Best Practices:
Item Description
1 Security Gateway
1A Interface 1
1B Interface 2
2 Bond Interface
3 Router
A bond interface (also known as a bonding group or bond) is identified by its Bond ID (for example:
bond1) and is assigned an IP address. The physical interfaces included in the bond are called slaves and
do not have IP addresses.
You can configure a bond interface to use one of these functional strategies:
n High Availability (Active/Backup): Gives redundancy when there is an interface or a link failure.
This strategy also supports switch redundancy. Bond High Availability works in Active/Backup
mode - interface Active/Standby mode. When an Active slave interface is down, the connection
automatically fails over to the primary slave interface. If the primary slave interface is not available,
the connection fails over to a different slave interface.
n Load Sharing (Active/Active): All slave interfaces in the UP state are used simultaneously. Traffic is
distributed among the slave interfaces to maximize throughput. Bond Load Sharing does not
support switch redundancy.
You can configure Bond Load Sharing to use one of these modes:
l Round Robin - Selects the Active slave interfaces sequentially.
l 802.3ad (LACP) - Dynamically uses Active slave interfaces to share the traffic load. This
mode uses the LACP protocol, which fully monitors the interface link between the Check Point
Security Gateway and a switch.
l XOR - All slave interfaces in the UP state are Active for Load Sharing. Traffic is assigned to
Active slave interfaces based on the transmit hash policy: Layer 2 information (XOR of
hardware MAC addresses), or Layer 3+4 information (IP addresses and Ports).
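The XOR transmit hash policy above can be illustrated with simple arithmetic. This is a sketch, not a Check Point or Linux bonding tool; the MAC byte values and slave count are assumed example values, and the real Layer 2 policy XORs the full hardware MAC addresses:

```shell
#!/bin/bash
# Illustrative sketch of a Layer 2 transmit hash policy: the source and
# destination MAC addresses are XOR'ed, and the result modulo the number
# of slave interfaces in the UP state selects the transmit slave.
src_mac_last_byte=0x1a   # last byte of the source MAC (assumed example)
dst_mac_last_byte=0x2f   # last byte of the destination MAC (assumed example)
n_slaves=3               # slave interfaces currently in the UP state

# (0x1a XOR 0x2f) = 53; 53 mod 3 = 2
slave_index=$(( (src_mac_last_byte ^ dst_mac_last_byte) % n_slaves ))
echo "Flow is transmitted on slave index: ${slave_index}"
```

Because the hash depends only on the flow's addresses, all packets of one flow leave on the same slave interface, which preserves per-flow packet ordering.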
For Bonding High Availability mode and for Bonding Load Sharing mode:
n The number of bond interfaces that can be defined is limited by the maximal number of interfaces
supported by each platform. See the R81 Release Notes.
n Up to 8 physical slave interfaces can be configured in a single bond interface.
Item Description
1 Cluster Member GW1 with interfaces connected to the external switches (5 and 6)
2 Cluster Member GW2 with interfaces connected to the external switches (5 and 6)
3 Interconnecting network C1
4 Interconnecting network C2
5 Switch S1
6 Switch S2
If Cluster Member GW1 (1), its NIC, or switch S1 (5) fails, Cluster Member GW2 (2) becomes the only
Active member, connecting to switch S2 (6) over network C2 (4).
If any component fails (Cluster Member, NIC, or switch), the result of the failover is that no further
redundancy exists.
A further failure of any active component completely stops network traffic through this cluster.
Item Description
1 Cluster Member GW1 with interfaces connected to the external switches (4 and 5)
2 Cluster Member GW2 with interfaces connected to the external switches (4 and 5)
3 Interconnecting network
4 Switch S1
5 Switch S2
In this scenario:
n GW1 and GW2 are Cluster Members in the High Availability mode, each connected to the two
external switches
n S1 and S2 are external switches
n Item 3 are the network connections
If any of the interfaces on a Cluster Member that connect to an external switch fails, the other interface
continues to provide the connectivity.
If a Cluster Member, its NIC, or a switch fails, the other Cluster Member becomes the only Active
member, connecting over the remaining switch and network. If any component fails (Cluster Member,
NIC, or switch), the result of the failover is that no further redundancy exists. A further failure of any
active component completely stops network traffic.
Bonding provides High Availability of NICs. If one fails, the other can function in its place.
Sync Redundancy
The use of more than one physical synchronization interface (1st sync, 2nd sync, 3rd sync) for
synchronization redundancy is not supported. For synchronization redundancy, you can use bond
interfaces.
cphaprob -am if
Step Description
4 Make sure the value of the kernel parameter fwha_bond_enhanced_enable was set to
1:
fw ctl get int fwha_bond_enhanced_enable
Step Description
5 Add this line to the file (spaces and comments are not allowed):
fwha_bond_enhanced_enable=1
8 Make sure the value of the kernel parameter fwha_bond_enhanced_enable was set to
1:
fw ctl get int fwha_bond_enhanced_enable
Important - If you change your cluster configuration from VRRP to ClusterXL, you must
remove the kernel parameter configuration from each Cluster Member.
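The steps above can be summarized as one sequence. This is a sketch under the assumption of an Expert-mode shell on each Cluster Member; the parameter name and file path are the ones from the procedure above:

```shell
# Sketch: enable enhanced Bond support on each Cluster Member.

# Set the value on-the-fly (does not survive reboot):
fw ctl set int fwha_bond_enhanced_enable 1

# Make the value persistent - append to $FWDIR/boot/modules/fwkern.conf
# (spaces and comments are not allowed in this file):
echo 'fwha_bond_enhanced_enable=1' >> $FWDIR/boot/modules/fwkern.conf

# Verify the current value:
fw ctl get int fwha_bond_enhanced_enable
```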
Group of Bonds
Introduction
Group of Bonds, which is a logical group of existing Bond interfaces, provides additional link redundancy.
1. The Cluster Member GW-A is the Active and the Cluster Member GW-B is the Standby.
2. On the Cluster Member GW-A, the Bond-1 interface fails.
3. On the Cluster Member GW-A, the Critical Device Interface Active Check reports its state as
"problem".
4. The Cluster Member GW-A changes its cluster state from Active to Down.
5. The cluster fails over - the Cluster Member GW-B changes its cluster state from Standby to
Active.
This is not the desired behavior, because the Cluster Member GW-A connects not only to the switch
SW-1, but also to the switch SW-2. In our example topology, there is no actual reason to fail over from
the Cluster Member GW-A to the Cluster Member GW-B.
To overcome this problem, Cluster Members use the Group of Bonds consisting of Bond-1 and
Bond-2. The Group of Bonds fails only when both Bond interfaces fail on the Cluster Member. Only
then does the cluster fail over.
1. The Cluster Member GW-A is the Active and the Cluster Member GW-B is the Standby.
2. On the Cluster Member GW-A, the Bond-1 interface fails.
3. On the Cluster Member GW-A, the Critical Device Interface Active Check reports its state as
"problem".
4. The Cluster Member GW-A does not change its cluster state from Active to Down.
5. On the Cluster Member GW-A, the Bond-2 interface fails as well.
6. The Cluster Member GW-A changes its cluster state from Active to Down.
7. The cluster fails over - the Cluster Member GW-B changes its cluster state from Standby to
Active.
Procedure
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
c. Add these two lines at the bottom of the file (spaces or comments are not allowed):
Example:
fwha_group_of_bonds_str=GoB0:bond0,bond1;GoB1:bond2,bond3
fwha_arp_probe_method=1
Note - The kernel parameter "fwha_arp_probe_method"
configures the Cluster Member to use the Virtual IP address as the
Source IP address in the ARP Requests during the probing of the local
network.
d. Save the changes in the file and exit the editor.
5. Change the value of the kernel parameter fwha_group_of_bonds_str to add the Group of
Bonds on-the-fly:
Example:
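A sketch of such an on-the-fly change, assuming the fw ctl set str syntax (the apostrophes and the no-spaces rule are per the notes later in this section; the group and bond names are the example values from the fwkern.conf lines above):

```shell
# Sketch: change the value of fwha_group_of_bonds_str on-the-fly.
# The apostrophe characters are a mandatory part of the syntax;
# spaces are not allowed in the value:
fw ctl set str fwha_group_of_bonds_str 'GoB0:bond0,bond1;GoB1:bond2,bond3'

# Verify the Cluster Member accepted the new configuration:
fw ctl get str fwha_group_of_bonds_str
```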
Procedure
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
c. Edit the value of the kernel parameter fwha_group_of_bonds_str to add the Bond
interface to the existing Group of Bonds.
Example:
fwha_group_of_bonds_
str=GoB0:bond0,bond1;GoB1:bond2,bond3,bond4
7. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_
bonds_str:
8. Change the value of the kernel parameter fwha_group_of_bonds_str to add the Bond
interface to the existing Group of Bonds on-the-fly:
Example:
Notes:
n The apostrophe characters are a mandatory part of the syntax.
n Spaces are not allowed in the value of the kernel parameter fwha_
group_of_bonds_str.
9. Make sure the Cluster Member accepted the new configuration:
10. In SmartConsole, install the Access Control Policy on the cluster object.
Procedure
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
fwha_group_of_bonds_str=GoB0:bond0,bond1;GoB1:bond2,bond3
7. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_
bonds_str:
8. Change the value of the kernel parameter fwha_group_of_bonds_str to remove the Bond
interface from the existing Group of Bonds on-the-fly:
Example:
10. In SmartConsole, install the Access Control Policy on the cluster object.
Procedure
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
6. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_
bonds_str:
Monitoring
To see the configured Groups of Bonds, run the "cphaprob show_bond_groups" command. See
"Viewing Bond Interfaces" on page 218.
Logs
Cluster Members generate some applicable logs.
See the R81 Next Generation Security Gateway Guide - Chapter Kernel Debug on Security
Gateway.
The kernel debug shows:
Limitations
Specific limitations apply to a Group of Bonds.
List of Limitations
n The maximal length of the text string "<Name for Group of Bonds>" is 16 characters.
n The maximal length of the entire value of the kernel parameter fwha_group_of_bonds_str is 1024 characters.
n You can configure a maximum of five Groups of Bonds on a Cluster Member or Virtual System.
n You can configure a maximum of five Bond interfaces in each Group of Bonds.
n The Group of Bonds feature does not support Virtual Switches and Virtual Routers. Meaning, do not
configure Groups of Bonds in the context of these Virtual Devices.
n Group of Bonds feature supports only Bond interfaces that belong to the same Virtual System.
You cannot configure bonds that belong to different Virtual Systems into the same Group of
Bonds.
You must perform all configuration in the context of the applicable Virtual System.
n The Group of Bonds feature does not support Sync interfaces (an interface on a Cluster Member, whose
Network Type was set as Sync or Cluster+Sync in SmartConsole in the cluster object).
n The Group of Bonds feature does not support Bridge interfaces.
n If a Bond interface goes down on one Cluster Member, the "cphaprob show_bond_groups"
command (see "Viewing Bond Interfaces" on page 218) on the peer Cluster Members also
shows the same Bond interface as DOWN.
This is because the peer Cluster Members stop receiving the CCP packets on that Bond interface
and cannot probe the local network to determine that their Bond interface is really working.
n After you add a Bond interface to the existing Group of Bonds, you must install the Access Control
Policy on the cluster object.
n After you remove a Bond interface from the existing Group of Bonds, you must install the Access
Control Policy on the cluster object.
Example configuration
An appliance has:
n Four processing CPU cores:
core 0, core 1, core 2, and core 3
n Two bond interfaces:
bond0 with slave interfaces eth0, eth1, and eth2
bond1 with slave interfaces eth3, eth4, and eth5
In such case, two of the CPU cores need to handle two slave interfaces each.
An optimal configuration can be:
CPU core 0 - eth0, eth3
CPU core 1 - eth1, eth4
CPU core 2 - eth2
CPU core 3 - eth5
For more information, see the R81 Performance Tuning Administration Guide:
n Chapter SecureXL > Section SecureXL Commands and Debug > Section 'sim' and 'sim6' > Section
sim affinity.
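Assuming the interface-to-core mapping is applied manually with fw ctl affinity (a sketch only; the sim affinity section referenced above is the authoritative procedure), the optimal configuration above could look like this:

```shell
# Sketch: pin each slave interface to a CPU core per the example table above
# (verify the exact affinity procedure against the R81 Performance Tuning
# Administration Guide before use).
fw ctl affinity -s -i eth0 0
fw ctl affinity -s -i eth3 0
fw ctl affinity -s -i eth1 1
fw ctl affinity -s -i eth4 1
fw ctl affinity -s -i eth2 2
fw ctl affinity -s -i eth5 3

# Verify the current affinity settings:
fw ctl affinity -l
```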
Note - Removing a slave interface from an existing bond does not require a reboot.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
See the R81 Next Generation Security Gateway Guide - Chapter Working with Kernel Parameters on
Security Gateway.
The reason for blocking new connections is that new connections are the main source of new Delta
Synchronization traffic. Delta Synchronization may be at risk, if new traffic continues to be processed at
high rate.
A related error message in cluster logs and in the /var/log/messages file is:
Reducing the amount of traffic passing through the Cluster Member protects the Delta Synchronization
mechanism. See sk43896: Blocking New Connections Under Load in ClusterXL.
These kernel parameters let you control how Cluster Members behave:

Kernel Parameter - Description

fw_sync_block_new_conns - Controls how Cluster Members detect heavy loads and whether they
start blocking new connections.
Load is considered heavy when the synchronization transmit queue of the Cluster Member starts to
fill beyond the value of the kernel parameter "fw_sync_buffer_threshold".
n To enable blocking new connections under load, set the value of
"fw_sync_block_new_conns" to 0.
n To disable blocking new connections under load, set the value of
"fw_sync_block_new_conns" to -1 (must use the hex value 0xFFFFFFFF). This is the
default.
Note - Blocking new connections when sync is busy is only recommended for
ClusterXL Load Sharing deployments. While it is possible to block new
connections in ClusterXL High Availability mode, doing so does not solve
inconsistencies in sync, because the High Availability mode prevents that
from happening.
fw_sync_buffer_threshold - Configures the maximum percentage of the buffer that may be filled
before new connections are blocked (see the parameter "fw_sync_block_new_conns" above).
The default percentage value is 80, with a buffer size of 512.
By default, if more than 410 consecutive packets are sent without getting an ACK on any one of
them, new connections are dropped.
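The numbers quoted above are consistent: 80 percent of a 512-entry buffer is about 410 packets. A quick arithmetic sketch:

```shell
# Illustrative arithmetic: with the default buffer size of 512 entries and
# the default fw_sync_buffer_threshold of 80 percent, new connections are
# blocked after about 410 consecutive unacknowledged sync packets.
buffer_size=512
threshold_percent=80

# Round to the nearest integer: 512 * 80 / 100 = 409.6 -> 410
blocking_point=$(( (buffer_size * threshold_percent + 50) / 100 ))
echo "Blocking starts beyond ${blocking_point} unacknowledged packets"
```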
fw_sync_allowed_protocols - Determines the type of connections that can be opened while the
system is in a blocking state.
Thus, the user can have better control over the system behavior in cases of unusual load.
The value of this kernel parameter is a combination of flags, each specifying a different type of
connection. The required value is the result of adding the separate values of these flags.
Summary table:
Flag - Value
ICMP_CONN_ALLOWED - 1
Warning - Do not set the value of this kernel parameter to a number larger than 3.
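The flags combine by addition (bitwise OR). Only the ICMP_CONN_ALLOWED value survives in the truncated table above, so the second flag below is a hypothetical value of 2, used purely to show how flags combine; per the warning, the result must not exceed 3:

```shell
# Illustrative flag arithmetic for fw_sync_allowed_protocols.
# ICMP_CONN_ALLOWED has value 1 (from the table above); the flag with
# value 2 is assumed here only to show how separate flag values combine.
ICMP_CONN_ALLOWED=1
SECOND_FLAG=2   # hypothetical second flag value

combined=$(( ICMP_CONN_ALLOWED | SECOND_FLAG ))
echo "Combined value: ${combined}"   # 3 - the maximum the warning allows
```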
See the R81 Next Generation Security Gateway Guide - Chapter Working with Kernel Parameters on
Security Gateway.
Warning - The price for this extra security is a considerable delay in TCP connection
establishment.
sync_tcp_handshake_mode
7. In the lower pane, right-click on the sync_tcp_handshake_mode property and select Edit.
8. Choose complete_sync and click OK.
For more information, see the section "Synchronization modes for TCP 3-way handshake"
below.
9. To save the changes, from the File menu, select Save All.
10. Close GuiDBedit Tool.
11. Connect with SmartConsole to the Security Management Server or Domain Management Server
that manages this cluster.
12. In SmartConsole, install the Access Control Policy onto the Cluster object.
Mode - Instructions

Complete sync - All 3-way handshake packets are Sync-and-ACK'ed, and the 3-way handshake is
enforced.
This mode slows down connection establishment considerably.
It may be used when there is no way to know where the next packet goes (for example,
in 3rd party clusters).
Smart In most cases, we can assume that if SYN and SYN-ACK were encountered by the
sync same cluster member, then the connection is “sticky”.
ClusterXL uses one additional flag in Connections Table record that says, “If this
member encounters a 3-way handshake packet, it should sync all other cluster
members”.
When a SYN packet arrives, the member that encountered it, records the connection
and turns off its flag. All other members are synchronized, and by using a post-sync
handler, their flag is turned on (in their Connections Tables).
If the same member encounters the SYN-ACK packet, the connection is sticky, thus
other cluster members are not informed.
Otherwise, the relevant member will inform all other member (since its flag is turned
on).
The original member (that encountered the SYN) will now turn on its flag, thus all
members will have their flag on.
In this case, the third packet of the 3-way handshake is also synchronized.
If for some reason, our previous assumption is not true (i.e., one cluster member
encountered both SYN and SYN-ACK packets, and other members encountered the
third ACK), then the “third” ACK will be dropped by the other cluster members, and we
rely on the periodic sync and TCP retransmission scheme to complete the 3-way
handshake.
This 3-way handshake synchronization mode is a good solution for ClusterXL Load
Sharing users that want to enforce 3-way handshake verification with the minimal
performance cost.
This 3-way handshake synchronization mode is also recommended for ClusterXL High
Availability.
Traffic sent from Cluster Members to internal or external networks is hidden behind the cluster Virtual IP
addresses and cluster MAC addresses. The cluster MAC address assigned to cluster interfaces is:
n Load Sharing Multicast mode - the Multicast MAC address of the cluster Virtual IP Address
n Load Sharing Unicast mode - the MAC address of the Pivot Cluster Member's interface
The use of different subnets with cluster objects has some limitations - see "Limitations of Cluster
Addresses on Different Subnets" on page 152.
The required static routes must use the applicable local member's interface as the next hop gateway for the network of the cluster Virtual IP address.
If you do not define the static routes correctly, Cluster Members cannot pass traffic.
Note - In VSX Cluster, you must configure all routes only in SmartConsole in
the VSX Cluster object.
For configuration instructions, see the R81 Gaia Administration Guide - Chapter Network
Management - Sections IPv4 Static Routes and IPv6 Static Routes.
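As a sketch only, a static route toward the remote cluster network through the local member's own interface IP might look like this in Gaia Clish. The addresses reuse the example networks from this chapter and are placeholders; verify the exact syntax in the R81 Gaia Administration Guide before use:

```shell
# Illustrative only - addresses are placeholders for this chapter's
# example topology (Side "A" 172.16.4.0/24, Side "B" 192.168.2.0/24).
set static-route 192.168.2.0/24 nexthop gateway address 172.16.4.1 on
save config
```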
Procedure:
Interface Properties
IP address of Cluster Interface "A"
IP address of Cluster Interface "B"
g. Click OK.
h. Install the Access Control Policy on this cluster object.
Note - Static ARP is not required in order for the Cluster Members to work properly as a
cluster, since the cluster synchronization protocol does not rely on ARP.
Configuring Anti-Spoofing
1. Connect with SmartConsole to the Management Server.
2. From the left navigation panel, click Gateways & Servers.
3. Create a Group object, which contains the objects of both the external network and the internal
network.
In the "Example of Cluster IP Addresses on Different Subnets" on page 150, suppose Side "A" is the
external network, and Side "B" is the internal network.
You must configure the Group object to contain both the network 172.16.4.0 / 24 and the network
192.168.2.0 / 24.
4. Open the cluster object.
5. From the left tree, click Network Management.
6. Select the cluster interface and click Edit.
7. On the General page, in the Topology section, click Modify .
8. Select Override.
9. Select This Network (Internal).
10. Select Specific.
11. Select the Group object that contains the objects of both the external network and the internal
network.
12. Click OK.
13. Install the Access Control Policy on this cluster object.
Best Practice - Before you change the current configuration, export a complete
management database with the "migrate_server" command. See the R81 CLI
Reference Guide.
Install a new Cluster Member you plan to add to the existing cluster.
See the R81 Installation and Upgrade Guide > Chapter Installing a ClusterXL, VSX Cluster,
VRRP Cluster.
Follow only the step "Install the Cluster Members".
Important - The new Cluster Member must run the same version with the
same Hotfixes as the existing Cluster Members.
On the new Cluster Member you plan to add to the existing cluster:
a. Configure or change the IP addresses on the applicable interfaces to match the current
cluster topology.
Use Gaia Portal or Gaia Clish.
See R81 Gaia Administration Guide.
b. Configure or change the applicable static routes to match the current cluster topology.
Use Gaia Portal or Gaia Clish.
c. Connect to the command line.
d. Log in to Gaia Clish or the Expert mode.
e. Start the Check Point Configuration Tool. Run:
cpconfig
f. Select the option Enable cluster membership for this gateway and enter y to confirm.
g. Reboot the new Cluster Member.
h. In the IPv6 Address field, enter a physical IPv6 address, if you need to use IPv6.
The Management Server must be able to connect to the Cluster Member at this IPv6
address. This IPv6 address can be internal or external. You can use a dedicated
management interface on the Cluster Member.
Shell Command
Procedure
a. Connect to the command line on each Cluster Member (existing and the newly added).
b. Log in to Gaia Clish or the Expert mode.
c. Restart the clustering on each Cluster Member.
Run:
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states.
Run:
Shell Command
Important - The existing Security Gateway must run the same version with the same
Hotfixes as the existing Cluster Members.
On the existing Security Gateway you plan to add to the existing cluster:
a. Configure or change the IP addresses on the applicable interfaces to match the current
cluster topology.
Use Gaia Portal or Gaia Clish.
See R81 Gaia Administration Guide.
b. Configure or change the applicable static routes to match the current cluster topology.
Use Gaia Portal or Gaia Clish.
c. Connect to the command line.
d. Log in to Gaia Clish or the Expert mode.
e. Start the Check Point Configuration Tool. Run:
cpconfig
f. Select the option Enable cluster membership for this gateway and enter y to confirm.
g. Reboot the Security Gateway.
f. In the list of Cluster Members, select the new Cluster Member and click Edit.
g. Click the NAT tab and configure the applicable NAT settings.
h. Click the VPN tab and configure the applicable VPN settings.
i. From the left tree, click Network Management.
n Make sure all interfaces are defined correctly.
n Make sure all IP addresses are defined correctly.
j. Click OK.
k. Install the Access Control Policy on this cluster object.
Policy installation must succeed on all Cluster Members.
l. Install the Threat Prevention Policy on this cluster object.
Policy installation must succeed on all Cluster Members.
c. Make sure all Cluster Members detect each other and agree on their cluster states. Run:
Shell Command
Procedure
a. Connect to the command line on each Cluster Member (existing and the newly added).
b. Log in to Gaia Clish or the Expert mode.
c. Restart the clustering on each Cluster Member.
Run:
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states.
Run:
Shell Command
Best Practice - Before you change the current configuration, export a complete
management database with the "migrate_server" command. See the R81 CLI
Reference Guide.
Important:
n This operation deletes the object.
n There must be at least two Cluster Members in the cluster object.
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states. Run:
Shell Command
cpconfig
d. Select the option Disable cluster membership for this gateway and enter y to confirm.
e. Select the option Secure Internal Communication > enter y to confirm > enter the new
Activation Key. Make sure to write it down.
f. Exit from the cpconfig menu.
g. Reboot the Security Gateway.
If you need to use the Security Gateway you removed from the existing cluster, then establish
Secure Internal Communication (SIC) with it.
a. Connect with SmartConsole to the Security Management Server or Domain Management
Server that manages this Security Gateway.
b. From the left navigation panel, click Gateways & Servers.
e. Click OK.
f. Publish the SmartConsole session.
g. Install the Access Control Policy on the Security Gateway object.
h. Install the Threat Prevention Policy on the Security Gateway object.
Introduction 163
ISP Redundancy Modes 165
Outgoing Connections 165
Incoming Connections 166
Introduction
ISP Redundancy lets you connect Cluster Members to the Internet through redundant Internet Service
Provider (ISP) links.
ISP Redundancy monitors the ISP links and chooses the best current link.
Notes:
n R81 supports two ISPs.
n ISP Redundancy is intended for traffic that originates on your internal networks
and goes to the Internet.
Important:
n You must connect each Cluster Member with a dedicated physical interface to
each of the ISPs.
n The IP addresses assigned to physical interfaces on each Cluster Member must
be on the same subnet as the Cluster Virtual IP address.
Item Description
1 Internal network
2 Switches
3 Cluster Member A
3b Cluster interface (IP address 20.20.20.11) connected to the Sync network (IP address
20.20.20.0/24)
4 Cluster Member B
4b Cluster interface (IP address 20.20.20.22) connected to the Sync network (IP address
20.20.20.0/24)
5 ISP B
6 ISP A
7 Internet
Mode Description
Best Practice:
n If both ISPs are basically the same, use the Load Sharing mode to ensure that
you are making the best use of both ISPs.
n You may prefer to use one of your two ISPs that is more cost-effective in terms of
price and reliability. In that case, use Primary/Backup mode and set the more
cost-effective ISP as the Primary ISP link.
Outgoing Connections
n In ISP Redundancy Load Sharing mode, outgoing traffic that exits the Cluster on its way to the
Internet is distributed between the ISP Links. You can set a relative weight for how much you want
each of the ISP Links to be used.
For example, if one link is faster, it can be configured to route more traffic across that ISP link than
the other.
n In ISP Redundancy Primary/Backup mode, outgoing traffic uses an active primary link.
Hide NAT is used to change the source address of outgoing packets to the address of the interface,
through which the packet leaves the Cluster. This allows return packets to be automatically routed
through the same ISP link, because their destination address is the address of the correct link. Hide
NAT is configured by the administrator.
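The relative-weight distribution described above can be sketched as simple arithmetic. The weights below are illustrative values, not defaults:

```shell
# With illustrative weights 2 (ISP A) and 1 (ISP B), about two thirds of
# new outgoing connections would use ISP A.
weight_a=2
weight_b=1
total=$(( weight_a + weight_b ))
echo "ISP A: $(( 100 * weight_a / total ))%"   # ISP A: 66%
echo "ISP B: $(( 100 * weight_b / total ))%"   # ISP B: 33%
```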
Incoming Connections
For external users to make incoming connections, the administrator must give each application server two
routable IP addresses, one for each ISP. The administrator must also configure Static NAT to translate the
routable addresses to the real server address.
If the servers handle different services (for example, HTTP and FTP), you can use NAT to employ only two
routable IP addresses for all the publicly available servers.
External clients use one of the two addresses. In order to connect, the clients must be able to resolve the
DNS name of the server to the correct IP address.
In the following example, the Web server www.example.com is assigned an IP address from each ISP:
n 192.168.1.2 from ISP A
n 172.16.2.2 from ISP B
If the ISP Link A is down, then IP address 192.168.1.2 becomes unavailable, and the clients must be able
to resolve the URL www.example.com to the IP address 172.16.2.2.
An incoming connection is established, based on this example, in the following sequence:
1. When an external client on the Internet contacts www.example.com , the client sends a DNS query
for the IP address of this URL.
The DNS query reaches the Cluster. The Cluster has a built-in mini-DNS server that can be
configured to intercept DNS queries (of Type A) for servers in its domain.
2. A DNS query arriving at an interface that belongs to one of the ISP links, is intercepted by the
Cluster.
3. If the Cluster recognizes the name of the host, it sends one of the following replies:
n In ISP Redundancy Primary/Backup mode, the Cluster replies only with the IP addresses
associated with the Primary ISP link, as long as the Primary ISP link is active.
n In ISP Redundancy Load Sharing mode, the Cluster replies with two IP addresses,
alternating their order.
4. If the Cluster is unable to handle DNS requests (for example, it may not recognize the host name), it
passes the DNS query to its original destination or the DNS server of the domain example.com .
5. When the external client receives the reply to its DNS query, it opens a connection. Once the
packets reach the Cluster, the Cluster uses Static NAT to translate the destination IP address
192.168.1.2 or 172.16.2.2 to the real server IP address 10.0.0.2.
6. The Cluster routes the reply packets from the server to the client through the same ISP link that was
used to initiate the connection.
Make sure you have the ISP data - the speed of the link and next hop IP address.
Automatic vs Manual configuration:
n If the Cluster object has two interfaces with Topology "External" in the Network
Management page, you can configure the ISP links automatically.
Configuring ISP links automatically
n If the Cluster object has only one interface with Topology "External" in the Network
Management page, you must configure the ISP links manually.
Configuring ISP links manually
The Cluster, or a DNS server behind it, must respond to DNS queries.
Note - If the servers use different services (for example, HTTP and
FTP), you can use NAT for only two public IP addresses.
The Access Control Policy must allow connections through the ISP links, with Automatic Hide NAT
on network objects that start outgoing connections.
a. In the properties of the object for an internal network, select NAT > Add Automatic
Address Translation Rules.
b. Select Hide behind the gateway.
c. Click OK.
d. Define rules for publicly reachable servers (Web servers, DNS servers, DMZ servers).
n If you have one public IP address from each ISP for the Cluster, define Static NAT.
Allow specific services for specific servers.
For example, make NAT rules, so that incoming HTTP connections from the two
ISPs reach a Web server, and DNS traffic from the ISP reach the DNS server.
Example: Manual Static Rules for a Web Server and a DNS Server
n If you have a public IP address from each ISP for each publicly reachable server (in
addition to the Cluster), define NAT rules:
i. Give each server a private IP address.
ii. Use the public IP addresses in the Original Destination.
iii. Use the private IP address in the Translated Destination.
iv. Select Any as the Original Service.
Note - If you use Manual NAT, then automatic ARP does not work for the IP
addresses behind NAT. You must configure the local.arp file as
described in sk30197.
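As a hypothetical sketch only (the one-mapping-per-line format is assumed here; sk30197 is the authoritative reference), local.arp entries map each NATed public IP address to the MAC address of the interface that should answer ARP for it. The IP addresses reuse this chapter's example; the MAC addresses are placeholders:

```shell
# Hypothetical local.arp entries - format assumed from sk30197; the MAC
# addresses are placeholders for the external interfaces' MAC addresses.
cat << 'EOF' > /tmp/local.arp.example
192.168.1.2 00:1c:7f:00:00:01
172.16.2.2 00:1c:7f:00:00:02
EOF
cat /tmp/local.arp.example
```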
Note - ISP Redundancy settings override the VPN Link Selection settings.
When ISP Redundancy is enabled, VPN encrypted connections survive a failure of an ISP link.
The settings in the ISP Redundancy page override settings in the IPsec VPN > Link Selection page.
Step Instructions
7 Make sure that Use ongoing probing is selected. Link redundancy mode shows the mode of the ISP
Redundancy: High Availability (for Primary/Backup) or Load Sharing.
The VPN Link Selection now only probes the ISP configured in ISP Redundancy.
If the VPN peer is not a Check Point Security Gateway, the VPN may fail, or the third-party device may
continue to encrypt traffic to a failed ISP link.
n Make sure the third-party VPN peer recognizes encrypted traffic from the secondary ISP link as
coming from the Check Point cluster.
n Change the configuration of ISP Redundancy to not use these Check Point technologies:
l Use Probing - Makes sure that Link Selection uses another option.
l The options Load Sharing, Service Based Link Selection, and Route based probing
work only on Check Point Security Gateways and Clusters.
If used, the Security Gateway or Cluster Members use one link to connect to the third-party
VPN peer.
The link with the highest prefix length and lowest metric is used.
For more information, see the R81 CLI Reference Guide > Chapter Security Gateway Commands -
Section fw - Section fw isp_link.
2. If you use PPPoE or PPTP xDSL modems, in the PPPoE or PPTP configuration, the Use Peer as
Default Gateway option must be cleared.
Router IP Address
All Cluster Members use the Cluster Virtual IP address(es) as Router IP address(es).
Routing information is synchronized among the Cluster Members using the Forwarding Information Base
(FIB) Manager process.
This is done to prevent traffic interruption in case of failover, and used for Load Sharing and High
Availability modes.
The FIB Manager is responsible for the routing information.
The FIB Manager is registered as a Critical Device called "FIB". If the slave goes out of sync, this Critical
Device reports its state as "problem". As a result, the slave member changes its state to "DOWN" until the
FIB Manager is synchronized.
n When Dynamic Routing protocols and/or DHCP Relay are configured on cluster, the "Wait for
Clustering" option must be enabled in these cluster modes:
l ClusterXL High Availability
l ClusterXL Load Sharing Unicast
l ClusterXL Load Sharing Multicast
l VSX High Availability
l VSX Load Sharing (VSLS)
n When Dynamic Routing protocols and/or DHCP Relay are configured on cluster, the "Wait for
Clustering" must be disabled in these cluster modes:
l VRRP Cluster on Gaia OS
For more information, see sk92322.
Failure Recovery
Dynamic Routing on ClusterXL avoids creating a ripple effect upon failover by informing the neighboring
routers that the router has exited a maintenance mode.
The neighboring routers then reestablish their relationships to the cluster, without informing the other
routers in the network.
These restart protocols are widely adopted by all major networking vendors.
This table lists the RFCs and drafts with which Check Point Dynamic Routing is compliant:
Important:
n We do not recommend that you run these commands. These commands must
be run automatically only by the Security Gateway or the Check Point Support.
n In a Cluster, you must configure all the Cluster Members in the same way.
Syntax
Notes:
n In Gaia Clish:
Enter set cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaconf command to see all the available commands.
You can run the cphaconf commands only from the Expert mode.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which user can enter only one.
2. Angle brackets < > :
Enclose a variable - a supported value user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Configuration Commands

Configure how to show the Cluster Member in local ClusterXL logs - by its Member ID or its Member Name (see "Configuring the Cluster Member ID Mode in Local Logs" on page 180)
Gaia Clish: set cluster member idmode {id | name}
Expert Mode: cphaconf mem_id_mode {id | name}

Configure the Cluster Forwarding Layer on the Cluster Member (controls the forwarding of traffic between Cluster Members)
Note - For Check Point use only.
Gaia Clish: set cluster member forwarding {off | on}
Expert Mode: cphaconf forward {off | on}
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command lets you configure how to show the Cluster Member in the local ClusterXL logs - by its
Member ID (default), or its Member Name.
This configuration affects these local logs:
n /var/log/messages
n dmesg
n $FWDIR/log/fwd.elg
See "Viewing the Cluster Member ID Mode in Local Logs" on page 235.
Syntax
Shell Command
Example
[Expert@Member1:0]#
[Expert@Member1:0]# cphaconf mem_id_mode name
[Expert@Member1:0]#
[Expert@Member1:0]# cphaprob names
[Expert@Member1:0]#
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
You can add a user-defined critical device to the default list of critical devices. Use this command to
register <device> as a critical process, and add it to the list of devices that must run for the Cluster Member
to be considered active. If <device> fails, then the Cluster Member is seen as failed.
If a Critical Device fails to report its state to the Cluster Member in the defined timeout, the Critical Device,
and by design the Cluster Member, are seen as failed.
Define the status of the Critical Device that is reported to ClusterXL upon registration.
This initial status can be one of these:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state, the Cluster Member
cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster Member
immediately goes Down. This causes a failover.
Syntax
Shell Command
Gaia Clish: N/A
Notes:
n The "-t" flag specifies how frequently to expect the periodic reports from this Critical
Device.
If no periodic reports should be expected, then enter the value 0 (zero).
n The "-p" flag makes these changes permanent (survive reboot).
n The "-g" flag applies the command to all configured Virtual Systems.
Restrictions
n Total number of critical devices (pnotes) on Cluster Member is limited to 16.
n Name of any critical device (pnote) on Cluster Member is limited to 15 characters, and must not
include white spaces.
Related topics
n "Viewing Critical Devices" on page 207
n "Reporting the State of a Critical Device" on page 184
n "Registering Critical Devices Listed in a File" on page 185
n "Unregistering a Critical Device" on page 183
n "Unregistering All Critical Devices" on page 187
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command lets you unregister a user-defined Critical Device (Pnote). This means that this device is no
longer considered critical.
If a Critical Device was registered with a state "problem", before you ran this command, then after you
run this command, the status of the Cluster Member depends only on the states of the remaining Critical
Devices.
Syntax
Shell Command
Notes:
n The "-p" flag makes these changes permanent.
This means that after you reboot, these Critical Devices remain
unregistered.
n The "-g" flag applies the command to all configured Virtual Systems.
Related topics
n "Viewing Critical Devices" on page 207
n "Reporting the State of a Critical Device" on page 184
n "Registering a Critical Device" on page 181
n "Registering Critical Devices Listed in a File" on page 185
n "Unregistering All Critical Devices" on page 187
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command lets you report (change) manually the state of a Critical Device to ClusterXL.
The reported state can be one of these:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state, the Cluster Member
cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster Member
immediately goes Down. This causes a failover.
If a Critical Device fails to report its state to the Cluster Member within the defined timeout, the Critical
Device, and by design the Cluster Member, are seen as failed. This is true only for Critical Devices with
timeouts. If a Critical Device is registered with the "-t 0" parameter, there is no timeout. Until the Critical
Device reports otherwise, the state of the Critical Device is considered to be the last reported state.
Syntax
Shell Command
Gaia Clish: N/A
Notes:
n The "-g" flag applies the command to all configured Virtual Systems.
n If the "<Name of Critical Device>" reports its state as "problem", then
the Cluster Member reports its state as failed.
Related topics
n "Viewing Critical Devices" on page 207
n "Registering a Critical Device" on page 181
n "Registering Critical Devices Listed in a File" on page 185
n "Unregistering a Critical Device" on page 183
n "Unregistering All Critical Devices" on page 187
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command lets you register all the user-defined Critical Devices listed in the specified file.
This file must be a plain-text ASCII file, with each Critical Device defined on a separate line.
Each definition must contain three parameters, which must be separated by a space or a tab character:
Where:

<Name of Device>
The name of the Critical Device to register.

<Timeout>
If the Critical Device <Name of Device> fails to report its state to the Cluster Member within this specified number of seconds, the Critical Device (and by design the Cluster Member) are seen as failed.
For no timeout, use the value 0 (zero).

<Status>
The Critical Device <Name of Device> reports one of these statuses to the Cluster Member:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state, the Cluster Member cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster Member immediately goes Down. This causes a failover.
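For example, a file that registers two hypothetical Critical Devices could look like this. The device names, timeouts, and statuses below are illustrative only — one device per line, the three parameters separated by spaces:

```shell
# Illustrative critical-devices file: <Name> <Timeout> <Status> per line.
# "my_daemon" is expected to report every 30 seconds; "my_probe" has no
# timeout (0) and starts in the init state.
cat << 'EOF' > /tmp/critical_devices.txt
my_daemon 30 ok
my_probe 0 init
EOF
cat /tmp/critical_devices.txt
```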
Syntax
Shell Command
Note - The "-g" flag applies the command to all configured Virtual Systems.
Related topics
n "Viewing Critical Devices" on page 207
n "Reporting the State of a Critical Device" on page 184
n "Registering a Critical Device" on page 181
n "Unregistering a Critical Device" on page 183
n "Unregistering All Critical Devices" on page 187
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command lets you unregister all critical devices from the Cluster Member.
Syntax
Shell Command
Notes:
n The "-a" flag specifies that all Pnotes must be unregistered.
n The "-g" flag applies the command to all configured Virtual Systems.
Related topics
n "Viewing Critical Devices" on page 207
n "Reporting the State of a Critical Device" on page 184
n "Registering a Critical Device" on page 181
n "Registering Critical Devices Listed in a File" on page 185
n "Unregistering a Critical Device" on page 183
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
Cluster Members configure the Cluster Control Protocol (CCP) mode automatically.
You can configure the Cluster Control Protocol (CCP) Encryption on the Cluster Members.
See "Viewing the Cluster Control Protocol (CCP) Settings" on page 240.
Shell Command
Syntax
Shell Command
Example
... ...
[Expert@Member1:0]#
[Expert@Member1:0]#
[Expert@Member1:0]# clusterXL_admin up
This command does not survive reboot. To make the change permanent, please run 'set cluster member admin
down/up permanent' in clish or add '-p' at the end of the command in expert mode
Setting member to normal operation ...
Member current state is STANDBY
[Expert@Member1:0]#
[Expert@Member1:0]#
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
ClusterXL considers a bond in Load Sharing mode to be in the "down" state when fewer than a minimal
number of required slave interfaces stay in the "up" state.
By default, the minimal number of required slave interfaces, which must stay in the "up" state in a bond of n
slave interfaces is n-1.
If one more slave interface fails (when n-2 slave interfaces stay in the "up" state), ClusterXL considers the
bond interface to be in the "down" state, even if the bond contains more than two slave interfaces.
If a smaller number of slave interfaces can pass the expected traffic, you can configure explicitly the
minimal number of required slave interfaces.
Divide your maximal expected traffic speed by the speed of your slave interfaces and round up the result to
find an applicable minimal number of required slave interfaces.
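The sizing rule above, as arithmetic - ceil(maximal expected traffic / slave interface speed). The traffic and interface speeds below are assumed values for illustration:

```shell
# ceil(2500 Mbps expected peak / 1000 Mbps per slave) = 3 required slaves.
max_traffic_mbps=2500
slave_speed_mbps=1000
min_slaves=$(( (max_traffic_mbps + slave_speed_mbps - 1) / slave_speed_mbps ))
echo "$min_slaves"   # 3
```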
Notes :
n Cluster Members save the configuration in the $FWDIR/conf/cpha_bond_ls_config.conf file.
n The commands below save the changes in this file.
n Each line in the file has this syntax:
<Name of Bond Interface> <Minimal Number of Required Slave Interfaces>
Syntax to add the minimal number of required slave interfaces for a specific Bond interface
Shell Command
Gaia Clish: N/A
Syntax to remove the configured minimal number of required slave interfaces for a specific Bond interface
Shell Command
Syntax to see the current configuration of the minimal number of required slave interfaces
Shell Command
Procedure
Step Instructions
3 Add or remove the minimal number of required slave interfaces for a specific Bond interface:
cphaconf bond_ls set <Bond> <Minimal Number of Slaves>
Example
[Expert@Member1:0]#
bond1 2
[Expert@Member1:0]#
[Expert@Member1:0]#
[Expert@Member1:0]#
Important:
n The MVC Mechanism is disabled by default.
n For limitations of the MVC Mechanism, see the R81 Installation and Upgrade
Guide > Chapter Upgrading Gateways and Clusters > Section Upgrading
ClusterXL, VSX Cluster, VRRP Cluster > Section Multi-Version Cluster
Upgrade.
Syntax
Shell Command
Parameters
Parameter Description
Notes:
n This command does not provide an output. To view the current state of the MVC
Mechanism, see "Viewing the State of the Multi-Version Cluster Mechanism" on
page 242.
n The change made with this command survives reboot.
n If a specific scenario requires you to disable the MVC Mechanism before the first
start of an R81 Cluster Member (for example, immediately after an upgrade to
R81), then disable it before the first policy installation on this Cluster Member.
Syntax
Notes:
n In Gaia Clish:
Enter show cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaprob command to see all the available commands.
You can run the cphaprob commands from Gaia Clish as well.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which user can enter only one.
2. Angle brackets < > :
Enclose a variable - a supported value user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Monitoring Commands

Show states of Cluster Members and their names (see "Viewing Cluster State" on page 203)
    Gaia Clish:  show cluster state
    Expert Mode: cphaprob [-vs <VSID>] state

Show Critical Devices (Pnotes) and their states on the Cluster Member (see "Viewing Critical Devices" on page 207)
    Gaia Clish:  show cluster members pnotes {all | problem}
    Expert Mode: cphaprob [-l] [-ia] [-e] list

Show cluster interfaces on the cluster member (see "Viewing Cluster Interfaces" on page 214)
    Gaia Clish:  show cluster members interfaces {all | secured | virtual | vlans}
    Expert Mode: cphaprob [-vs all] [-a] [-m] if

Show cluster bond configuration on the Cluster Member (see "Viewing Bond Interfaces" on page 218)
    Gaia Clish:  show cluster bond {all | name <bond_name>}
    Expert Mode: cphaprob show_bond [<bond_name>]

Show (and reset) cluster failover statistics on the Cluster Member (see "Viewing Cluster Failover Statistics" on page 223)
    Gaia Clish:  show cluster failover [reset {count | history}]
    Expert Mode: cphaprob [-reset {-c | -h}] [-l <count>] show_failover

Show information about the software version (including hotfixes) on the local Cluster Member and its matches/mismatches with other Cluster Members (see "Viewing Software Versions on Cluster Members" on page 225)
    Gaia Clish:  show cluster release
    Expert Mode: cphaprob release

Show Delta Sync statistics on the Cluster Member (see "Viewing Delta Synchronization" on page 226)
    Gaia Clish:  show cluster statistics sync [reset]
    Expert Mode: cphaprob [-reset] syncstat

Show Delta Sync statistics for the Connections table on the Cluster Member (see "Viewing Cluster Delta Sync Statistics for Connections Table" on page 233)
    Gaia Clish:  show cluster statistics transport [reset]
    Expert Mode: cphaprob [-reset] ldstat

Show the Cluster Control Protocol (CCP) mode on the Cluster Member (see "Viewing Cluster Interfaces" on page 214)
    Gaia Clish:  show cluster members interfaces virtual
    Expert Mode: cphaprob [-vs all] -a if

Show the IGMP membership of the Cluster Member (see "Viewing IGMP Status" on page 232)
    Gaia Clish:  show cluster members igmp
    Expert Mode: cphaprob igmp

Show the cluster unique IPs table on the Cluster Member (see "Viewing Cluster IP Addresses" on page 234)
    Gaia Clish:  show cluster members ips
    Expert Mode: cphaprob tablestat

Show the Cluster Member ID Mode in local logs - by Member ID (default) or Member Name (see "Viewing the Cluster Member ID Mode in Local Logs" on page 235)
    Gaia Clish:  show cluster members idmode
    Expert Mode: cphaprob names

Show interfaces, which the RouteD daemon monitors on the Cluster Member when you configure OSPF (see "Viewing Interfaces Monitored by RouteD" on page 236)
    Gaia Clish:  show ospf interfaces [detailed]
    Expert Mode: cphaprob routedifcs

Show the Cluster Control Protocol (CCP) mode (see "Viewing the Cluster Control Protocol (CCP) Settings" on page 240)
    Gaia Clish:  show cluster members interfaces virtual
    Expert Mode: cphaprob -a if

Show the Cluster Control Protocol (CCP) Encryption settings (see "Viewing the Cluster Control Protocol (CCP) Settings" on page 240)
    Gaia Clish:  show cluster members ccpenc
    Expert Mode: cphaprob ccp_encrypt
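Because every monitoring action in the table has both a Gaia Clish form and an Expert-mode form, automation sometimes needs to translate between the two. A minimal sketch (the helper name clish_to_expert is illustrative, not a Check Point tool; the pairs are taken from the table above, and only a few entries are shown):

```shell
# clish_to_expert: map a Gaia Clish monitoring command to its Expert-mode
# equivalent, based on the table above (illustrative subset only).
clish_to_expert() {
    case "$1" in
        "show cluster state")              echo "cphaprob state" ;;
        "show cluster members pnotes all") echo "cphaprob -l list" ;;
        "show cluster bond all")           echo "cphaprob show_bond" ;;
        "show cluster statistics sync")    echo "cphaprob syncstat" ;;
        "show cluster release")            echo "cphaprob release" ;;
        *)                                 echo "unknown" ;;
    esac
}

clish_to_expert "show cluster state"   # prints: cphaprob state
```

Because the cphaprob commands also run from Gaia Clish, a script like this is mainly useful when generating command lists for Expert-mode-only environments.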
show cluster
      bond
            all
            name <Name of Bond>
      failover
      members
            ccpenc
            idmode
            igmp
            interfaces
                  all
                  secured
                  virtual
                  vlans
            ips
            mvc
            pnotes
                  all
                  problem
      release
      roles
      state
      statistics
            sync [reset]
            transport [reset]
Syntax
Shell Command
Example
Member1>
Assigned Load
    n In the ClusterXL High Availability mode - shows the Active Cluster Member with
      100% load, and all other Standby Cluster Members with 0% load.
    n In ClusterXL Load Sharing modes (Unicast and Multicast) - shows all Active
      Cluster Members with 100% load.

State
    n In the ClusterXL High Availability mode, only one Cluster Member in a fully-
      functioning cluster must be ACTIVE, and the other Cluster Members must be in
      the STANDBY state.
    n In the ClusterXL Load Sharing modes (Unicast and Multicast), all Cluster
      Members in a fully-functioning cluster must be ACTIVE.
    n In a 3rd-party clustering configuration, all Cluster Members in a fully-functioning
      cluster must be ACTIVE. This is because this command only reports the status
      of the Full Synchronization process.
See the summary table below.
Active PNOTEs
    Shows the Critical Devices that report their states as "problem" (see "Viewing
    Critical Devices" on page 207).

Last member state change event
    Shows information about the last time this Cluster Member changed its cluster state:
    n State change - Shows the previous cluster state and the new cluster state of this Cluster Member.
    n Reason for state change - Shows the reason why this Cluster Member changed its cluster state.
    n Event time - Shows the date and the time when this Cluster Member changed its cluster state.

Last cluster failover event
    Shows information about the last time a cluster failover occurred:
    n Event time - Shows the date and the time of the last cluster failover.

Time of counter reset
    Shows the date and the time of the last counter reset, and the reset initiator.
When you examine the state of the Cluster Member, consider whether it forwards packets, and whether it
has a problem that prevents it from forwarding packets. Each state reflects the result of a test on critical
devices. This table shows the possible cluster states, and whether or not they represent a problem.
Table: Description of the cluster states

ACTIVE(!), ACTIVE(!F), ACTIVE(!P), ACTIVE(!FP)
    A problem was detected, but the Cluster Member still forwards packets, because it
    is the only member in the cluster, or because there are no other Active members in
    the cluster. In any other situation, the state of the member is Down.
    n ACTIVE(!) - See above.
    n ACTIVE(!F) - See above. Cluster Member is in the freeze state.
    n ACTIVE(!P) - See above. This is the Pivot Cluster Member in Load Sharing
      Unicast mode.
    n ACTIVE(!FP) - See above. This is the Pivot Cluster Member in Load Sharing
      Unicast mode and it is in the freeze state.
    Forwarding packets? Yes. Is this state a problem? Yes.

DOWN
    One of the Critical Devices reports its state as "problem" (see "Viewing Critical
    Devices" on page 207).
    Forwarding packets? No. Is this state a problem? Yes.

LOST
    The peer Cluster Member lost connectivity to this local Cluster Member (for
    example, while the peer Cluster Member is rebooted).
    Forwarding packets? No. Is this state a problem? Yes.

READY
    State Ready means that the Cluster Member recognizes itself as a part of the
    cluster and is literally ready to go into action, but, by design, something prevents it
    from taking action. Possible reasons that the Cluster Member is not yet Active
    include:
    n Not all required software components were loaded and initialized yet, and/or
      not all configuration steps finished successfully yet. Before a Cluster Member
      becomes Active, it sends a message to the rest of the Cluster Members, to
      check if it can become Active. In High Availability mode it checks if there is
      already an Active member, and in Load Sharing Unicast mode it checks if
      there is already a Pivot member. The member remains in the Ready state
      until it receives the response from the rest of the Cluster Members and
      decides which state to choose next (Active, Standby, Pivot, or non-Pivot).
    n Software installed on this Cluster Member has a higher version than all the
      other Cluster Members. For example, when a cluster is upgraded from one
      version of Check Point Security Gateway to another, and the Cluster
      Members have different versions of Check Point Security Gateway, the
      Cluster Members with the new version have the Ready state, and the Cluster
      Members with the previous version have the Active/Active Attention state.
      This applies only when the Multi-Version Cluster Mechanism is disabled (see
      "Viewing the State of the Multi-Version Cluster Mechanism" on page 242).
      See sk42096 for a solution.
    Forwarding packets? No. Is this state a problem? No.

INIT
    The Cluster Member is in the phase after the boot and until the Full Sync
    completes.
    Forwarding packets? No. Is this state a problem? No.
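The state table above can be turned into a simple health check. A minimal sketch (the helper name state_is_problem is illustrative, not a Check Point command; it classifies a state string taken from the "cphaprob state" output according to the table):

```shell
# Classify a cluster state string according to the table above:
# ACTIVE(!*), DOWN, and LOST indicate a problem; ACTIVE, STANDBY,
# READY, and INIT do not.
state_is_problem() {
    case "$1" in
        "ACTIVE(!)"|"ACTIVE(!F)"|"ACTIVE(!P)"|"ACTIVE(!FP)"|DOWN|LOST)
            echo "yes" ;;
        ACTIVE|STANDBY|READY|INIT)
            echo "no" ;;
        *)
            echo "unknown" ;;
    esac
}

state_is_problem "ACTIVE(!)"   # prints: yes
state_is_problem "STANDBY"     # prints: no
```

A monitoring script on a Cluster Member could feed the State column of the "cphaprob state" output through this function and alert on "yes".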
Problem Notification
    Description: Monitors all the Critical Devices.
    State "OK": None of the Critical Devices on this Cluster Member reports its state as problem.
    State "problem": At least one of the Critical Devices on this Cluster Member reports its state as problem.

Interface Active Check
    Description: Monitors the state of cluster interfaces.
    State "OK": All cluster interfaces on this Cluster Member are up (CCP packets are sent and received on all cluster interfaces).
    State "problem": At least one of the cluster interfaces on this Cluster Member is down (CCP packets are not sent and/or received on time).

Fullsync
    Description: Monitors if Full Sync on this Cluster Member completed successfully.
    State "OK": This Cluster Member completed Full Sync successfully.
    State "problem": This Cluster Member was not able to complete Full Sync.

Policy
    Description: Monitors if the Security Policy is installed.
    State "OK": This Cluster Member successfully installed the Security Policy.
    State "problem": Security Policy is not currently installed on this Cluster Member.

fwd
    Description: Monitors the Security Gateway process called fwd.
    State "OK": fwd daemon on this Cluster Member reported its state on time.
    State "problem": fwd daemon on this Cluster Member did not report its state on time.

routed
    Description: Monitors the Gaia process called routed.
    State "OK": routed daemon on this Cluster Member reported its state on time.
    State "problem": routed daemon on this Cluster Member did not report its state on time.

cvpnd
    Description: Monitors the Mobile Access back-end process called cvpnd. This pnote appears if the Mobile Access Software Blade is enabled.
    State "OK": cvpnd daemon on this Cluster Member reported its state on time.
    State "problem": cvpnd daemon on this Cluster Member did not report its state on time.

ted
    Description: Monitors the Threat Emulation process called ted.
    State "OK": ted daemon on this Cluster Member reported its state on time.
    State "problem": ted daemon on this Cluster Member did not report its state on time.

VSX
    Description: Monitors all Virtual Systems in the VSX Cluster.
    State "OK": On VS0, means that the states of all Virtual Systems on this Cluster Member are not Down. On other Virtual Systems, means that VS0 is alive on this Cluster Member.
    State "problem": The minimum of the blocking states of all Virtual Systems is not "active" (the problematic VSIDs are printed on the line "Problematic VSIDs:") on this Cluster Member.

host_monitor
    Description: Monitors the Critical Device host_monitor. User executed the $FWDIR/bin/clusterXL_monitor_ips script. See "The clusterXL_monitor_ips Script" on page 277.
    State "OK": All monitored IP addresses on this Cluster Member replied to pings.
    State "problem": At least one of the monitored IP addresses on this Cluster Member did not reply to at least one ping.

A name of a user space process (except fwd, routed, cvpnd, ted)
    Description: User executed the $FWDIR/bin/clusterXL_monitor_process script. See "The clusterXL_monitor_process Script" on page 281.
    State "OK": All monitored user space processes on this Cluster Member are running.
    State "problem": At least one of the monitored user space processes on this Cluster Member is not running.
Syntax
Shell Command
Where:
Command Description
show cluster members pnotes all
    Prints the list of all the "Built-in Devices" and the "Registered Devices".

cphaprob -l list
    Prints the list of all the "Built-in Devices" and the "Registered Devices".

cphaprob -i list
    When there are no issues on the Cluster Member, shows:
    There are no pnotes in problem state
    When a Critical Device reports a problem, prints only the Critical Device that
    reports its state as "problem".

cphaprob -ia list
    When there are no issues on the Cluster Member, shows:
    There are no pnotes in problem state
    When a Critical Device reports a problem, prints the Critical Device
    "Problem Notification" and the Critical Device that reports its state
    as "problem".

cphaprob -e list
    When there are no issues on the Cluster Member, shows:
    There are no pnotes in problem state
    When a Critical Device reports a problem, prints only the Critical Device that
    reports its state as "problem".
Related topics
n "Reporting the State of a Critical Device" on page 184
n "Registering a Critical Device" on page 181
n "Registering Critical Devices Listed in a File" on page 185
n "Unregistering a Critical Device" on page 183
n "Unregistering All Critical Devices" on page 187
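The per-device listing printed by these commands is easy to post-process. A minimal sketch (the helper name problem_devices is illustrative, and the sample below is an abbreviated, assumed rendering of the "cphaprob -l list" output, which in reality contains more fields per device):

```shell
# Print the names of Critical Devices whose "Current state" is not OK,
# given a "Device Name: ... / Current state: ..." listing on stdin.
problem_devices() {
    awk -F': *' '
        $1 == "Device Name"   { name = $2 }
        $1 == "Current state" { if ($2 != "OK") print name }
    '
}

problem_devices <<'EOF'
Device Name: Fullsync
Current state: OK
Device Name: fwd
Current state: problem
EOF
# prints: fwd
```

On a real Cluster Member you would pipe the live command output into the helper instead of the here-document.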
Examples
Critical Device fwd reports its state as problem because the fwd process is down.
Built-in Devices:
Registered Devices:
[Expert@Member1:0]#
Critical Device CoreXL Configuration reports its state as problem because the numbers of
CoreXL Firewall instances do not match between the Cluster Members.
Built-in Devices:
Registered Devices:
[Expert@Member1:0]#
Syntax
Shell Command
Where:
Command Description
show cluster members interfaces all
    Shows full list of all cluster interfaces:
    n including the number of required interfaces
    n including Network Objective
    n including VLAN monitoring mode, or list of monitored VLAN interfaces

show cluster members interfaces secured
    Shows only cluster interfaces (Cluster and Sync) and their states:
    n without Network Objective
    n without VLAN monitoring mode
    n without monitored VLAN interfaces

show cluster members interfaces virtual
    Shows full list of cluster virtual interfaces and their states:
    n including the number of required interfaces
    n including Network Objective
    n without VLAN monitoring mode
    n without monitored VLAN interfaces

cphaprob -a -m if
    Shows full list of all cluster interfaces and their states:
    n including the number of required interfaces
    n including Network Objective
    n including VLAN monitoring mode, or list of monitored VLAN interfaces
Output
The output of these commands must be identical to the configuration in the cluster object's Network
Management page in SmartConsole.
Example
[Expert@Member1:0]# cphaprob -a -m if
eth0 UP
eth1 (S) UP
eth2 (LM) UP
bond1 (LS) UP
eth0 192.168.3.247
eth2 44.55.66.247
bond1 77.88.99.247
[Expert@Member1:0]#
Required interfaces
    Shows the total number of monitored cluster interfaces, including the Sync
    interface. This number is based on the configuration of the cluster object >
    Network Management page.

Required secured interfaces
    Shows the total number of the required Sync interfaces. This number is based on
    the configuration of the cluster object > Network Management page.

Non-Monitored
    This means that the Cluster Member does not monitor the state of this interface.
    In SmartConsole, in the cluster object > Network Management page, the
    administrator configured the Network Type Private for this interface.

UP
    This means that the Cluster Member monitors the state of this interface. The
    current cluster state of this interface is UP, which means this interface can send
    and receive CCP packets. In SmartConsole, in the cluster object > Network
    Management page, the administrator configured one of these Network Types for
    this interface: Cluster, Sync, or Cluster + Sync.

DOWN
    This means that the Cluster Member monitors the state of this interface. The
    current cluster state of this interface is DOWN, which means this interface cannot
    send CCP packets, receive CCP packets, or both. In SmartConsole, in the cluster
    object > Network Management page, the administrator configured one of these
    Network Types for this interface: Cluster, Sync, or Cluster + Sync.

Virtual cluster interfaces
    Shows the total number of the configured virtual cluster interfaces. This number is
    based on the configuration of the cluster object > Network Management page.

No VLANs are monitored on the member
    Shows the VLAN monitoring mode - there are no VLAN interfaces configured on
    the cluster interfaces.

Monitoring mode is Monitor all VLANs: All VLANs are monitored
    Shows the VLAN monitoring mode - there are some VLAN interfaces configured
    on the cluster interfaces, and the Cluster Member monitors all VLAN IDs.

Monitoring mode is Monitor specific VLAN: Only specified VLANs are monitored
    Shows the VLAN monitoring mode - there are some VLAN interfaces configured
    on the cluster interfaces, and the Cluster Member monitors only specific VLAN
    IDs.
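Since a DOWN interface is what typically pushes a member out of the ACTIVE state, the interface listing is worth checking in scripts. A minimal sketch (the helper name down_interfaces is illustrative; the sample lines are an assumed, abbreviated form of the "cphaprob -a -m if" interface rows, which in reality are accompanied by extra sections):

```shell
# Print the names of cluster interfaces whose last column reads DOWN,
# given "cphaprob -a -m if" style rows on stdin.
down_interfaces() {
    awk '$NF == "DOWN" { print $1 }'
}

down_interfaces <<'EOF'
eth0 UP
eth1 (S) UP
eth2 (LM) DOWN
EOF
# prints: eth2
```

An empty result means every monitored interface currently sends and receives CCP packets.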
Syntax
Shell Command
Where:
Command Description
show cluster bond all
show bonding groups
cphaprob show_bond
    Shows configuration of all configured bond interfaces.

show cluster bond name <bond_name>
cphaprob show_bond <bond_name>
    Shows configuration of the specified bond interface.
Examples
Legend:
-------
UP! - Bond interface state is UP, yet attention is required
Slaves configured - number of slave interfaces configured on the bond
Slaves link up - number of operational slaves
Slaves required - minimal number of operational slaves required for bond to be UP
[Expert@Member2:0]#
Description of the output fields for the "cphaprob show_bond" and "show cluster bond all"
commands:
Table: Description of the output fields

Slaves configured
    Total number of physical slave interfaces configured in this Gaia bonding group.

Slaves link up
    Number of operational physical slave interfaces in this Gaia bonding group.

Slaves required
    Minimal number of operational physical slave interfaces required for the state of
    this Gaia bonding group to be UP.
[Expert@Member2:0]#
Description of the output fields for the "cphaprob show_bond <bond_name>" and "show
cluster bond name <bond_name>" commands:
Table: Description of the output fields

Configured slave interfaces
    Total number of physical slave interfaces configured in this Gaia bonding group.

In use slave interfaces
    Number of operational physical slave interfaces in this Gaia bonding group.

Required slave interfaces
    Minimal number of operational physical slave interfaces required for the state of
    this Gaia bonding group to be UP.

Slave name
    Names of physical slave interfaces configured in this Gaia bonding group.

Link
    State of the physical link on the physical slave interfaces in this Gaia bonding
    group. One of these:
    n Yes - Link is present
    n No - Link is lost
Legend:
---------
Bonds in group - a list of the bonds in the bond group
Required active bonds - number of required active bonds
[Expert@Member2:0]#
Required active bonds
    Number of required active bonds in this Group of Bonds.

Bonds in group
    Names of the Gaia bond interfaces configured in this Group of Bonds.
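The "UP!" state in the legend above (bond is up, but attention is required) corresponds to a bond where fewer slaves are operational than configured. A minimal sketch (the helper name degraded_bonds is illustrative; the input rows are an assumed, simplified form "<bond> <state> <configured> <up> <required>" of the show_bond table, whose real output also has headers and borders):

```shell
# Flag bonds where fewer slave interfaces are up than are configured,
# given simplified "<bond> <state> <configured> <up> <required>" rows.
degraded_bonds() {
    awk '$4 < $3 { print $1 }'
}

degraded_bonds <<'EOF'
bond1 UP 2 2 1
bond2 UP! 2 1 1
EOF
# prints: bond2
```

A bond reported here is still UP as long as the "Slaves link up" count meets "Slaves required", but it has lost its redundancy and should be investigated.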
Shell Command
Shell Command
Parameters
Parameter Description
-l <number> Specifies how many of last failover events to show (between 1 and 50)
Example
Cluster failover history (last 20 failovers since reboot/reset on Sun Sep 8 16:08:34 2019):
[Expert@Member1:0]#
Syntax
Shell Command
Example
ID SW release
[Expert@Member1:0]#
Shell Command
Shell Command
Example output of the "show cluster statistics sync" and "cphaprob syncstat"
commands from a Cluster Member:
Sync status: OK
Drops:
Lost updates................................. 0
Lost bulk update events...................... 0
Oversized updates not sent................... 0
Sync at risk:
Sent reject notifications.................... 0
Received reject notifications................ 0
Sent messages:
Total generated sync messages................ 26079
Sent retransmission requests................. 0
Sent retransmission updates.................. 0
Peak fragments per update.................... 1
Received messages:
Total received updates....................... 3710
Received retransmission requests............. 0
Sync Interface:
Name......................................... eth1
Link speed................................... 1000Mb/s
Rate......................................... 46000 [Bps]
Peak rate.................................... 46000 [Bps]
Link usage................................... 0%
Total........................................ 376827[KB]
Timers:
Delta Sync interval (ms)..................... 100
This section shows the status of the Delta Sync mechanism. One of these:
n Sync status: OK
n Sync status: Off - Full-sync failure
n Sync status: Off - Policy installation failure
n Sync status: Off - Cluster module not started
n Sync status: Off - SIC failure
n Sync status: Off - Full-sync checksum error
n Sync status: Off - Full-sync received queue is full
n Sync status: Off - Release version mismatch
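Any of the "Off" variants above means Delta Sync is not running and the member cannot be trusted to hold current connection state. A minimal sketch that reduces the status line to an ok/off flag (the helper name sync_status is illustrative; it matches the "Sync status" line shown in the example output above):

```shell
# Report "ok" when the "Sync status" line of "cphaprob syncstat" output
# (read from stdin) is OK, and "off" for any of the Off variants.
sync_status() {
    awk -F': *' '/^Sync status/ { print (($2 == "OK") ? "ok" : "off") }'
}

printf 'Sync status: OK\n' | sync_status                  # prints: ok
printf 'Sync status: Off - SIC failure\n' | sync_status   # prints: off
```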
This section shows statistics for drops on the Delta Sync network.
Lost updates
    Shows how many Delta Sync updates this Cluster Member considers as lost (based
    on sequence numbers in CCP packets). If this counter shows a value greater than
    0, this Cluster Member lost Delta Sync updates.
    Possible mitigation - increase the size of the Sending Queue and the size of the
    Receiving Queue:
    n Increase the size of the Sending Queue, if the counter Received reject
      notifications is increasing.
    n Increase the size of the Receiving Queue, if the counter Received reject
      notifications is not increasing.

Lost bulk update events
    Shows how many times this Cluster Member missed Delta Sync updates
    (bulk update = twice the size of the local receiving queue).
    This counter increases when this Cluster Member receives a Delta Sync update
    with a sequence number much greater than expected. This probably indicates
    some networking issues that cause massive packet drops.
    This counter increases when the amount of missed Delta Sync updates is more
    than twice the local Receiving Queue Size.
    Possible mitigation:
    n If the counter's value is steady, this might indicate a one-time synchronization
      problem that can be resolved by running manual Full Sync. See sk37029.
    n If the counter's value keeps increasing, probably there are some networking
      issues. Increase the sizes of both the Receiving Queue and the Sending Queue.

Oversized updates not sent
    Shows how many oversized Delta Sync updates were discarded before sending
    them. This counter increases when a Delta Sync update is larger than the local
    Fragments Queue Size.
    Possible mitigation:
    n If the counter's value is steady, increase the size of the Sending Queue.
    n If the counter's value keeps increasing, contact Check Point Support.
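To watch these drop counters over time, a script only needs to pull the numeric value out of the dotted-leader lines. A minimal sketch (the helper name lost_updates is illustrative; the sample line follows the syncstat output format shown earlier):

```shell
# Extract the "Lost updates" counter from "cphaprob syncstat" output on
# stdin, splitting each line on the run of dots that pads the value.
lost_updates() {
    awk -F'\\.+ *' '/Lost updates/ { print $2 }'
}

lost_updates <<'EOF'
Drops:
Lost updates................................. 0
Lost bulk update events...................... 0
EOF
# prints: 0
```

Running the helper periodically and comparing successive values distinguishes a steady counter (a likely one-time event) from one that keeps increasing (a likely network problem), per the mitigation notes above.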
This section shows statistics that indicate the Sending Queue is at full capacity and rejects Delta Sync
retransmission requests.

Table: Description of the output fields

Sent reject notifications
    Shows how many times this Cluster Member rejected Delta Sync retransmission
    requests from its peer Cluster Members, because this Cluster Member does not
    hold the requested Delta Sync update anymore.

Received reject notifications
    Shows how many reject notifications this Cluster Member received from its peer
    Cluster Members.
This section shows statistics for Delta Sync updates sent by this Cluster Member to its peer Cluster
Members.
Table: Description of the output fields

Sent retransmission requests
    Shows how many times this Cluster Member asked its peer Cluster Members to
    retransmit specific Delta Sync update(s). Retransmission requests are sent when
    certain Delta Sync updates (with a specified sequence number) are missing, while
    the sending Cluster Member already received Delta Sync updates with advanced
    sequences.
    Note - Compare the number of Sent retransmission requests to the Total
    generated sync messages of the other Cluster Members. A large counter value
    can imply connectivity problems. If the counter's value is unreasonably high (more
    than 30% of the Total generated sync messages of other Cluster Members),
    contact Check Point Support equipped with the entire output and a detailed
    description of the network topology and configuration.

Sent retransmission updates
    Shows how many times this Cluster Member retransmitted specific Delta Sync
    update(s) at the request of its peer Cluster Members.

Peak fragments per update
    Shows the peak amount of fragments in the Fragments Queue on this Cluster
    Member (usually, should be 1).
This section shows statistics for Delta Sync updates that were received by this Cluster Member from its
peer Cluster Members.
Table: Description of the output fields

Total received updates
    Shows the total number of Delta Sync updates this Cluster Member received from
    its peer Cluster Members. This counts only Delta Sync updates (not
    Retransmission Requests, Retransmission Acknowledgments, and others).

Received retransmission requests
    Shows how many retransmission requests this Cluster Member received from its
    peer Cluster Members. A large counter value can imply connectivity problems. If
    the counter's value is unreasonably high (more than 30% of the Total generated
    sync messages on this Cluster Member), contact Check Point Support equipped
    with the entire output and a detailed description of the network topology and
    configuration.
Sending queue size
    Shows the size of the cyclic queue, which buffers all the Delta Sync updates that
    were already sent, until it receives an acknowledgment from the peer Cluster
    Members. This queue is needed for retransmitting the requested Delta Sync
    updates. Each Cluster Member has one Sending Queue.
    Default: 512 Delta Sync updates, which is also the minimal value.

Receiving queue size
    Shows the size of the cyclic queue, which buffers the received Delta Sync updates
    in two cases:
    n When Delta Sync updates are missing, this queue is used to hold the remaining
      received Delta Sync updates until the lost Delta Sync updates are retransmitted
      (Cluster Members must keep the order, in which they save the Delta Sync
      updates in the kernel tables).
    n This queue is used to re-assemble a fragmented Delta Sync update.
    Each Cluster Member has one Receiving Queue.
    Default: 256 Delta Sync updates, which is also the minimal value.

Fragments queue size
    Shows the size of the queue, which is used to prepare a Delta Sync update before
    moving it to the Sending Queue.
    Notes:
    n This queue must be smaller than the Sending Queue.
    n This queue must be significantly smaller than the Receiving Queue.
    Default: 50 Delta Sync updates, which is also the minimal value.

Delta Sync interval (ms)
    Shows the interval at which this Cluster Member sends the Delta Sync updates
    from its Sending Queue. The base time unit is 100ms (or 1 tick).
    Default: 100 ms, which is also the minimum value.
    See Increasing the Sync Timer.
Syntax
Shell Command
Example
[Expert@Member1:0]#
Syntax
Shell Command
The "reset" flag resets the kernel statistics, which were collected since the last reboot or reset.
Example
[Expert@Member1:0]#
Syntax
Shell Command
Example
(Local)
0 1 192.168.3.245
0 2 11.22.33.245
0 3 44.55.66.245
1 1 192.168.3.246
1 2 11.22.33.246
1 3 44.55.66.246
------------------------------------------
[Expert@Member1:0]#
[Expert@Member1:0]# fw ctl iflist
1 : eth0
2 : eth1
3 : eth2
[Expert@Member1:0]#
Syntax
Shell Command
Example
[Expert@Member1:0]#
Syntax
Shell Command
Example 1
[Expert@Member1:0]#
Example 2
eth0
[Expert@Member1:0]#
Notes:
n In ClusterXL High Availability, the RouteD daemon must run as a Master only on
the Active Cluster Member.
n In ClusterXL Load Sharing, the RouteD daemon must run as a Master only on
one of the Active Cluster Members and as a Non-Master on all other Cluster
Members.
n In VRRP Cluster, the RouteD daemon must run as a Master only on the VRRP
Master Cluster Member.
Syntax
Shell Command
Example
ID Role
1 (local) Master
2 Non-Master
[Expert@Member1:0]#
Note - For more information about CoreXL, see the R81 Performance Tuning
Administration Guide.
Syntax
Shell Command
Where:
Command Description
cphaprob -d corr Shows Cluster Correction Statistics for CoreXL SND only.
cphaprob -f corr Shows Cluster Correction Statistics for CoreXL Firewall instances only.
Shell Command
Shell Command
Syntax
Shell Command
Example
id 2
Latency | Drop
[msec] | rate
eth0 0.000 0%
eth1 0.000 0%
eth2 0.000 0%
[Expert@Member1:0]#
Syntax
Shell Command
Example
ON
Member1>
Syntax
Shell Command
Example
During FCU....................... no
Connection module map............ none
[Expert@Member1:0]#
General Logs
Log Description
State Logs
Log Description
State change of member [ID] ([IP]) from [STATE] to [STATE] was cancelled, since all
other members are down. Member remains [STATE].
    When a Cluster Member needs to change its state (for example, when an Active
    Cluster Member encounters a problem and needs to change its state to "Down"),
    it first queries the other Cluster Members for their state.
    If all other Cluster Members are down, this Cluster Member cannot change its
    state to a non-active one (otherwise the cluster cannot function).
    Thus, the reporting Cluster Member continues to function, despite its problem
    (and usually reports its state as "Active(!)").

member [ID] ([IP]) <is active|is down|is stand-by|is initializing> ([REASON]).
    This message is issued whenever a Cluster Member changes its state. The log
    text specifies the new state of the Cluster Member.
Log Description

[DEVICE] on member [ID] ([IP]) detected a problem ([REASON]).
    Either an error was detected by the Critical Device, or the Critical Device has not
    reported its state for a number of seconds (as set by the "timeout" option of the
    Critical Device).

[DEVICE] on member [ID] ([IP]) is initializing ([REASON]).
    Indicates that the Critical Device has registered itself with the Critical Device
    mechanism, but has not yet determined its state.
Interface Logs
Log Description

interface [INTERFACE NAME] of member [ID] ([IP]) was added.
    Indicates that a new interface was registered with the Cluster Member (meaning
    that Cluster Control Protocol (CCP) packets arrive on this interface). Usually, this
    message is the result of activating an interface (such as issuing the "ifconfig
    up" command). The interface is now included in the ClusterXL reports (in the
    output of the applicable CLI commands). Note that the interface may still be
    reported as "Disconnected", in case it was configured as such for ClusterXL.

interface [INTERFACE NAME] of member [ID] ([IP]) was removed.
    Indicates that an interface was detached from the Cluster Member, and is
    therefore no longer monitored by ClusterXL.
Reason Strings
Log Description

member [ID] ([IP]) reports more interfaces up.
    This text can be included in a Critical Device log message describing the reasons
    for a problem report: another Cluster Member has more interfaces reported to be
    working than the local Cluster Member does. Usually, this means that the local
    Cluster Member has a faulty interface, and that its peer Cluster Member can do a
    better job as a Cluster Member. The local Cluster Member changes its state to
    "Down", leaving the peer Cluster Member specified in the message to handle
    traffic.

member [ID] ([IP]) has more interfaces - check your disconnected interfaces
configuration in the <discntd.if file|registry>
    This message is issued when Cluster Members in the same cluster have a
    different number of interfaces. A Cluster Member with fewer interfaces than the
    maximal number in the cluster (the reporting Cluster Member) may not be working
    properly, as it is missing an interface required to operate against a cluster IP
    address, or a synchronization network. If some of the interfaces on the other
    Cluster Member are redundant, and should not be monitored by ClusterXL, they
    should be explicitly designated as "Non-Monitored". See "Defining Non-Monitored
    Interfaces" on page 144.

[NUMBER] interfaces required, only [NUMBER] up.
    ClusterXL has detected a problem with one or more of the monitored interfaces.
    This does not necessarily mean that the Cluster Member changes its state to
    "Down", as the other Cluster Members may have fewer operational interfaces. In
    such a condition, the Cluster Member with the largest number of operational
    interfaces remains up, while the others go down.
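When reviewing collected logs, the message templates above can be grouped by family. A minimal sketch (the helper name classify_cluster_log is illustrative, not a Check Point tool; it keys on fixed substrings of the log texts documented above):

```shell
# A rough classifier for the ClusterXL log texts described above,
# keyed on fixed substrings of the documented message templates.
classify_cluster_log() {
    case "$1" in
        *"detected a problem"*)        echo "critical-device" ;;
        *"interfaces required, only"*) echo "interface" ;;
        *"State change of member"*)    echo "state" ;;
        *)                             echo "other" ;;
    esac
}

classify_cluster_log "fwd on member 1 (192.168.3.245) detected a problem (timeout)"
# prints: critical-device
```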
4. Run:
threshold_config
For more information, see the R81 CLI Reference Guide > Chapter Security Management Server
Commands > Section threshold_config.
5. From the Threshold Engine Configuration Options menu, select (9) Configure Thresholds .
6. From the Threshold Categories menu, select (2) High Availability .
7. Select the applicable traps.
8. Select and configure these actions for the specified trap:
n Enable/Disable Threshold
n Set Severity
n Set Repetitions
n Configure Alert Destinations
9. From the Threshold Engine Configuration Options menu, select (7) Configure alert destinations .
10. Configure your alert destinations.
11. From the Threshold Engine Configuration Options menu, select (3) Save policy .
You can optionally save the policy to a file.
12. In SmartConsole, install the Access Control Policy on this cluster object.
Note - You can download the most recent Check Point MIB files from sk90470.
Method 1 (recommended)
Command
Change the state of the Cluster Member to DOWN
    Gaia Clish:  set cluster member admin down
    Expert Mode: clusterXL_admin down
    Note: Does not disable Delta Sync.

Change the state of the Cluster Member to UP
    Gaia Clish:  set cluster member admin up
    Expert Mode: clusterXL_admin up
    Note: Does not initiate Full Sync.
See:
n "Initiating Manual Cluster Failover" on page 189
n "The clusterXL_admin Script" on page 273
See:
n "Registering a Critical Device" on page 181
n "Reporting the State of a Critical Device" on page 184
n "Unregistering a Critical Device" on page 183
Notes:
n In Load Sharing mode, the cluster distributes the traffic load between the
remaining Active members.
n In High Availability mode, the cluster fails over to a Standby Cluster Member with
the highest priority.
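A typical maintenance window brackets the work with these two commands. A minimal sketch (run in Expert mode on the member to be serviced; the run wrapper and DRY_RUN flag are illustrative additions, not Check Point tools, and the default here only prints the commands instead of executing them):

```shell
# Maintenance sketch around "clusterXL_admin" (Expert mode).
# DRY_RUN=1 only prints each command; set DRY_RUN=0 on a real Cluster
# Member to actually execute it.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run clusterXL_admin down   # state -> DOWN; Delta Sync keeps running
# ... perform the maintenance work here ...
run clusterXL_admin up     # state -> UP; no Full Sync is initiated
```

Because Delta Sync stays up while the member is administratively DOWN, bringing it back with "up" does not require a Full Sync, which is what makes this the recommended method.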
dbset routed:instance:default:traceoptions:traceoptions:Cluster
Part of the message | Description

The first digit shows which Cluster Member generated the event:
    n 1 - local member
    n 2 - remote member

Y
    Shows the ID or the NAME of the local Cluster Member that generated this log
    message. See "Configuring the Cluster Member ID Mode in Local Logs" on
    page 180.
Important:
n We do not recommend that you run these commands manually. These commands
are intended to be run automatically by the Security Gateway, or by Check Point Support.
n In a Cluster, you must configure all the Cluster Members in the same way.
Syntax
Notes:
n In Gaia Clish:
Enter set cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaconf command to see all the available commands.
You can run the cphaconf commands only from the Expert mode.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which you can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value you need to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which you can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Configuration Commands
Configure how to show the Cluster Member in local ClusterXL logs - by its Member ID or its Member Name (see "Configuring the Cluster Member ID Mode in Local Logs" on page 180):
Gaia Clish: set cluster member idmode {id | name}
Expert mode: cphaconf mem_id_mode {id | name}
Configure the Cluster Forwarding Layer on the Cluster Member (controls the forwarding of traffic between Cluster Members). Note - For Check Point use only:
Gaia Clish: set cluster member forwarding {off | on}
Expert mode: cphaconf forward {off | on}
Syntax
Notes:
n In Gaia Clish:
Enter show cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaprob command to see all the available commands.
You can run the cphaprob commands from Gaia Clish as well.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which you can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value you need to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which you can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Monitoring Commands
Show states of Cluster Members and their names (see "Viewing Cluster State" on page 203):
Gaia Clish: show cluster state
Expert mode: cphaprob [-vs <VSID>] state
Show Critical Devices (Pnotes) and their states on the Cluster Member (see "Viewing Critical Devices" on page 207):
Gaia Clish: show cluster members pnotes {all | problem}
Expert mode: cphaprob [-l] [-ia] [-e] list
Show cluster interfaces on the cluster member (see "Viewing Cluster Interfaces" on page 214):
Gaia Clish: show cluster members interfaces {all | secured | virtual | vlans}
Expert mode: cphaprob [-vs all] [-a] [-m] if
Show cluster bond configuration on the Cluster Member (see "Viewing Bond Interfaces" on page 218):
Gaia Clish: show cluster bond {all | name <bond_name>}
Expert mode: cphaprob show_bond [<bond_name>]
Show (and reset) cluster failover statistics on the Cluster Member (see "Viewing Cluster Failover Statistics" on page 223):
Gaia Clish: show cluster failover [reset {count | history}]
Expert mode: cphaprob [-reset {-c | -h}] [-l <count>] show_failover
Show information about the software version (including hotfixes) on the local Cluster Member and its matches/mismatches with other Cluster Members (see "Viewing Software Versions on Cluster Members" on page 225):
Gaia Clish: show cluster release
Expert mode: cphaprob release
Show Delta Sync statistics on the Cluster Member (see "Viewing Delta Synchronization" on page 226):
Gaia Clish: show cluster statistics sync [reset]
Expert mode: cphaprob [-reset] syncstat
Show Delta Sync statistics for the Connections table on the Cluster Member (see "Viewing Cluster Delta Sync Statistics for Connections Table" on page 233):
Gaia Clish: show cluster statistics transport [reset]
Expert mode: cphaprob [-reset] ldstat
Show the Cluster Control Protocol (CCP) mode on the Cluster Member (see "Viewing Cluster Interfaces" on page 214):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob [-vs all] -a if
Show the IGMP membership of the Cluster Member (see "Viewing IGMP Status" on page 232):
Gaia Clish: show cluster members igmp
Expert mode: cphaprob igmp
Show the cluster unique IPs table on the Cluster Member (see "Viewing Cluster IP Addresses" on page 234):
Gaia Clish: show cluster members ips
Expert mode: cphaprob tablestat
Show the Cluster Member ID Mode in local logs - by Member ID (default) or Member Name (see "Viewing the Cluster Member ID Mode in Local Logs" on page 235):
Gaia Clish: show cluster members idmode
Expert mode: cphaprob names
Show the interfaces that the RouteD daemon monitors on the Cluster Member when you configure OSPF (see "Viewing Interfaces Monitored by RouteD" on page 236):
Gaia Clish: show ospf interfaces [detailed]
Expert mode: cphaprob routedifcs
Show the Cluster Control Protocol (CCP) mode (see "Viewing the Cluster Control Protocol (CCP) Settings" on page 240):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob -a if
Show the Cluster Control Protocol (CCP) Encryption settings (see "Viewing the Cluster Control Protocol (CCP) Settings" on page 240):
Gaia Clish: show cluster members ccpenc
Expert mode: cphaprob ccp_encrypt
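The monitoring commands above can be strung together into a quick health check of one member. This is a dry-run sketch (the run helper only prints each command, since cphaprob is available only on a Cluster Member):

```shell
#!/bin/sh
# Dry-run sketch of a cluster health check with the Expert-mode
# commands from the table above; "run" prints instead of executing.
run() { echo "+ $*"; }

run cphaprob state      # states of all Cluster Members
run cphaprob -l list    # Critical Devices (pnotes) and their states
run cphaprob -a if      # cluster interfaces and CCP mode
run cphaprob syncstat   # Delta Sync statistics
```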
show cluster
    bond {all | name <Name of Bond>}
    failover
    members
        ccpenc
        idmode
        igmp
        interfaces {all | secured | virtual | vlans}
        ips
        mvc
        pnotes {all | problem}
    release
    roles
    state
    statistics
        sync [reset]
        transport [reset]
cpconfig
Description
This command starts the Check Point Configuration Tool.
This tool lets you configure specific settings for the installed Check Point products.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Syntax
cpconfig
Menu Options
Note - The options shown depend on the configuration and installed products.
Licenses and contracts: Manages Check Point licenses and contracts on this Security Gateway or Cluster Member.
Random Pool: Configures the RSA keys to be used by the Gaia Operating System.
Secure Internal Communication: Manages SIC on the Security Gateway or Cluster Member.
This change requires a restart of Check Point services on the Security Gateway or Cluster Member.
For more information, see:
n The R81 Security Management Administration Guide.
n sk65764: How to reset SIC.
Enable cluster membership for this gateway: Enables the cluster membership on the Security Gateway.
This change requires a reboot of the Security Gateway.
For more information, see the R81 Installation and Upgrade Guide.
Disable cluster membership for this gateway: Disables the cluster membership on the Security Gateway.
This change requires a reboot of the Security Gateway.
For more information, see the R81 Installation and Upgrade Guide.
Enable Check Point Per Virtual System State: Enables Virtual System Load Sharing on the VSX Cluster Member.
For more information, see the R81 VSX Administration Guide.
Disable Check Point Per Virtual System State: Disables Virtual System Load Sharing on the VSX Cluster Member.
For more information, see the R81 VSX Administration Guide.
Enable Check Point ClusterXL for Bridge Active/Standby: Enables Check Point ClusterXL for Bridge mode.
This change requires a reboot of the Cluster Member.
For more information, see the R81 Installation and Upgrade Guide.
Disable Check Point ClusterXL for Bridge Active/Standby: Disables Check Point ClusterXL for Bridge mode.
This change requires a reboot of the Cluster Member.
For more information, see the R81 Installation and Upgrade Guide.
Check Point CoreXL: Manages CoreXL on the Security Gateway or Cluster Member.
After all changes in the CoreXL configuration, you must reboot the Security Gateway or Cluster Member.
For more information, see the R81 Performance Tuning Administration Guide.
Automatic start of Check Point Products: Shows and controls which of the installed Check Point products start automatically during boot.
[Expert@MySingleGW:0]# cpconfig
This program will let you re-configure
your Check Point products configuration.
Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Enable cluster membership for this gateway
(7) Check Point CoreXL
(8) Automatic start of Check Point Products
(9) Exit
[Expert@MyClusterMember:0]# cpconfig
This program will let you re-configure
your Check Point products configuration.
Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Disable cluster membership for this gateway
(7) Enable Check Point Per Virtual System State
(8) Enable Check Point ClusterXL for Bridge Active/Standby
(9) Check Point CoreXL
(10) Automatic start of Check Point Products
(11) Exit
cphastart
Description
Starts the cluster configuration on a Cluster Member after it was stopped with the "cphastop" on page 267
command.
Note - This command does not initiate a Full Synchronization on the Cluster Member.
Syntax
cphastart
[-h]
[-d]
Parameters
Parameter Description
-h Shows the built-in usage.
-d Runs the command in debug mode. Use only if you troubleshoot the command itself.
Best Practice - If you use this parameter, then redirect the output to a file, or
use the script command to save the entire CLI session.
Refer to:
n These lines in the output file:
prepare_command_args: -D ... start
/opt/CPsuite-R81/fw1/bin/cphaconf clear-secured
/opt/CPsuite-R81/fw1/bin/cphaconf -D ...(truncated here
for brevity)... start
n The $FWDIR/log/cphastart.elg log file.
cphastop
Description
Stops the cluster software on a Cluster Member.
Notes:
n This command stops the Cluster Member from passing traffic.
n This command stops the State Synchronization between this Cluster Member
and its peer Cluster Members.
n After you run this command, you can still open connections directly to this
Cluster Member.
n To start the cluster software, run the "cphastart" on page 266 command.
Syntax
cphastop
cp_conf fullha
Description
Manages the state of the Full High Availability Cluster:
n Enables the Full High Availability Cluster
n Disables the Full High Availability Cluster
n Deletes the Full High Availability peer
n Shows the Full High Availability state
Important - To configure a Full High Availability cluster, follow the R81 Installation and
Upgrade Guide.
Syntax
cp_conf fullha
enable
del_peer
disable
state
Parameters
Parameter Description
del_peer Deletes the Full High Availability peer from the configuration.
cp_conf ha
Description
Enables or disables cluster membership on this Security Gateway.
Important - This command is for Check Point use only. To configure cluster
membership, you must use the "cpconfig" command.
Syntax
cp_conf ha {enable | disable} [norestart]
Parameters
Parameter Description
norestart Optional. Applies the configuration change without restarting Check Point services.
The new configuration takes effect only after a reboot.
Example 1 - Enable the cluster membership without restart of Check Point services
[Expert@MyGW:0]# cp_conf ha enable norestart
[Expert@MyGW:0]#
Example 2 - Disable the cluster membership without restart of Check Point services
[Expert@MyGW:0]# cp_conf ha disable norestart
[Expert@MyGW:0]#
fw hastat
Description
Shows information about Check Point computers in High Availability configuration and their states.
Note - This command is outdated. On cluster members, run the Gaia Clish command
"show cluster state", or the Expert mode command "cphaprob state". See
"Viewing Cluster State" on page 203.
Syntax
Parameters
Parameter Description
[Expert@Member1:0]# fw hastat
HOST NUMBER HIGH AVAILABILITY STATE MACHINE STATUS
192.168.3.52 1 active OK
[Expert@Member1:0]#
fwboot ha_conf
Description
Configures the cluster mechanism during boot.
Notes:
n You must run this command from the Expert mode.
n To install a cluster, see the R81 Installation and Upgrade
Guide.
Syntax
ClusterXL Scripts
You can use special scripts to change the state of Cluster Members.
$FWDIR/bin/clusterXL_admin
Script Workflow
This shell script does one of these:
n Registers a Critical Device called "admin_down" and reports the state of that Critical Device as
"problem".
This gracefully changes the state of the Cluster Member to "DOWN".
n Reports the state of the registered Critical Device "admin_down" as "ok".
This gracefully changes the state of the Cluster Member to "UP".
Then, the script unregisters the Critical Device "admin_down".
Example
#! /bin/csh -f
#
# The script will cause the machine to get into down state, thus the member will not filter packets.
# It will supply a simple way to initiate a failover by registering a new device in problem state when
# a failover is required and will unregister the device when wanting to return to normal operation.
# USAGE:
# clusterXL_admin <up|down>
# Inform the user that the command can run with persistent mode.
if ("$PERSISTENT" != "-p") then
echo "This command does not survive reboot. To make the change permanent, please run 'set cluster
member admin down/up permanent' in clish or add '-p' at the end of the command in expert mode"
endif
sleep 1
$FWDIR/bin/clusterXL_monitor_ips
Script Workflow
1. Registers a Critical Device called "host_monitor" with the status "ok".
2. Starts to send pings to the list of predefined IP addresses in the $FWDIR/conf/cpha_hosts file.
3. While the script receives responses to its pings, it does not change the status of that Critical Device.
4. If the script does not receive a response to even one ping, it reports the state of that Critical Device
as "problem".
This gracefully changes the state of the Cluster Member to DOWN.
If the script receives responses to its pings again, it changes the status of that Critical Device to "ok"
again.
For more information, see sk35780.
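The workflow above can be sketched in portable shell. Here reachable is a stand-in for the script's ping test (so the control flow can be exercised anywhere), the echo stands in for cphaconf set_pnote, and the host list and the "down" host are made up for the illustration:

```shell
#!/bin/sh
# Sketch of the host-monitor logic: if any host fails the check,
# the "host_monitor" Critical Device is reported as "problem".
reachable() {
    # stand-in for: ping -c 1 -W 2 "$1" >/dev/null 2>&1
    [ "$1" != "192.0.2.1" ]   # pretend this TEST-NET-1 address is down
}

status=ok
for host in 10.0.0.1 10.0.0.2 192.0.2.1; do   # normally read from cpha_hosts
    reachable "$host" || status=problem
done
echo "pnote host_monitor -> $status"   # stand-in for: cphaconf set_pnote ... report
```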
Example
#!/bin/sh
#
# The script tries to ping the hosts written in the file $FWDIR/conf/cpha_hosts. The names (must be
# resolvable) or the IPs of the hosts must be written in separate lines.
# the file must not contain anything else.
# We ping the given hosts every number of seconds given as parameter to the script.
# USAGE:
# cpha_monitor_ips X silent
# where X is the number of seconds between loops over the IPs.
# if silent is set to 1, no messages will appear on the console
#
# We initially register a pnote named "host_monitor" in the problem notification mechanism
# when we detect that a host is not responding we report the pnote to be in "problem" state.
# when ping succeeds again - we report the pnote is OK.
silent=0
fi
$FWDIR/bin/cphaconf set_pnote -d host_monitor -s problem report
else
if [ $silent = 0 ]
then
echo " Cluster member seems fine!"
fi
$FWDIR/bin/cphaconf set_pnote -d host_monitor -s ok report
fi
if [ "$silent" = 0 ]
then
echo "sleeping"
fi
sleep $1
echo "sleep $1"
done
$FWDIR/bin/clusterXL_monitor_process
Script Workflow
1. Registers Critical Devices (with the status "ok") called as the names of the processes you specified
in the $FWDIR/conf/cpha_proc_list file.
2. While the script detects that the specified process runs, it does not change the status of the
corresponding Critical Device.
3. If the script detects that a specified process does not run anymore, it reports the state of the
corresponding Critical Device as "problem".
This gracefully changes the state of the Cluster Member to "DOWN".
If the script detects that the specified process runs again, it changes the status of the corresponding
Critical Device to "ok" again.
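The same pattern can be exercised with a self-contained sketch. kill -0 probes whether a PID exists without sending a signal; here it stands in for the script's process-table check, and report_pnote stands in for cphaconf set_pnote. The PIDs are illustrative: the shell's own PID is alive, and 99999999 is assumed not to exist.

```shell
#!/bin/sh
# Sketch of the process-monitor logic: report "ok" for a live
# process and "problem" for a missing one.
report_pnote() { echo "pnote $1 -> $2"; }   # stand-in for cphaconf set_pnote

for pid in $$ 99999999; do   # our own PID (alive) and a bogus PID (dead)
    if kill -0 "$pid" 2>/dev/null; then
        report_pnote "pid_$pid" ok
    else
        report_pnote "pid_$pid" problem
    fi
done
```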
Example
#!/bin/sh
#
# This script monitors the existence of processes in the system. The process names should be written
# in the $FWDIR/conf/cpha_proc_list file, one on every line.
#
# USAGE :
# cpha_monitor_process X silent
# where X is the number of seconds between process probings.
# if silent is set to 1, no messages will appear on the console.
#
#
# We initially register a pnote for each of the monitored processes
# (process name must be up to 15 characters) in the problem notification mechanism.
# when we detect that a process is missing we report the pnote to be in "problem" state.
# when the process is up again - we report the pnote is OK.
if [ "$2" -le 1 ]
then
silent=$2
else
silent=0
fi
if [ -f $FWDIR/conf/cpha_proc_list ]
then
procfile=$FWDIR/conf/cpha_proc_list
else
echo "No process file in $FWDIR/conf/cpha_proc_list "
exit 0
fi
arch=`uname -s`
while [ 1 ]
do
result=1
status=$?
if [ $status = 0 ]
then
if [ $silent = 0 ]
then
echo " $process is alive"
fi
# echo "3, $FWDIR/bin/cphaconf set_pnote -d $process -s ok report"
$FWDIR/bin/cphaconf set_pnote -d $process -s ok report
else
if [ $silent = 0 ]
then
echo " $process is down"
fi
done
if [ $result = 0 ]
then
if [ $silent = 0 ]
then
echo " One of the monitored processes is down!"
fi
else
if [ $silent = 0 ]
then
echo " All monitored processes are up "
fi
fi
if [ "$silent" = 0 ]
then
echo "sleeping"
fi
sleep $1
done
List of APIs
n add simple-cluster (Asynchronous API) - Creates a new simple cluster object from scratch.
n show simple-cluster (Synchronous API) - Shows an existing simple cluster object specified by its Name or UID.
API Examples
Example 1 - Adding a simple cluster object
API command:
Use this API to add a simple cluster object.
add simple-cluster
Once the API command finishes, and the session is published, a new cluster object appears in
SmartConsole.
Prerequisites:
1. All Cluster Members must already be installed.
2. The applicable interfaces on each Cluster Member must already be configured.
Example description:
n A simple ClusterXL in High Availability mode called cluster1
n With two cluster members called member1 and member2
n With three interfaces: eth0 (external), eth1 (sync), and eth2 (internal)
n Only the Firewall Software Blade is enabled (the IPsec VPN blade is disabled)
n Cluster software version is R80.20
API example:
Important - In the API command you must use the same One-Time Password you
used on Cluster Members during the First Time Configuration Wizard.
{
"name" : "cluster1",
"color" : "yellow",
"version" : "R80.20",
"ip-address" : "172.23.5.254",
"os-name" : "Gaia",
"cluster-mode" : "cluster-xl-ha",
"firewall" : true,
"vpn" : false,
"interfaces" : [
{
"name" : "eth0",
"ip-address" : "172.23.5.254",
"network-mask" : "255.255.255.0",
"interface-type" : "cluster",
"topology" : "EXTERNAL",
"anti-spoofing" : "true"
},
{
"name" : "eth1",
"interface-type" : "sync",
"topology" : "INTERNAL",
"topology-settings": {
"ip-address-behind-this-interface": "network defined by the interface ip and net mask",
"interface-leads-to-dmz": false
}
},
{
"name" : "eth2",
"ip-address" : "192.168.1.254",
"network-mask" : "255.255.255.0",
"interface-type" : "cluster",
"topology" : "INTERNAL",
"anti-spoofing" : "true",
"topology-settings": {
"ip-address-behind-this-interface": "network defined by the interface ip and net mask",
"interface-leads-to-dmz": false
}
}
],
"members" : [ {
"name" : "member1",
"one-time-password" : "abcd",
"ip-address" : "172.23.5.1",
"interfaces" : [
{
"name" : "eth0",
"ip-address" : "172.23.5.1",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth1",
"ip-address" : "1.1.1.1",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth2",
"ip-address" : "192.168.1.1",
"network-mask" : "255.255.255.0"
}
]
},
{
"name" : "member2",
"one-time-password" : "abcd",
"ip-address" : "172.23.5.2",
"interfaces" : [
{
"name" : "eth0",
"ip-address" : "172.23.5.2",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth1",
"ip-address" : "1.1.1.2",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth2",
"ip-address" : "192.168.1.2",
"network-mask" : "255.255.255.0"
}
]
}
]
}
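One possible way to send the payload above is through the Management API web services. This is an illustrative dry run only (the run helper just prints the commands); the server address, credentials, session-ID handling, and the cluster1.json file name are placeholders, not part of the original example:

```shell
#!/bin/sh
# Dry-run sketch: "run" prints each command instead of executing it.
run() { echo "+ $*"; }

run curl -k https://203.0.113.10/web_api/login -H "Content-Type: application/json" -d "{\"user\":\"admin\",\"password\":\"...\"}"
run curl -k https://203.0.113.10/web_api/add-simple-cluster -H "X-chkp-sid: <sid>" -d @cluster1.json
run curl -k https://203.0.113.10/web_api/publish -H "X-chkp-sid: <sid>" -d "{}"
```

After the publish call succeeds, the new cluster object appears in SmartConsole, as described above.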
API command:
Use this API to add (scale up) Cluster Members.
set simple-cluster
Example description:
Adding a Cluster Member called member3.
API example:
{
"name" : "cluster1",
"members" : { "add" :
{
"name" : "member3",
"ipv4-address" : "172.23.5.3",
"one-time-password" : "aaaa",
"interfaces" : [
{
"name" : "eth0",
"ip-address" : "172.23.5.3",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth1",
"ip-address" : "1.1.1.3",
"network-mask" : "255.255.255.0"
},
{
"name" : "eth2",
"ip-address" : "192.168.1.3",
"network-mask" : "255.255.255.0"
}
]
}
}
}
API command:
Use this API to remove (scale down) Cluster Members.
set simple-cluster
Example description:
Removing a Cluster Member called member3.
API example:
{
"name" : "cluster1",
"members" : { "remove" : "member3" }
}
API command:
Use this API to add a cluster interface.
set simple-cluster
Example description:
Adding a cluster interface called eth3.
API example:
{
"name" : "cluster1",
"interfaces" : { "add" :
{
"name" : "eth3",
"ip-address" : "10.10.10.254",
"ipv4-mask-length" : "24",
"interface-type" : "cluster",
"topology" : "INTERNAL",
"anti-spoofing" : "true"
}
},
"members" : { "update" :
[{
"name" : "member1" ,
"interfaces" :
{"name" : "eth3",
"ipv4-address" : "10.10.10.1",
"ipv4-network-mask" : "255.255.255.0"}
},
{
"name" : "member2" ,
"interfaces" :
{"name" : "eth3",
"ipv4-address" : "10.10.10.2",
"ipv4-network-mask" : "255.255.255.0"}
}
]}
}
API command:
Use this API to remove a cluster interface.
set simple-cluster
Example description:
Removing a cluster interface called eth3.
API example:
{
"name" : "cluster1",
"interfaces" : { "remove" : "eth3" }
}
API command:
Use this API to change settings of a cluster interface.
set simple-cluster
Example description:
Changing the IP address of the cluster interfaces called eth2 from 192.168.x.254 / 255.255.255.0 to
172.30.1.x / 255.255.255.0
API example:
{
"name" : "cluster1",
"interfaces" : { "update" :
{
"name" : "eth2",
"ip-address" : "172.30.1.254",
"ipv4-mask-length" : "24",
"interface-type" : "cluster",
"topology" : "INTERNAL",
"anti-spoofing" : "true"
}
},
"members" : { "update" : [
{
"name" : "member1" ,
"interfaces" :
{"name" : "eth2",
"ipv4-address" : "172.30.1.1",
"ipv4-mask-length" : "24"}
},
{
"name" : "member2" ,
"interfaces" :
{"name" : "eth2",
"ipv4-address" : "172.30.1.2",
"ipv4-mask-length" : "24"}
}
]}
}
API command:
Use this API to reestablish SIC with Cluster Members.
set simple-cluster
Prerequisite:
SIC must already be reset on the Cluster Members.
API example:
Important - In the API command you must use the same One-Time Password you
used on Cluster Members during the SIC reset.
{
"name" : "cluster1",
"members" : { "update" :
[
{
"name" : "member1",
"one-time-password" : "aaaa"
},
{
"name" : "member2",
"one-time-password" : "aaaa"
}
]
}
}
API command:
Use this API to enable and disable Software Blades on Cluster Members.
set simple-cluster
Notes:
n To enable a Software Blade, set its value to true in the
API command.
n To disable a Software Blade, set its value to false in the
API command.
API example:
To enable all Software Blades supported by the Cluster API:
{
"name" : "cluster1",
"vpn" : true,
"application-control" : true,
"url-filtering" : true,
"ips" : true,
"content-awareness" : true,
"anti-bot" : true,
"anti-virus" : true,
"threat-emulation" : true
}
{
"name" : "cluster1",
"vpn" : false,
"application-control" : false,
"url-filtering" : false,
"ips" : false,
"content-awareness" : false,
"anti-bot" : false,
"anti-virus" : false,
"threat-emulation" : false
}
API command:
Use this API to view a specific existing cluster object.
show simple-cluster
{
"limit-interfaces" : "10",
"name" : "cluster1"
}
{
"uid": "e0ce560b-8a0a-4468-baa9-5f8eb2658b96",
"name": "cluster1",
"type": "simple-cluster",
"domain": {
"uid": "41e821a0-3720-11e3-aa6e-0800200c9fde",
"name": "SMC User",
"domain-type": "domain"
},
"meta-info": {
"lock": "unlocked",
"validation-state": "ok",
"last-modify-time": {
"posix": 1567417185885,
"iso-8601": "2019-09-02T12:39+0300"
},
"last-modifier": "aa",
"creation-time": {
"posix": 1567417140278,
"iso-8601": "2019-09-02T12:39+0300"
},
"creator": "aa"
},
"tags": [],
"read-only": false,
"comments": "",
"color": "yellow",
"icon": "NetworkObjects/cluster",
"groups": [],
"ipv4-address": "172.23.5.254",
"dynamic-ip": false,
"version": "R80.20",
"os-name": "Gaia",
"hardware": "Open server",
"firewall": true,
"firewall-settings": {
"auto-maximum-limit-for-concurrent-connections": true,
"maximum-limit-for-concurrent-connections": 25000,
"auto-calculate-connections-hash-table-size-and-memory-pool": true,
"connections-hash-size": 131072,
"memory-pool-size": 6,
"maximum-memory-pool-size": 30
},
"vpn": false,
"application-control": false,
"url-filtering": false,
"content-awareness": false,
"ips": false,
"anti-bot": false,
"anti-virus": false,
"threat-emulation": false,
"save-logs-locally": false,
"send-alerts-to-server": [
"harry-main-take-96"
],
"send-logs-to-server": [
"harry-main-take-96"
],
"send-logs-to-backup-server": [],
"logs-settings": {
"rotate-log-by-file-size": false,
"rotate-log-file-size-threshold": 1000,
"rotate-log-on-schedule": false,
"alert-when-free-disk-space-below-metrics": "mbytes",
"alert-when-free-disk-space-below": true,
"alert-when-free-disk-space-below-threshold": 3000,
"alert-when-free-disk-space-below-type": "popup alert",
"delete-when-free-disk-space-below-metrics": "mbytes",
"delete-when-free-disk-space-below": true,
"delete-when-free-disk-space-below-threshold": 5000,
"before-delete-keep-logs-from-the-last-days": false,
"before-delete-keep-logs-from-the-last-days-threshold": 0,
"before-delete-run-script": false,
"before-delete-run-script-command": "",
"stop-logging-when-free-disk-space-below-metrics": "mbytes",
"stop-logging-when-free-disk-space-below": true,
"stop-logging-when-free-disk-space-below-threshold": 100,
"reject-connections-when-free-disk-space-below-threshold": false,
"reserve-for-packet-capture-metrics": "mbytes",
"reserve-for-packet-capture-threshold": 500,
"delete-index-files-when-index-size-above-metrics": "mbytes",
"delete-index-files-when-index-size-above": false,
"delete-index-files-when-index-size-above-threshold": 100000,
"delete-index-files-older-than-days": false,
"delete-index-files-older-than-days-threshold": 14,
"forward-logs-to-log-server": false,
"perform-log-rotate-before-log-forwarding": false,
"update-account-log-every": 3600,
"detect-new-citrix-ica-application-names": false,
"turn-on-qos-logging": true
},
"interfaces": {
"total": 3,
"from": 1,
"to": 3,
"objects": [
{
"name": "eth0",
"ipv4-address": "172.23.5.254",
"ipv4-network-mask": "255.255.255.0",
"ipv4-mask-length": 24,
"ipv6-address": "",
"topology": "external",
"anti-spoofing": true,
"anti-spoofing-settings": {
"action": "prevent"
},
"security-zone": false,
"comments": "",
"color": "black",
"icon": "NetworkObjects/network",
"interface-type": "cluster"
},
{
"name": "eth1",
...(truncated here for brevity)...
"interface-type": "sync"
},
{
"name": "eth2",
"ipv4-address": "192.168.1.254",
"ipv4-network-mask": "255.255.255.0",
"ipv4-mask-length": 24,
"ipv6-address": "",
"topology": "internal",
"topology-settings": {
"ip-address-behind-this-interface": "network defined by the interface ip and net
mask",
"interface-leads-to-dmz": false
},
"anti-spoofing": true,
"anti-spoofing-settings": {
"action": "prevent"
},
"security-zone": false,
"comments": "",
"color": "black",
"icon": "NetworkObjects/network",
"interface-type": "cluster"
}
]
},
"cluster-mode": "cluster-xl-ha",
"cluster-members": [
{
"name": "member1",
"sic-state": "initialized",
"sic-message": "Initialized but trust not established",
"ip-address": "172.23.5.1",
"interfaces": [
{
"name": "eth0",
"ipv4-address": "172.23.5.1",
"ipv4-network-mask": "255.255.255.0",
"ipv4-mask-length": 24,
"ipv6-address": "",
"ipv6-network-mask": "::",
"ipv6-mask-length": 0
},
{
"name": "eth1",
"ipv4-address": "1.1.1.1",
"ipv4-network-mask": "255.255.255.0",
"ipv4-mask-length": 24,
"ipv6-address": "",
"ipv6-network-mask": "::",
"ipv6-mask-length": 0
},
{
"name": "eth2",
"ipv4-address": "192.168.1.1",
"ipv4-network-mask": "255.255.255.0",
"ipv4-mask-length": 24,
"ipv6-address": "",
"ipv6-network-mask": "::",
"ipv6-mask-length": 0
}
]
},
{ ...(truncated here for brevity)... }
]
}
API command:
Use this API to view all existing cluster objects.
show simple-clusters
API command:
Use this API to delete a specific cluster object.
delete simple-cluster
API example:
{
"name" : "cluster1"
}
Known Limitations
n These Cluster APIs support only a subset of cluster operations.
n These Cluster APIs support only basic configuration of Software Blades (similar to the "simple-gateway" APIs - see the Check Point Management API Reference).
n These Cluster APIs support only ClusterXL High Availability, ClusterXL Load Sharing, and
CloudGuard OPSEC clusters.
n These Cluster APIs do not support the configuration of a Cluster Virtual IP address on a different
subnet than the IP addresses of the Cluster Members.
For such configuration, use SmartConsole.
n These Cluster APIs do not support VRRP Clusters (either on Gaia OS or IPSO OS).
n These Cluster APIs support a limited subset of interface settings.
To change interface settings such as Topology, Anti-Spoofing and Security Zone, you must replace
the interface.