Red Hat Enterprise Linux 7 High Availability Add-On Reference
Reference Document for the High Availability Add-On for Red Hat Enterprise Linux 7
Steven Levine
Red Hat Customer Content Services
[email protected]
Legal Notice
Copyright © 2017 Red Hat, Inc. and others.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.
Abstract
Red Hat High Availability Add-On Reference provides reference information about installing, configuring, and managing the Red Hat High Availability Add-On for Red Hat Enterprise Linux 7.
Table of Contents
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
  1.1. New and Changed Features
  1.2. Installing Pacemaker configuration tools
  1.3. Configuring the iptables Firewall to Allow Cluster Components
  1.4. The Cluster and Pacemaker Configuration Files
  1.5. Cluster Configuration Considerations
  1.6. Updating a Red Hat Enterprise Linux High Availability Cluster
Chapter 2. The pcsd Web UI
  2.1. pcsd Web UI Setup
  2.2. Creating a Cluster with the pcsd Web UI
  2.3. Configuring Cluster Components
Chapter 3. The pcs Command Line Interface
  3.1. The pcs Commands
  3.2. pcs Usage Help Display
  3.3. Viewing the Raw Cluster Configuration
  3.4. Saving a Configuration Change to a File
  3.5. Displaying Status
  3.6. Displaying the Full Cluster Configuration
  3.7. Displaying The Current pcs Version
  3.8. Backing Up and Restoring a Cluster Configuration
Chapter 4. Cluster Creation and Administration
  4.1. Cluster Creation
  4.2. Configuring Timeout Values for a Cluster
  4.3. Configuring Redundant Ring Protocol (RRP)
  4.4. Managing Cluster Nodes
  4.5. Setting User Permissions
  4.6. Removing the Cluster Configuration
  4.7. Displaying Cluster Status
  4.8. Cluster Maintenance
Chapter 5. Fencing: Configuring STONITH
  5.1. Available STONITH (Fencing) Agents
  5.2. General Properties of Fencing Devices
  5.3. Displaying Device-Specific Fencing Options
  5.4. Creating a Fencing Device
  5.5. Configuring Storage-Based Fence Devices with unfencing
  5.6. Displaying Fencing Devices
  5.7. Modifying and Deleting Fencing Devices
  5.8. Managing Nodes with Fence Devices
  5.9. Additional Fencing Configuration Options
  5.10. Configuring Fencing Levels
  5.11. Configuring Fencing for Redundant Power Supplies
Chapter 6. Configuring Cluster Resources
  6.1. Resource Creation
  6.2. Resource Properties
  6.3. Resource-Specific Parameters
  6.4. Resource Meta Options
  6.5. Resource Groups
Chapter 7. Resource Constraints
  7.1. Location Constraints
  7.2. Order Constraints
  7.3. Colocation of Resources
  7.4. Displaying Constraints
Chapter 8. Managing Cluster Resources
  8.1. Manually Moving Resources Around the Cluster
  8.2. Moving Resources Due to Failure
  8.3. Moving Resources Due to Connectivity Changes
  8.4. Enabling, Disabling, and Banning Cluster Resources
  8.5. Disabling a Monitor Operation
  8.6. Managed Resources
Chapter 9. Advanced Resource Configuration
  9.1. Resource Clones
  9.2. MultiState Resources: Resources That Have Multiple Modes
  9.3. Configuring a Virtual Domain as a Resource
  9.4. The pacemaker_remote Service
  9.5. Utilization and Placement Strategy
Chapter 10. Cluster Quorum
  10.1. Configuring Quorum Options
  10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)
  10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)
  10.4. The quorum unblock Command
  10.5. Quorum Devices (Technical Preview)
Chapter 11. Pacemaker Rules
  11.1. Node Attribute Expressions
  11.2. Time/Date Based Expressions
  11.3. Date Specifications
  11.4. Durations
  11.5. Configuring Rules with pcs
  11.6. Sample Time Based Expressions
  11.7. Using Rules to Determine Resource Location
Chapter 12. Pacemaker Cluster Properties
  12.1. Summary of Cluster Properties and Options
  12.2. Setting and Removing Cluster Properties
  12.3. Querying Cluster Property Settings
Chapter 13. Triggering Scripts for Cluster Events
  13.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)
  13.2. Event Notification with Monitoring Resources
Chapter 14. Configuring Multi-Site Clusters with Pacemaker (Technical Preview)
Appendix A. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
  A.1. Cluster Creation with rgmanager and with Pacemaker
  A.2. Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
Appendix B. Revision History
Index
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI.
1.1. New and Changed Features
This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7.
1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and
changes.
The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 6.11, "Cluster Resources Cleanup".
You can specify a lifetime parameter for the pcs resource move command, as documented in Section 8.1, "Manually Moving Resources Around the Cluster".
As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 4.5, "Setting User Permissions".
Section 7.2.3, "Ordered Resource Sets" and Section 7.3, "Colocation of Resources" have been extensively updated and clarified.
Section 6.1, "Resource Creation" documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically.
As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 3.8, "Backing Up and Restoring a Cluster Configuration".
1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2
Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and
changes.
You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources, and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node".
Section 13.2, "Event Notification with Monitoring Resources" has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications.
When configuring fencing for redundant power supplies, you are now only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 5.11, "Configuring Fencing for Redundant Power Supplies".
This document now provides a procedure for adding a node to an existing cluster in Section 4.4.3, "Adding Cluster Nodes".
The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 7.1, "Location Constraint Options".
Small clarifications and corrections have been made throughout this document.
1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3
Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and
changes.
Section 9.4, "The pacemaker_remote Service", has been wholly rewritten for this version of the document.
You can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)".
New quorum administration commands are supported with this release which allow you to display the quorum status and to change the expected_votes parameter. These commands are described in Section 10.2, "Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)".
You can now modify general quorum options for your cluster with the pcs quorum update command, as described in Section 10.3, "Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)".
You can configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. This feature is provided for technical preview only. For information on quorum devices, see Section 10.5, "Quorum Devices (Technical Preview)".
Red Hat Enterprise Linux release 7.3 provides the ability to configure high availability clusters that
span multiple sites through the use of a Booth cluster ticket manager. This feature is provided for
technical preview only. For information on the Booth cluster ticket manager, see Chapter 14,
Configuring Multi-Site Clusters with Pacemaker (Technical Preview).
When configuring a KVM guest node running the pacemaker_remote service, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. For information on configuring KVM guest nodes, see Section 9.4.5, "Configuration Overview: KVM Guest Node".
Additionally, small clarifications and corrections have been made throughout this document.
1.2. Installing Pacemaker configuration tools
Alternately, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command.
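A sketch of such an installation command, using yum and a placeholder fence agent package name (substitute the agent model you need, for example fence-agents-apc):
# yum install pcs pacemaker fence-agents-model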
The following command displays a list of the fence agent packages installed on your system.
# rpm -q -a | grep fence
fence-agents-rhevm-4.0.2-3.el7.x86_64
fence-agents-ilo-mp-4.0.2-3.el7.x86_64
fence-agents-ipmilan-4.0.2-3.el7.x86_64
...
The lvm2-cluster and gfs2-utils packages are part of the Resilient Storage channel. You can install them, as needed, with the following command.
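A sketch of the install command for these packages, using the package names given above:
# yum install lvm2-cluster gfs2-utils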
Warning
After you install the Red Hat High Availability Add-On packages, you should ensure that your
software update preferences are set so that nothing is installed automatically. Installation on a
running cluster can cause unexpected behaviors.
1.3. Configuring the iptables Firewall to Allow Cluster Components
Port      | When Required
TCP 2224  | Required on all nodes (needed by the pcsd daemon)
TCP 3121  | Required on all nodes if the cluster has any Pacemaker Remote nodes
TCP 21064 | Required on all nodes if the cluster contains any resources requiring DLM (such as clvm or GFS2)
UDP 5405  | Required on all cluster nodes (needed by corosync)
UDP 5404  | Required on cluster nodes if corosync is configured for multicast UDP
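If you are running the firewalld daemon, one way to open these ports is to enable its high-availability service, as sketched below (this assumes the high-availability service definition is available on your system; you can also add the individual ports listed above instead):
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability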
1.4. The Cluster and Pacemaker Configuration Files
The configuration files for the Red Hat High Availability Add-On are corosync.conf and cib.xml. Do not edit these files directly; use the pcs or pcsd interface instead.
The corosync.conf file provides the cluster parameters used by corosync, the cluster manager that Pacemaker is built on.
The cib.xml file is an XML file that represents both the cluster's configuration and the current state of all resources in the cluster. This file is used by Pacemaker's Cluster Information Base (CIB). The contents of the CIB are automatically kept in sync across the entire cluster.
1.5. Cluster Configuration Considerations
When configuring a Red Hat High Availability Add-On cluster, you must take the following considerations into account:
Red Hat does not support cluster deployments greater than 16 full cluster nodes. It is possible, however, to scale beyond that limit with remote nodes running the pacemaker_remote service. For information on the pacemaker_remote service, see Section 9.4, "The pacemaker_remote Service".
The use of Dynamic Host Configuration Protocol (DHCP) for obtaining an IP address on a network interface that is utilized by the corosync daemons is not supported. The DHCP client can periodically remove and re-add an IP address to its assigned interface during address renewal. This will result in corosync detecting a connection failure, which will result in fencing activity from any other nodes in the cluster using corosync for heartbeat connectivity.
1.6. Updating a Red Hat Enterprise Linux High Availability Cluster
Updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either
individually or as a whole, can be done in one of two general ways:
Rolling Updates: Remove one node at a time from service, update its software, then integrate it back
into the cluster. This allows the cluster to continue providing service and managing resources
while each node is updated.
Entire Cluster Update: Stop the entire cluster, apply updates to all nodes, then start the cluster back
up.
Warning
It is critical that when performing software update procedures for Red Hat Enterprise Linux High Availability and Resilient Storage clusters, you ensure that any node that will undergo updates is not an active member of the cluster before those updates are initiated.
For a full description of each of these methods and the procedures to follow for the updates, see
Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient
Storage Cluster.
When updating your Red Hat High Availability software, you should keep the following
considerations in mind.
Red Hat does not support partial upgrades. The only time you should be running different
versions of the cluster packages on the nodes of a cluster is while you are in the process of a
rolling upgrade.
While in the process of performing an update, do not make any changes to your cluster
configuration. For example, do not add or remove resources or constraints.
Red Hat does not support in-place upgrades or rolling upgrades of cluster nodes from one major release of Red Hat Enterprise Linux to another. For example, there is no supported method for updating some nodes in a cluster from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, introducing them into the cluster with the existing RHEL 6 nodes to take over resources from them, and then updating the remaining RHEL 6 nodes. Upgrades across major releases of RHEL must be done either for the entire cluster at once, or by migrating services from a running cluster on the old release to another cluster running the new release.
Red Hat does not support rolling upgrades of shared storage that is exported with Samba and CTDB.
Chapter 2. The pcsd Web UI
2.1. pcsd Web UI Setup
To set up your system to use the pcsd Web UI to configure a cluster, use the following procedure.
1. Install the Pacemaker configuration tools, as described in Section 1.2, "Installing Pacemaker configuration tools".
2. On each node that will be part of the cluster, use the passwd command to set the password for user hacluster, using the same password on each node.
3. On each node, start and enable the pcsd daemon.
4. On one node of the cluster, authenticate the nodes that will constitute the cluster with the following command. After executing this command, you will be prompted for a Username and a Password. Specify hacluster as the Username.
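A sketch of the authentication command for this step, using hypothetical node names:
# pcs cluster auth z1.example.com z2.example.com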
5. On any system, open a browser to the following URL, specifying one of the nodes you have authorized (note that this uses the https protocol). This brings up the pcsd Web UI login screen.
https://nodename:2224
6. Log in as user hacluster. This brings up the Manage Clusters page as shown in Figure 2.1, "Manage Clusters page".
2.2. Creating a Cluster with the pcsd Web UI
From the Manage Clusters page, you can create a new cluster, add an existing cluster to the Web UI, or remove a cluster from the Web UI.
To create a cluster, click on Create New and enter the name of the cluster to create and the nodes that constitute the cluster. You can also configure advanced cluster options from this screen, including the transport mechanism for cluster communication, as described in Section 2.2.1, "Advanced Cluster Configuration Options". After entering the cluster information, click Create Cluster.
To add an existing cluster to the Web UI, click on Add Existing and enter the host name or IP address of a node in the cluster that you would like to manage with the Web UI.
Once you have created or added a cluster, the cluster name is displayed on the Manage Clusters page. Selecting the cluster displays information about the cluster.
Note
When using the pcsd Web UI to configure a cluster, you can move your mouse over the text describing many of the options to see longer descriptions of those options as a tooltip display.
When creating a cluster, you can click on Advanced Options to configure additional cluster options, as shown in Figure 2.2, "Create Clusters page". For information about the options displayed, move your mouse over the text for that option.
Note that you can configure a cluster with Redundant Ring Protocol by specifying the interfaces for each node. The Redundant Ring Protocol settings display will change if you select UDP rather than the default value of UDPU as the transport mechanism for the cluster.
You can grant permission for specific users other than user hacluster to manage the cluster through the Web UI by adding them to the group haclient. By default, the root user and any user who is a member of the group haclient have full read and write access to the cluster configuration.
You can limit the permissions set for an individual member of the group haclient by clicking the Permissions tab on the Manage Clusters page and setting the permissions on the resulting screen. From this screen, you can also set permissions for groups.
Note
The permissions you set on this screen are for managing the cluster with the Web UI and are not the same as the permissions you set by using access control lists (ACLs), which are described in Section 2.3.4, "Configuring ACLs".
The permissions you can set include the following:
Write permissions, to modify cluster settings (except for permissions and ACLs)
Full permissions, for unrestricted access to a cluster, including adding and removing nodes, with access to keys and certificates
2.3. Configuring Cluster Components
To configure the components and attributes of a cluster, click on the name of the cluster displayed on the Manage Clusters screen. This brings up the Nodes page, as described in Section 2.3.1, "Cluster Nodes". This page displays a menu along the top of the page, as shown in Figure 2.3, "Cluster Components Menu", with the following entries:
2.3.1. Cluster Nodes
Selecting the Nodes option from the menu along the top of the cluster management page displays the currently configured nodes and the status of the currently selected node, including which resources are running on the node and the resource location preferences. This is the default page that displays when you select a cluster from the Manage Clusters screen.
You can add or remove nodes from this page, and you can start, stop, restart, or put a node in standby mode. For information on standby mode, see Section 4.4.5, "Standby Mode".
You can also configure fence devices directly from this page, as described in Section 2.3.3, "Fence Devices", by selecting Configure Fencing.
2.3.2. Cluster Resources
Selecting the Resources option from the menu along the top of the cluster management page displays the currently configured resources for the cluster, organized according to resource groups. Selecting a group or a resource displays the attributes of that group or resource.
From this screen, you can add or remove resources, you can edit the configuration of existing resources, and you can create a resource group.
To add a new resource to the cluster, click Add. This brings up the Add Resource screen. When you select a resource type from the drop-down Type menu, the arguments you must specify for that resource appear in the menu. You can click Optional Arguments to display additional arguments you can specify for the resource you are defining. After entering the parameters for the resource you are creating, click Create Resource.
When configuring the arguments for a resource, a brief description of the argument appears in the menu. If you move the cursor to the field, a longer help description of that argument is displayed.
You can define a resource as a cloned resource or as a master/slave resource. For information on these resource types, see Chapter 9, Advanced Resource Configuration.
Once you have created at least one resource, you can create a resource group. For information on resource groups, see Section 6.5, "Resource Groups".
To create a resource group, select a resource that will be part of the group from the Resources screen, then click Create Group. This displays the Create Group screen. Enter a group name and click Create Group. This returns you to the Resources screen, which now displays the group name for the resource. After you have created a resource group, you can indicate that group name as a resource parameter when you create or modify additional resources.
2.3.3. Fence Devices
Selecting the Fence Devices option from the menu along the top of the cluster management page displays the Fence Devices screen, showing the currently configured fence devices.
To add a new fence device to the cluster, click Add. This brings up the Add Fence Device screen. When you select a fence device type from the drop-down Type menu, the arguments you must specify for that fence device appear in the menu. You can click on Optional Arguments to display additional arguments you can specify for the fence device you are defining. After entering the parameters for the new fence device, click Create Fence Instance.
For information on configuring fence devices with Pacemaker, see Chapter 5, Fencing: Configuring STONITH.
2.3.4. Configuring ACLs
Selecting the ACLS option from the menu along the top of the cluster management page displays a screen from which you can set permissions for local users, allowing read-only or read-write access to the cluster configuration by using access control lists (ACLs).
To assign ACL permissions, you create a role and define the access permissions for that role. Each role can have an unlimited number of permissions (read/write/deny) applied to either an XPath query or the ID of a specific element. After defining the role, you can assign it to an existing user or group.
2.3.5. Cluster Properties
Selecting the Cluster Properties option from the menu along the top of the cluster management page displays the cluster properties and allows you to modify these properties from their default values. For information on the Pacemaker cluster properties, see Chapter 12, Pacemaker Cluster Properties.
Chapter 3. The pcs Command Line Interface
3.1. The pcs Commands
The pcs commands are as follows.
cluster
Configure cluster options and nodes. For information on the pcs cluster command, see Chapter 4, Cluster Creation and Administration.
resource
Create and manage cluster resources. For information on the pcs resource command, see Chapter 6, Configuring Cluster Resources, Chapter 8, Managing Cluster Resources, and Chapter 9, Advanced Resource Configuration.
stonith
Configure fence devices for use with Pacemaker. For information on the pcs stonith command, see Chapter 5, Fencing: Configuring STONITH.
constraint
Manage resource constraints. For information on the pcs constraint command, see Chapter 7, Resource Constraints.
property
Set Pacemaker properties. For information on setting properties with the pcs property command, see Chapter 12, Pacemaker Cluster Properties.
status
View current cluster and resource status. For information on the pcs status command, see Section 3.5, "Displaying Status".
config
Display the complete cluster configuration in user-readable form. For information on the pcs config command, see Section 3.6, "Displaying the Full Cluster Configuration".
3.2. pcs Usage Help Display
You can use the -h option of pcs to display the parameters of a pcs command and a description of those parameters. For example, the following command displays the parameters of the pcs resource command.
# pcs resource -h
3.3. Viewing the Raw Cluster Configuration
You can save the raw cluster configuration to a specified file with the pcs cluster cib filename command, as described in Section 3.4, "Saving a Configuration Change to a File".
3.4. Saving a Configuration Change to a File
If you have previously configured a cluster and there is already an active CIB, you use the following command to save the raw XML to a file.
For example, the following command saves the raw XML from the CIB into a file named testfile.
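A sketch of this example command, with testfile as the output file name:
# pcs cluster cib testfile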
The following command creates a resource in the file testfile1 but does not add that resource to the currently running cluster configuration.
# pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
You can push the current content of testfile to the CIB with the following command.
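A sketch of the push command, assuming the file name used in this example:
# pcs cluster cib-push testfile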
3.5. Displaying Status
You can display the status of the cluster and the cluster resources with the following command.
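For example, the following invocations display the full status and the resource status only:
# pcs status
# pcs status resources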
If you do not specify a commands parameter, this command displays all information about the cluster and the resources. You can display the status of only particular cluster components by specifying resources, groups, cluster, nodes, or pcsd.
3.6. Displaying the Full Cluster Configuration
Use the following command to display the full current cluster configuration.
pcs config
3.7. Displaying The Current pcs Version
The following command displays the current version of pcs that is running.
pcs --version
3.8. Backing Up and Restoring a Cluster Configuration
You can back up the cluster configuration in a tarball with the pcs config backup command. Use the following command to restore the cluster configuration files on all nodes from the backup. If you do not specify a file name, the standard input will be used. Specifying the --local option restores only the files on the current node.
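A sketch of the backup and restore commands, with backupfile as a placeholder file name:
# pcs config backup backupfile
# pcs config restore [--local] [backupfile]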
Chapter 4. Cluster Creation and Administration
4.1. Cluster Creation
To create a cluster, you start the pcsd daemon on each node, authenticate the nodes that will constitute the cluster, and then configure the cluster nodes. The following sections describe the commands that you use to perform these steps.
The following commands start the pcsd service and enable pcsd at system start. These commands should be run on each node in the cluster.
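On a systemd-based RHEL 7 system, these are likely the following standard systemctl commands:
# systemctl start pcsd.service
# systemctl enable pcsd.service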
The following command authenticates pcs to the pcs daemon on the nodes in the cluster.
The user name for the pcs administrator must be hacluster on every node. It is recommended that the password for user hacluster be the same on each node.
If you do not specify a username or password, the system will prompt you for those parameters for each node when you execute the command.
If you do not specify any nodes, this command will authenticate pcs on the nodes that are specified with a pcs cluster setup command, if you have previously executed that command.
For example, the following command authenticates user hacluster on z1.example.com for both of the nodes in the cluster that consists of z1.example.com and z2.example.com. This command prompts for the password for user hacluster on the cluster nodes.
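A sketch of this example command:
# pcs cluster auth z1.example.com z2.example.com -u hacluster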
Authorization tokens are stored in the file ~/.pcs/tokens (or /var/lib/pcsd/tokens).
The following command configures the cluster configuration file and syncs the configuration to the
specified nodes.
If you specify the --start option, the command will also start the cluster services on the specified nodes. If necessary, you can also start the cluster services with a separate pcs cluster start command.
When you create a cluster with the pcs cluster setup --start command or when you start cluster services with the pcs cluster start command, there may be a slight delay before the cluster is up and running. Before performing any subsequent actions on the cluster and its configuration, it is recommended that you use the pcs cluster status command to be sure that the cluster is up and running.
If you specify the --local option, the command will perform changes on the local node only.
pcs cluster setup [--start] [--local] --name cluster_name node1 [node2] [...]
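For example, a sketch that creates and starts a two-node cluster, with my_cluster and the node names as placeholder values:
# pcs cluster setup --start --name my_cluster z1.example.com z2.example.com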
The following command starts cluster services on the specified node or nodes.
If you specify the --all option, the command starts cluster services on all nodes.
If you do not specify any nodes, cluster services are started on the local node only.
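The general form of this command is likely the following:
pcs cluster start [--all] [node] [...]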
4.2. Configuring Timeout Values for a Cluster
Table 4.1. Timeout Options
Option                     | Description
--token timeout            | Sets time in milliseconds until a token loss is declared after not receiving a token (default 1000 ms)
--join timeout             | Sets time in milliseconds to wait for join messages (default 50 ms)
--consensus timeout        | Sets time in milliseconds to wait for consensus to be achieved before starting a new round of membership configuration (default 1200 ms)
--miss_count_const count   | Sets the maximum number of times on receipt of a token a message is checked for retransmission before a retransmission occurs (default 5 messages)
--fail_recv_const failures | Specifies how many rotations of the token without receiving any messages when messages should be received may occur before a new configuration is formed (default 2500 failures)
For example, the following command creates the cluster new_cluster and sets the token timeout value to 10000 milliseconds (10 seconds) and the join timeout value to 100 milliseconds.
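A sketch of this example command, with placeholder node names:
# pcs cluster setup --name new_cluster nodeA nodeB --token 10000 --join 100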
4.3. Configuring Redundant Ring Protocol (RRP)
For example, the following command configures a cluster named my_rrp_cluster with two nodes, node A and node B. Node A has two interfaces, nodeA-0 and nodeA-1. Node B has two interfaces, nodeB-0 and nodeB-1. To configure these nodes as a cluster using RRP, execute the following command.
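A sketch of this example; with the default udpu transport, the two RRP interfaces for each node are typically given as a comma-separated pair:
# pcs cluster setup --name my_rrp_cluster nodeA-0,nodeA-1 nodeB-0,nodeB-1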
For information on configuring RRP in a cluster that uses udp transport, see the help screen for the pcs cluster setup command.
4.4. Managing Cluster Nodes
The following sections describe the commands you use to manage cluster nodes, including commands to start and stop cluster services and to add and remove cluster nodes.
The following command stops cluster services on the specified node or nodes. As with the pcs cluster start command, the --all option stops cluster services on all nodes and if you do not specify any nodes, cluster services are stopped on the local node only.
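The general form is likely the following:
pcs cluster stop [--all] [node] [...]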
You can force a stop of cluster services on the local node with the following command, which performs a kill -9 command.
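The command itself is likely the following:
# pcs cluster kill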
Use the following command to configure the cluster services to run on startup on the specified node or nodes.
If you specify the --all option, the command enables cluster services on all nodes.
If you do not specify any nodes, cluster services are enabled on the local node only.
Use the following command to configure the cluster services not to run on startup on the specified node or nodes.
If you specify the --all option, the command disables cluster services on all nodes.
If you do not specify any nodes, cluster services are disabled on the local node only.
Note
It is highly recommended that you add nodes to existing clusters only during a production
maintenance window. This allows you to perform appropriate resource and deployment testing
for the new node and its fencing configuration.
Use the following procedure to add a new node to an existing cluster. In this example, the existing cluster nodes are clusternode-01.example.com, clusternode-02.example.com, and clusternode-03.example.com. The new node is newnode.example.com.
On the new node to add to the cluster, perform the following tasks.
1. Install the cluster packages. If the cluster uses SBD, the Booth ticket manager, or a quorum device, you must manually install the respective packages (sbd, booth-site, corosync-qdevice) on the new node as well.
2. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.
3. Set a password for the user ID hacluster. It is recommended that you use the same password for each node in the cluster.
4. Execute the following commands to start the pcsd service and to enable pcsd at system start.
2. Add the new node to the existing cluster. This command also syncs the cluster configuration file corosync.conf to all nodes in the cluster, including the new node you are adding.
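A sketch of the command for this step, likely run from one of the existing cluster nodes:
# pcs cluster node add newnode.example.com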
On the new node to add to the cluster, perform the following tasks.
2. Ensure that you configure and test a fencing device for the new cluster node. For information
on configuring fencing devices, see Chapter 5, Fencing: Configuring STONITH.
The following command shuts down the specified node and removes it from the cluster configuration file, corosync.conf, on all of the other nodes in the cluster. For information on removing all information about the cluster from the cluster nodes entirely, thereby destroying the cluster permanently, refer to Section 4.6, "Removing the Cluster Configuration".
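The removal command is likely the following, with node as the name of the node to remove:
pcs cluster node remove node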
The following command puts the specified node into standby mode. The specified node is no longer able to host resources. Any resources currently active on the node will be moved to another node. If you specify the --all option, this command puts all nodes into standby mode.
You can use this command when updating a resource's packages. You can also use this command
when testing a configuration, to simulate recovery without actually shutting down a node.
The following command removes the specified node from standby mode. After running this command, the specified node is then able to host resources. If you specify the --all option, this command removes all nodes from standby mode.
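The standby and unstandby commands likely take these forms:
pcs cluster standby node | --all
pcs cluster unstandby node | --all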
Note that when you execute the pcs cluster standby command, this adds constraints to the resources to prevent them from running on the indicated node. When you execute the pcs cluster unstandby command, this removes the constraints. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, refer to Chapter 7, Resource Constraints.
4.5. Setting User Permissions
By default, the root user and any user who is a member of the group haclient have full read/write access to the cluster configuration. As of Red Hat Enterprise Linux 7.1, you can use the pcs acl
command to set permissions for local users to allow read-only or read-write access to the cluster
configuration by using access control lists (ACLs).
1. Execute the pcs acl role create... command to create a role which defines the permissions for that role.
2. Assign the role you created to a user with the pcs acl user create command.
The following example procedure provides read-only access for a cluster configuration to a local user named rouser.
1. This procedure requires that the user rouser exists on the local system and that the user rouser is a member of the group haclient.
# adduser rouser
# usermod -a -G haclient rouser
3. Create a role named read-only with read-only permissions for the cib.
# pcs acl role create read-only description="Read access to cluster" read xpath /cib
4. Create the user rouser in the pcs ACL system and assign that user the read-only role.
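Given the pcs acl output that follows, the command for this step is likely:
# pcs acl user create rouser read-only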
# pcs acl
User: rouser
Roles: read-only
Role: read-only
Description: Read access to cluster
Permission: read xpath /cib (read-only-read)
The following example procedure provides write access for a cluster configuration to a local user named wuser.
1. This procedure requires that the user wuser exists on the local system and that the user wuser is a member of the group haclient.
# adduser wuser
# usermod -a -G haclient wuser
3. Create a role named write-access with write permissions for the cib.
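Given the pcs acl output that follows, the command for this step is likely:
# pcs acl role create write-access description="Full Access" write xpath /cib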
4. Create the user wuser in the pcs ACL system and assign that user the write-access role.
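The command for this step is likewise likely:
# pcs acl user create wuser write-access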
# pcs acl
User: rouser
Roles: read-only
User: wuser
Roles: write-access
Role: read-only
Description: Read access to cluster
Permission: read xpath /cib (read-only-read)
Role: write-access
Description: Full Access
Permission: write xpath /cib (write-access-write)
For further information about cluster ACLs, see the help screen for the pcs acl command.
4.6. Removing the Cluster Configuration
To remove all cluster configuration files and stop all cluster services, thus permanently destroying a cluster, use the following command.
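The command itself is likely the following:
# pcs cluster destroy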
Warning
This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster.
4.7. Displaying Cluster Status
The following command displays the current status of the cluster and the cluster resources.
pcs status
You can display a subset of information about the current status of the cluster with the following commands.
The following command displays the status of the cluster, but not the cluster resources.
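This is likely one of the following equivalent forms:
pcs cluster status
pcs status cluster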
4.8. Cluster Maintenance
If you need to stop a node in a cluster while continuing to provide the services running on that cluster on another node, you can put the cluster node in standby mode. A node that is in standby mode is no longer able to host resources. Any resource currently active on the node will be moved to another node, or stopped if no other node is eligible to run the resource.
If you need to move an individual resource off the node on which it is currently running without stopping that resource, you can use the pcs resource move command to move the resource to a different node. For information on the pcs resource move command, see Section 8.1, "Manually Moving Resources Around the Cluster".
When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. When you are ready to move the resource back, you can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node, however, since where the resources can run at that point depends on how you have configured your resources initially. You can relocate a resource to a specified node with the pcs resource relocate run command, as described in Section 8.1.1, "Moving a Resource from its Current Node".
If you need to stop a running resource entirely and prevent the cluster from starting it again, you can use the pcs resource disable command. For information on the pcs resource disable command, see Section 8.4, "Enabling, Disabling, and Banning Cluster Resources".
If you want to prevent Pacemaker from taking any action for a resource (for example, if you want to disable recovery actions while performing maintenance on the resource, or if you need to reload the /etc/sysconfig/pacemaker settings), use the pcs resource unmanage command, as described in Section 8.6, "Managed Resources". Pacemaker Remote connection resources should never be unmanaged.
If you need to put the cluster in a state where no services will be started or stopped, you can set the maintenance-mode cluster property. Putting the cluster into maintenance mode automatically unmanages all resources. For information on setting cluster properties, see Table 12.1, "Cluster Properties".
If you need to perform maintenance on a Pacemaker remote node, you can remove that node from the cluster by disabling the remote node resource, as described in Section 9.4.7, "System Upgrades and pacemaker_remote".
Chapter 5. Fencing: Configuring STONITH
Just because a node is unresponsive does not mean that it has stopped accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH, so you can be certain that the node is truly offline before allowing the data to be accessed from another node.
STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case,
the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service
elsewhere.
5.1. Available STONITH (Fencing) Agents
Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match the filter.
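The command likely takes this form:
pcs stonith list [filter]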
5.2. General Properties of Fencing Devices
Note
To disable a fencing device/resource, you can set the target-role as you would for a normal resource.
Note
To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource.
Table 5.1, "General Properties of Fencing Devices" describes the general properties you can set for fencing devices. Refer to Section 5.3, "Displaying Device-Specific Fencing Options" for information on fencing properties you can set for specific fencing devices.
Note
For information on more advanced fencing configuration properties, refer to Section 5.9, "Additional Fencing Configuration Options".
Table 5.1. General Properties of Fencing Devices
Field           | Type    | Default      | Description
priority        | integer | 0            | The priority of the stonith resource. Devices are tried in order of highest priority to lowest.
pcmk_host_map   | string  |              | A mapping of host names to port numbers for devices that do not support host names. For example: node1:1;node2:2,3 tells the cluster to use port 1 for node1 and ports 2 and 3 for node2.
pcmk_host_list  | string  |              | A list of machines controlled by this device (optional unless pcmk_host_check=static-list).
pcmk_host_check | string  | dynamic-list | How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none (assume every device can fence every machine).
5.3. Displaying Device-Specific Fencing Options
For example, the following command displays the options for the fence agent for APC over telnet/SSH.
# pcs stonith describe fence_apc
Stonith options for: fence_apc
ipaddr (required): IP Address or Hostname
login (required): Login Name
passwd: Login password or passphrase
passwd_script: Script to retrieve password
cmd_prompt: Force command prompt
secure: SSH connection
port (required): Physical plug number or name of virtual machine
identity_file: Identity file for ssh
switch: Physical switch number on device
inet4_only: Forces agent to use IPv4 addresses only
inet6_only: Forces agent to use IPv6 addresses only
ipport: TCP port to use for connection with device
action (required): Fencing Action
verbose: Verbose mode
debug: Write debug information to given file
version: Display version information and exit
help: Display help and exit
separator: Separator for CSV created by operation list
power_timeout: Test X seconds for status change after ON/OFF
shell_timeout: Wait X seconds for cmd prompt after issuing command
login_timeout: Wait X seconds for cmd prompt after login
power_wait: Wait X seconds after issuing ON/OFF
delay: Wait X seconds before fencing is started
5.4. Creating a Fencing Device
If you use a single fence device for several nodes, using a different port for each node, you do not need to create a device separately for each node. Instead you can use the pcmk_host_map option to define which port goes to which node. For example, the following command creates a single fencing device called myapc-west-13 that uses an APC power switch called west-apc and uses port 15 for node west-13.
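A sketch of this example command; the login and password values are placeholders:
# pcs stonith create myapc-west-13 fence_apc ipaddr="west-apc" login="apc" passwd="apc" port="15" pcmk_host_list="west-13"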
The following example, however, uses the APC power switch named west-apc to fence nodes west-13 using port 15, west-14 using port 17, west-15 using port 18, and west-16 using port 19.
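A sketch of this example, again with placeholder credentials:
# pcs stonith create myapc fence_apc ipaddr="west-apc" login="apc" passwd="apc" pcmk_host_map="west-13:15;west-14:17;west-15:18;west-16:19"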
5.5. Configuring Storage-Based Fence Devices with unfencing
Setting the provides=unfencing meta option is not necessary when configuring a power-based fence device, since the device itself is providing power to the node in order for it to boot (and attempt to rejoin the cluster). The act of booting in this case implies that unfencing occurred.
The following command configures a stonith device named my-scsi-shooter that uses the fence_scsi fence agent, enabling unfencing for the device.
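A sketch of this example; the devices value is a placeholder for your shared SCSI device:
# pcs stonith create my-scsi-shooter fence_scsi devices=/dev/sda meta provides=unfencing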
5.7. Modifying and Deleting Fencing Devices
Use the following command to modify or add options to a currently configured fencing device.
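The update command likely takes this form:
pcs stonith update stonith_id [stonith_device_options]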
Use the following command to remove a fencing device from the current configuration.
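The removal command likely takes this form:
pcs stonith delete stonith_id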
5.8. Managing Nodes with Fence Devices
You can confirm whether a specified node is currently powered off with the following command.
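The command is likely the following (use it with care, as described in the note below):
pcs stonith confirm node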
Note
If the node you specify is still running the cluster software or services normally controlled by
the cluster, data corruption/cluster failure will occur.
5.9. Additional Fencing Configuration Options
Field               | Type   | Default | Description
pcmk_host_argument  | string | port    | An alternate parameter to supply instead of port. Some devices do not support the standard port parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters.
pcmk_reboot_action  | string | reboot  | An alternate command to run instead of reboot. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the reboot action.
pcmk_reboot_timeout | time    | 60s     | Specify an alternate timeout to use for reboot actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for reboot actions.
pcmk_reboot_retries | integer | 2       | The maximum number of times to retry the reboot command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries reboot actions before giving up.
pcmk_off_action     | string  | off     | An alternate command to run instead of off. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the off action.
pcmk_off_timeout    | time    | 60s     | Specify an alternate timeout to use for off actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for off actions.
pcmk_off_retries    | integer | 2       | The maximum number of times to retry the off command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries off actions before giving up.
pcmk_list_action    | string  | list    | An alternate command to run instead of list. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the list action.
pcmk_list_timeout   | time    | 60s     | Specify an alternate timeout to use for list actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for list actions.
pcmk_list_retries    | integer | 2       | The maximum number of times to retry the list command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries list actions before giving up.
pcmk_monitor_action  | string  | monitor | An alternate command to run instead of monitor. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the monitor action.
pcmk_monitor_timeout | time    | 60s     | Specify an alternate timeout to use for monitor actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for monitor actions.
pcmk_monitor_retries | integer | 2       | The maximum number of times to retry the monitor command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries monitor actions before giving up.
pcmk_status_action   | string  | status  | An alternate command to run instead of status. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the status action.
pcmk_status_timeout  | time    | 60s     | Specify an alternate timeout to use for status actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for status actions.
pcmk_status_retries  | integer | 2       | The maximum number of times to retry the status command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task, so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries status actions before giving up.
5.10. Configuring Fencing Levels
If a device fails, processing terminates for the current level. No further devices in that level are exercised and the next level is attempted instead.
If all devices are successfully fenced, then that level has succeeded and no other levels are tried.
The operation is finished when a level has passed (success), or all levels have been attempted (failed).
Use the following command to add a fencing level to a node. The devices are given as a comma-
separated list of stonith ids, which are attempted for the node at that level.
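The command likely takes this form:
pcs stonith level add level node devices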
The following command lists all of the fencing levels that are currently configured.
In the following example, there are two fence devices configured for node rh7-2: an ilo fence device called my_ilo and an apc fence device called my_apc. These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc. This example also shows the output of the pcs stonith level command after the levels are configured.
# pcs stonith level add 1 rh7-2 my_ilo
# pcs stonith level add 2 rh7-2 my_apc
# pcs stonith level
Node: rh7-2
Level 1 - my_ilo
Level 2 - my_apc
The following command removes the fence level for the specified node and devices. If no nodes or
devices are specified then the fence level you specify is removed from all nodes.
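The removal command likely takes this form:
pcs stonith level remove level [node_id] [stonith_id] ... [stonith_id]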
The following command clears the fence levels on the specified node or stonith id. If you do not
specify a node or stonith id, all fence levels are cleared.
If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the
following example.
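A sketch of such an example, with two hypothetical stonith ids:
# pcs stonith level clear dev_a,dev_b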
The following command verifies that all fence devices and nodes specified in fence levels exist.
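The verification command is likely the following:
pcs stonith level verify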
If the node never completely loses power, the node may not release its resources. This opens up the
possibility of nodes accessing these resources simultaneously and corrupting them.
Prior to Red Hat Enterprise Linux 7.2, you needed to configure separate instances of the devices that used the 'on' and 'off' actions explicitly. Since Red Hat Enterprise Linux 7.2, you need only define each device once and specify that both devices are required to fence the node, as in the following example.
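As a sketch, assuming two apc fence devices named apc1 and apc2 have already been created for node rh7-2, a single fence level that requires both devices might be configured as follows:
# pcs stonith level add 1 rh7-2 apc1,apc2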
When you specify the --group option, the resource is added to the resource group named. If the group does not exist, this creates the group and adds this resource to the group. For information on resource groups, refer to Section 6.5, "Resource Groups".
The --before and --after options specify the position of the added resource relative to a resource that already exists in a resource group.
Specifying the --disabled option indicates that the resource is not started automatically.
The following command creates a resource with the name VirtualIP of standard ocf, provider heartbeat, and type IPaddr2. The floating address of this resource is 192.168.0.120, and the system will check whether the resource is running every 30 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
Alternately, you can omit the standard and provider fields and use the following command. This will default to a standard of ocf and a provider of heartbeat.
# pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
For example, the following command deletes an existing resource with a resource ID of VirtualIP.
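A command of this form can be used:
# pcs resource delete VirtualIP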
For information on the resource_id, standard, provider, and type fields of the pcs resource create command, refer to Section 6.2, "Resource Properties".
For information on defining resource parameters for individual resources, refer to Section 6.3, "Resource-Specific Parameters".
For information on defining resource meta options, which are used by the cluster to decide how a resource should behave, refer to Section 6.4, "Resource Meta Options".
For information on defining the operations to perform on a resource, refer to Section 6.6, "Resource Operations".
Specifying the --clone option creates a clone resource. Specifying the --master option creates a master/slave resource. For information on resource clones and resources with multiple modes, refer to Chapter 9, Advanced Resource Configuration.
resource_id: Your name for the resource.
standard: The standard the script conforms to. Allowed values: ocf, service, upstart, systemd, lsb, stonith.
type: The name of the Resource Agent you wish to use, for example IPaddr or Filesystem.
provider: The OCF spec allows multiple vendors to supply the same resource agent. Most of the agents shipped by Red Hat use heartbeat as the provider.
Table 6.2, "Commands to Display Resource Properties" summarizes the commands that display the available resource properties.
For example, the following command displays the parameters you can set for a resource of type LVM.
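A command along these lines displays those parameters:
# pcs resource describe ocf:heartbeat:LVM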
In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. Table 6.3, "Resource Meta Options" describes these options.
priority (default 0): If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active.
target-role (default Started): What state should the cluster attempt to keep this resource in? Allowed values include Started, Stopped, Master, and Slave.
requires (default Calculated): Indicates under what conditions the resource can be started.
migration-threshold (default INFINITY (disabled)): How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. For information on configuring the migration-threshold option, refer to Section 8.2, "Moving Resources Due to Failure".
failure-timeout (default 0 (disabled)): Used in conjunction with the migration-threshold option, indicates how many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. For information on configuring the failure-timeout option, refer to Section 8.2, "Moving Resources Due to Failure".
multiple-active (default stop_start): What should the cluster do if it ever finds the resource active on more than one node. Allowed values include block, stop_only, and stop_start.
To change the default value of a resource option, use the following command.
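A sketch of the general form:
pcs resource defaults options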
For example, the following command resets the default value of resource-stickiness to 100.
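The command would take this form:
# pcs resource defaults resource-stickiness=100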
Omitting the options parameter from the pcs resource defaults command displays a list of currently configured default values for resource options. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100.
# pcs resource defaults
resource-stickiness: 100
Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option.
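A sketch of the general form:
pcs resource create resource_id standard:provider:type|type [resource_options] [meta meta_options...]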
For example, the following command creates a resource with a resource-stickiness value of 50.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50
You can also set the value of a resource meta option for an existing resource, group, cloned
resource, or master resource with the following command.
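A sketch of the general form:
pcs resource meta resource_id | group_id | clone_id | master_id meta_options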
In the following example, there is an existing resource named dummy_resource. This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds.
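The command would take this form:
# pcs resource meta dummy_resource failure-timeout=20s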
After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set.
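For example, a command such as the following displays the resource's configured values:
# pcs resource show dummy_resource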
For information on resource clone meta options, see Section 9.1, "Resource Clones". For information on resource master meta options, see Section 9.2, "MultiState Resources: Resources That Have Multiple Modes".
One of the most common elements of a cluster is a set of resources that need to be located together,
start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports
the concept of groups.
You create a resource group with the following command, specifying the resources to include in the
group. If the group does not exist, this command creates the group. If the group exists, this command
adds additional resources to the group. The resources will start in the order you specify them with
this command, and will stop in the reverse order of their starting order.
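A sketch of the general form:
pcs resource group add group_name resource_id [resource_id] ...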
You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group.
You can also add a new resource to an existing group when you create the resource, using the
following command. The resource you create is added to the group named group_name.
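A sketch of the general form:
pcs resource create resource_id standard:provider:type|type [resource_options] --group group_name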
You remove a resource from a group with the following command. If there are no resources in the
group, this command removes the group itself.
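A sketch of the general form:
pcs resource group remove group_name resource_id [resource_id] ...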
The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email.
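The command would take this form:
# pcs resource group add shortcut IPaddr Email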
There is no limit to the number of resources a group can contain. The fundamental properties of a
group are as follows.
Resources are started in the order in which you specify them (in this example, IPaddr first, then Email).
Resources are stopped in the reverse order in which you specify them (Email first, then IPaddr).
If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run.
If Email cannot run anywhere, however, this does not affect IPaddr in any way.
As a group grows larger, the configuration effort saved by creating resource groups can become significant.
A resource group inherits the following options from the resources that it contains: priority, target-role, and is-managed. For information on resource options, refer to Table 6.3, "Resource Meta Options".
Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every
active resource of the group will contribute its stickiness value to the group’s total. So if the default
reso u rce- st ickin ess is 100, and a group has seven members, five of which are active, then the
group as a whole will prefer its current location with a score of 500.
id: Unique name for the action. The system assigns this when you configure an operation.
name: The action to perform. Common values: monitor, start, stop.
interval: How frequently (in seconds) to perform the operation. Default value: 0, meaning never.
timeout: How long to wait before declaring the action has failed. If you find that your system includes a resource that takes a long time to start or stop or perform a non-recurring monitor action at startup, and requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or the value of timeout in "op defaults".
on-fail: The action to take if this action ever fails. Allowed values:
* restart - Stop the resource and start it again (possibly on a different node)
* standby - Move all resources away from the node on which the resource failed
enabled: If false, the operation is treated as if it does not exist. Allowed values: true, false.
You can configure monitoring operations when you create a resource, using the following command.
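A sketch of the general form, where the op monitor clause defines the recurring monitor:
pcs resource create resource_id standard:provider:type|type [resource_options] op monitor interval=value [operation_options]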
For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2. A monitoring operation will be performed every 30 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s
Alternately, you can add a monitoring operation to an existing resource with the following command.
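A sketch of the general form:
pcs resource op add resource_id operation_action [operation_properties]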
Note
You must specify the exact operation properties to properly remove an existing operation.
To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP with the following command.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2
To change the stop timeout operation, execute the following command.
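A command such as the following might be used; the 40-second timeout shown here is illustrative:
# pcs resource update VirtualIP op stop interval=0s timeout=40s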
To set global default values for monitoring operations, use the following command.
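A sketch of the general form:
pcs resource op defaults [options]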
For example, the following command sets a global default of a timeout value of 240 seconds for all monitoring operations.
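The command would take this form:
# pcs resource op defaults timeout=240s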
To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command.
For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240 seconds.
# pcs resource op defaults
timeout: 240s
For example, if your system is configured with a resource named VirtualIP and a resource named WebSite, the pcs resource show command yields the following output.
# pcs resource show
VirtualIP (ocf::heartbeat:IPaddr2): Started
WebSite (ocf::heartbeat:apache): Started
To display a list of all configured resources and the parameters configured for those resources, use the --full option of the pcs resource show command, as in the following example.
# pcs resource show --full
Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat)
Attributes: ip=192.168.0.120 cidr_netmask=24
Operations: monitor interval=30s
Resource: WebSite (type=apache class=ocf provider=heartbeat)
Attributes: statusurl=http://localhost/server-status configfile=/etc/httpd/conf/httpd.conf
Operations: monitor interval=1min
To display the configured parameters for a resource, use the following command.
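A sketch of the general form:
pcs resource show resource_id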
For example, the following command displays the currently configured parameters for resource VirtualIP.
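That is, a command of this form:
# pcs resource show VirtualIP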
The following sequence of commands shows the initial values of the configured parameters for resource VirtualIP, the command to change the value of the ip parameter, and the values following the update command.
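As a sketch, the update step might look like the following, where the new address is purely illustrative:
# pcs resource update VirtualIP ip=192.168.0.121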
Note
When configuring multiple monitor operations, you must ensure that no two operations are
performed at the same interval.
To configure additional monitoring operations for a resource that supports more in-depth checks at different levels, you add an OCF_CHECK_LEVEL=n option.
For example, if you configure the following IPaddr2 resource, by default this creates a monitoring operation with an interval of 10 seconds and a timeout value of 20 seconds.
# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2
If the Virtual IP supports a different check with a depth of 10, the following command causes Pacemaker to perform the more advanced monitoring check every 60 seconds in addition to the normal Virtual IP check every 10 seconds. (As noted, you should not configure the additional monitoring operation with a 10-second interval as well.)
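The additional operation might be added with a command of this form:
# pcs resource op add VirtualIP monitor interval=60s OCF_CHECK_LEVEL=10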
If you do not specify a resource_id, this command resets the resource status and failcount for all resources.
location constraints — A location constraint determines which nodes a resource can run on. Location constraints are described in Section 7.1, "Location Constraints".
order constraints — An order constraint determines the order in which the resources run. Order constraints are described in Section 7.2, "Order Constraints".
As a shorthand for configuring a set of constraints that will locate a set of resources together and
ensure that the resources start sequentially and stop in reverse order, Pacemaker supports the
concept of resource groups. For information on resource groups, see Section 6.5, "Resource Groups".
Table 7.1, "Location Constraint Options" summarizes the options for configuring location constraints.
rsc: A resource name.
node: A node's name.
score: Value to indicate the preference for whether a resource should run on or avoid a node.
resource-discovery: Value to indicate the preference for whether Pacemaker should perform resource discovery on this node for the specified resource. Limiting resource discovery to a subset of nodes the resource is physically capable of running on can significantly boost performance when a large set of nodes is present. When pacemaker_remote is in use to expand the node count into the hundreds of nodes range, this option should be considered. Possible values include always, never, and exclusive.
Note that setting this option to never or exclusive allows the possibility for the resource to be active in those locations without the cluster's knowledge. This can lead to the resource being active in more than one location if the service is started outside the cluster's control (for example, by systemd or by an administrator). This can also occur if the resource-discovery property is changed while part of the cluster is down or suffering split-brain, or if the resource-discovery property is changed for a resource and node while the resource is active on that node. For this reason, using this option is appropriate only when you have more than eight nodes and there is a way to guarantee that the resource can run only in a particular location (for example, when the required software is not installed anywhere else).
always is the default resource-discovery value for a resource location constraint.
The following command creates a location constraint for a resource to prefer the specified node or
nodes.
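A sketch of the general form, where a score can optionally follow each node name:
pcs constraint location rsc prefers node[=score] [node[=score]] ...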
The following command creates a location constraint for a resource to avoid the specified node or
nodes.
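A sketch of the general form:
pcs constraint location rsc avoids node[=score] [node[=score]] ...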
There are two alternative strategies for specifying which nodes a resource can run on:
Opt-In Clusters — Configure a cluster in which, by default, no resource can run anywhere and then selectively enable allowed nodes for specific resources. The procedure for configuring an opt-in cluster is described in Section 7.1.1, "Configuring an "Opt-In" Cluster".
Opt-Out Clusters — Configure a cluster in which, by default, all resources can run anywhere and then create location constraints for resources that are not allowed to run on specific nodes. The procedure for configuring an opt-out cluster is described in Section 7.1.2, "Configuring an "Opt-Out" Cluster".
Whether you should choose to configure an opt-in or opt-out cluster depends both on your personal
preference and the make-up of your cluster. If most of your resources can run on most of the nodes,
then an opt-out arrangement is likely to result in a simpler configuration. On the other hand, if most
resources can only run on a small subset of nodes an opt-in configuration might be simpler.
To create an opt-in cluster, set the symmetric-cluster cluster property to false to prevent resources from running anywhere by default.
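The property can be set with a command of this form:
# pcs property set symmetric-cluster=false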
Enable nodes for individual resources. The following commands configure location constraints so that the resource Webserver prefers node example-1, the resource Database prefers node example-2, and both resources can fail over to node example-3 if their preferred node fails.
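As a sketch, the constraints might be created as follows; the scores shown are illustrative:
# pcs constraint location Webserver prefers example-1=200
# pcs constraint location Webserver prefers example-3=0
# pcs constraint location Database prefers example-2=200
# pcs constraint location Database prefers example-3=0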
To create an opt-out cluster, set the symmetric-cluster cluster property to true to allow resources to run everywhere by default.
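The property can be set with a command of this form:
# pcs property set symmetric-cluster=true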
The following commands will then yield a configuration that is equivalent to the example in Section 7.1.1, "Configuring an "Opt-In" Cluster". Both resources can fail over to node example-3 if their preferred node fails, since every node has an implicit score of 0.
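As a sketch, with illustrative scores:
# pcs constraint location Webserver prefers example-1=200
# pcs constraint location Webserver avoids example-2=INFINITY
# pcs constraint location Database avoids example-1=INFINITY
# pcs constraint location Database prefers example-2=200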
Note that it is not necessary to specify a score of INFINITY in these commands, since that is the
default value for the score.
Table 7.2, "Properties of an Order Constraint" summarizes the properties and options for configuring order constraints.
resource_id: The name of a resource on which an action is performed.
action: The action to perform on a resource. Possible values of the action property include start, stop, promote, and demote.
kind option: How to enforce the constraint. The possible values of the kind option include Optional, Mandatory, and Serialize.
symmetrical option: If true, which is the default, stop the resources in the reverse order. Default value: true.
A mandatory constraint indicates that the second resource you specify cannot run without the first resource you specify being active. This is the default value of the kind option. Leaving the default value ensures that the second resource you specify will react when the first resource you specify changes state.
If the first resource you specified was running and is stopped, the second resource you specified will also be stopped (if it is running).
If the first resource you specified was not running and cannot be started, the second resource you specified will be stopped (if it is running).
If the first resource you specified is (re)started while the second resource you specified is running,
the second resource you specified will be stopped and restarted.
When the kind=Optional option is specified for an order constraint, the constraint is considered optional and only applies if both resources are executing the specified actions. Any change in state by the first resource you specify will have no effect on the second resource you specify.
The following command configures an advisory ordering constraint for the resources named VirtualIP and dummy_resource.
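The constraint might be created with a command of this form:
# pcs constraint order VirtualIP then dummy_resource kind=Optional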
A common situation is for an administrator to create a chain of ordered resources, where, for
example, resource A starts before resource B which starts before resource C. If your configuration
requires that you create a set of resources that is colocated and started in order, you can configure a
resource group that contains those resources, as described in Section 6.5, "Resource Groups". If,
however, you need to configure resources to start in order and the resources are not necessarily
colocated, you can create an order constraint on a set or sets of resources with the pcs constraint order set command.
You can set the following options for a set of resources with the pcs constraint order set command.
sequential, which can be set to true or false to indicate whether the set of resources must be ordered relative to each other.
Setting sequential to false allows a set to be ordered relative to other sets in the ordering constraint, without its members being ordered relative to each other. Therefore, this option makes sense only if multiple sets are listed in the constraint; otherwise, the constraint has no effect.
require-all, which can be set to true or false to indicate whether all of the resources in the set must be active before continuing. Setting require-all to false means that only one resource in the set needs to be started before continuing on to the next set. Setting require-all to false has no effect unless used in conjunction with unordered sets, which are sets for which sequential is set to false.
You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint order set command.
score, to indicate the degree of preference for this constraint. For information on this option, see Table 7.3, "Properties of a Colocation Constraint".
pcs constraint order set resource1 resource2 [resourceN]... [options] [set resourceX resourceY ...
[options]] [setoptions [constraint_options]]
If you have three resources named D1, D2, and D3, the following command configures them as an ordered resource set.
# pcs constraint order set D1 D2 D3
Use the following command to remove resources from any ordering constraint.
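A sketch of the general form:
pcs constraint order remove resource1 [resourceN] ...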
There is an important side effect of creating a colocation constraint between two resources: it affects
the order in which resources are assigned to a node. This is because you cannot place resource A
relative to resource B unless you know where resource B is. So when you are creating colocation
constraints, it is important to consider whether you should colocate resource A with resource B or
resource B with resource A.
Another thing to keep in mind when creating colocation constraints is that, assuming resource A is colocated with resource B, the cluster will also take into account resource A's preferences when deciding which node to choose for resource B.
For information on master and slave resources, see Section 9.2, "MultiState Resources: Resources That Have Multiple Modes".
Table 7.3, "Properties of a Colocation Constraint" summarizes the properties and options for configuring colocation constraints.
source_resource: The colocation source. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all.
target_resource: The colocation target. The cluster will decide where to put this resource first and then decide where to put the source resource.
score: Positive values indicate the resource should run on the same node. Negative values indicate the resources should not run on the same node. A value of +INFINITY, the default value, indicates that the source_resource must run on the same node as the target_resource. A value of -INFINITY indicates that the source_resource must not run on the same node as the target_resource.
Mandatory placement occurs any time the constraint's score is +INFINITY or -INFINITY. In such cases, if the constraint cannot be satisfied, then the source_resource is not permitted to run. For score=INFINITY, this includes cases where the target_resource is not active.
If you need myresource1 to always run on the same machine as myresource2, you would add the following constraint:
# pcs constraint colocation add myresource1 with myresource2 score=INFINITY
Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason) then myresource1 will not be allowed to run.
Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2. In this case use score=-INFINITY.
# pcs constraint colocation add myresource1 with myresource2 score=-INFINITY
Again, by specifying -INFINITY, the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere.
If mandatory placement is about "must" and "must not", then advisory placement is the "I would prefer if" alternative. For constraints with scores greater than -INFINITY and less than INFINITY, the cluster will try to accommodate your wishes but may ignore them if the alternative is to stop some of the cluster resources. Advisory colocation constraints can combine with other elements of the configuration to behave as if they were mandatory.
If your configuration requires that you create a set of resources that is colocated and started in order, you can configure a resource group that contains those resources, as described in Section 6.5, "Resource Groups". If, however, you need to colocate a set of resources but the resources do not necessarily need to start in order, you can create a colocation constraint on a set or sets of resources with the pcs constraint colocation set command.
You can set the following options for a set of resources with the pcs constraint colocation set command.
sequential, which can be set to true or false to indicate whether the members of the set must be colocated with each other.
Setting sequential to false allows the members of this set to be colocated with another set listed later in the constraint, regardless of which members of this set are active. Therefore, this option makes sense only if another set is listed after this one in the constraint; otherwise, the constraint has no effect.
role, which can be set to Stopped, Started, Master, or Slave. For information on multi-state resources, see Section 9.2, "MultiState Resources: Resources That Have Multiple Modes".
You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint colocation set command.
kind, to indicate how to enforce the constraint. For information on this option, see Table 7.2, "Properties of an Order Constraint".
symmetrical, to indicate the order in which to stop the resources. If true, which is the default, stop the resources in the reverse order.
When listing members of a set, each member is colocated with the one before it. For example, "set A B" means "B is colocated with A". However, when listing multiple sets, each set is colocated with the one after it. For example, "set C D sequential=false set A B" means "set C D (where C and D have no relation between each other) is colocated with set A B (where B is colocated with A)".
pcs constraint colocation set resource1 resource2 [resourceN]... [options] [set resourceX resourceY
... [options]] [setoptions [constraint_options]]
The following command lists all current location, order, and colocation constraints.
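A sketch of the general form:
pcs constraint list [--full]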
If resources is specified, location constraints are displayed per resource. This is the default behavior.
If specific resources or nodes are specified, then only information about those resources or nodes is displayed.
The following command lists all current ordering constraints. If the --full option is specified, show the internal constraint IDs.
The following command lists all current colocation constraints. If the --full option is specified, show the internal constraint IDs.
The following command lists the constraints that reference specific resources.
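A sketch of the general form:
pcs constraint ref resource ...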
You can override the cluster and force resources to move from their current location. There are two
occasions when you would want to do this:
When a node is under maintenance, and you need to move all resources running on that node to
a different node
To move all resources running on a node to a different node, you put the node in standby mode. For information on putting a cluster node in standby mode, refer to Section 4.4.5, "Standby Mode".
You can move individually specified resources in either of the following ways.
You can use the pcs resource move command to move a resource off a node on which it is currently running, as described in Section 8.1.1, "Moving a Resource from its Current Node".
You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node".
To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving.
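A sketch of the general form:
pcs resource move resource_id [destination_node] [--master] [lifetime=lifetime]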
Note
When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially.
If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id.
You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds).
To distinguish a unit of minutes (M) from a unit of months (M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes.
The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command.
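The property can be reset with a command of this form, where value is the new interval:
# pcs property set cluster-recheck-interval=value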
You can optionally configure a --wait[=n] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used.
The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes.
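The command might look like this:
# pcs resource move resource1 example-node2 lifetime=PT1H30M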
The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes.
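Similarly:
# pcs resource move resource1 example-node2 lifetime=PT30M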
After a resource has moved, either due to a failover or to an administrator manually moving the node,
it will not necessarily move back to its original node even after the circumstances that caused the
failover have been corrected. To relocate resources to their preferred node, use the following
command. A preferred node is determined by the current cluster status, constraints, resource
location, and other settings and may change over time.
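A sketch of the general form:
pcs resource relocate run [resource1] [resource2] ...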
If you do not specify any resources, all resources are relocated to their preferred nodes.
This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command.
When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until:
The administrator manually resets the resource's failcount using the pcs resource failcount command.
Note
Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state.
The following example adds a migration threshold of 10 to the resource named dummy_resource, which indicates that the resource will move to a new node after 10 failures.
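The threshold might be added with a command of this form:
# pcs resource meta dummy_resource migration-threshold=10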
You can add a migration threshold to the defaults for the whole cluster with the following command.
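For example, a command such as the following sets a cluster-wide default of 10:
# pcs resource defaults migration-threshold=10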
To determine the resource's current failure status and limits, use the pcs resource failcount command.
There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately.
Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled,
then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is
not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere,
but will try to stop it again after the failure timeout.
Setting up the cluster to move resources when external connectivity is lost is a two step process.
1. Add a ping resource to the cluster. The ping resource uses the system utility of the same name to test if a list of machines (specified by DNS host name or IPv4/IPv6 address) are reachable and uses the results to maintain a node attribute called pingd.
2. Configure a location constraint for the resource that will move the resource to a different node when connectivity is lost.
The following table describes the properties you can set for a ping resource.
dampen: The time to wait (dampening) for further changes to occur. This prevents a resource from bouncing around the cluster when cluster nodes notice the loss of connectivity at slightly different times.
multiplier: The number of connected ping nodes gets multiplied by this value to get a score. Useful when there are multiple ping nodes configured.
host_list: The machines to contact in order to determine the current connectivity status. Allowed values include resolvable DNS host names, IPv4 and IPv6 addresses. The entries in the host list are space separated.
The following example configures a location constraint rule for the existing resource named Webserver. This will cause the Webserver resource to move to a host that is able to ping www.example.com if the host that it is currently running on cannot ping www.example.com.
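As a sketch, assuming a ping resource maintaining the pingd node attribute has already been configured as described above, the rule might be created as follows:
# pcs constraint location Webserver rule score=-INFINITY pingd lt 1 or not_defined pingd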
You can manually stop a running resource and prevent the cluster from starting it again with the following command. Depending on the rest of the configuration (constraints, options, failures, and so on), the resource may remain started. If you specify the --wait option, pcs will wait up to 30 seconds (or 'n' seconds, as specified) for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped.
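A sketch of the general form:
pcs resource disable resource_id [--wait[=n]]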
You can use the following command to allow the cluster to start a resource. Depending on the rest of the configuration, the resource may remain stopped. If you specify the --wait option, pcs will wait up to 30 seconds (or 'n' seconds, as specified) for the resource to start and then return 0 if the resource is started or 1 if the resource has not started.
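A sketch of the general form:
pcs resource enable resource_id [--wait[=n]]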
Use the following command to prevent a resource from running on a specified node, or on the current
node if no node is specified.
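A sketch of the general form:
pcs resource ban resource_id [node] [--master] [lifetime=lifetime] [--wait[=n]]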
Note that when you execute the pcs resource ban command, this adds a -INFINITY location constraint to the resource to prevent it from running on the indicated node. You can execute the pcs
resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, refer to Chapter 7, Resource Constraints.
If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id.
You can optionally configure a lifetime parameter for the pcs resource ban command to indicate a period of time the constraint should remain. For information on specifying units for the lifetime parameter and on specifying the intervals at which the lifetime parameter should be checked, see Section 8.1, "Manually Moving Resources Around the Cluster".
You can optionally configure a --wait[=n] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used.
You can use the debug-start parameter of the pcs resource command to force a specified resource to start on the current node, ignoring the cluster recommendations and printing the output from starting the resource. This is mainly used for debugging resources; starting resources on a cluster is (almost) always done by Pacemaker and not directly with a pcs command. If your resource is not starting, it is usually due to either a misconfiguration of the resource (which you debug in the system log), constraints that prevent the resource from starting, or the resource being disabled. You can use this command to test resource configuration, but it should not normally be used to start resources in a cluster.
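A sketch of the general form:
pcs resource debug-start resource_id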
The easiest way to stop a recurring monitor is to delete it. However, there can be times when you only want to disable it temporarily. In such cases, add enabled="false" to the operation's definition. When you want to reinstate the monitoring operation, set enabled="true" in the operation's definition.
You can set a resource to unmanaged mode, which indicates that the resource is still in the configuration but Pacemaker does not manage the resource.
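The following form can be used, listing one or more resources:
pcs resource unmanage resource1 [resource2] ...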
The following command sets resources to managed mode, which is the default state.
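Its form mirrors the previous command:
pcs resource manage resource1 [resource2] ...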
You can specify the name of a resource group with the pcs resource manage or pcs resource unmanage command. The command will act on all of the resources in the group, so that you can set all of the resources in a group to managed or unmanaged mode with a single command and then manage the contained resources individually.
Note
Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time.
You can create a resource and a clone of that resource at the same time with the following command.
You cannot create a resource group and a clone of that resource group in a single command.
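A sketch of the general form, assuming the --clone flag of pcs:
pcs resource create resource_id standard:provider:type|type [resource_options] --clone [meta clone_options]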
Alternately, you can create a clone of a previously-created resource or resource group with the
following command.
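A sketch of the general form:
pcs resource clone resource_id | group_name [clone_options]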
Note
When configuring constraints, always use the name of the group or clone.
When you create a clone of a resource, the clone takes on the name of the resource with -clone appended to the name. The following commands create a resource of type apache named webfarm and a clone of that resource named webfarm-clone.
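As a sketch, with the apache parameters omitted for brevity:
# pcs resource create webfarm ocf:heartbeat:apache --clone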
Use the following command to remove a clone of a resource or a resource group. This does not
remove the resource or resource group itself.
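A sketch of the general form:
pcs resource unclone resource_id | group_name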
Table 9.1, "Resource Clone Options" describes the options you can specify for a cloned resource.
priority, target-role, is-managed: Options inherited from the resource that is being cloned, as described in Table 6.3, "Resource Meta Options".
clone-max: How many copies of the resource to start. Defaults to the number of nodes in the cluster.
clone-node-max: How many copies of the resource can be started on a single node; the default value is 1.
notify: When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false, true. The default value is false.
globally-unique: Does each copy of the clone perform a different function? Allowed values: false, true.
In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources.
The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1.
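The constraint might be created as follows:
# pcs constraint location webfarm-clone prefers node1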
Ordering constraints behave slightly differently for clones. In the example below, webfarm-stats will wait until all copies of webfarm-clone that need to be started have done so before being started itself. Only if no copies of webfarm-clone can be started will webfarm-stats be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself.
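The ordering constraint might be created as follows:
# pcs constraint order start webfarm-clone then webfarm-stats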
Colocation of a regular (or group) resource with a clone means that the resource can run on any
machine with an active copy of the clone. The cluster will choose a copy based on where the clone is
running and the resource's own location preferences.
Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normal.
The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone.
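The colocation constraint might be created as follows:
# pcs constraint colocation add webfarm-stats with webfarm-clone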
To achieve a stable allocation pattern, clones are slightly sticky by default. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
9.2. MultiState Resources: Resources That Have Multiple Modes
Multistate resources are a specialization of Clone resources. They allow the instances to be in one of two operating modes; these are called Master and Slave. The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Slave state.
You can create a resource as a master/slave clone with the following single command.
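A sketch of the general form, assuming the --master flag of pcs:
pcs resource create resource_id standard:provider:type|type [resource_options] --master [meta master_options]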
Alternately, you can create a master/slave resource from a previously-created resource or resource group with the following command. When you use this command, you can specify a name for the master/slave clone. If you do not specify a name, the name of the master/slave clone will be resource_id-master or group_name-master.
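A sketch of the general form:
pcs resource master master/slave_name resource_id | group_name [master_options]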
Table 9.2, "Properties of a Multistate Resource" describes the options you can specify for a multistate resource.
id: Your name for the multistate resource.
priority, target-role, is-managed: See Table 6.3, "Resource Meta Options".
clone-max, clone-node-max, notify, globally-unique, ordered, interleave: See Table 9.1, "Resource Clone Options".
master-max: How many copies of the resource can be promoted to master status; default 1.
master-node-max: How many copies of the resource can be promoted to master status on a single node; default 1.
To add a monitoring operation for the master resource only, you can add an additional monitor
operation to the resource. Note, however, that every monitor operation on a resource must have a
different interval.
The following example configures a monitor operation with an interval of 11 seconds on the master resource for ms_resource. This monitor operation is in addition to the default monitor operation with the default monitor interval of 10 seconds.
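The additional operation might be added with a command of this form:
# pcs resource op add ms_resource monitor interval=11s role=Master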
In most cases, a multistate resource will have a single copy on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources.
For information on resource location constraints, see Section 7.1, "Location Constraints".
You can create a colocation constraint which specifies whether the resources are master or slave
resources. The following command creates a resource colocation constraint.
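A sketch of the general form, where master or slave can precede either resource name:
pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [score] [options]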
When configuring an ordering constraint that includes multistate resources, one of the actions that you can specify for the resources is promote, indicating that the resource be promoted from slave to master. Additionally, you can specify an action of demote, indicating that the resource be demoted from master to slave.
For information on resource order constraints, see Section 7.2, "Order Constraints".
To achieve a stable allocation pattern, multistate resources are slightly sticky by default. If no value for resource-stickiness is provided, the multistate resource will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
When configuring a virtual domain as a resource, take the following considerations into account:
Once a virtual domain is a cluster resource, it should not be started, stopped, or migrated except
through the cluster tools.
Do not configure a virtual domain that you have configured as a cluster resource to start when its host boots.
All nodes must have access to the necessary configuration files and storage devices for each
managed virtual domain.
If you want the cluster to manage services within the virtual domain itself, you can configure the virtual domain as a guest node. For information on configuring guest nodes, see Section 9.4, "The pacemaker_remote Service".
For information on configuring virtual domains, see the Virtualization Deployment and Administration Guide.
Table 9.3, "Resource Options for Virtual Domain Resources" describes the resource options you can configure for a VirtualDomain resource.
config (required): Absolute path to the libvirt configuration file for this virtual domain.
hypervisor (default: system dependent): Hypervisor URI to connect to. You can determine the system's default URI by running the virsh --quiet uri command.
force_stop (default 0): Always forcefully shut down ("destroy") the domain on stop. The default behavior is to resort to a forceful shutdown only after a graceful shutdown attempt has failed. You should set this to true only if your virtual domain (or your virtualization backend) does not support graceful shutdown.
migration_transport (default: system dependent): Transport used to connect to the remote hypervisor while migrating. If this parameter is omitted, the resource will use libvirt's default transport to connect to the remote hypervisor.
migration_network_suffix: Use a dedicated migration network. The migration URI is composed by adding this parameter's value to the end of the node name. If the node name is a fully qualified domain name (FQDN), insert the suffix immediately prior to the first period (.) in the FQDN. Ensure that this composed host name is locally resolvable and the associated IP address is reachable through the favored network.
monitor_scripts: To additionally monitor services within the virtual domain, add this parameter with a list of scripts to monitor. Note: When monitor scripts are used, the start and migrate_from operations will complete only when all monitor scripts have completed successfully. Be sure to set the timeout of these operations to accommodate this delay.
autoset_utilization_cpu (default true): If set to true, the agent will detect the number of domainU's vCPUs from virsh, and put it into the CPU utilization of the resource when the monitor is executed.
autoset_utilization_hv_memory (default true): If set to true, the agent will detect the number of Max memory from virsh, and put it into the hv_memory utilization of the resource when the monitor is executed.
migrateport (default: random highport): This port will be used in the qemu migrate URI. If unset, the port will be a random highport.
snapshot: Path to the snapshot directory where the virtual machine image will be stored. When this parameter is set, the virtual machine's RAM state will be saved to a file in the snapshot directory when stopped. If on start a state file is present for the domain, the domain will be restored to the same state it was in right before it stopped last. This option is incompatible with the force_stop option.
In addition to the VirtualDomain resource options, you can configure the allow-migrate metadata option to allow live migration of the resource to another node. When this option is set to true, the resource can be migrated without loss of state. When this option is set to false, which is the default state, the virtual domain will be shut down on the first node and then restarted on the second node when it is moved from one node to the other.
The following example configures a VirtualDomain resource named VM. Since the allow-migrate option is set to true, a pcs resource move VM nodeX command would be done as a live migration.
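As a sketch, with an illustrative configuration file path:
# pcs resource create VM VirtualDomain config=/var/lib/libvirt/images/VM.xml migration_transport=ssh meta allow-migrate=true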
The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes.
Among the capabilities that the pacemaker_remote service provides are the following:
The pacemaker_remote service allows you to scale beyond the corosync 16-node limit.
cluster node — A node running the High Availability services (pacemaker and corosync).
remote node — A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent.
guest node — A virtual guest node running the pacemaker_remote service. A guest node is configured using the remote-node metadata option of a resource agent such as ocf:pacemaker:VirtualDomain. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node.
LXC — A Linux Container defined by the libvirt-lxc Linux container driver.
A Pacemaker cluster running the pacemaker_remote service has the following characteristics.
Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
The cluster stack (pacemaker and corosync), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster.
The cluster stack (pacemaker and corosync), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster.
The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes
manage is that the remote and guest nodes are not running the cluster stack. This means the remote
and guest nodes have the following limitations:
On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated
with the cluster stack.
Other than these noted limitations, the remote nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: You can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do.
The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.
When configuring a virtual machine or LXC resource to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see Table 9.3, "Resource Options for Virtual Domain Resources".
In addition to the VirtualDomain resource options, you can configure metadata options to both enable the resource as a guest node and define the connection parameters. Table 9.4, "Metadata Options for Configuring KVM/LXC Resources as Remote Nodes" describes these metadata options.
remote-node (default <none>): The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING: This value cannot overlap with any resource or node IDs.
remote-port (default 3121): Configures a custom port to use for the guest connection to pacemaker_remote.
remote-addr (default: the remote-node value used as host name): The IP address or host name to connect to if the remote node's name is not the host name of the guest.
remote-connect-timeout (default 60s): Amount of time before a pending guest connection will time out.
You configure a remote node as a cluster resource with the pcs resource create command, specifying ocf:pacemaker:remote as the resource type. Table 9.5, "Resource Options for Remote Nodes" describes the resource options you can configure for a remote resource.
reconnect_interval (default: 0): Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval.
server: Server location to connect to. This can be an IP address or host name.
port: TCP port to connect to.
If you need to change the default port or authkey location for either Pacemaker or pacemaker_remote, there are environment variables you can set that affect both of those daemons. These environment variables can be enabled by placing them in the /etc/sysconfig/pacemaker file as follows.
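A minimal sketch of such an entry in /etc/sysconfig/pacemaker, assuming you keep the default key path and port, might look like the following.
# Pacemaker Remote settings in /etc/sysconfig/pacemaker (the values shown are the defaults)
PCMK_authkey_location=/etc/pacemaker/authkey
PCMK_remote_port=3121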
Note that when you change the default key location on a particular node (cluster node, guest node or remote node), it is sufficient to set PCMK_authkey_location on that node (and put the key in that location). It is not necessary that the location be the same on every node, although doing so makes administration easier.
When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node's /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes).
This section provides a high-level summary overview of the steps to perform to have Pacemaker launch a virtual machine and to integrate that machine as a guest node, using libvirt and KVM virtual guests.
1. After installing the virtualization software and enabling the libvirtd service on the cluster nodes, put the same encryption key with the path /etc/pacemaker/authkey on every cluster node and virtual machine. This secures remote communication and authentication.
Run the following set of commands on every node to create the authkey directory with secure permissions.
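A minimal sketch of those commands, assuming the haclient group created by the Pacemaker packages, follows.
# mkdir -p --mode=0750 /etc/pacemaker
# chgrp haclient /etc/pacemaker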
The following command shows one method to create an encryption key. You should create the key only once and then copy it to all of the nodes.
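One possible way to generate the key, sketched here, is to read random data into the authkey file with dd.
# dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1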
3. Give each virtual machine a static network address and unique host name, which should be known to all nodes. For information on setting a static IP address for the guest virtual machine, see the Virtualization Deployment and Administration Guide.
4. To create the VirtualDomain resource agent for the management of the virtual machine, Pacemaker requires the virtual machine's xml config file to be dumped to a file on disk. For example, if you created a virtual machine named guest1, dump the xml to a file somewhere on the host. You can use a file name of your choosing; this example uses /etc/pacemaker/guest1.xml.
5. If it is running, shut down the guest node. Pacemaker will start the node when it is configured in the cluster.
6. Create the VirtualDomain resource, configuring the remote-node resource meta option to indicate that the virtual machine is a guest node capable of running resources.
In the example below, the meta-attribute remote-node=guest1 tells Pacemaker that this resource is a guest node with the host name guest1 that is capable of being integrated into the cluster. The cluster will attempt to contact the virtual machine's pacemaker_remote service at the host name guest1 after it launches.
# pcs resource create vm-guest1 VirtualDomain hypervisor="qemu:///system" config="/virtual_machines/vm-guest1.xml" meta remote-node=guest1
7. After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. As of Red Hat Enterprise Linux 7.3, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM.
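As a sketch, assuming a hypothetical Apache resource named webserver, those commands might look like the following.
# pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
# pcs constraint location webserver prefers guest1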
This section provides a high-level summary overview of the steps to perform to configure a Pacemaker remote node and to integrate that node into an existing Pacemaker cluster environment.
1. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.
Note
If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports: TCP ports 2224 and 3121.
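A sketch of the firewalld commands, assuming the standard high-availability service definition, follows.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability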
3. All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node. Otherwise, create a new key on the remote node.
Run the following set of commands on the remote node to create a directory for the authentication key with secure permissions.
The following command shows one method to create an encryption key on the remote node.
5. On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created.
6. Run the following command from a cluster node to create a remote resource. In this case the remote node is remote1.
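A minimal sketch of that command for this example follows.
# pcs resource create remote1 ocf:pacemaker:remote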
7. After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node.
Warning
8. Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node.
As of Red Hat Enterprise Linux 7.3, if the pacemaker_remote service is stopped on an active Pacemaker remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed.
If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.
Warning
For Red Hat Enterprise Linux release 7.2 and earlier, if pacemaker_remote stops on a node that is currently integrated into a cluster, the cluster will fence that node. If the stop happens automatically as part of a yum update process, the system could be left in an unusable state (particularly if the kernel is also being upgraded at the same time as pacemaker_remote). For Red Hat Enterprise Linux release 7.2 and earlier you must use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote.
1. Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. For guest nodes, this will also stop the VM, so the VM must be started outside the cluster (for example, using virsh) to perform any maintenance.
3. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command.
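A sketch of the disable/enable cycle, using the vm-guest1 connection resource from the earlier example (perform the maintenance between the two commands), follows.
# pcs resource disable vm-guest1
# pcs resource enable vm-guest1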
Use the following command to convert an existing VirtualDomain resource into a guest node. You do not need to run this command if the resource was originally created as a guest node.
Use the following command to disable a resource configured as a guest node on the specified host.
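A sketch of these two operations, assuming the pcs cluster remote-node interface and the vm-guest1/guest1 example above, follows.
# pcs cluster remote-node add guest1 vm-guest1
# pcs cluster remote-node remove guest1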
Pacemaker decides where to place a resource according to the resource allocation scores on every node. The resource will be allocated to the node where the resource has the highest score. This allocation score is derived from a combination of factors, including resource constraints, resource-stickiness settings, prior failure history of a resource on each node, and utilization of each node.
If the resource allocation scores on all the nodes are equal, by the default placement strategy Pacemaker will choose a node with the least number of allocated resources for balancing the load. If the number of resources on each node is equal, the first eligible node listed in the CIB will be chosen to run the resource.
Often, however, different resources use significantly different proportions of a node's capacities (such as memory or I/O). You cannot always balance the load ideally by taking into account only the number of resources allocated to a node. In addition, if resources are placed such that their combined requirements exceed the provided capacity, they may fail to start completely or they may run with degraded performance. To take these factors into account, Pacemaker allows you to configure the following components:
the capacity a given node provides
the capacity a given resource requires
an overall strategy for placement of resources
To configure the capacity that a node provides or a resource requires, you can use utilization attributes for nodes and resources. You do this by setting a utilization variable for a resource and assigning a value to that variable to indicate what the resource requires, and then setting that same utilization variable for a node and assigning a value to that variable to indicate what that node provides.
You can name utilization attributes according to your preferences and define as many name and value pairs as your configuration needs. The values of utilization attributes must be integers.
As of Red Hat Enterprise Linux 7.3, you can set utilization attributes with the pcs command.
The following example configures a utilization attribute of CPU capacity for two nodes, setting this attribute as the variable cpu. It also configures a utilization attribute of RAM capacity, setting this attribute as the variable memory. In this example:
Node 1 is defined as providing a CPU capacity of two and a RAM capacity of 2048
Node 2 is defined as providing a CPU capacity of four and a RAM capacity of 2048
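A sketch of the corresponding pcs node utilization commands for this example follows.
# pcs node utilization node1 cpu=2 memory=2048
# pcs node utilization node2 cpu=4 memory=2048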
The following example specifies the same utilization attributes that three different resources require. In this example:
resource dummy-small requires a CPU capacity of 1 and a RAM capacity of 1024
resource dummy-medium requires a CPU capacity of 2 and a RAM capacity of 2048
resource dummy-large requires a CPU capacity of 1 and a RAM capacity of 3072
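A sketch of the corresponding pcs resource utilization commands follows.
# pcs resource utilization dummy-small cpu=1 memory=1024
# pcs resource utilization dummy-medium cpu=2 memory=2048
# pcs resource utilization dummy-large cpu=1 memory=3072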
A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements, as defined by the utilization attributes.
After you have configured the capacities your nodes provide and the capacities your resources require, you need to set the placement-strategy cluster property, otherwise the capacity configurations have no effect. For information on setting cluster properties, see Chapter 12, Pacemaker Cluster Properties.
Four values are available for the placement-strategy cluster property:
default — Utilization values are not taken into account at all. Resources are allocated according to allocation scores. If scores are equal, resources are evenly distributed across nodes.
utilization — Utilization values are taken into account only when deciding whether a node is considered eligible (i.e. whether it has sufficient free capacity to satisfy the resource's requirements). Load-balancing is still done based on the number of resources allocated to a node.
balanced — Utilization values are taken into account when deciding whether a node is eligible to serve a resource and when load-balancing, so an attempt is made to spread the resources in a way that optimizes resource performance.
minimal — Utilization values are taken into account only when deciding whether a node is eligible to serve a resource. For load-balancing, an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes.
The following example command sets the value of placement-strategy to balanced. After running this command, Pacemaker will ensure the load from your resources will be distributed evenly throughout the cluster, without the need for complicated sets of colocation constraints.
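A sketch of that command follows.
# pcs property set placement-strategy=balanced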
9.5.3.1. Node Preference
Pacemaker determines which node is preferred when allocating resources according to the following strategy.
The node with the highest node weight gets consumed first. Node weight is a score maintained by the cluster to represent node health.
If multiple nodes have the same node weight:
If the placement-strategy cluster property is default or utilization:
The node that has the least number of allocated resources gets consumed first.
If the numbers of allocated resources are equal, the first eligible node listed in the CIB gets consumed first.
If the placement-strategy cluster property is balanced:
The node that has the most free capacity gets consumed first.
If the free capacities of the nodes are equal, the node that has the least number of allocated resources gets consumed first.
If the free capacities of the nodes are equal and the number of allocated resources is equal, the first eligible node listed in the CIB gets consumed first.
If the placement-strategy cluster property is minimal, the first eligible node listed in the CIB gets consumed first.
9.5.3.2. Node Capacity
Pacemaker determines which node has the most free capacity according to the following strategy.
If only one type of utilization attribute has been defined, free capacity is a simple numeric comparison.
If multiple types of utilization attributes have been defined, then the node that is numerically highest in the most attribute types has the most free capacity. For example:
If NodeA has more free CPUs, and NodeB has more free memory, then their free capacities are equal.
If NodeA has more free CPUs, while NodeB has more free memory and storage, then NodeB has more free capacity.
Pacemaker determines which resource is allocated first according to the following strategy.
The resource that has the highest priority gets allocated first. For information on setting priority for a resource, see Table 6.3, "Resource Meta Options".
If the priorities of the resources are equal, the resource that has the highest score on the node where it is running gets allocated first, to prevent resource shuffling.
If the resource scores on the nodes where the resources are running are equal or the resources are not running, the resource that has the highest score on the preferred node gets allocated first. If the resource scores on the preferred node are equal in this case, the first runnable resource listed in the CIB gets allocated first.
To ensure that Pacemaker's placement strategy for resources works most effectively, you should take the following considerations into account when configuring your system.
If the physical capacity of your nodes is being used to near maximum under normal conditions, then problems could occur during failover. Even without the utilization feature, you may start to experience timeouts and secondary failures.
Build some buffer into the capabilities you configure for the nodes. Advertise slightly more node resources than you physically have, on the assumption that a Pacemaker resource will not use 100% of the configured amount of CPU, memory, and so forth all the time. This practice is sometimes called overcommit.
If the cluster is going to sacrifice services, it should be the ones you care about least. Ensure that resource priorities are properly set so that your most important resources are scheduled first. For information on setting resource priorities, see Table 6.3, "Resource Meta Options".
Chapter 10. Cluster Quorum
There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 10.1, "Quorum Options" summarizes these options.
Table 10.1. Quorum Options
--auto_tie_breaker: When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate.
--wait_for_all: When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time.
--last_man_standing: When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices.
--last_man_standing_window: The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes.
For further information about configuring and using these options, see the votequorum(5) man page.
10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)
Once a cluster is running, you can enter the following cluster quorum commands.
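As a sketch, the quorum configuration and the quorum runtime status can be displayed with the following commands, both of which are used later in this chapter.
# pcs quorum config
# pcs quorum status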
If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum.
Warning
Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled.
The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload.
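A sketch of the command, assuming you want the live cluster to expect three votes, follows.
# pcs quorum expected-votes 3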
10.3. Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)
As of Red Hat Enterprise Linux 7.3, you can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum(5) man page.
The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running.
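A sketch of such a sequence, assuming a change to the wait_for_all option, follows; the pcs quorum config output shown next reflects the updated value.
# pcs cluster stop --all
# pcs quorum update wait_for_all=1
# pcs cluster start --all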
[root@node1:~]# pcs quorum config
Options:
wait_for_all: 1
Note
This command should be used with extreme caution. Before issuing this command, it is
imperative that you ensure that nodes that are not currently in the cluster are switched off and
have no access to shared resources.
# pcs cluster quorum unblock
Important
The quorum device feature is provided for technical preview only. For details on what "technical preview" means, see Technology Preview Features Support Scope.
As of Red Hat Enterprise Linux 7.3, you can configure a separate quorum device which acts as a
third-party arbitration device for the cluster. Its primary use is to allow a cluster to sustain more node
failures than standard quorum rules allow. A quorum device is recommended for clusters with an
even number of nodes and highly recommended for two-node clusters.
You must take the following into account when configuring a quorum device.
It is recommended that a quorum device be run on a different physical network at the same site as the cluster that uses the quorum device. Ideally, the quorum device host should be in a separate rack than the main cluster, or at least on a separate PSU and not on the same network segment as the corosync ring or rings.
You cannot use more than one quorum device in a cluster at the same time.
Although you cannot use more than one quorum device in a cluster at the same time, a single quorum device may be used by several clusters at the same time. Each cluster using that quorum device can use different algorithms and quorum options, as those are stored on the cluster nodes themselves. For example, a single quorum device can be used by one cluster with an ffsplit (fifty/fifty split) algorithm and by a second cluster with an lms (last man standing) algorithm.
Configuring a quorum device for a cluster requires that you install the following packages:
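In outline, and assuming the standard package split: the corosync-qdevice package goes on the nodes of the existing cluster, while the pcs and corosync-qnetd packages go on the quorum device host. A sketch of the install commands follows.
# yum install corosync-qdevice
# yum install pcs corosync-qnetd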
This section provides a sample procedure to configure a quorum device in a Red Hat high availability cluster. The following procedure configures a quorum device and adds it to the cluster. In this example:
The quorum device model is net, which is currently the only supported model. The net model supports the following algorithms:
ffsplit: fifty-fifty split. This provides exactly one vote to the partition with the highest number of active nodes.
lms: last-man-standing. If the node is the only one left in the cluster that can see the qnetd server, then it returns a vote.
Warning
The LMS algorithm allows the cluster to remain quorate even with only one remaining
node, but it also means that the voting power of the quorum device is great since it is the
same as number_of_nodes - 1. Losing connection with the quorum device means losing
number_of_nodes - 1 votes, which means that only a cluster with all nodes active can
remain quorate (by overvoting the quorum device); any other cluster becomes
inquorate.
For more detailed information on the implementation of these algorithms, see the corosync-qdevice(8) man page.
The following procedure configures a quorum device and adds that quorum device to a cluster.
1. On the node that you will use to host your quorum device, configure the quorum device with the following command. This command configures and starts the quorum device model net and configures the device to start on boot.
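A sketch of that command, run on the quorum device host, follows.
# pcs qdevice setup model net --enable --start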
After configuring the quorum device, you can check its status. This should show that the corosync-qnetd daemon is running and, at this point, there are no clients connected to it. The --full command option provides detailed output.
[root@qdevice:~]# pcs qdevice status net --full
QNetd address: *:5403
TLS: Supported (client certificate required)
Connected clients: 0
Connected clusters: 0
Maximum send/receive size: 32768/32768 bytes
2. From one of the nodes in the existing cluster, authenticate user hacluster on the node that is hosting the quorum device.
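A sketch of the authentication command, assuming the quorum device host is named qdevice, follows.
# pcs cluster auth qdevice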
Before adding the quorum device, you can check the current configuration and status for the quorum device for later comparison. The output for these commands indicates that the cluster is not yet using a quorum device.
[root@node1:~]# pcs quorum config
Options:
[root@node1:~]# pcs quorum status
Quorum information
------------------
Date: Wed Jun 29 13:15:36 2016
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 1
Ring ID: 1/8272
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate
Membership information
----------------------
Nodeid Votes Qdevice Name
1 1 NR node1 (local)
2 1 NR node2
The following command adds the quorum device that you have previously created to the cluster. You cannot use more than one quorum device in a cluster at the same time. However, one quorum device can be used by several clusters at the same time. This example command configures the quorum device to use the ffsplit algorithm. For information on the configuration options for the quorum device, see the corosync-qdevice(8) man page.
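A sketch of the add command for this example follows.
# pcs quorum device add model net host=qdevice algorithm=ffsplit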
From the cluster side, you can execute the following commands to see how the configuration has changed.
[root@node1:~]# pcs quorum config
Options:
Device:
  Model: net
    algorithm: ffsplit
    host: qdevice
The pcs quorum status command shows the quorum runtime status, indicating that the quorum device is in use.
[root@node1:~]# pcs quorum status
Quorum information
------------------
Date: Wed Jun 29 13:17:02 2016
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 1
Ring ID: 1/8272
Quorate: Yes
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice
Membership information
----------------------
Nodeid Votes Qdevice Name
1 1 A,V,NMW node1 (local)
2 1 A,V,NMW node2
0 1 Qdevice
[root@node1:~]# pcs quorum device status
Qdevice information
-------------------
Model: Net
Node ID: 1
Configured node list:
0 Node ID = 1
1 Node ID = 2
Membership node list: 1, 2
Qdevice-net information
----------------------
Cluster name: mycluster
QNetd host: qdevice:5403
Algorithm: ffsplit
Tie-breaker: Node with lowest node ID
State: Connected
From the quorum device side, you can execute the following status command, which shows the status of the corosync-qnetd daemon.
[root@qdevice:~]# pcs qdevice status net --full
QNetd address: *:5403
TLS: Supported (client certificate required)
Connected clients: 2
Connected clusters: 1
Maximum send/receive size: 32768/32768 bytes
Cluster " mycluster" :
Algorithm: ffsplit
PCS provides the ability to manage the quorum device service on the local host (corosync-qnetd), as shown in the following example commands. Note that these commands affect only the corosync-qnetd service.
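A sketch of those service management commands, which operate on the corosync-qnetd instance on the local host, follows.
# pcs qdevice start net
# pcs qdevice stop net
# pcs qdevice enable net
# pcs qdevice disable net
# pcs qdevice kill net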
The following sections describe the PCS commands that you can use to manage the quorum device settings in a cluster, showing examples that are based on the quorum device configuration in Section 10.5.2, "Configuring a Quorum Device".
You can change the setting of a quorum device with the pcs quorum device update command.
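A sketch of an update, changing the algorithm of the configured device, follows.
# pcs quorum device update model algorithm=lms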
Warning
Use the following command to remove a quorum device configured on a cluster node.
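A sketch of the removal command follows.
# pcs quorum device remove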
After you have removed a quorum device, you should see the following error message when
displaying the quorum device status.
[root@node1:~]# pcs quorum device status
Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory
To disable and stop a quorum device on the quorum device host and delete all of its configuration
files, use the following command.
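A sketch of that command, run on the quorum device host, follows.
# pcs qdevice destroy net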
Chapter 11. Pacemaker Rules
Another use of rules might be to assign machines to different processing groups (using a node
attribute) based on time and to then use that attribute when creating location constraints.
Each rule can contain a number of expressions, date-expressions and even other rules. The results of the expressions are combined based on the rule's boolean-op field to determine if the rule ultimately evaluates to true or false. What happens next depends on the context in which the rule is being used.
Each rule has the following fields.
role: Limits the rule to apply only when the resource is in that role. Allowed values: Started, Slave, and Master. NOTE: A rule with role="Master" cannot determine the initial location of a clone instance. It will only affect which of the active instances will be promoted.
score: The score to apply if the rule evaluates to true. Limited to use in rules that are part of location constraints.
score-attribute: The node attribute to look up and use as a score if the rule evaluates to true. Limited to use in rules that are part of location constraints.
boolean-op: How to combine the result of multiple expression objects. Allowed values: and and or. The default value is and.
A node attribute expression has the following fields.
value: User-supplied value for comparison.
attribute: The node attribute to test.
type: Determines how the value(s) should be tested. Allowed values: string, integer, version.
operation: The comparison to perform. Allowed values: lt, gt, lte, gte, eq, ne, defined, not_defined.
A date/time expression has the following fields.
start: A date/time conforming to the ISO8601 specification.
end: A date/time conforming to the ISO8601 specification.
operation: Compares the current date/time with the start or the end date or both the start and end date, depending on the context. Allowed values: gt, lt, in_range, date_spec.
Date specifications are used to create cron-like expressions relating to time. Each field can contain a single number or a single range. Instead of defaulting to zero, any field not supplied is ignored.
For example, monthdays="1" matches the first day of every month and hours="09-17" matches the hours between 9 am and 5 pm (inclusive). However, you cannot specify weekdays="1,2" or weekdays="1-2,5-6" since they contain multiple ranges.
A date specification has the following fields.
id: A unique name for the date.
hours: Allowed values: 0-23.
monthdays: Allowed values: 0-31 (depending on month and year).
weekdays: Allowed values: 1-7 (1=Monday, 7=Sunday).
yeardays: Allowed values: 1-366 (depending on the year).
months: Allowed values: 1-12.
weeks: Allowed values: 1-53 (depending on weekyear).
years: Year according to the Gregorian calendar.
weekyears: May differ from Gregorian years; for example, 2005-001 Ordinal is also 2005-01-01 Gregorian and also 2004-W53-6 Weekly.
moon: Allowed values: 0-7 (0 is new, 4 is full moon).
Durations are used to calculate a value for end when one is not supplied to in_range operations. They contain the same fields as date_spec objects but without the limitations (i.e., you can have a duration of 19 months). Like date_specs, any field not supplied is ignored.
To configure a rule, use the following command. If score is omitted, it defaults to INFINITY. If id is omitted, one is generated from the constraint_id. The rule_type should be expression or date_expression.
To remove a rule, use the following. If the rule that you are removing is the last rule in its constraint, the constraint will be removed.
The following command configures an expression that is true if now is any time in the year 2005.
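A sketch of such a constraint, assuming a resource named Webserver as in the next example, follows.
# pcs constraint location Webserver rule score=INFINITY date-spec years=2005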
The following command configures an expression that is true from 9 am to 5 pm, Monday through Friday. Note that the hours value of 16 matches up to 16:59:59, as the numeric value (hour) still matches.
# pcs constraint location Webserver rule score=INFINITY date-spec hours="9-16" weekdays="1-5"
The following command configures an expression that is true when there is a full moon on Friday the
thirteenth.
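A sketch of that constraint follows.
# pcs constraint location Webserver rule date-spec weekdays=5 monthdays=13 moon=4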
You can use a rule to determine a resource's location with the following command. The expression can take forms that include the following:
defined|not_defined attribute
date-spec date_spec_options
Chapter 12. Pacemaker Cluster Properties
Section 12.2, "Setting and Removing Cluster Properties" describes how to set cluster properties. Section 12.3, "Querying Cluster Property Settings" describes how to list the currently set cluster properties.
Table 12.1, "Cluster Properties" summarizes the Pacemaker cluster properties, showing the default values of the properties and the possible values you can set for those properties.
Note
In addition to the properties described in this table, there are additional cluster properties that are exposed by the cluster software. For these properties, it is recommended that you not change their values from their defaults.
batch-limit (default: 30): The number of jobs that the transition engine (TE) is allowed to execute in parallel. The "correct" value will depend on the speed and load of your network and cluster nodes.
migration-limit (default: -1, unlimited): The number of migration jobs that the TE is allowed to execute in parallel on a node.
no-quorum-policy (default: stop): What to do when the cluster does not have quorum. Allowed values: ignore, freeze, stop, suicide.
symmetric-cluster (default: true): Indicates whether resources can run on any node by default.
stonith-enabled (default: true): Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this true.
maintenance-mode (default: false): Maintenance Mode tells the cluster to go to a "hands off" mode, and not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it.
shutdown-escalation (default: 20min): The time after which to give up trying to shut down gracefully and just exit. Advanced use only.
stonith-timeout (default: 60s): How long to wait for a STONITH action to complete.
stop-all-resources (default: false): Should the cluster stop all resources.
default-resource-stickiness (default: 5000): Indicates how much a resource prefers to stay where it is. It is recommended that you set this value as a resource/operation default rather than as a cluster option.
is-managed-default (default: true): Indicates whether the cluster is allowed to start and stop a resource. It is recommended that you set this value as a resource/operation default rather than as a cluster option.
enable-acl (default: false): (Red Hat Enterprise Linux 7.1 and later) Indicates whether the cluster can use access control lists, as set with the pcs acl command.
placement-strategy (default: default): Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. For information on utilization attributes and placement strategies, see Section 9.5, "Utilization and Placement Strategy".
For example, to set the value of symmetric-cluster to false, use the following command.
You can remove a cluster property from the configuration with the following command.
Alternately, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false, the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true, which is its default value.
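A sketch of the three forms described above (set, unset, and restore-by-blank-value), in the same order, follows.
# pcs property set symmetric-cluster=false
# pcs property unset symmetric-cluster
# pcs property set symmetric-cluster=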
In most cases, when you use the pcs command to display values of the various cluster components, you can use pcs list or pcs show interchangeably. In the following examples, pcs list is the format used to display an entire list of all settings for more than one property, while pcs show is the format used to display the values of a specific property.
To display the values of the property settings that have been set for the cluster, use the following pcs command.
To display all of the values of the property settings for the cluster, including the default values of the property settings that have not been explicitly set, use the following command.
To display the current value of a specific cluster property, use the following command.
For example, to display the current value of the cluster-infrastructure property, execute the following command:
For informational purposes, you can display a list of all of the default values for the properties, whether they have been set to a value other than the default or not, by using the following command.
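A sketch of the display commands described above, in the same order, follows.
# pcs property list
# pcs property list --all
# pcs property show cluster-infrastructure
# pcs property list --defaults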
Chapter 13. Triggering Scripts for Cluster Events
As of Red Hat Enterprise Linux 7.3, you can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. This is the preferred, simpler method of configuring cluster alerts. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)".
The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. For information on the ClusterMon resource see Section 13.2, "Event Notification with Monitoring Resources".
13.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)
You can create Pacemaker alert agents to take some external action when a cluster event occurs. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything desired with this information, such as send an email message, log to a file, or update a monitoring system.
General information on configuring and administering alert agents is provided in Section 13.1.2, "Alert Creation", Section 13.1.3, "Displaying, Modifying, and Removing Alerts", Section 13.1.4, "Alert Recipients", Section 13.1.5, "Alert Meta Options", and Section 13.1.6, "Alert Configuration Command Examples".
You can write your own alert agents for a Pacemaker alert to call. For information on writing alert agents, see Section 13.1.7, "Writing an Alert Agent".
When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments.
To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_snmp.sh.sample script as alert_snmp.sh.
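A sketch of that install command, assuming the sample agents ship under /usr/share/pacemaker/alerts/ and that you install the copy under /var/lib/pacemaker/, follows.
# install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh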
After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. For information about meta options, see Section 13.1.5, "Alert Meta Options". After configuring the alert, this example configures a recipient for the alert and displays the alert configuration.
The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration.
For more information on the format of the pcs alert create and pcs alert recipient add commands, see Section 13.1.2, "Alert Creation" and Section 13.1.4, "Alert Recipients".
The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id, one will be generated. For information on alert meta options, see Section 13.1.5, "Alert Meta Options".
Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes.
The following example creates a simple alert that will call my-script.sh for each event.
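A minimal sketch of that alert, with a hypothetical script location, follows.
# pcs alert create id=my_alert path=/path/to/my-script.sh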
For an example that shows how to create a cluster alert that uses one of the sample alert agents, see Section 13.1.1, "Using the Sample Alert Agents".
The following command shows all configured alerts along with the values of the configured options.
The following command updates an existing alert with the specified alert-id value.
pcs alert update alert-id [path=path] [description=description] [options [option=value]...] [meta [meta-option=value]...]
The following command removes an alert with the specified alert-id value.
Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient.
The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports.
The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert. This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable.
# pcs alert recipient add my-alert my-alert-recipient id=my-recipient-id options value=some-address
As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. Table 13.1, "Alert Meta Options" describes the alert meta options. Meta options can be configured per alert agent as well as per recipient.
The following example configures an alert that calls the script my-script.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2. The script will get called twice for each event, with each call using a 15-second timeout. One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c.
# pcs alert create id=my-alert path=/path/to/my-script.sh meta timeout=15s
# pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format="%D %H:%M"
# pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=%c
The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts.
The following commands create a simple alert, add two recipients to the alert, and display the configured values.
Since no alert ID value is specified, the system creates an alert ID value of alert.
The first recipient creation command specifies a recipient of rec_value. Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID.
The second recipient creation command specifies a recipient of rec_value2. This command specifies a recipient ID of my-recipient for the recipient.
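A sketch of those commands, matching the descriptions above (the agent location /my/path is hypothetical), follows.
# pcs alert create path=/my/path
# pcs alert recipient add alert rec_value
# pcs alert recipient add alert rec_value2 id=my-recipient
# pcs alert config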
The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient. Since no recipient ID is specified, the system provides a recipient ID of my-alert-recipient.
The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient.
The following command removes the recipient my-alert-recipient from alert.
Recipients:
Recipient: alert-recipient (value=rec_value)
There are three types of Pacemaker alerts: node alerts, fencing alerts, and resource alerts. The
environment variables that are passed to the alert agents can differ, depending on the type of alert.
Table 13.2, "Environment Variables Passed to Alert Agents" describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type.
When writing an alert agent, you must take the following concerns into account.
Alert agents may be called with no recipient (if none is configured), so the agent must be able to
handle this situation, even if it only exits in that case. Users may modify the configuration in
stages, and add a recipient later.
If more than one recipient is configured for an alert, the alert agent will be called once per
recipient. If an agent is not able to run concurrently, it should be configured with only a single
recipient. The agent is free, however, to interpret the recipient as a list.
When a cluster event occurs, all alerts are fired off at the same time as separate processes.
Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them.
Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges.
Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format), CRM_alert_recipient, and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster-level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection.
If a cluster contains resources for which the on-fail parameter is set to fence, there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the STONITH daemon and the crmd daemon will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent.
Note
The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_. One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. For information on configuring scripts that are triggered by the ClusterMon resource, see Section 13.2, "Event Notification with Monitoring Resources".
The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals.
By default, the crm_mon command listens for resource events only; to enable listening for fencing events you can provide the --watch-fencing option to the command when you configure the ClusterMon resource. The crm_mon command does not monitor for membership issues but will print a message when fencing is started and when monitoring is started for that node, which would imply that a member just joined the cluster.
The ClusterMon resource can execute an external program to determine what to do with cluster notifications by means of the extra_options parameter. Table 13.3, "Environment Variables Passed to the External Monitor Program" lists the environment variables that are passed to that program, which describe the type of cluster event that occurred.
Table 13.3. Environment Variables Passed to the External Monitor Program
The following example configures a ClusterMon resource that executes the external program crm_logger.sh which will log the event notifications specified in the program.
The following procedure creates the crm_logger.sh program that this resource will use.
1. On one node of the cluster, create the program that will log the event notifications.
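A sketch of such a script, which simply writes the CRM_notify_ environment variables to the system log, follows; the location /usr/local/bin/crm_logger.sh matches the resource definition below.
# cat <<-END >/usr/local/bin/crm_logger.sh
#!/bin/sh
logger -t "ClusterMon-External" "${CRM_notify_node} ${CRM_notify_rsc} \
${CRM_notify_task} ${CRM_notify_desc} ${CRM_notify_rc} \
${CRM_notify_target_rc} ${CRM_notify_status} ${CRM_notify_recipient}";
exit;
END
# chmod 755 /usr/local/bin/crm_logger.sh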
3. Use the scp command to copy the crm_logger.sh program to the other nodes of the cluster, putting the program in the same location on those nodes and setting the same ownership and permissions for the program.
The following example configures the ClusterMon resource, named ClusterMon-External, that runs the program /usr/local/bin/crm_logger.sh. The ClusterMon resource outputs the cluster status to an html file, which is /var/www/html/cluster_mon.html in this example. The pidfile detects whether ClusterMon is already running; in this example that file is /var/run/crm_mon-external.pid. This resource is created as a clone so that it will run on every node in the cluster. The watch-fencing option is specified to enable monitoring of fencing events in addition to resource events, including the start, stop, and monitor operations of the fencing resource.
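A sketch of that resource definition, assuming the paths given above, follows.
# pcs resource create ClusterMon-External ClusterMon user=root update=10 \
  extra_options="-E /usr/local/bin/crm_logger.sh --watch-fencing" \
  htmlfile=/var/www/html/cluster_mon.html pidfile=/var/run/crm_mon-external.pid clone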
Note
The crm_mon command that this resource executes and which could be run manually is as follows:
The following example shows the format of the output of the monitoring notifications that this example
yields.
Chapter 14. Configuring Multi-Site Clusters with Pacemaker (Technical Preview)
Important
The Booth ticket manager is provided for technical preview only. For details on what "technical preview" means, see Technology Preview Features Support Scope.
When a cluster spans more than one site, issues with network connectivity between the sites can lead to split-brain situations. When connectivity drops, there is no way for a node on one site to determine whether a node on another site has failed or is still functioning with a failed site interlink. In addition, it can be problematic to provide high availability services across two sites which are too far apart to keep synchronous.
To address these issues, Red Hat Enterprise Linux release 7.3 and later provides the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. The Booth ticket manager is a distributed service that is meant to be run on a different physical network than the networks that connect the cluster nodes at particular sites. It yields another, loose cluster, a Booth formation, that sits on top of the regular clusters at the sites. This aggregated communication layer facilitates consensus-based decision processes for individual Booth tickets.
A Booth ticket is a singleton in the Booth formation and represents a time-sensitive, movable unit of authorization. Resources can be configured to require a certain ticket to run. This can ensure that resources are run at only one site at a time, for which a ticket or tickets have been granted.
You can think of a Booth formation as an overlay cluster consisting of clusters running at different sites, where all the original clusters are independent of each other. It is the Booth service which communicates to the clusters whether they have been granted a ticket, and it is Pacemaker that determines whether to run resources in a cluster based on a Pacemaker ticket constraint. This means that when using the ticket manager, each of the clusters can run its own resources as well as shared resources. For example, there can be resources A, B, and C running only in one cluster, resources D, E, and F running only in the other cluster, and resources G and H running in either of the two clusters as determined by a ticket. It is also possible to have an additional resource J that could run in either of the two clusters as determined by a separate ticket.
The following procedure provides an outline of the steps you follow to configure a multi-site configuration that uses the Booth ticket manager.
The name of the Booth ticket that this configuration uses is apacheticket.
These example commands assume that the cluster resources for an Apache service have been configured as part of the resource group apachegroup for each cluster. It is not required that the resources and resource groups be the same on each cluster to configure a ticket constraint for those resources, since the Pacemaker instance for each cluster is independent, but that is a common failover scenario.
For a full cluster configuration procedure that configures an Apache service in a cluster, see the example in High Availability Add-On Administration.
Note that at any time in the configuration procedure you can enter the pcs booth config command to display the booth configuration for the current node or cluster or the pcs booth status command to display the current status of booth on the local node.
1. Create a Booth configuration on one node of one cluster. The addresses you specify for each cluster and for the arbitrator must be IP addresses. For each cluster, you specify a floating IP address.
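A sketch of that command, with hypothetical addresses (two cluster floating IP addresses and one arbitrator address), follows.
[cluster1-node1 ~] # pcs booth setup sites 192.168.99.10 192.168.219.10 arbitrators 192.168.99.100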
This command creates the configuration files /etc/booth/booth.conf and /etc/booth/booth.key on the node from which it is run.
2. Create a ticket for the Booth configuration. This is the ticket that you will use to define the resource constraint that will allow resources to run only when this ticket has been granted to the cluster.
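A sketch of the ticket creation command follows.
[cluster1-node1 ~] # pcs booth ticket add apacheticket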
This basic failover configuration procedure uses only one ticket, but you can create additional tickets for more complicated scenarios where each ticket is associated with a different resource or resources.
[cluster1-node1 ~] # pcs booth sync
4. From the arbitrator node, pull the Booth configuration to the arbitrator. If you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration.
5. Pull the Booth configuration to the other cluster and synchronize to all the nodes of that cluster. As with the arbitrator node, if you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration.
Note
You must not manually start or enable Booth on any of the nodes of the clusters since Booth runs as a Pacemaker resource in those clusters.
[arbitrator-node ~] # pcs booth start
[arbitrator-node ~] # pcs booth enable
7. Configure Booth to run as a cluster resource on both cluster sites. This creates a resource group with booth-ip and booth-service as members of that group.
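A sketch of those commands, using the hypothetical floating IP addresses from the setup step and run on one node of each cluster, follows.
[cluster1-node1 ~] # pcs booth create ip 192.168.99.10
[cluster2-node1 ~] # pcs booth create ip 192.168.219.10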
8. Add a ticket constraint to the resource group you have defined for each cluster.
You can run the following command to display the currently configured ticket constraints.
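A sketch of the constraint commands for the apachegroup resource group follows.
[cluster1-node1 ~] # pcs constraint ticket add apacheticket apachegroup
[cluster1-node1 ~] # pcs constraint ticket show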
9. Grant the ticket you created for this setup to the first cluster.
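A sketch of the grant command follows.
[cluster1-node1 ~] # pcs booth ticket grant apacheticket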
Note that it is not necessary to have defined ticket constraints before granting a ticket. Once
you have initially granted a ticket to a cluster, then Booth takes over ticket management
unless you override this manually with the pcs booth ticket revoke command. For information on the pcs booth administration commands, see the PCS help screen for the pcs booth command.
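A sketch of this step, assuming the ticket name used in this example:
[cluster1-node1 ~]# pcs booth ticket grant apacheticket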
It is possible to add or remove tickets at any time, even after completing this procedure. After adding
or removing a ticket, however, you must synchronize the configuration files to the other nodes and
clusters as well as to the arbitrator and grant the ticket as is shown in this procedure.
For information on additional Booth administration commands that you can use for cleaning up and removing Booth configuration files, tickets, and resources, see the PCS help screen for the pcs booth command.
Appendix A. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
A.1. Cluster Creation with rgmanager and with Pacemaker
Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. Section A.2, “Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7” summarizes the Pacemaker installation differences between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7.
Table A.1, “Comparison of Cluster Configuration with rgmanager and with Pacemaker” provides a comparative summary of how you configure the components of a cluster with rgmanager in Red Hat Enterprise Linux 6 and with Pacemaker in Red Hat Enterprise Linux 7.
Table A.1. Comparison of Cluster Configuration with rgmanager and with Pacemaker
Controlling access to configuration tools
rgmanager (Red Hat Enterprise Linux 6): For luci, the root user or a user with luci permissions can access luci. All access requires the ricci password for the node.
Pacemaker (Red Hat Enterprise Linux 7): The pcsd GUI requires that you authenticate as user hacluster, which is the common system user. The root user can set the password for hacluster.
Cluster creation
rgmanager (Red Hat Enterprise Linux 6): Name the cluster and define which nodes to include in the cluster with luci or ccs, or directly edit the cluster.conf file.
Pacemaker (Red Hat Enterprise Linux 7): Name the cluster and include nodes with the pcs cluster setup command or with the pcsd Web UI. You can add nodes to an existing cluster with the pcs cluster node add command or with the pcsd Web UI.
Propagating cluster configuration to all nodes
rgmanager (Red Hat Enterprise Linux 6): When configuring a cluster with luci, propagation is automatic. With ccs, use the --sync option. You can also use the cman_tool version -r command.
Pacemaker (Red Hat Enterprise Linux 7): Propagation of the cluster and Pacemaker configuration files, corosync.conf and cib.xml, is automatic on cluster setup or when adding a node or resource.
Global cluster properties
rgmanager (Red Hat Enterprise Linux 6): The following features are supported with rgmanager in Red Hat Enterprise Linux 6:
Pacemaker (Red Hat Enterprise Linux 7): Pacemaker in Red Hat Enterprise Linux 7 supports the following features for a cluster:
Logging
rgmanager (Red Hat Enterprise Linux 6): You can set global and daemon-specific logging configuration.
Pacemaker (Red Hat Enterprise Linux 7): See the file /etc/sysconfig/pacemaker for information on how to configure logging manually.
Cluster status
rgmanager (Red Hat Enterprise Linux 6): On luci, the current status of the cluster is visible in the various components of the interface, which can be refreshed. You can use the --getconf option of the ccs command to see the current configuration file. You can use the clustat command to display cluster status.
Pacemaker (Red Hat Enterprise Linux 7): You can display the current cluster status with the pcs status command.
Resources
rgmanager (Red Hat Enterprise Linux 6): You add resources of defined types and configure resource-specific properties with luci or the ccs command, or by editing the cluster.conf configuration file.
Pacemaker (Red Hat Enterprise Linux 7): You add resources of defined types and configure resource-specific properties with the pcs resource create command or with the pcsd Web UI. For general information on configuring cluster resources with Pacemaker, refer to Chapter 6, Configuring Cluster Resources.
Resource administration: Moving, starting, and stopping resources
rgmanager (Red Hat Enterprise Linux 6): With luci, you can manage clusters, individual cluster nodes, and cluster services. With the ccs command, you can manage a cluster. You can use the clusvcadm command to manage cluster services.
Pacemaker (Red Hat Enterprise Linux 7): You can temporarily disable a node so that it cannot host resources with the pcs cluster standby command, which causes the resources to migrate. You can stop a resource with the pcs resource disable command.
Removing a cluster configuration completely
rgmanager (Red Hat Enterprise Linux 6): With luci, you can select all nodes in a cluster for deletion to delete a cluster entirely. You can also remove the cluster.conf from each node in the cluster.
Pacemaker (Red Hat Enterprise Linux 7): You can remove a cluster configuration with the pcs cluster destroy command.
Resources active on multiple nodes, resources active on multiple nodes in multiple modes
rgmanager (Red Hat Enterprise Linux 6): No equivalent.
Pacemaker (Red Hat Enterprise Linux 7): With Pacemaker, you can clone resources so that they can run in multiple nodes, and you can define cloned resources as master and slave resources so that they can run in multiple modes. For information on cloned resources and master/slave resources, refer to Chapter 9, Advanced Resource Configuration.
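As a brief illustration of some of the Pacemaker-side commands referenced in the table above, the following sketch assumes a hypothetical node named z1.example.com and a hypothetical resource named VirtualIP:
# pcs cluster standby z1.example.com
# pcs resource disable VirtualIP
# pcs cluster unstandby z1.example.com
The final pcs cluster unstandby command returns the node to full service once it may host resources again.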
A.2. Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7
Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. There are, however, some differences in cluster installation between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 when using Pacemaker.
The following commands install the Red Hat High Availability Add-On software packages that Pacemaker requires in Red Hat Enterprise Linux 6 and prevent corosync from starting without cman. You must enter these commands on each node in the cluster.
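A sketch of these commands; the exact package set may vary with the Red Hat Enterprise Linux 6 minor release, so treat this as an assumption rather than a definitive listing:
[root@rhel6-node ~]# yum install pacemaker cman pcs
[root@rhel6-node ~]# chkconfig corosync off
Disabling the corosync init script in this way is what keeps corosync from starting outside of cman control.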
On each node in the cluster, you set up a password for the pcs administration account named hacluster, and you start and enable the pcsd service.
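A sketch of these steps on Red Hat Enterprise Linux 6, using the SysV service tools of that release:
[root@rhel6-node ~]# passwd hacluster
[root@rhel6-node ~]# service pcsd start
[root@rhel6-node ~]# chkconfig pcsd on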
On one node in the cluster, you then authenticate the administration account for the nodes of the
cluster.
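A sketch of this step, assuming hypothetical node names:
[root@rhel6-node1 ~]# pcs cluster auth rhel6-node1 rhel6-node2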
In Red Hat Enterprise Linux 7, you run the following commands on each node in the cluster to install the Red Hat High Availability Add-On software packages that Pacemaker requires, set up a password for the pcs administration account named hacluster, and start and enable the pcsd service.
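A sketch of these commands on Red Hat Enterprise Linux 7; the fence-agents-all package choice is an assumption, since individual fence agent packages can be installed instead:
[root@rhel7-node ~]# yum install pcs pacemaker fence-agents-all
[root@rhel7-node ~]# passwd hacluster
[root@rhel7-node ~]# systemctl start pcsd.service
[root@rhel7-node ~]# systemctl enable pcsd.service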
In Red Hat Enterprise Linux 7, as in Red Hat Enterprise Linux 6, you authenticate the administration
account for the nodes of the cluster by running the following command on one node in the cluster.
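A sketch of this step, again assuming hypothetical node names:
[root@rhel7-node1 ~]# pcs cluster auth rhel7-node1 rhel7-node2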
For further information on installation in Red Hat Enterprise Linux 7, see Chapter 1, Red Hat High
Availability Add-On Configuration and Management Reference Overview and Chapter 4, Cluster Creation
and Administration.
Index
- , Cluster Creation
A
Action
- Property
- enabled, Resource Operations
- id, Resource Operations
- interval, Resource Operations
- name, Resource Operations
- on-fail, Resource Operations
- timeout, Resource Operations
B
batch-limit, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
C
Clone
- Option
- clone-max, Creating and Removing a Cloned Resource
- clone-node-max, Creating and Removing a Cloned Resource
- globally-unique, Creating and Removing a Cloned Resource
- interleave, Creating and Removing a Cloned Resource
- notify, Creating and Removing a Cloned Resource
- ordered, Creating and Removing a Cloned Resource
- Duration, Durations
- Rule, Pacemaker Rules
- boolean-op, Pacemaker Rules
- role, Pacemaker Rules
- score, Pacemaker Rules
- score-attribute, Pacemaker Rules
Constraint Expression, Node Attribute Expressions, Time/Date Based Expressions
Constraint Rule, Pacemaker Rules
Constraints
- Colocation, Colocation of Resources
- Location
- id, Location Constraints
- score, Location Constraints
D
dampen, Moving Resources Due to Connectivity Changes
- Ping Resource Option, Moving Resources Due to Connectivity Changes
Duration, Durations
E
enable-acl, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
enabling
- resources, Enabling and Disabling Cluster Resources
F
failure-timeout, Resource Meta Options
- Resource Option, Resource Meta Options
G
globally-unique, Creating and Removing a Cloned Resource
- Clone Option, Creating and Removing a Cloned Resource
H
host_list, Moving Resources Due to Connectivity Changes
- Ping Resource Option, Moving Resources Due to Connectivity Changes
I
id, Resource Properties, Resource Operations, Date Specifications
- Action Property, Resource Operations
- Date Specification, Date Specifications
- Location Constraints, Location Constraints
- Multi-State Property, MultiState Resources: Resources That Have Multiple Modes
- Resource, Resource Properties
K
kind, Order Constraints
- Order Constraints, Order Constraints
L
Location
- Determine by Rules, Using Rules to Determine Resource Location
- score, Location Constraints
M
maintenance-mode, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
months, Date Specifications
- Date Specification, Date Specifications
moon, Date Specifications
- Date Specification, Date Specifications
Multi-State
- Option
- master-max, MultiState Resources: Resources That Have Multiple Modes
- master-node-max, MultiState Resources: Resources That Have Multiple Modes
- Property
- id, MultiState Resources: Resources That Have Multiple Modes
N
name, Resource Operations
- Action Property, Resource Operations
O
on-fail, Resource Operations
- Action Property, Resource Operations
Option
- batch-limit, Summary of Cluster Properties and Options
- clone-max, Creating and Removing a Cloned Resource
- clone-node-max, Creating and Removing a Cloned Resource
- cluster-delay, Summary of Cluster Properties and Options
- cluster-infrastructure, Summary of Cluster Properties and Options
- cluster-recheck-interval, Summary of Cluster Properties and Options
- dampen, Moving Resources Due to Connectivity Changes
- dc-version, Summary of Cluster Properties and Options
- default-action-timeout, Summary of Cluster Properties and Options
- default-resource-stickiness, Summary of Cluster Properties and Options
- enable-acl, Summary of Cluster Properties and Options
- failure-timeout, Resource Meta Options
- globally-unique, Creating and Removing a Cloned Resource
- host_list, Moving Resources Due to Connectivity Changes
- interleave, Creating and Removing a Cloned Resource
- is-managed, Resource Meta Options
- is-managed-default, Summary of Cluster Properties and Options
- last-lrm-refresh, Summary of Cluster Properties and Options
- maintenance-mode, Summary of Cluster Properties and Options
- master-max, MultiState Resources: Resources That Have Multiple Modes
- master-node-max, MultiState Resources: Resources That Have Multiple Modes
- migration-limit, Summary of Cluster Properties and Options
- migration-threshold, Resource Meta Options
- multiple-active, Resource Meta Options
- multiplier, Moving Resources D ue to Connectivity Changes
- no-quorum-policy, Summary of Cluster Properties and Options
- notify, Creating and Removing a Cloned Resource
- ordered, Creating and Removing a Cloned Resource
- pe-error-series-max, Summary of Cluster Properties and Options
- pe-input-series-max, Summary of Cluster Properties and Options
- pe-warn-series-max, Summary of Cluster Properties and Options
- placement-strategy, Summary of Cluster Properties and Options
Order
- kind, Order Constraints
Order Constraints, Order Constraints
- symmetrical, Order Constraints
Ordering, Order Constraints
overview
- features, new and changed, New and Changed Features
P
pe-error-series-max, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
Property
- enabled, Resource Operations
Q
Querying
- Cluster Properties, Querying Cluster Property Settings
R
Removing
- Cluster Properties, Setting and Removing Cluster Properties
Removing Properties, Setting and Removing Cluster Properties
requires, Resource Meta Options
Resource, Resource Properties
- Constraint
- Attribute Expression, Node Attribute Expressions
- Date Specification, Date Specifications
- Date/Time Expression, Time/Date Based Expressions
- Duration, Durations
- Rule, Pacemaker Rules
- Constraints
- Colocation, Colocation of Resources
- Order, Order Constraints
- Location
- Determine by Rules, Using Rules to Determine Resource Location
- Property
- id, Resource Properties
- provider, Resource Properties
resources
- cleanup, Cluster Resources Cleanup
- disabling, Enabling and Disabling Cluster Resources
- enabling, Enabling and Disabling Cluster Resources
S
score, Location Constraints, Pacemaker Rules
- Constraint Rule, Pacemaker Rules
- Location Constraints, Location Constraints
Setting
- Cluster Properties, Setting and Removing Cluster Properties
Setting Properties, Setting and Removing Cluster Properties
shutdown-escalation, Summary of Cluster Properties and Options
- Cluster Option, Summary of Cluster Properties and Options
status
- display, Displaying Cluster Status
T
target-role, Resource Meta Options
- Resource Option, Resource Meta Options
U
utilization attributes, Utilization and Placement Strategy
V
value, Node Attribute Expressions
- Constraint Expression, Node Attribute Expressions
W
weekdays, Date Specifications
- Date Specification, Date Specifications
Y
yeardays, Date Specifications
- Date Specification, Date Specifications