RHEL 8.3 introduced Ansible roles for automated deployments of Policy-Based Decryption (PBD)
solutions using Clevis and Tang. The rhel-system-roles package contains these system roles, related
examples, and the reference documentation.
The nbde_client System Role enables you to deploy multiple Clevis clients in an automated way. Note
that the nbde_client role supports only Tang bindings, and you cannot use it for TPM2 bindings at the
moment.
The nbde_client role requires volumes that are already encrypted using LUKS. This role supports binding
a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers - Tang servers. You can
either preserve the existing volume encryption with a passphrase or remove it. After removing the
passphrase, you can unlock the volume only using NBDE. This is useful when a volume is initially
encrypted using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not
find any of these valid, it attempts to retrieve a passphrase from an existing binding.
PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings
for the same device. The default slot is slot 1.
The nbde_client role also provides the state variable. Use the present value either to create a new
binding or to update an existing one. In contrast to the clevis luks bind command, you can also use
state: present to overwrite an existing binding in its device slot. The absent value removes a specified
binding.
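For illustration, a single binding entry that uses the slot and state variables might look like the following sketch; the device path, key file, slot number, and Tang server URL are placeholders:
nbde_client_bindings:
  - device: /dev/rhel/root
    encryption_key_src: /etc/luks/keyfile
    slot: 2
    state: present
    servers:
      - http://tang1.example.com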
Using the nbde_server System Role, you can deploy and manage a Tang server as part of an automated
disk encryption solution. This role supports the following features:
Rotating Tang keys
Deploying and backing up Tang keys
Additional resources
For a detailed reference on Network-Bound Disk Encryption (NBDE) role variables, install the
rhel-system-roles package, and see the README.md and README.html files in the
/usr/share/doc/rhel-system-roles/nbde_client/ and /usr/share/doc/rhel-system-
roles/nbde_server/ directories.
For example playbooks, install the rhel-system-roles package, and see the
/usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directory.
For more information on RHEL System Roles, see Introduction to RHEL System Roles
Prerequisites
Access and permissions to one or more managed nodes, which are systems you want to
configure with the nbde_server System Role.
Access and permissions to a control node, which is a system from which Red Hat Ansible Core
configures other systems.
On the control node:
The ansible-core and rhel-system-roles packages are installed.
An inventory file which lists the managed nodes.
IMPORTANT
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible
Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line
utilities such as ansible, ansible-playbook, connectors such as docker and podman, and
many plugins and modules. For information on how to obtain and install Ansible Engine,
see the How to download and install Red Hat Ansible Engine Knowledgebase article.
RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package),
which contains the Ansible command-line utilities, commands, and a small set of built-in
Ansible plugins. RHEL provides this package through the AppStream repository, and it
has a limited scope of support. For more information, see the Scope of support for the
Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream
repositories Knowledgebase article.
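For example, on RHEL 8.6 and later you can typically install Ansible Core together with the system roles from the AppStream repository:
# yum install ansible-core rhel-system-roles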
Procedure
1. Prepare your playbook containing settings for Tang servers. You can either start from
scratch, or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-
roles.nbde_server/examples/ directory.
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/simple_deploy.yml ./my-tang-playbook.yml
2. Edit the playbook in a text editor of your choice, for example:
# vi my-tang-playbook.yml
3. Add the required parameters. The following example playbook ensures the deployment of your Tang
server and a key rotation:
---
- hosts: all
  vars:
    nbde_server_rotate_keys: yes
  roles:
    - rhel-system-roles.nbde_server
4. Apply the finished playbook:
# ansible-playbook -i inventory-file my-tang-playbook.yml
Where inventory-file is the inventory file and my-tang-playbook.yml is the playbook you use.
IMPORTANT
To ensure that networking for a Tang pin is available during early boot by using the
grubby tool on the systems where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Additional resources
For more information, install the rhel-system-roles package, and see the /usr/share/doc/rhel-
system-roles/nbde_server/ and /usr/share/ansible/roles/rhel-system-roles.nbde_server/
directories.
NOTE
The nbde_client System Role supports only Tang bindings. This means that you cannot
use it for TPM2 bindings at the moment.
Prerequisites
Access and permissions to one or more managed nodes, which are systems you want to
configure with the nbde_client System Role.
Access and permissions to a control node, which is a system from which Red Hat Ansible Core
configures other systems.
The rhel-system-roles package is installed on the system from which you want to run the
playbook.
Procedure
1. Prepare your playbook containing settings for Clevis clients. You can either start from
scratch, or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-
roles.nbde_client/examples/ directory.
# cp /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/high_availability.yml ./my-clevis-playbook.yml
2. Edit the playbook in a text editor of your choice, for example:
# vi my-clevis-playbook.yml
3. Add the required parameters. The following example playbook configures Clevis clients for
automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers
is available:
---
- hosts: all
vars:
nbde_client_bindings:
- device: /dev/rhel/root
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
- device: /dev/rhel/swap
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
roles:
- rhel-system-roles.nbde_client
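After you finish the playbook, apply it to the managed nodes listed in your inventory, for example, where inventory-file is a placeholder for your inventory file:
# ansible-playbook -i inventory-file my-clevis-playbook.yml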
IMPORTANT
To ensure that networking for a Tang pin is available during early boot by using the
grubby tool on the system where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Additional resources
For details about the parameters and additional information about the NBDE Client System
Role, install the rhel-system-roles package, and see the /usr/share/doc/rhel-system-
roles/nbde_client/ and /usr/share/ansible/roles/rhel-system-roles.nbde_client/ directories.
CHAPTER 17. REQUESTING CERTIFICATES USING RHEL SYSTEM ROLES
Requesting a new certificate from IdM CA using the certificate System Role
The role uses certmonger as the certificate provider, and currently supports issuing and renewing self-
signed certificates and using the IdM integrated certificate authority (CA).
You can use the following variables in your Ansible playbook with the certificate System Role:
certificate_wait
to specify if the task should wait for the certificate to be issued.
certificate_requests
to represent each certificate to be issued and its parameters.
Additional resources
Preparing a control node and managed nodes to use RHEL System Roles
With the certificate System Role, you can use Ansible Core to issue self-signed certificates.
This process uses the certmonger provider and requests the certificate through the getcert command.
NOTE
By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.
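For example, a certificate request that disables automatic renewal might look like the following sketch; the name and dns values are placeholders:
certificate_requests:
  - name: mycert
    dns: www.example.com
    ca: self-sign
    auto_renew: no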
Prerequisites
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
Procedure
1. Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:
[webserver]
server.idm.example.com
3. Create a playbook file, for example request-certificate.yml:
Set hosts to include the hosts on which you want to request the certificate, such as
webserver.
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
*.example.com.
---
- hosts: webserver
vars:
certificate_requests:
- name: mycert
dns: "*.example.com"
ca: self-sign
roles:
- rhel-system-roles.certificate
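To request the certificate, run the playbook against your inventory; the playbook file name request-certificate.yml is only an example:
$ ansible-playbook -i inventory.file request-certificate.yml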
Additional resources
With the certificate System Role, you can use ansible-core to issue certificates while using an IdM server
with an integrated certificate authority (CA). Therefore, you can efficiently and consistently manage the
certificate trust chain for multiple systems when using IdM as the CA.
This process uses the certmonger provider and requests the certificate through the getcert command.
NOTE
By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.
Prerequisites
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
Procedure
1. Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:
[webserver]
server.idm.example.com
Set hosts to include the hosts on which you want to request the certificate, such as
webserver.
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.
Set the principal parameter to specify the Kerberos principal, such as
HTTP/[email protected].
---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: www.example.com
        principal: HTTP/[email protected]
        ca: ipa
  roles:
    - rhel-system-roles.certificate
Additional resources
In the following example, the administrator ensures stopping the httpd service before a self-signed
certificate for www.example.com is issued or renewed, and restarting it afterwards.
NOTE
By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.
Prerequisites
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
Procedure
1. Optional: Create an inventory file, for example inventory.file:
$ touch inventory.file
2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:
[webserver]
server.idm.example.com
Set hosts to include the hosts on which you want to request the certificate, such as
webserver.
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.
Set the ca parameter to the CA you want to use to issue the certificate, such as self-
sign.
Set the run_before parameter to the command you want to execute before this
certificate is issued or renewed, such as systemctl stop httpd.service.
Set the run_after parameter to the command you want to execute after this certificate
is issued or renewed, such as systemctl start httpd.service.
---
- hosts: webserver
vars:
certificate_requests:
- name: mycert
dns: www.example.com
ca: self-sign
run_before: systemctl stop httpd.service
run_after: systemctl start httpd.service
roles:
- rhel-system-roles.certificate
Additional resources
Using the kdump role enables you to specify where to save the contents of the system’s memory for
later analysis.
For more information about RHEL System Roles and how to apply them, see Introduction to
RHEL System Roles.
Additional resources
For details about the parameters used in kdump and additional information about the kdump
System Role, see the /usr/share/ansible/roles/rhel-system-roles.kdump/README.md file.
WARNING
The kdump role replaces the kdump configuration of the managed hosts entirely by
replacing the /etc/kdump.conf file. Additionally, if the kdump role is applied, all
previous kdump settings are also replaced, even if they are not specified by the role
variables, by replacing the /etc/sysconfig/kdump file.
Prerequisites
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
You have an inventory file which lists the systems on which you want to deploy kdump.
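For example, a minimal inventory file that defines the kdump-test group used by the following playbook might look like this; the host names are placeholders:
[kdump-test]
node1.example.com
node2.example.com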
Procedure
---
- hosts: kdump-test
vars:
kdump_path: /var/crash
roles:
- rhel-system-roles.kdump
Additional resources
For a detailed reference on kdump role variables, see the README.md or README.html files in
the /usr/share/doc/rhel-system-roles/kdump directory.
See Preparing the control node and managed nodes to use RHEL System Roles
Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
For more information about RHEL System Roles and how to apply them, see Introduction to
RHEL System Roles.
The storage role enables you to manage, for example, complete LVM volume groups including their
logical volumes and file systems.
With the storage role, you can perform tasks such as creating and mounting file systems, managing
logical volumes, configuring swap volumes, resizing volumes, and creating RAID volumes.
Your storage role configuration affects only the file systems, volumes, and pools that you list in the
following variables.
storage_volumes
List of file systems on all unpartitioned disks to be managed.
storage_volumes can also include raid volumes.
storage_pools
List of pools to be managed.
Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs).
Under each pool there is a list of volumes to be managed by the role. With LVM, each volume
corresponds to a logical volume (LV) with a file system.
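The following minimal sketch illustrates this nesting; the volume group name, logical volume name, disk, and mount point are placeholders:
storage_pools:
  - name: myvg
    disks:
      - sdb
    volumes:
      - name: mylv
        size: 1G
        fs_type: xfs
        mount_point: /mnt/data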
WARNING
The storage role can create a file system only on an unpartitioned, whole disk or a
logical volume (LV). It cannot create the file system on a partition.
---
- hosts: all
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
roles:
- rhel-system-roles.storage
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.
To create the file system on an LV, provide the LVM setup under the disks: attribute,
including the enclosing volume group. For details, see Example Ansible playbook to manage
logical volumes.
Do not provide the path to the LV device.
Additional resources
---
- hosts: all
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
mount_point: /mnt/data
roles:
- rhel-system-roles.storage
This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.
If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.
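After the playbook runs, you can check the mount on a managed node with, for example:
# findmnt /mnt/data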
Additional resources
Example 19.3. A playbook that creates a mylv logical volume in the myvg volume group
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage
The myvg volume group consists of the following disks:
/dev/sda
/dev/sdb
/dev/sdc
If the myvg volume group already exists, the playbook adds the logical volume to the volume
group.
If the myvg volume group does not exist, the playbook creates it.
The playbook creates an Ext4 file system on the mylv logical volume, and persistently
mounts the file system at /mnt/data.
Additional resources
---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_options: discard
  roles:
    - rhel-system-roles.storage
Additional resources
Example 19.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext4
fs_label: label-name
mount_point: /mnt/data
roles:
- rhel-system-roles.storage
The playbook persistently mounts the file system at the /mnt/data directory.
Additional resources
Example 19.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data
---
- hosts: all
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext3
fs_label: label-name
mount_point: /mnt/data
roles:
- rhel-system-roles.storage
The playbook persistently mounts the file system at the /mnt/data directory.
Additional resources
---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 12 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage
If the volume in the previous example already exists, to resize the volume, you need to run the
same playbook, just with a different value for the parameter size. For example:
---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 10 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
NOTE
Using the Resizing action in other file systems can destroy the data on the device you
are working on.
Additional resources
WARNING
Using the Resizing action in other file systems can destroy the data on the device
you are working on.
Example 19.9. A playbook that resizes existing mylv1 and mylv2 logical volumes in the myvg
volume group
---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sda
          - /dev/sdb
          - /dev/sdc
        volumes:
          - name: mylv1
            size: 10 GiB
            fs_type: ext4
            mount_point: /opt/mount1
          - name: mylv2
            size: 50 GiB
            fs_type: ext4
            mount_point: /opt/mount2
  roles:
    - rhel-system-roles.storage
The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1, resizes to
10 GiB.
The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2, resizes to
50 GiB.
Additional resources
---
- name: Create a disk device with swap
  hosts: all
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - /dev/sdb
        size: 15 GiB
        fs_type: swap
  roles:
    - rhel-system-roles.storage
The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
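After the role runs, you can verify the active swap space on a managed node with, for example:
# swapon --show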
Additional resources
Prerequisites
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
You have an inventory file detailing the systems on which you want to deploy a RAID volume
using the storage System Role.
Procedure
---
- name: Configure the storage
hosts: managed-node-01.example.com
tasks:
- name: Create a RAID on sdd, sde, sdf, and sdg
include_role:
name: rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_volumes:
- name: data
type: raid
disks: [sdd, sde, sdf, sdg]
raid_level: raid0
raid_chunk_size: 32 KiB
mount_point: /mnt/data
state: present
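After you apply the playbook, one way to verify the new RAID device on a managed node is, for example:
# cat /proc/mdstat
# lsblk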