RHEL - 8 - SystemAdmin Guide-7

This document describes how to use the NBDE_CLIENT and NBDE_SERVER roles included in RHEL System Roles to automate the deployment of Clevis clients and Tang servers for policy-based disk encryption. The NBDE_CLIENT role allows binding encrypted volumes to Tang servers, while preserving or removing existing encryption passphrases. The NBDE_SERVER role can be used to deploy and manage multiple Tang servers, including key rotation. Examples are provided for playbooks to configure Clevis clients and Tang servers across managed nodes.


CHAPTER 16. USING THE NBDE_CLIENT AND NBDE_SERVER SYSTEM ROLES

16.1. INTRODUCTION TO THE NBDE_CLIENT AND NBDE_SERVER SYSTEM ROLES (CLEVIS AND TANG)
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration
interface to remotely manage multiple RHEL systems.

RHEL 8.3 introduced Ansible roles for automated deployments of Policy-Based Decryption (PBD)
solutions using Clevis and Tang. The rhel-system-roles package contains these system roles, related
examples, and also the reference documentation.

The nbde_client System Role enables you to deploy multiple Clevis clients in an automated way. Note
that the nbde_client role supports only Tang bindings, and you cannot use it for TPM2 bindings at the
moment.

The nbde_client role requires volumes that are already encrypted using LUKS. This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers, that is, Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it. After removing the passphrase, you can unlock the volume only by using NBDE. This is useful when a volume is initially encrypted using a temporary key or password that you should remove after you provision the system.

If you provide both a passphrase and a key file, the role uses what you have provided first. If it finds neither of these valid, it attempts to retrieve a passphrase from an existing binding.

PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings
for the same device. The default slot is slot 1.
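As a loose illustration of this data model (a toy sketch, not role code; the device path and server URLs are made up), a binding can be thought of as a (device, slot) key pointing at an unlock policy, so one device can hold several bindings at once:

```python
# Toy model of PBD bindings: each binding maps a (device, slot) pair to
# an unlock policy, so one device can carry multiple bindings.
bindings = {}

def bind(device, slot, server):
    """Record a binding of `device` at key `slot` to a Tang `server`."""
    bindings[(device, slot)] = server

bind("/dev/sda2", 1, "http://server1.example.com")  # default slot is 1
bind("/dev/sda2", 2, "http://server2.example.com")  # second binding, same device

# Slots in use for /dev/sda2:
print(sorted(slot for (dev, slot) in bindings if dev == "/dev/sda2"))  # → [1, 2]
```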

The nbde_client role also provides the state variable. Use the present value to create a new binding or update an existing one. In contrast to the clevis luks bind command, you can also use state: present to overwrite an existing binding in its device slot. The absent value removes a specified binding.
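For example, a hedged sketch of removing a binding with state: absent (the nbde_client_bindings variable follows the examples later in this chapter; whether a slot option is needed for your binding is documented in the role's README.md):

```yaml
---
- hosts: all
  vars:
    nbde_client_bindings:
      # Illustrative: drop the binding kept in slot 2 of this device.
      - device: /dev/rhel/root
        state: absent
        slot: 2
  roles:
    - rhel-system-roles.nbde_client
```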

Using the nbde_server System Role, you can deploy and manage a Tang server as part of an automated disk encryption solution. This role supports the following features:

Rotating Tang keys

Deploying and backing up Tang keys

Additional resources

For a detailed reference on Network-Bound Disk Encryption (NBDE) role variables, install the
rhel-system-roles package, and see the README.md and README.html files in the
/usr/share/doc/rhel-system-roles/nbde_client/ and /usr/share/doc/rhel-system-
roles/nbde_server/ directories.

For example system-roles playbooks, install the rhel-system-roles package, and see the /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directory.

For more information on RHEL System Roles, see Introduction to RHEL System Roles

Red Hat Enterprise Linux 8 Administration and configuration tasks using System Roles in RHEL

16.2. USING THE NBDE_SERVER SYSTEM ROLE FOR SETTING UP MULTIPLE TANG SERVERS
Follow the steps to prepare and apply an Ansible playbook containing your Tang server settings.

Prerequisites

Access and permissions to one or more managed nodes, which are systems you want to
configure with the nbde_server System Role.

Access and permissions to a control node, which is a system from which Red Hat Ansible Core
configures other systems.
On the control node:

The ansible-core and rhel-system-roles packages are installed.

IMPORTANT

RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible
Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line
utilities such as ansible, ansible-playbook, connectors such as docker and podman, and
many plugins and modules. For information on how to obtain and install Ansible Engine,
see the How to download and install Red Hat Ansible Engine Knowledgebase article.

RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package),
which contains the Ansible command-line utilities, commands, and a small set of built-in
Ansible plugins. RHEL provides this package through the AppStream repository, and it
has a limited scope of support. For more information, see the Scope of support for the
Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream
repositories Knowledgebase article.

An inventory file which lists the managed nodes.
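A minimal INI-style inventory might look like this (the group name and hostnames are placeholders):

```
[nbde_servers]
tang1.example.com
tang2.example.com
```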

Procedure

1. Prepare your playbook containing settings for Tang servers. You can either start from scratch, or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/ directory.

# cp /usr/share/ansible/roles/rhel-system-roles.nbde_server/examples/simple_deploy.yml ./my-tang-playbook.yml

2. Edit the playbook in a text editor of your choice, for example:

# vi my-tang-playbook.yml

3. Add the required parameters. The following example playbook ensures the deployment of your Tang server and rotates its keys:

---
- hosts: all
  vars:
    nbde_server_rotate_keys: yes
  roles:
    - rhel-system-roles.nbde_server

4. Apply the finished playbook:

# ansible-playbook -i inventory-file my-tang-playbook.yml

Where: inventory-file is the inventory file and my-tang-playbook.yml is the playbook you use.

IMPORTANT

To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the systems where Clevis is installed:

# grubby --update-kernel=ALL --args="rd.neednet=1"

Additional resources

For more information, install the rhel-system-roles package, and see the /usr/share/doc/rhel-system-roles/nbde_server/ and /usr/share/ansible/roles/rhel-system-roles.nbde_server/ directories.

16.3. USING THE NBDE_CLIENT SYSTEM ROLE FOR SETTING UP MULTIPLE CLEVIS CLIENTS
Follow the steps to prepare and apply an Ansible playbook containing your Clevis client settings.

NOTE

The nbde_client System Role supports only Tang bindings. This means that you cannot
use it for TPM2 bindings at the moment.

Prerequisites

Access and permissions to one or more managed nodes, which are systems you want to
configure with the nbde_client System Role.

Access and permissions to a control node, which is a system from which Red Hat Ansible Core
configures other systems.

The Ansible Core package is installed on the control machine.

The rhel-system-roles package is installed on the system from which you want to run the
playbook.

Procedure

1. Prepare your playbook containing settings for Clevis clients. You can either start from scratch, or use one of the example playbooks from the /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/ directory.


# cp /usr/share/ansible/roles/rhel-system-roles.nbde_client/examples/high_availability.yml ./my-clevis-playbook.yml

2. Edit the playbook in a text editor of your choice, for example:

# vi my-clevis-playbook.yml

3. Add the required parameters. The following example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available:

---
- hosts: all
  vars:
    nbde_client_bindings:
      - device: /dev/rhel/root
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
      - device: /dev/rhel/swap
        encryption_key_src: /etc/luks/keyfile
        servers:
          - http://server1.example.com
          - http://server2.example.com
  roles:
    - rhel-system-roles.nbde_client
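For context, Clevis expresses this kind of high-availability policy as a Shamir's Secret Sharing (sss) pin configuration, which the role generates for you. The sketch below builds the equivalent JSON by hand; the threshold and server URLs are illustrative:

```python
import json

# sss pin: any "t" of the listed Tang servers must be reachable to
# unlock the volume. t=1 matches "at least one of two servers".
sss_config = {
    "t": 1,
    "pins": {
        "tang": [
            {"url": "http://server1.example.com"},
            {"url": "http://server2.example.com"},
        ]
    },
}

# This JSON has the shape you would pass manually to:
#   clevis luks bind -d <device> sss '<config>'
print(json.dumps(sss_config, indent=2))
```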

4. Apply the finished playbook:

# ansible-playbook -i host1,host2,host3 my-clevis-playbook.yml

IMPORTANT

To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the system where Clevis is installed:

# grubby --update-kernel=ALL --args="rd.neednet=1"

Additional resources

For details about the parameters and additional information about the NBDE Client System
Role, install the rhel-system-roles package, and see the /usr/share/doc/rhel-system-
roles/nbde_client/ and /usr/share/ansible/roles/rhel-system-roles.nbde_client/ directories.

CHAPTER 17. REQUESTING CERTIFICATES USING RHEL SYSTEM ROLES
With the certificate System Role, you can use Red Hat Ansible Core to issue and manage certificates.

This chapter covers the following topics:

The certificate System Role

Requesting a new self-signed certificate using the certificate System Role

Requesting a new certificate from IdM CA using the certificate System Role

17.1. THE CERTIFICATE SYSTEM ROLE


Using the certificate System Role, you can manage issuing and renewing TLS and SSL certificates using
Ansible Core.

The role uses certmonger as the certificate provider, and currently supports issuing and renewing self-
signed certificates and using the IdM integrated certificate authority (CA).

You can use the following variables in your Ansible playbook with the certificate System Role:

certificate_wait
to specify if the task should wait for the certificate to be issued.
certificate_requests
to represent each certificate to be issued and its parameters.
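Put together, the two variables might appear in a playbook like this (a sketch; the values are illustrative and the full parameter list is in the role's README.md):

```yaml
vars:
  certificate_wait: true       # wait until certmonger reports the certificate as issued
  certificate_requests:
    - name: mycert             # name of the certificate
      dns: www.example.com     # domain covered by the certificate
      ca: self-sign            # use the self-signed provider
```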

Additional resources

See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file.

Preparing a control node and managed nodes to use RHEL System Roles

17.2. REQUESTING A NEW SELF-SIGNED CERTIFICATE USING THE CERTIFICATE SYSTEM ROLE

With the certificate System Role, you can use Ansible Core to issue self-signed certificates.

This process uses the certmonger provider and requests the certificate through the getcert command.

NOTE

By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.
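For instance, a hedged sketch that disables automatic renewal for a request (based on the auto_renew parameter described above):

```yaml
vars:
  certificate_requests:
    - name: mycert
      dns: www.example.com
      ca: self-sign
      auto_renew: no   # certmonger will not renew this certificate automatically
```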

Prerequisites

The Ansible Core package is installed on the control machine.

You have the rhel-system-roles package installed on the system from which you want to run
the playbook.


Procedure

1. Optional: Create an inventory file, for example inventory.file:

$ touch inventory.file

2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:

[webserver]
server.idm.example.com

3. Create a playbook file, for example request-certificate.yml:

Set hosts to include the hosts on which you want to request the certificate, such as
webserver.

Set the certificate_requests variable to include the following:

Set the name parameter to the desired name of the certificate, such as mycert.

Set the dns parameter to the domain to be included in the certificate, such as
*.example.com.

Set the ca parameter to self-sign.

Set the rhel-system-roles.certificate role under roles.


This is the playbook file for this example:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: "*.example.com"
        ca: self-sign
  roles:
    - rhel-system-roles.certificate

4. Save the file.

5. Run the playbook:

$ ansible-playbook -i inventory.file request-certificate.yml

Additional resources

See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file.

See the ansible-playbook(1) man page.

17.3. REQUESTING A NEW CERTIFICATE FROM IDM CA USING THE CERTIFICATE SYSTEM ROLE

With the certificate System Role, you can use ansible-core to issue certificates while using an IdM server with an integrated certificate authority (CA). Therefore, you can efficiently and consistently manage the certificate trust chain for multiple systems when using IdM as the CA.

This process uses the certmonger provider and requests the certificate through the getcert command.

NOTE

By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.

Prerequisites

The Ansible Core package is installed on the control machine.

You have the rhel-system-roles package installed on the system from which you want to run
the playbook.

Procedure

1. Optional: Create an inventory file, for example inventory.file:

$ touch inventory.file

2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:

[webserver]
server.idm.example.com

3. Create a playbook file, for example request-certificate.yml:

Set hosts to include the hosts on which you want to request the certificate, such as
webserver.

Set the certificate_requests variable to include the following:

Set the name parameter to the desired name of the certificate, such as mycert.

Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.

Set the principal parameter to specify the Kerberos principal, such as HTTP/[email protected].

Set the ca parameter to ipa.

Set the rhel-system-roles.certificate role under roles.


This is the playbook file for this example:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: www.example.com
        principal: HTTP/[email protected]
        ca: ipa
  roles:
    - rhel-system-roles.certificate

4. Save the file.

5. Run the playbook:

$ ansible-playbook -i inventory.file request-certificate.yml

Additional resources

See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file.

See the ansible-playbook(1) man page.

17.4. SPECIFYING COMMANDS TO RUN BEFORE OR AFTER CERTIFICATE ISSUANCE USING THE CERTIFICATE SYSTEM ROLE
With the certificate System Role, you can use Ansible Core to execute a command before or after a certificate is issued or renewed.

In the following example, the administrator ensures stopping the httpd service before a self-signed
certificate for www.example.com is issued or renewed, and restarting it afterwards.

NOTE

By default, certmonger automatically tries to renew the certificate before it expires. You
can disable this by setting the auto_renew parameter in the Ansible playbook to no.

Prerequisites

The Ansible Core package is installed on the control machine.

You have the rhel-system-roles package installed on the system from which you want to run
the playbook.

Procedure

1. Optional: Create an inventory file, for example inventory.file:

$ touch inventory.file

2. Open your inventory file and define the hosts on which you want to request the certificate, for
example:


[webserver]
server.idm.example.com

3. Create a playbook file, for example request-certificate.yml:

Set hosts to include the hosts on which you want to request the certificate, such as
webserver.

Set the certificate_requests variable to include the following:

Set the name parameter to the desired name of the certificate, such as mycert.

Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.

Set the ca parameter to the CA you want to use to issue the certificate, such as self-
sign.

Set the run_before parameter to the command you want to execute before this
certificate is issued or renewed, such as systemctl stop httpd.service.

Set the run_after parameter to the command you want to execute after this certificate
is issued or renewed, such as systemctl start httpd.service.

Set the rhel-system-roles.certificate role under roles.


This is the playbook file for this example:

---
- hosts: webserver
  vars:
    certificate_requests:
      - name: mycert
        dns: www.example.com
        ca: self-sign
        run_before: systemctl stop httpd.service
        run_after: systemctl start httpd.service
  roles:
    - rhel-system-roles.certificate

4. Save the file.

5. Run the playbook:

$ ansible-playbook -i inventory.file request-certificate.yml

Additional resources

See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file.

See the ansible-playbook(1) man page.


CHAPTER 18. CONFIGURING KDUMP USING RHEL SYSTEM ROLES
To manage kdump using Ansible, you can use the kdump role, which is one of the RHEL System Roles
available in RHEL 8.

Using the kdump role enables you to specify where to save the contents of the system’s memory for
later analysis.

For more information about RHEL System Roles and how to apply them, see Introduction to
RHEL System Roles.

18.1. THE KDUMP RHEL SYSTEM ROLE


The kdump System Role enables you to set basic kernel dump parameters on multiple systems.

18.2. KDUMP ROLE PARAMETERS


The parameters used for the kdump RHEL System Role are:

kdump_path
The path to which vmcore is written. If kdump_target is not null, the path is relative to that dump target. Otherwise, it must be an absolute path in the root file system.

Additional resources

The makedumpfile(8) man page.

For details about the parameters used in kdump and additional information about the kdump System Role, see the /usr/share/ansible/roles/rhel-system-roles.kdump/README.md file.

18.3. CONFIGURING KDUMP USING RHEL SYSTEM ROLES


You can set basic kernel dump parameters on multiple systems using the kdump System Role by
running an Ansible playbook.


WARNING

The kdump role replaces the kdump configuration of the managed hosts entirely by replacing the /etc/kdump.conf file. Additionally, if the kdump role is applied, all previous kdump settings are replaced, even if they are not specified by the role variables, because the role also replaces the /etc/sysconfig/kdump file.


Prerequisites

The Ansible Core package is installed on the control machine.

You have the rhel-system-roles package installed on the system from which you want to run
the playbook.

You have an inventory file which lists the systems on which you want to deploy kdump.

Procedure

1. Create a new playbook.yml file with the following content:

---
- hosts: kdump-test
  vars:
    kdump_path: /var/crash
  roles:
    - rhel-system-roles.kdump

2. Optional: Verify playbook syntax.

# ansible-playbook --syntax-check playbook.yml

3. Run the playbook on your inventory file:

# ansible-playbook -i inventory_file /path/to/file/playbook.yml

Additional resources

For a detailed reference on kdump role variables, see the README.md or README.html files in
the /usr/share/doc/rhel-system-roles/kdump directory.

See Preparing the control node and managed nodes to use RHEL System Roles

Documentation installed with the rhel-system-roles package: /usr/share/ansible/roles/rhel-system-roles.kdump/README.html


CHAPTER 19. MANAGING LOCAL STORAGE USING RHEL SYSTEM ROLES
To manage LVM and local file systems (FS) using Ansible, you can use the storage role, which is one of
the RHEL System Roles available in RHEL 8.

Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.

For more information about RHEL System Roles and how to apply them, see Introduction to
RHEL System Roles.

19.1. INTRODUCTION TO THE STORAGE RHEL SYSTEM ROLE


The storage role can manage:

File systems on disks which have not been partitioned

Complete LVM volume groups including their logical volumes and file systems

MD RAID volumes and their file systems

With the storage role, you can perform the following tasks:

Create a file system

Remove a file system

Mount a file system

Unmount a file system

Create LVM volume groups

Remove LVM volume groups

Create logical volumes

Remove logical volumes

Create RAID volumes

Remove RAID volumes

Create LVM volume groups with RAID

Remove LVM volume groups with RAID

Create encrypted LVM volume groups

Create LVM logical volumes with RAID

19.2. PARAMETERS THAT IDENTIFY A STORAGE DEVICE IN THE STORAGE RHEL SYSTEM ROLE

Your storage role configuration affects only the file systems, volumes, and pools that you list in the
following variables.

storage_volumes
List of file systems on all unpartitioned disks to be managed.
storage_volumes can also include raid volumes.

Partitions are currently unsupported.

storage_pools
List of pools to be managed.
Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs).
Under each pool there is a list of volumes to be managed by the role. With LVM, each volume
corresponds to a logical volume (LV) with a file system.
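Schematically, the two variables sit side by side in a playbook's vars section (a sketch; the names and devices are placeholders drawn from the examples that follow):

```yaml
vars:
  storage_volumes:              # file systems on whole, unpartitioned disks
    - name: barefs
      type: disk
      disks:
        - sdb
      fs_type: xfs
  storage_pools:                # LVM: each pool is a volume group
    - name: myvg
      disks:
        - sda
      volumes:                  # each volume is an LV with a file system
        - name: mylv
          size: 2G
          fs_type: ext4
          mount_point: /mnt/data
```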

19.3. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AN XFS FILE SYSTEM ON A BLOCK DEVICE
This section provides an example Ansible playbook. This playbook applies the storage role to create an
XFS file system on a block device using the default parameters.


WARNING

The storage role can create a file system only on an unpartitioned, whole disk or a
logical volume (LV). It cannot create the file system on a partition.

Example 19.1. A playbook that creates XFS on /dev/sdb

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
  roles:
    - rhel-system-roles.storage

The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.

You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.

To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Example Ansible playbook to manage logical volumes.

Do not provide the path to the LV device.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.4. EXAMPLE ANSIBLE PLAYBOOK TO PERSISTENTLY MOUNT A FILE SYSTEM
This section provides an example Ansible playbook. This playbook applies the storage role to
immediately and persistently mount an XFS file system.

Example 19.2. A playbook that mounts a file system on /dev/sdb to /mnt/data

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage

This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.

If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.5. EXAMPLE ANSIBLE PLAYBOOK TO MANAGE LOGICAL VOLUMES
This section provides an example Ansible playbook. This playbook applies the storage role to create an
LVM logical volume in a volume group.

Example 19.3. A playbook that creates a mylv logical volume in the myvg volume group

- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage

The myvg volume group consists of the following disks:

/dev/sda

/dev/sdb

/dev/sdc

If the myvg volume group already exists, the playbook adds the logical volume to the volume
group.

If the myvg volume group does not exist, the playbook creates it.

The playbook creates an Ext4 file system on the mylv logical volume, and persistently mounts the file system at /mnt/data.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.6. EXAMPLE ANSIBLE PLAYBOOK TO ENABLE ONLINE BLOCK DISCARD
This section provides an example Ansible playbook. This playbook applies the storage role to mount an
XFS file system with online block discard enabled.

Example 19.4. A playbook that enables online block discard on /mnt/data/

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_options: discard
  roles:
    - rhel-system-roles.storage

Additional resources

Example Ansible playbook to persistently mount a file system

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.7. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AND MOUNT AN EXT4 FILE SYSTEM
This section provides an example Ansible playbook. This playbook applies the storage role to create and
mount an Ext4 file system.

Example 19.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data

---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext4
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage

The playbook creates the file system on the /dev/sdb disk.

The playbook persistently mounts the file system at the /mnt/data directory.

The label of the file system is label-name.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.8. EXAMPLE ANSIBLE PLAYBOOK TO CREATE AND MOUNT AN EXT3 FILE SYSTEM
This section provides an example Ansible playbook. This playbook applies the storage role to create and
mount an Ext3 file system.

Example 19.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data


---
- hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext3
        fs_label: label-name
        mount_point: /mnt/data
  roles:
    - rhel-system-roles.storage

The playbook creates the file system on the /dev/sdb disk.

The playbook persistently mounts the file system at the /mnt/data directory.

The label of the file system is label-name.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.9. EXAMPLE ANSIBLE PLAYBOOK TO RESIZE AN EXISTING EXT4 OR EXT3 FILE SYSTEM USING THE STORAGE RHEL SYSTEM ROLE
This section provides an example Ansible playbook. This playbook applies the storage role to resize an
existing Ext4 or Ext3 file system on a block device.

Example 19.7. A playbook that sets up a single volume on a disk

---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 12 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage

If the volume in the previous example already exists, to resize the volume, you need to run the
same playbook, just with a different value for the parameter size. For example:

Example 19.8. A playbook that resizes ext4 on /dev/sdb


---
- name: Create a disk device mounted on /opt/barefs
  hosts: all
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - /dev/sdb
        size: 10 GiB
        fs_type: ext4
        mount_point: /opt/barefs
  roles:
    - rhel-system-roles.storage

The volume name (barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute.

NOTE

Using the resizing action on volumes with other file systems can destroy the data on the device you are working on.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.10. EXAMPLE ANSIBLE PLAYBOOK TO RESIZE AN EXISTING FILE SYSTEM ON LVM USING THE STORAGE RHEL SYSTEM ROLE
This section provides an example Ansible playbook. This playbook applies the storage
RHEL System Role to resize an LVM logical volume with a file system.


WARNING

Using the resizing action on volumes with other file systems can destroy the data on the device you are working on.

Example 19.9. A playbook that resizes the existing mylv1 and mylv2 logical volumes in the myvg volume group

---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sda
          - /dev/sdb
          - /dev/sdc
        volumes:
          - name: mylv1
            size: 10 GiB
            fs_type: ext4
            mount_point: /opt/mount1
          - name: mylv2
            size: 50 GiB
            fs_type: ext4
            mount_point: /opt/mount2
  tasks:
    - name: Create LVM pool over three disks
      include_role:
        name: rhel-system-roles.storage

This playbook resizes the following existing file systems:

The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1, resizes to
10 GiB.

The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2, resizes to
50 GiB.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.11. EXAMPLE ANSIBLE PLAYBOOK TO CREATE A SWAP VOLUME USING THE STORAGE RHEL SYSTEM ROLE

This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device using the default parameters.

Example 19.10. A playbook that creates a swap volume on /dev/sdb, or modifies it if it already exists

---
- name: Create a disk device with swap
  hosts: all
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - /dev/sdb
        size: 15 GiB
        fs_type: swap
  roles:
    - rhel-system-roles.storage

The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute.

Additional resources

The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file.

19.12. CONFIGURING A RAID VOLUME USING THE STORAGE SYSTEM ROLE
With the storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible
Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a
RAID volume to suit your requirements.

Prerequisites

The Ansible Core package is installed on the control machine.

You have the rhel-system-roles package installed on the system from which you want to run
the playbook.

You have an inventory file detailing the systems on which you want to deploy a RAID volume
using the storage System Role.

Procedure

1. Create a new playbook.yml file with the following content:

---
- name: Configure the storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create a RAID on sdd, sde, sdf, and sdg
      include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_volumes:
          - name: data
            type: raid
            disks: [sdd, sde, sdf, sdg]
            raid_level: raid0
            raid_chunk_size: 32 KiB
            mount_point: /mnt/data
            state: present
