Red Hat Enterprise Linux 8 System Design Guide
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This content covers how to start using Red Hat Enterprise Linux 8. To learn about Red Hat
Enterprise Linux technology capabilities and limits, see https://access.redhat.com/articles/rhel-limits.
Table of Contents
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION 15
PART I. DESIGN OF INSTALLATION 16
CHAPTER 1. COMPOSING A CUSTOMIZED RHEL SYSTEM IMAGE 17
1.1. RHEL IMAGE BUILDER DESCRIPTION 17
1.1.1. RHEL image builder terminology 17
1.1.2. RHEL image builder output formats 17
1.1.3. Supported architectures for image builds 18
1.1.4. Additional resources 19
1.2. INSTALLING RHEL IMAGE BUILDER 19
1.2.1. RHEL image builder system requirements 19
1.2.2. Installing RHEL image builder 19
1.2.3. Reverting to lorax-composer RHEL image builder backend 21
1.3. CREATING SYSTEM IMAGES BY USING RHEL IMAGE BUILDER CLI 22
1.3.1. Introducing the RHEL image builder command-line interface 22
1.3.2. Using RHEL image builder as a non-root user 22
1.3.3. Creating a blueprint by using the command-line interface 22
1.3.4. Editing a blueprint with command-line interface 24
1.3.5. Creating a system image with RHEL image builder in the command-line interface 26
1.3.6. Basic RHEL image builder command-line commands 27
1.3.7. RHEL image builder blueprint format 28
1.3.8. Supported image customizations 29
1.3.8.1. Selecting a distribution 30
1.3.8.2. Selecting a package group 30
1.3.8.3. Embedding a container 31
1.3.8.4. Setting the image hostname 31
1.3.8.5. Specifying additional users 31
1.3.8.6. Specifying additional groups 32
1.3.8.7. Setting SSH key for existing users 33
1.3.8.8. Appending a kernel argument 33
1.3.8.9. Building RHEL images by using the real-time kernel 33
1.3.8.10. Setting time zone and NTP 35
1.3.8.11. Customizing the locale settings 36
1.3.8.12. Customizing firewall 36
1.3.8.13. Enabling or disabling services 37
1.3.8.14. Specifying a partition mode 38
1.3.8.15. Specifying a custom file system configuration 38
1.3.8.15.1. Specifying customized files in the blueprint 41
1.3.8.15.2. Specifying customized directories in the blueprint 41
1.3.9. Packages installed by RHEL image builder 44
1.3.10. Enabled services on custom images 49
1.4. CREATING SYSTEM IMAGES BY USING RHEL IMAGE BUILDER WEB CONSOLE INTERFACE 50
1.4.1. Accessing the RHEL image builder dashboard in the RHEL web console 50
1.4.2. Creating a blueprint in the web console interface 50
1.4.3. Importing a blueprint in the RHEL image builder web console interface 54
1.4.4. Exporting a blueprint from the RHEL image builder web console interface 55
1.4.5. Creating a system image by using RHEL image builder in the web console interface 55
1.5. PREPARING AND UPLOADING CLOUD IMAGES BY USING RHEL IMAGE BUILDER 56
1.5.1. Preparing and uploading AMI images to AWS 56
1.5.1.1. Preparing to manually upload AWS AMI images 56
CHAPTER 2. PERFORMING AN AUTOMATED INSTALLATION USING KICKSTART 81
CHAPTER 3. ADVANCED CONFIGURATION OPTIONS 82
CHAPTER 4. KICKSTART REFERENCES 83
PART II. DESIGN OF SECURITY 84
CHAPTER 5. SECURING RHEL DURING AND RIGHT AFTER INSTALLATION 85
5.1. DISK PARTITIONING 85
5.2. RESTRICTING NETWORK CONNECTIVITY DURING THE INSTALLATION PROCESS 85
5.3. INSTALLING THE MINIMUM AMOUNT OF PACKAGES REQUIRED 86
5.4. POST-INSTALLATION PROCEDURES 86
5.5. DISABLING SMT TO PREVENT CPU SECURITY ISSUES BY USING THE WEB CONSOLE 86
CHAPTER 6. USING SYSTEM-WIDE CRYPTOGRAPHIC POLICIES 88
6.1. SYSTEM-WIDE CRYPTOGRAPHIC POLICIES 88
6.2. CHANGING THE SYSTEM-WIDE CRYPTOGRAPHIC POLICY 90
6.3. SWITCHING THE SYSTEM-WIDE CRYPTOGRAPHIC POLICY TO MODE COMPATIBLE WITH EARLIER
RELEASES 91
6.4. SETTING UP SYSTEM-WIDE CRYPTOGRAPHIC POLICIES IN THE WEB CONSOLE 91
6.5. EXCLUDING AN APPLICATION FROM FOLLOWING SYSTEM-WIDE CRYPTO POLICIES 93
6.5.1. Examples of opting out of system-wide crypto policies 94
6.6. CUSTOMIZING SYSTEM-WIDE CRYPTOGRAPHIC POLICIES WITH SUBPOLICIES 95
6.7. DISABLING SHA-1 BY CUSTOMIZING A SYSTEM-WIDE CRYPTOGRAPHIC POLICY 97
6.8. CREATING AND SETTING A CUSTOM SYSTEM-WIDE CRYPTOGRAPHIC POLICY 97
6.9. ENHANCING SECURITY WITH THE FUTURE CRYPTOGRAPHIC POLICY USING THE CRYPTO_POLICIES
RHEL SYSTEM ROLE 98
6.10. ADDITIONAL RESOURCES 101
CHAPTER 7. CONFIGURING APPLICATIONS TO USE CRYPTOGRAPHIC HARDWARE THROUGH PKCS #11 102
7.1. CRYPTOGRAPHIC HARDWARE SUPPORT THROUGH PKCS #11 102
7.2. AUTHENTICATING BY SSH KEYS STORED ON A SMART CARD 102
7.3. CONFIGURING APPLICATIONS FOR AUTHENTICATION WITH CERTIFICATES ON SMART CARDS 104
7.4. USING HSMS PROTECTING PRIVATE KEYS IN APACHE 104
7.5. USING HSMS PROTECTING PRIVATE KEYS IN NGINX 105
7.6. ADDITIONAL RESOURCES 105
CHAPTER 8. USING SHARED SYSTEM CERTIFICATES 106
8.1. THE SYSTEM-WIDE TRUSTSTORE 106
8.2. ADDING NEW CERTIFICATES 106
8.3. MANAGING TRUSTED SYSTEM CERTIFICATES 107
CHAPTER 9. SCANNING THE SYSTEM FOR SECURITY COMPLIANCE AND VULNERABILITIES 109
9.1. CONFIGURATION COMPLIANCE TOOLS IN RHEL 109
9.2. RED HAT SECURITY ADVISORIES OVAL FEED 109
9.3. VULNERABILITY SCANNING 111
9.3.1. Red Hat Security Advisories OVAL feed 111
9.3.2. Scanning the system for vulnerabilities 112
9.3.3. Scanning remote systems for vulnerabilities 112
9.4. CONFIGURATION COMPLIANCE SCANNING 113
9.4.1. Configuration compliance in RHEL 113
9.4.2. Possible results of an OpenSCAP scan 114
9.4.3. Viewing profiles for configuration compliance 115
9.4.4. Assessing configuration compliance with a specific baseline 116
9.5. REMEDIATING THE SYSTEM TO ALIGN WITH A SPECIFIC BASELINE 117
9.6. REMEDIATING THE SYSTEM TO ALIGN WITH A SPECIFIC BASELINE USING AN SSG ANSIBLE
PLAYBOOK 118
9.7. CREATING A REMEDIATION ANSIBLE PLAYBOOK TO ALIGN THE SYSTEM WITH A SPECIFIC BASELINE
119
9.8. CREATING A REMEDIATION BASH SCRIPT FOR A LATER APPLICATION 120
9.9. SCANNING THE SYSTEM WITH A CUSTOMIZED PROFILE USING SCAP WORKBENCH 121
9.9.1. Using SCAP Workbench to scan and remediate the system 121
9.9.2. Customizing a security profile with SCAP Workbench 123
9.9.3. Additional resources 125
9.10. SCANNING CONTAINER AND CONTAINER IMAGES FOR VULNERABILITIES 125
9.11. ASSESSING SECURITY COMPLIANCE OF A CONTAINER OR A CONTAINER IMAGE WITH A SPECIFIC
BASELINE 126
9.12. CHECKING INTEGRITY WITH AIDE 127
9.12.1. Installing AIDE 127
9.12.2. Performing integrity checks with AIDE 128
9.12.3. Updating an AIDE database 129
9.12.4. File-integrity tools: AIDE and IMA 129
9.12.5. Additional resources 130
9.13. ENCRYPTING BLOCK DEVICES USING LUKS 130
9.13.1. LUKS disk encryption 130
9.13.2. LUKS versions in RHEL 131
9.13.3. Options for data protection during LUKS2 re-encryption 132
9.13.4. Encrypting existing data on a block device using LUKS2 133
9.13.5. Encrypting existing data on a block device using LUKS2 with a detached header 135
9.13.6. Encrypting a blank block device using LUKS2 137
9.13.7. Configuring the LUKS passphrase in the web console 139
9.13.8. Changing the LUKS passphrase in the web console 139
9.13.9. Creating a LUKS2 encrypted volume by using the storage RHEL system role 141
CHAPTER 10. USING SELINUX 171
10.1. GETTING STARTED WITH SELINUX 171
10.1.1. Introduction to SELinux 171
10.1.2. Benefits of running SELinux 172
10.1.3. SELinux examples 173
10.1.4. SELinux architecture and packages 174
10.1.5. SELinux states and modes 175
10.2. CHANGING SELINUX STATES AND MODES 175
10.2.1. Permanent changes in SELinux states and modes 175
10.2.2. Changing SELinux to permissive mode 176
10.2.3. Changing SELinux to enforcing mode 177
10.2.4. Enabling SELinux on systems that previously had it disabled 178
10.2.5. Disabling SELinux 180
10.2.6. Changing SELinux modes at boot time 181
10.3. TROUBLESHOOTING PROBLEMS RELATED TO SELINUX 182
10.3.1. Identifying SELinux denials 182
10.3.2. Analyzing SELinux denial messages 183
10.3.3. Fixing analyzed SELinux denials 184
10.3.4. Creating a local SELinux policy module 187
10.3.5. SELinux denials in the Audit log 189
10.3.6. Additional resources 190
PART III. DESIGN OF NETWORK 192
CHAPTER 11. CONFIGURING IP NETWORKING WITH IFCFG FILES 193
11.1. CONFIGURING AN INTERFACE WITH STATIC NETWORK SETTINGS USING IFCFG FILES 193
11.2. CONFIGURING AN INTERFACE WITH DYNAMIC NETWORK SETTINGS USING IFCFG FILES 194
11.3. MANAGING SYSTEM-WIDE AND PRIVATE CONNECTION PROFILES WITH IFCFG FILES 194
CHAPTER 12. GETTING STARTED WITH IPVLAN 196
12.1. IPVLAN MODES 196
12.2. COMPARISON OF IPVLAN AND MACVLAN 196
12.3. CREATING AND CONFIGURING THE IPVLAN DEVICE USING IPROUTE2 197
CHAPTER 13. REUSING THE SAME IP ADDRESS ON DIFFERENT INTERFACES 199
13.1. PERMANENTLY REUSING THE SAME IP ADDRESS ON DIFFERENT INTERFACES 199
13.2. TEMPORARILY REUSING THE SAME IP ADDRESS ON DIFFERENT INTERFACES 200
13.3. ADDITIONAL RESOURCES 202
CHAPTER 14. SECURING NETWORKS 203
14.1. USING SECURE COMMUNICATIONS BETWEEN TWO SYSTEMS WITH OPENSSH 203
14.1.1. Generating SSH key pairs 203
14.1.2. Setting key-based authentication as the only method on an OpenSSH server 204
14.1.3. Caching your SSH credentials by using ssh-agent 205
14.1.4. Authenticating by SSH keys stored on a smart card 206
14.1.5. Additional resources 207
14.2. PLANNING AND IMPLEMENTING TLS 207
14.2.1. SSL and TLS protocols 207
14.2.2. Security considerations for TLS in RHEL 8 208
14.2.2.1. Protocols 209
14.2.2.2. Cipher suites 209
14.2.2.3. Public key length 210
14.2.3. Hardening TLS configuration in applications 210
14.2.3.1. Configuring the Apache HTTP server to use TLS 210
14.2.3.2. Configuring the Nginx HTTP and proxy server to use TLS 211
14.2.3.3. Configuring the Dovecot mail server to use TLS 211
14.3. SETTING UP AN IPSEC VPN 212
14.3.1. Libreswan as an IPsec VPN implementation 212
14.3.2. Authentication methods in Libreswan 213
14.3.3. Installing Libreswan 215
14.3.4. Creating a host-to-host VPN 215
14.3.5. Configuring a site-to-site VPN 216
14.3.6. Configuring a remote access VPN 217
14.3.7. Configuring a mesh VPN 218
14.3.8. Deploying a FIPS-compliant IPsec VPN 222
14.3.9. Protecting the IPsec NSS database by a password 224
14.3.10. Configuring an IPsec VPN to use TCP 225
14.3.11. Configuring automatic detection and usage of ESP hardware offload to accelerate an IPsec connection
226
14.3.12. Configuring ESP hardware offload on a bond to accelerate an IPsec connection 227
14.3.13. Configuring VPN connections with IPsec by using RHEL system roles 228
14.3.13.1. Creating a host-to-host VPN with IPsec by using the vpn RHEL system role 229
14.3.13.2. Creating an opportunistic mesh VPN connection with IPsec by using the vpn RHEL system role 231
14.3.14. Configuring IPsec connections that opt out of the system-wide crypto policies 233
14.3.15. Troubleshooting IPsec VPN configurations 233
14.3.16. Configuring a VPN connection with control-center 237
14.3.17. Configuring a VPN connection using nm-connection-editor 242
14.3.18. Additional resources 245
14.4. USING MACSEC TO ENCRYPT LAYER-2 TRAFFIC IN THE SAME PHYSICAL NETWORK 245
14.4.1. How MACsec increases security 245
14.4.2. Configuring a MACsec connection by using nmcli 246
14.5. USING AND CONFIGURING FIREWALLD 247
PART IV. DESIGN OF HARD DISK 329
CHAPTER 15. OVERVIEW OF AVAILABLE FILE SYSTEMS 330
15.1. TYPES OF FILE SYSTEMS 330
15.2. LOCAL FILE SYSTEMS 331
15.3. THE XFS FILE SYSTEM 331
15.4. THE EXT4 FILE SYSTEM 332
15.5. COMPARISON OF XFS AND EXT4 333
15.6. CHOOSING A LOCAL FILE SYSTEM 334
15.7. NETWORK FILE SYSTEMS 335
15.8. SHARED STORAGE FILE SYSTEMS 335
15.9. CHOOSING BETWEEN NETWORK AND SHARED STORAGE FILE SYSTEMS 336
15.10. VOLUME-MANAGING FILE SYSTEMS 336
CHAPTER 16. MOUNTING AN SMB SHARE 338
16.1. SUPPORTED SMB PROTOCOL VERSIONS 338
16.2. UNIX EXTENSIONS SUPPORT 338
16.3. MANUALLY MOUNTING AN SMB SHARE 339
16.4. MOUNTING AN SMB SHARE AUTOMATICALLY WHEN THE SYSTEM BOOTS 340
16.5. CREATING A CREDENTIALS FILE TO AUTHENTICATE TO AN SMB SHARE 341
16.6. PERFORMING A MULTI-USER SMB MOUNT 341
16.6.1. Mounting a share with the multiuser option 342
16.6.2. Verifying if an SMB share is mounted with the multiuser option 342
16.6.3. Accessing a share as a user 342
16.7. FREQUENTLY USED SMB MOUNT OPTIONS 343
CHAPTER 17. OVERVIEW OF PERSISTENT NAMING ATTRIBUTES 345
17.1. DISADVANTAGES OF NON-PERSISTENT NAMING ATTRIBUTES 345
17.2. FILE SYSTEM AND DEVICE IDENTIFIERS 345
File system identifiers 346
Device identifiers 346
Recommendations 346
17.3. DEVICE NAMES MANAGED BY THE UDEV MECHANISM IN /DEV/DISK/ 346
17.3.1. File system identifiers 346
The UUID attribute in /dev/disk/by-uuid/ 346
The Label attribute in /dev/disk/by-label/ 347
17.3.2. Device identifiers 347
The WWID attribute in /dev/disk/by-id/ 347
CHAPTER 18. GETTING STARTED WITH PARTITIONS 352
18.1. CREATING A PARTITION TABLE ON A DISK WITH PARTED 352
18.2. VIEWING THE PARTITION TABLE WITH PARTED 353
18.3. CREATING A PARTITION WITH PARTED 354
18.4. SETTING A PARTITION TYPE WITH FDISK 355
18.5. RESIZING A PARTITION WITH PARTED 356
18.6. REMOVING A PARTITION WITH PARTED 358
CHAPTER 19. GETTING STARTED WITH XFS 360
19.1. THE XFS FILE SYSTEM 360
19.2. COMPARISON OF TOOLS USED WITH EXT4 AND XFS 361
CHAPTER 20. MOUNTING FILE SYSTEMS 362
20.1. THE LINUX MOUNT MECHANISM 362
20.2. LISTING CURRENTLY MOUNTED FILE SYSTEMS 362
20.3. MOUNTING A FILE SYSTEM WITH MOUNT 363
20.4. MOVING A MOUNT POINT 364
20.5. UNMOUNTING A FILE SYSTEM WITH UMOUNT 364
20.6. MOUNTING AND UNMOUNTING FILE SYSTEMS IN THE WEB CONSOLE 365
20.7. COMMON MOUNT OPTIONS 365
CHAPTER 21. SHARING A MOUNT ON MULTIPLE MOUNT POINTS 367
21.1. TYPES OF SHARED MOUNTS 367
21.2. CREATING A PRIVATE MOUNT POINT DUPLICATE 367
21.3. CREATING A SHARED MOUNT POINT DUPLICATE 368
21.4. CREATING A SLAVE MOUNT POINT DUPLICATE 370
21.5. PREVENTING A MOUNT POINT FROM BEING DUPLICATED 371
CHAPTER 22. PERSISTENTLY MOUNTING FILE SYSTEMS 372
22.1. THE /ETC/FSTAB FILE 372
22.2. ADDING A FILE SYSTEM TO /ETC/FSTAB 372
CHAPTER 23. MOUNTING FILE SYSTEMS ON DEMAND 374
23.1. THE AUTOFS SERVICE 374
23.2. THE AUTOFS CONFIGURATION FILES 374
23.3. CONFIGURING AUTOFS MOUNT POINTS 376
23.4. AUTOMOUNTING NFS SERVER USER HOME DIRECTORIES WITH AUTOFS SERVICE 377
23.5. OVERRIDING OR AUGMENTING AUTOFS SITE CONFIGURATION FILES 377
23.6. USING LDAP TO STORE AUTOMOUNTER MAPS 379
23.7. USING SYSTEMD.AUTOMOUNT TO MOUNT A FILE SYSTEM ON DEMAND WITH /ETC/FSTAB 380
23.8. USING SYSTEMD.AUTOMOUNT TO MOUNT A FILE SYSTEM ON-DEMAND WITH A MOUNT UNIT 381
CHAPTER 24. USING SSSD COMPONENT FROM IDM TO CACHE THE AUTOFS MAPS 382
24.1. CONFIGURING AUTOFS MANUALLY TO USE IDM SERVER AS AN LDAP SERVER 382
24.2. CONFIGURING SSSD TO CACHE AUTOFS MAPS 383
CHAPTER 25. SETTING READ-ONLY PERMISSIONS FOR THE ROOT FILE SYSTEM 385
25.1. FILES AND DIRECTORIES THAT ALWAYS RETAIN WRITE PERMISSIONS 385
25.2. CONFIGURING THE ROOT FILE SYSTEM TO MOUNT WITH READ-ONLY PERMISSIONS ON BOOT 386
CHAPTER 26. MANAGING STORAGE DEVICES 387
26.1. SETTING UP STRATIS FILE SYSTEMS 387
26.1.1. What is Stratis 387
26.1.2. Components of a Stratis volume 387
26.1.3. Block devices usable with Stratis 388
Supported devices 388
Unsupported devices 389
26.1.4. Installing Stratis 389
26.1.5. Creating an unencrypted Stratis pool 389
26.1.6. Creating an unencrypted Stratis pool by using the web console 390
26.1.7. Creating an encrypted Stratis pool 391
26.1.8. Creating an encrypted Stratis pool by using the web console 393
26.1.9. Renaming a Stratis pool by using the web console 395
26.1.10. Setting overprovisioning mode in Stratis filesystem 396
26.1.11. Binding a Stratis pool to NBDE 397
26.1.12. Binding a Stratis pool to TPM 398
26.1.13. Unlocking an encrypted Stratis pool with kernel keyring 398
26.1.14. Unbinding a Stratis pool from supplementary encryption 399
26.1.15. Starting and stopping Stratis pool 399
26.1.16. Creating a Stratis file system 400
26.1.17. Creating a file system on a Stratis pool by using the web console 401
26.1.18. Mounting a Stratis file system 403
26.1.19. Setting up non-root Stratis filesystems in /etc/fstab using a systemd service 403
26.2. EXTENDING A STRATIS VOLUME WITH ADDITIONAL BLOCK DEVICES 404
26.2.1. Components of a Stratis volume 404
26.2.2. Adding block devices to a Stratis pool 405
26.2.3. Adding a block device to a Stratis pool by using the web console 405
26.2.4. Additional resources 406
26.3. MONITORING STRATIS FILE SYSTEMS 406
26.3.1. Stratis sizes reported by different utilities 407
26.3.2. Displaying information about Stratis volumes 407
26.3.3. Viewing a Stratis pool by using the web console 408
26.3.4. Additional resources 409
26.4. USING SNAPSHOTS ON STRATIS FILE SYSTEMS 409
26.4.1. Characteristics of Stratis snapshots 409
26.4.2. Creating a Stratis snapshot 410
26.4.3. Accessing the content of a Stratis snapshot 410
26.4.4. Reverting a Stratis file system to a previous snapshot 411
26.4.5. Removing a Stratis snapshot 411
26.4.6. Additional resources 412
26.5. REMOVING STRATIS FILE SYSTEMS 412
26.5.1. Components of a Stratis volume 412
26.5.2. Removing a Stratis file system 413
26.5.3. Deleting a file system from a Stratis pool by using the web console 413
26.5.4. Removing a Stratis pool 415
26.5.5. Deleting a Stratis pool by using the web console 416
26.5.6. Additional resources 416
26.6. GETTING STARTED WITH SWAP 416
26.6.1. Overview of swap space 416
26.6.2. Recommended system swap space 417
26.6.3. Creating an LVM2 logical volume for swap 418
CHAPTER 27. DEDUPLICATING AND COMPRESSING STORAGE 440
27.1. DEPLOYING VDO 440
27.1.1. Introduction to VDO 440
27.1.2. VDO deployment scenarios 440
KVM 440
File systems 441
Placement of VDO on iSCSI 441
LVM 442
Encryption 442
27.1.3. Components of a VDO volume 443
27.1.4. The physical and logical size of a VDO volume 444
27.1.5. Slab size in VDO 445
27.1.6. VDO requirements 445
27.1.6.1. VDO memory requirements 445
27.1.6.2. VDO storage space requirements 446
27.1.6.3. Placement of VDO in the storage stack 447
27.1.6.4. Examples of VDO requirements by physical size 448
27.1.7. Installing VDO 449
27.1.8. Creating a VDO volume 449
27.1.9. Mounting a VDO volume 451
27.1.10. Enabling periodic block discard 452
27.1.11. Monitoring VDO 452
27.2. MAINTAINING VDO 453
27.2.1. Managing free space on VDO volumes 453
27.2.1.1. The physical and logical size of a VDO volume 453
27.2.1.2. Thin provisioning in VDO 454
27.2.1.3. Monitoring VDO 455
27.2.1.4. Reclaiming space for VDO on file systems 455
27.2.1.5. Reclaiming space for VDO without a file system 456
27.2.1.6. Reclaiming space for VDO on Fibre Channel or Ethernet network 456
27.2.2. Starting or stopping VDO volumes 456
27.2.2.1. Started and activated VDO volumes 457
27.2.2.2. Starting a VDO volume 457
PART V. DESIGN OF LOG FILE 478
CHAPTER 28. AUDITING THE SYSTEM 479
28.1. LINUX AUDIT 479
PART VI. DESIGN OF KERNEL 495
CHAPTER 29. THE LINUX KERNEL 496
29.1. WHAT THE KERNEL IS 496
29.2. RPM PACKAGES 496
29.3. THE LINUX KERNEL RPM PACKAGE OVERVIEW 497
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
4. Enter your suggestion for improvement in the Description field. Include links to the relevant
parts of the documentation.
CHAPTER 1. COMPOSING A CUSTOMIZED RHEL SYSTEM IMAGE
NOTE
From RHEL 8.3 onward, the osbuild-composer back end replaces lorax-composer. The
new service provides REST APIs for image building.
Blueprint
A blueprint is a description of a customized system image. It lists the packages and customizations
that will be part of the system. You can edit blueprints with customizations and save them as a
particular version. When you create a system image from a blueprint, the image is associated with the
blueprint in the RHEL image builder interface.
Create blueprints in the TOML format.
Compose
Composes are individual builds of a system image, based on a specific version of a particular
blueprint. Compose as a term refers to the system image, the logs from its creation, inputs,
metadata, and the process itself.
Customizations
Customizations are specifications for the image that are not packages. This includes users, groups,
and SSH keys.
ARM64 (aarch64)
IBM Z (s390x)
However, RHEL image builder does not support multi-architecture builds. It only builds images of the
same system architecture that it is running on. For example, if RHEL image builder is running on an
x86_64 system, it can only build images for the x86_64 architecture.
System type: A dedicated host or virtual machine. Note that RHEL image builder is not
supported in containers, including Red Hat Universal Base Images (UBI).
Processor: 2 cores
Memory: 4 GiB
Network: Internet connectivity to the Red Hat Content Delivery Network (CDN).
NOTE
If you do not have internet connectivity, you can use RHEL image builder in isolated networks.
To do so, override the default repositories to point to your local repositories so that the build
does not connect to the Red Hat Content Delivery Network (CDN). Ensure that you have your
content mirrored internally or use Red Hat Satellite.
Additional resources
Prerequisites
You are logged in to the RHEL host on which you want to install RHEL image builder.
The host is subscribed to Red Hat Subscription Manager (RHSM) or Red Hat Satellite.
You have enabled the BaseOS and AppStream repositories to be able to install the RHEL
image builder packages.
Procedure
cockpit-composer - This package enables access to the Web UI interface. The web
console is installed as a dependency of the cockpit-composer package.
3. If you want to use RHEL image builder in the web console, enable and start it.
4. Load the shell configuration script so that the autocomplete feature for the composer-cli
command starts working immediately without logging out and in:
$ source /etc/bash_completion.d/composer-cli
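For reference, the commands behind these installation and setup steps typically look like the following
sketch. The package set and the osbuild-composer.socket and cockpit.socket unit names reflect a
standard RHEL 8 image builder setup and are assumptions, not output copied from this procedure:
# yum install osbuild-composer composer-cli cockpit-composer bash-completion
# systemctl enable --now osbuild-composer.socket
# systemctl enable --now cockpit.socket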
IMPORTANT
The osbuild-composer package is the new backend engine that will be the preferred
default and focus of all new functionality beginning with Red Hat Enterprise Linux 8.3 and
later. The previous backend lorax-composer package is considered deprecated, will only
receive select fixes for the remainder of the Red Hat Enterprise Linux 8 life cycle and will
be omitted from future major releases. It is recommended to uninstall lorax-composer in
favor of osbuild-composer.
Verification
Troubleshooting
You can use a system journal to track RHEL image builder activities. Additionally, you can find the log
messages in the file.
To find the journal output for traceback, run the following commands:
$ journalctl -u osbuild-worker*
$ journalctl -u osbuild-composer.service
The osbuild-composer backend, though much more extensible, does not currently achieve feature
parity with the previous lorax-composer backend.
Prerequisites
Procedure
# cat /etc/yum.conf
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
exclude=osbuild-composer weldr-client
4. Enable and start the lorax-composer service to start after each reboot.
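The enablement command for this step is typically sketched as follows; the lorax-composer.socket unit
name is an assumption based on the lorax-composer package defaults:
# systemctl enable --now lorax-composer.socket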
Additional resources
1. Create a blueprint or export (save) an existing blueprint definition to a plain text file
Apart from the basic subcommands to create a blueprint, the composer-cli command offers many
subcommands to examine the state of configured blueprints and composes.
Prerequisites
Procedure
To add a user to the weldr or root groups, run the following commands:
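A minimal sketch of those commands, assuming the account is named example-user (a hypothetical
user name) and that the weldr group is created by the RHEL image builder packages:
# usermod -a -G weldr example-user
$ newgrp weldr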
You can create a new blueprint by using the RHEL image builder command-line interface (CLI). The
blueprint describes the final image and its customizations, such as packages, and kernel customizations.
Prerequisite
You are logged in as the root user or a user who is a member of the weldr group
Procedure
name = "BLUEPRINT-NAME"
description = "LONG FORM DESCRIPTION TEXT"
version = "0.0.1"
modules = []
groups = []
Replace BLUEPRINT-NAME and LONG FORM DESCRIPTION TEXT with a name and
description for your blueprint.
Replace 0.0.1 with a version number according to the Semantic Versioning scheme.
2. For every package that you want to be included in the blueprint, add the following lines to the
file:
[[packages]]
name = "package-name"
version = "package-version"
Replace package-name with the name of the package, such as httpd, gdb-doc, or coreutils.
Optionally, replace package-version with the version to use. This field supports dnf version
specifications:
For a specific version, use the exact version number such as 8.7.0.
3. Customize your blueprint to suit your needs. For example, to disable Simultaneous Multi-
Threading (SMT), add the following lines to the blueprint file:
[customizations.kernel]
append = "nosmt=force"
Note that [] and [[]] are different data structures expressed in TOML.
The [[packages]] header represents an array of tables. The first instance defines the array
and its first table element, for example, name = "package-name" and version = "package-
version", and each subsequent instance creates and defines a new table element in that array.
4. Save the file, for example, as BLUEPRINT-NAME.toml and close the text editor.
NOTE
To create images using composer-cli as non-root, add your user to the weldr or
root groups.
Verification
List the existing blueprints to verify that the blueprint has been pushed and exists:
Check whether the components and versions listed in the blueprint and their dependencies are
valid:
If RHEL image builder is unable to solve the dependencies of a package from your custom
repositories, remove the osbuild-composer cache:
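The commands behind these verification steps are sketched below. The sketch assumes that you saved
the blueprint as BLUEPRINT-NAME.toml, that you import it before listing it, and that
/var/cache/osbuild-composer/ is the default osbuild-composer cache location:
# composer-cli blueprints push BLUEPRINT-NAME.toml
# composer-cli blueprints list
# composer-cli blueprints depsolve BLUEPRINT-NAME
# rm -rf /var/cache/osbuild-composer/*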
Additional resources
Composing a customized RHEL system image with proxy server (Red Hat Knowledgebase)
Prerequisites
Procedure
3. Edit the BLUEPRINT-NAME.toml file with a text editor and make your changes.
4. Before finishing the edits, verify that the file is a valid blueprint:
packages = []
b. Increase the version number, for example, from 0.0.1 to 0.1.0. Remember that RHEL image
builder blueprint versions must use the Semantic Versioning scheme. Note also that if you
do not change the version, the patch version component increases automatically.
NOTE
To import the blueprint back into RHEL image builder, supply the file name
including the .toml extension, while in other commands use only the blueprint
name.
Verification
1. To verify that the contents uploaded to RHEL image builder match your edits, list the contents
of blueprint:
2. Check whether the components and versions listed in the blueprint and their dependencies are
valid:
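A sketch of the typical export, re-import, and verification cycle for these steps, assuming the blueprint
is named BLUEPRINT-NAME and its edited definition is stored in BLUEPRINT-NAME.toml:
# composer-cli blueprints save BLUEPRINT-NAME
# composer-cli blueprints push BLUEPRINT-NAME.toml
# composer-cli blueprints show BLUEPRINT-NAME
# composer-cli blueprints depsolve BLUEPRINT-NAME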
Additional resources
1.3.5. Creating a system image with RHEL image builder in the command-line
interface
You can build a customized RHEL image by using the RHEL image builder command-line interface. For
that, you must specify a blueprint and an image type. Optionally, you can also specify a distribution. If
you do not specify a distribution, it will use the same distribution and version as the host system. The
architecture is also the same as the one on the host.
Prerequisites
You have a blueprint prepared for the image. See Creating a RHEL image builder blueprint
using the command-line interface.
Procedure
Replace BLUEPRINT-NAME with the name of the blueprint, and IMAGE-TYPE with the type of
the image. For the available values, see the output of the composer-cli compose types
command.
The compose process starts in the background and shows the composer Universally Unique
Identifier (UUID).
A finished compose shows the FINISHED status value. To identify your compose in the list, use
its UUID.
4. After the compose process is finished, download the resulting image file:
Replace UUID with the UUID value shown in the previous steps.
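The composer-cli calls for this procedure typically follow the sketch below; UUID stands for the
identifier printed by the start command:
# composer-cli compose start BLUEPRINT-NAME IMAGE-TYPE
# composer-cli compose status
# composer-cli compose image UUID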
Verification
After you create your image, you can check the image creation progress using the following commands:
Download the metadata of the image to get a .tar file of the metadata for the compose:
The command creates a .tar file that contains the logs for the image creation. If the logs are
empty, you can check the journal.
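For these verification steps, the metadata and log downloads are typically sketched as:
# composer-cli compose metadata UUID
# composer-cli compose logs UUID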
Additional resources
Blueprint manipulation
Remove a blueprint
Push (import) a blueprint file in the TOML format into RHEL image builder
Start a compose
# composer-cli help
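As a sketch of further basic subcommands referenced in this chapter but not shown above, where UUID
is a compose identifier:
# composer-cli blueprints delete BLUEPRINT-NAME
# composer-cli compose types
# composer-cli compose list
# composer-cli compose delete UUID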
Additional resources
name = "<BLUEPRINT-NAME>"
description = "<LONG FORM DESCRIPTION TEXT>"
version = "<VERSION>"
The BLUEPRINT-NAME and LONG FORM DESCRIPTION TEXT fields are the name and description for
your blueprint.
The VERSION is a version number according to the Semantic Versioning scheme, and is present only
once for the entire blueprint file.
[[groups]]
name = "group-name"
The group entry describes a group of packages to be installed into the image. Groups use the
following package categories:
Mandatory
Default
Optional
The group-name is the name of the group, for example, anaconda-tools, widget, wheel or
users. Blueprints install the mandatory and default packages. There is no mechanism for
selecting optional packages.
[[packages]]
name = "<package-name>"
version = "<package-version>"
For a specific version, use the exact version number such as 8.7.0.
NOTE
There are no differences between packages and modules in the RHEL image builder tool.
Both are treated as RPM package dependencies.
Enabling a service
Among others. You can use several image customizations within blueprints. By using the
customizations, you can add packages and groups to the image that are not available in the default
packages. To use these options, configure the customizations in the blueprint and import (push) it to
RHEL image builder.
Additional resources
Blueprint import fails after adding filesystem customization "size" (Red Hat Knowledgebase)
You can use the distro field to select the distribution to use when composing your images, or solving
dependencies in the blueprint. If you do not specify a distribution, the blueprint uses the host
distribution. In case you upgrade the host operating system, blueprints with no distribution set build
images by using the new operating system version. You cannot build an operating system image that
differs from the RHEL image builder host.
Customize the blueprint with the RHEL distribution to always build the specified RHEL image:
name = "blueprint_name"
description = "blueprint_version"
version = "0.1"
distro = "different_minor_version"
For example:
name = "tmux"
description = "tmux image with openssh"
version = "1.2.16"
distro = "rhel-9.5"
Replace "different_minor_version" to build a different minor version, for example, if you want to build a
RHEL 8.10 image, use distro = "rhel-810". On RHEL 8.10 image, you can build minor versions such as
RHEL 8.9 and earlier releases.
Customize the blueprint with package groups. The groups list describes the groups of packages that
you want to install into the image. The package groups are defined in the repository metadata. Each
group has a descriptive name that is used primarily for display in user interfaces, and an ID that is
commonly used in Kickstart files. In this case, you must use the ID to list a group. Groups have three
different ways of categorizing their packages: mandatory, default, and optional. Only mandatory and
default packages are installed in the blueprints. It is not possible to select optional packages.
The name attribute is a required string and must match exactly the package group id in the repositories.
NOTE
[[groups]]
name = "group_name"
Replace group_name with the name of the group. For example, anaconda-tools:
[[groups]]
name = "anaconda-tools"
You can customize your blueprint to embed the latest RHEL container. The containers list contains
objects with a source, and optionally, the tls-verify attribute.
The container list entries describe the container images to be embedded into the image.
source - Mandatory field. It is a reference to the container image at a registry. This example
uses the registry.access.redhat.com registry. You can specify a tag version. The default tag
version is latest.
tls-verify - Boolean field. The tls-verify boolean field controls the transport layer security. The
default value is true.
The embedded containers do not start automatically. To start them, create systemd unit files or
quadlets with the files customization.
[[containers]]
source = "registry.access.redhat.com/ubi9/ubi:latest"
name = "local-name"
tls-verify = true
[[containers]]
source = "localhost/test:latest"
local-storage = true
You can access protected container resources by using a containers-auth.json file. See Container
registry credentials.
The customizations.hostname is an optional string that you can use to configure the hostname of the
final image. If you do not set it, the blueprint uses the default hostname.
[customizations]
hostname = "baseimage"
Add a user to the image, and optionally, set their SSH key. All fields for this section are optional except
for the name.
Procedure
[[customizations.user]]
name = "USER-NAME"
description = "USER-DESCRIPTION"
password = "PASSWORD-HASH"
key = "PUBLIC-SSH-KEY"
home = "/home/USER-NAME/"
shell = "/usr/bin/bash"
groups = ["users", "wheel"]
uid = NUMBER
gid = NUMBER
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
expiredate = 12345
The GID is optional and must already exist in the image. Optionally, a package creates it, or the
blueprint creates the GID by using the [[customizations.group]] entry.
Replace PASSWORD-HASH with the actual password hash. To generate the password hash,
use a command such as:
Enter the name value and omit any lines you do not need.
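To generate the PASSWORD-HASH value mentioned above, one option is either of the following
commands; this is a sketch that relies on tools shipped with RHEL 8, and PASSWORD is a placeholder
you replace with the real password:
$ python3 -c 'import crypt; print(crypt.crypt("PASSWORD", crypt.mksalt(crypt.METHOD_SHA512)))'
$ openssl passwd -6 "PASSWORD"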
Specify a group for the resulting system image. Both the name and the gid attributes are mandatory.
[[customizations.group]]
name = "GROUP-NAME"
gid = NUMBER
[[customizations.group]]
name = "widget"
gid = 1130
You can use customizations.sshkey to set an SSH key for the existing users in the final image. Both
user and key attributes are mandatory.
[[customizations.sshkey]]
user = "root"
key = "PUBLIC-SSH-KEY"
For example:
[[customizations.sshkey]]
user = "root"
key = "SSH key for root"
NOTE
You can append arguments to the boot loader kernel command line. By default, RHEL image builder
builds a default kernel into the image. However, you can customize the kernel by configuring it in the
blueprint.
[customizations.kernel]
append = "KERNEL-OPTION"
For example:
[customizations.kernel]
name = "kernel-debug"
append = "nosmt=force"
To build a RHEL image by using the real-time kernel (kernel-rt), you need to override a repository so
that you can then build an image in which kernel-rt is correctly selected as the default kernel. Use the
.json from the /usr/share/osbuild-composer/repositories/ directory. Then, you can deploy the image
that you built to a system and use the real time kernel features.
NOTE
The real-time kernel runs on AMD64 and Intel 64 server platforms that are certified to
run Red Hat Enterprise Linux.
Prerequisites
Your system is registered and RHEL is attached to a RHEL for Real Time subscription. See
Installing RHEL for Real Time using dnf .
Procedure
# mkdir /etc/osbuild-composer/repositories/
# cp /usr/share/osbuild-composer/repositories/rhel-8.version.json /etc/osbuild-
composer/repositories
5. Confirm that the kernel-rt has been included into the .json file:
You will see the URL that you have previously configured.
name = "rt-kernel-image"
description = ""
version = "2.0.0"
modules = []
groups = []
distro = "rhel-8_version_"
[[customizations.user]]
name = "admin"
password = "admin"
groups = ["users", "wheel"]
[customizations.kernel]
name = "kernel-rt"
append = ""
8. Build your image from the blueprint you created. The following example builds a (.qcow2)
image:
9. Deploy the image that you built to the system where you want to use the real time kernel
features.
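A sketch of the push and build commands for steps 8 and 9, assuming the blueprint above is saved as
rt-kernel-image.toml and that you want a qcow2 image:
# composer-cli blueprints push rt-kernel-image.toml
# composer-cli compose start rt-kernel-image qcow2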
Verification
After booting a VM from the image, verify that the image was built with the kernel-rt correctly
selected as the default kernel.
$ cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-362.24.1..el8_version_.x86_64+rt...
You can customize your blueprint to configure the time zone and the Network Time Protocol (NTP).
Both the timezone and ntpservers attributes are optional strings. If you do not customize the time zone,
the system uses Coordinated Universal Time (UTC). If you do not set NTP servers, the system uses the
distribution defaults.
Customize the blueprint with the timezone and the ntpservers you want:
[customizations.timezone]
timezone = "TIMEZONE"
ntpservers = "NTP_SERVER"
For example:
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
NOTE
Some image types, such as Google Cloud, already have NTP servers set up. You
cannot override it because the image requires the NTP servers to boot in the
selected environment. However, you can customize the time zone in the
blueprint.
You can customize the locale settings for your resulting system image. Both the languages and the
keyboard attributes are mandatory. You can add many other languages. The first language you add is
the primary language and the other languages are secondary.
Procedure
[customizations.locale]
languages = ["LANGUAGE"]
keyboard = "KEYBOARD"
For example:
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"
To list the values supported by the languages, run the following command:
$ localectl list-locales
To list the values supported by the keyboard, run the following command:
$ localectl list-keymaps
Set the firewall for the resulting system image. By default, the firewall blocks incoming connections,
except for services that enable their ports explicitly, such as sshd.
NOTE
The Google and OpenStack templates explicitly disable the firewall for their environment.
You cannot override this behavior by setting the blueprint.
Procedure
Customize the blueprint with the following settings to open other ports and services:
[customizations.firewall]
ports = ["PORTS"]
Where ports is an optional list of strings that contain ports or a range of ports and protocols to
open. You can configure the ports by using the port:protocol format. You can configure the
port ranges by using the portA-portB:protocol format. For example:
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp", "30000-32767:tcp", "30000-
32767:udp"]
You can use numeric ports, or their names from the /etc/services file, to enable or disable port lists.
[customizations.firewall.services]
enabled = ["SERVICES"]
disabled = ["SERVICES"]
$ firewall-cmd --get-services
For example:
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
NOTE
You can control which services to enable during the boot time. Some image types already have services
enabled or disabled to ensure that the image works correctly and you cannot override this setup. The
[customizations.services] settings in the blueprint do not replace these services, but add the services
to the list of services already present in the image templates.
[customizations.services]
enabled = ["SERVICES"]
disabled = ["SERVICES"]
For example:
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
Use the partitioning_mode variable to select how to partition the disk image that you are building. You
can customize your image with the following supported modes:
auto-lvm: It uses the raw partition mode, unless there are one or more filesystem
customizations. In that case, it uses the LVM partition mode.
lvm: It always uses the LVM partition mode, even when there are no extra mountpoints.
raw: It uses raw partitions even when there are one or more mountpoints.
You can customize your blueprint with the partitioning_mode variable by using the following
customization:
[customizations]
partitioning_mode = "lvm"
You can specify a custom file system configuration in your blueprints and therefore create images with a
specific disk layout, instead of the default layout configuration. By using the non-default layout
configuration in your blueprints, you can benefit from:
Improved performance
NOTE
The OSTree systems do not support the file system customizations, because OSTree
images have their own mount rule, such as read-only. The following image types are not
supported:
image-installer
edge-installer
edge-simplified-installer
Additionally, the following image types do not support file system customizations, because these image
types do not create partitioned operating system images:
edge-commit
edge-container
tar
container
However, the following image types have support for file system customization:
simplified-installer
edge-raw-image
edge-ami
edge-vsphere
With some additional exceptions for OSTree systems, you can choose arbitrary directory names at the
root (/) level of the file system, for example: /local, /mypartition, /$PARTITION. In logical volumes,
these changes are made on top of the LVM partitioning system. The following directories are supported
on a separate logical volume: /var, /var/log, and /var/lib/containers. The following are exceptions at
root level:
For release distributions before RHEL 8.10 and 9.5, the blueprint supports the following mountpoints
and their sub-directories:
/var
/home
/opt
/srv
/usr
/app
/data
/tmp
From the RHEL 9.5 and 8.10 release distributions onward, you can specify arbitrary custom mountpoints,
except for specific paths that are reserved for the operating system.
You cannot specify arbitrary custom mountpoints on the following mountpoints and their sub-
directories:
/bin
/boot/efi
/dev
/etc
/lib
/lib64
/lost+found
/proc
/run
/sbin
/sys
/sysroot
/var/lock
/var/run
You can customize the file system in the blueprint for the /usr custom mountpoint, but its subdirectories
are not allowed.
NOTE
Customizing mount points is only supported from RHEL 8.5 distributions onward, by using
the CLI. In earlier distributions, you can only specify the root partition as a mount point
and specify the size argument as an alias for the image size. Beginning with RHEL 8.6, for
the osbuild-composer-46.1-1.el8 RPM and later version, the physical partitions are no
longer available and file system customizations create logical volumes.
If you have more than one partition in the customized image, you can create images with a customized
file system partition on LVM and resize those partitions at runtime. To do this, you can specify a
customized file system configuration in your blueprint and therefore create images with the required
disk layout. The default file system layout remains unchanged - if you use plain images without file
system customization, and cloud-init resizes the root partition.
The blueprint automatically converts the file system customization to an LVM partition.
You can use the custom file blueprint customization to create new files or to replace existing files. The
parent directory of the file you specify must exist, otherwise, the image build fails. Ensure that the
parent directory exists by specifying it in the [[customizations.directories]] customization.
WARNING
If you combine the files customizations with other blueprint customizations, it might
affect the functioning of the other customizations, or it might override the current
files customizations.
Modifying existing files. WARNING: this can override the existing content.
Set user and group ownership for the file you are creating.
/etc/fstab
/etc/shadow
/etc/passwd
/etc/group
You can create customized files and directories in your image, by using the [[customizations.files]] and
the [[customizations.directories]] blueprint customizations. You can use these customizations only in
the /etc directory.
NOTE
These blueprint customizations are supported by all image types, except the image types
that deploy OSTree commits, such as edge-raw-image, edge-installer, and edge-
simplified-installer.
WARNING
If you use the customizations.directories with a directory path which already exists
in the image with mode, user or group already set, the image build fails to prevent
changing the ownership or permissions of the existing directory.
Set user and group ownership for the directory you are creating.
Modifying existing files. WARNING: this can override the existing content.
Set user and group ownership for the file you are creating.
NOTE
/etc/fstab
/etc/shadow
/etc/passwd
/etc/group
[[customizations.filesystem]]
mountpoint = "MOUNTPOINT"
minsize = MINIMUM-PARTITION-SIZE
The MINIMUM-PARTITION-SIZE value has no default size format. The blueprint customization
supports the following values and units: kB to TB and KiB to TiB. For example, you can define
the mount point size in bytes:
[[customizations.filesystem]]
mountpoint = "/var"
minsize = 1073741824
[[customizations.filesystem]]
mountpoint = "/opt"
minsize = "20 GiB"
[[customizations.filesystem]]
mountpoint = "/boot"
minsize = "1 GiB"
[[customizations.filesystem]]
mountpoint = "/var"
minsize = 2147483648
Create customized directories under the /etc directory for your image by using
[[customizations.directories]]:
[[customizations.directories]]
path = "/etc/directory_name"
mode = "octal_access_permission"
user = "user_string_or_integer"
group = "group_string_or_integer"
ensure_parents = boolean
path - Mandatory - enter the path to the directory that you want to create. It must be an
absolute path under the /etc directory.
mode - Optional - set the access permission on the directory, in the octal format. If you do
not specify a permission, it defaults to 0755. The leading zero is optional.
user - Optional - set a user as the owner of the directory. If you do not specify a user, it
defaults to root. You can specify the user as a string or as an integer.
group - Optional - set a group as the owner of the directory. If you do not specify a group, it
defaults to root. You can specify the group as a string or as an integer.
ensure_parents - Optional - specify whether to create parent directories as needed. If you do
not specify a value, it defaults to false.
Create customized files under the /etc directory for your image by using
[[customizations.files]]:
[[customizations.files]]
path = "/etc/file_name"
mode = "octal_access_permission"
user = "user_string_or_integer"
group = "group_string_or_integer"
data = "Hello world!"
path - Mandatory - enter the path to the file that you want to create. It must be an absolute
path under the /etc directory.
mode - Optional - set the access permission on the file, in the octal format. If you do not
specify a permission, it defaults to 0644. The leading zero is optional.
user - Optional - set a user as the owner of the file. If you do not specify a user, it defaults
to root. You can specify the user as a string or as an integer.
group - Optional - set a group as the owner of the file. If you do not specify a group, it
defaults to root. You can specify the group as a string or as an integer.
data - Optional - specify the content of a plain text file. If you do not specify any content, the
customization creates an empty file.
NOTE
When you add additional components to your blueprint, ensure that the packages in the
components you added do not conflict with any other package components. Otherwise,
the system fails to solve dependencies and creating your customized image fails. You can
check that there are no conflicts between the packages by running the following command:
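For example, assuming a blueprint named BLUEPRINT-NAME, a dependency check of this kind can look as follows; substitute your own blueprint name:
# composer-cli blueprints depsolve BLUEPRINT-NAME
If dependency resolution fails, the command reports the conflicting packages.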
By default, RHEL image builder uses the Core group as the base list of packages.
Additional resources
For example, the ami image type enables the sshd, chronyd, and cloud-init services by default. If these
services are not enabled, the custom image does not boot.
The qcow2 image type enables the cloud-init service by default.
Note: You can customize which services to enable during the system boot. However, the customization
does not override services enabled by default for the mentioned image types.
Additional resources
1.4.1. Accessing the RHEL image builder dashboard in the RHEL web console
With the cockpit-composer plugin for the RHEL web console, you can manage image builder blueprints
and composes using a graphical interface.
Prerequisites
Procedure
3. To display the RHEL image builder controls, click the Image Builder button, in the upper-left
corner of the window.
The RHEL image builder dashboard opens, listing existing blueprints, if any.
Additional resources
NOTE
These blueprint customizations are available for Red Hat Enterprise Linux 9.2 or later
versions and Red Hat Enterprise Linux 8.8 or later versions.
Prerequisites
You have opened the RHEL image builder app from the web console in a browser. See
Accessing RHEL image builder GUI in the RHEL web console .
Procedure
b. Click Next.
c. Repeat the previous steps to search and include as many packages as you want.
d. Click Next.
4. On the Kernel page, enter a kernel name and the command-line arguments.
5. On the File system page, you can select Use automatic partitioning or Manually configure
partitions for your image file system. To manually configure the partitions, complete the
following steps:
i. For the Mount point field, select one of the following mount point type options:
/app
/boot
/data
/home
/opt
/srv
/usr
/usr/local
/var
You can also add an additional path to the Mount point, such as /tmp. For example:
/var as a prefix and /tmp as an additional path results in /var/tmp.
NOTE
Depending on the Mount point type you choose, the file system type
changes to xfs.
ii. For the Minimum size partition field of the file system, enter the needed minimum
partition size. In the Minimum size dropdown menu, you can use common size units such
as GiB, MiB, or KiB. The default unit is GiB.
NOTE
Minimum size means that RHEL image builder can still increase the
partition sizes, in case they are too small to create a working image.
c. To add more partitions, click the Add partition button. If you see the following error
message: Duplicate partitions: Only one partition at each mount point can be created.,
you can:
ii. Choose a new mount point for the partition you want to create.
a. Enter the service names you want to enable or disable, separating them by a comma, by
space, or by pressing the Enter key. Click Next.
a. Enter the Ports, and the firewall services you want to enable or disable.
b. Click the Add zone button to manage your firewall rules for each zone independently. Click
Next.
b. Enter a Username, a Password, and an SSH key. You can also mark the user as a privileged
user, by clicking the Server administrator checkbox. Click Next.
i. Enter a Group name and a Group ID. You can add more groups. Click Next.
a. On the Timezone field, enter the time zone you want to add to your system image. For
example, add the following time zone format: "US/Eastern".
If you do not set a time zone, the system uses Universal Time, Coordinated (UTC) as
default.
a. On the Keyboard search field, enter the keyboard layout you want to add to your system
image. For example: "us".
b. On the Languages search field, enter the language you want to add to your system
image. For example: ["en_US.UTF-8"]. Click Next.
a. On the Hostname field, enter the hostname you want to add to your system image. If you do
not add a hostname, the operating system determines the hostname.
b. Mandatory only for the Simplified Installer image: On the Installation Devices field, enter a
valid node for your system image. For example: /dev/sda1. Click Next.
14. Mandatory only when building images for FDO: On the FIDO device onboarding page,
complete the following steps:
i. On the DIUN public key insecure field, enter the insecure public key.
ii. On the DIUN public key hash field, enter the public key hash.
iii. On the DIUN public key root certs field, enter the public key root certs. Click Next.
a. On the Datastream field, enter the datastream remediation instructions you want to add to
your system image.
b. On the Profile ID field, enter the profile_id security profile you want to add to your system
image. Click Next.
16. Mandatory only when building images that use Ignition: On the Ignition page, complete the
following steps:
a. On the Firstboot URL field, enter the URL of the Ignition configuration you want to add to your system image.
b. On the Embedded Data field, drag or upload your file. Click Next.
17. On the Review page, review the details about the blueprint. Click Create.
1.4.3. Importing a blueprint in the RHEL image builder web console interface
You can import and use an already existing blueprint. The system automatically resolves all the
dependencies.
Prerequisites
You have opened the RHEL image builder app from the web console in a browser.
You have a blueprint that you want to import to use in the RHEL image builder web console
interface.
Procedure
1. On the RHEL image builder dashboard, click Import blueprint. The Import blueprint wizard
opens.
2. From the Upload field, either drag or upload an existing blueprint. This blueprint can be in either
TOML or JSON format.
Verification
When you click the blueprint you imported, you have access to a dashboard with all the customizations
for the blueprint that you imported.
To verify the packages that have been selected for the imported blueprint, navigate to the
Packages tab.
To list all the package dependencies, click All. The list is searchable and can be ordered.
Next steps
From the Customizations dashboard, click the customization that you want to change.
Optionally, you can click Edit blueprint to navigate to all the available customization
options.
Additional resources
Creating a system image by using RHEL image builder in the web console interface
1.4.4. Exporting a blueprint from the RHEL image builder web console interface
You can export a blueprint to use the customizations in another system. You can export the blueprint in
the TOML or in the JSON format. Both formats work on the CLI and also in the API interface.
Prerequisites
You have opened the RHEL image builder app from the web console in a browser.
Procedure
1. On the image builder dashboard, select the blueprint you want to export.
3. Click the Export button to download the blueprint as a file or click the Copy button to copy the
blueprint to the clipboard.
Verification
Open the exported blueprint in a text editor to inspect and review it.
1.4.5. Creating a system image by using RHEL image builder in the web console
interface
You can create a customized RHEL system image from a blueprint by completing the following steps.
Prerequisites
You opened the RHEL image builder app from the web console in a browser.
Procedure
2. On the blueprint table, find the blueprint for which you want to build an image.
3. On the right side of the chosen blueprint, click Create Image. The Create image dialog wizard
opens.
a. From the Select a blueprint list, select the blueprint you want.
b. From the Image output type list, select the image output type you want.
Depending on the image type you select, you need to add further details.
5. Click Next.
6. On the Review page, review the details about the image creation and click Create image.
Verification
After the image finishes building, you can:
On the RHEL image builder dashboard, click the Node options (⋮) menu and select
Download image.
Download the logs of the image to inspect the elements and verify if any issue is found.
On the RHEL image builder dashboard, click the Node options (⋮) menu and select
Download logs.
Before uploading an AWS AMI image, you must configure a system for uploading the images.
Prerequisites
You must have an Access Key ID configured in the AWS IAM account manager.
Procedure
3. Set your profile. The terminal prompts you to provide your credentials, region and output
format:
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
$ BUCKET=bucketname
$ aws s3 mb s3://$BUCKET
Replace bucketname with the actual bucket name. It must be a globally unique name. As a
result, your bucket is created.
5. To grant permission to access the S3 bucket, create a vmimport S3 Role in the AWS Identity
and Access Management (IAM), if you have not already done so in the past:
a. Create a trust-policy.json file with the trust policy configuration, in the JSON format. For
example:
{
"Version": "2022-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "vmie.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:Externalid": "vmimport"
}
}
}]
}
b. Create a role-policy.json file with the role policy configuration, in the JSON format. For
example:
{
   "Version": "2012-10-17",
   "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::%s", "arn:aws:s3:::%s/*"]
   }, {
      "Effect": "Allow",
      "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
      "Resource": "*"
   }]
}
Each %s placeholder stands for your bucket name. For example, if you generate the file with printf, pass $BUCKET twice as the final arguments so that both placeholders expand to the bucket name.
c. Create a role for your Amazon Web Services account, by using the trust-policy.json file:
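A typical invocation, assuming the role name vmimport and the trust-policy.json file in the current directory:
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json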
Additional resources
You can use RHEL image builder to build ami images and manually upload them directly to Amazon
AWS Cloud service provider, by using the CLI.
Prerequisites
You have an Access Key ID configured in the AWS IAM account manager.
Procedure
1. Using the text editor, create a configuration file with the following content:
provider = "aws"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "IMAGE_KEY"
Replace values in the fields with your credentials for accessKeyID, secretAccessKey, bucket,
and region. The IMAGE_KEY value is the name of your VM Image to be uploaded to EC2.
Replace configuration-file.toml with the name of the configuration file of the cloud provider.
NOTE
You must have the correct AWS Identity and Access Management (IAM)
settings for the bucket you are going to send your customized image to. You
have to set up a policy for your bucket before you are able to upload images
to it.
After the image upload process is complete, you can see the "FINISHED" status.
Verification
To confirm that the image upload was successful:
1. Access EC2 on the menu and select the correct region in the AWS console. The image must
have the available status, to indicate that it was successfully uploaded.
Additional Resources
1.5.1.3. Creating and automatically uploading images to the AWS Cloud AMI
You can create a (.raw) image by using RHEL image builder, and choose to check the Upload to AWS
checkbox to automatically push the output image that you create directly to the Amazon AWS Cloud
AMI service provider.
Prerequisites
You must have root or wheel group user access to the system.
You have opened the RHEL image builder interface of the RHEL web console in a browser.
You have created a blueprint. See Creating a blueprint in the web console interface .
You must have an Access Key ID configured in the AWS IAM account manager.
Procedure
1. In the RHEL image builder dashboard, click the blueprint name that you previously created.
a. From the Type drop-down menu list, select Amazon Machine Image Disk (.raw).
b. Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click
Next.
c. To authenticate your access to AWS, type your AWS access key ID and AWS secret
access key in the corresponding fields. Click Next.
NOTE
You can view your AWS secret access key only when you create a new Access
Key ID. If you do not know your Secret Key, generate a new Access Key ID.
d. Type the name of the image in the Image name field, type the Amazon bucket name in the
Amazon S3 bucket name field, and enter the AWS region for the bucket you are going
to add your customized image to. Click Next.
NOTE
You must have the correct IAM settings for the bucket you are going to send
your customized image to. This procedure uses the IAM Import and Export, so
you have to set up a policy for your bucket before you are able to upload
images to it. For more information, see Required Permissions for IAM Users .
4. A pop-up on the upper right informs you of the saving progress. It also informs you that the image
creation has been initiated, shows the progress of this image creation, and reports the subsequent
upload to the AWS Cloud.
After the process is complete, you can see the Image build complete status.
a. On the AWS console dashboard menu, choose the correct region. The image must have the
Available status, to indicate that it is uploaded.
6. A new window opens. Choose an instance type according to the resources you need to start your
image. Click Review and Launch.
7. Review your instance start details. You can edit each section if you need to make any changes.
Click Launch.
8. Before you start the instance, select a public key to access it.
You can either use the key pair you already have or you can create a new key pair.
Follow the next steps to create a new key pair in EC2 and attach it to the new instance.
a. From the drop-down menu list, select Create a new key pair.
b. Enter a name for the new key pair. It generates a new key pair.
c. Click Download Key Pair to save the new key pair on your local system.
10. After the instance status is running, the Connect button becomes available.
11. Click Connect. A window appears with instructions on how to connect by using SSH.
a. Select A standalone SSH client as the preferred connection method to and open a
terminal.
b. In the location where you store your private key, ensure that your key is not publicly viewable for SSH
to work. To do so, run the command:
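A typical command for this step, assuming a key file named your-key-pair.pem; it restricts the permissions so that only you can read the key:
$ chmod 400 your-key-pair.pem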
Verification
Check if you are able to perform any action while connected to your instance by using SSH.
Additional resources
To create a VHD image that you can manually upload to Microsoft Azure cloud, you can use RHEL
image builder.
Prerequisites
You must have a Microsoft Azure resource group and storage account.
Procedure
2. Create a local azure-cli.repo repository with the following information. Save the azure-cli.repo
repository under /etc/yum.repos.d/:
[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/vscode
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
# yumdownloader azure-cli
# rpm -ivh --nodeps azure-cli-2.0.64-1.el7.x86_64.rpm
NOTE
The downloaded version of the Microsoft Azure CLI package can vary depending
on the current available version.
$ az login
The terminal shows the following message: Note, we have launched a browser for you to
login. For old experience with device code, use "az login --use-device-code". Then, the
terminal opens a browser with a link to https://microsoft.com/devicelogin from where you can
log in.
NOTE
If you are running a remote (SSH) session, the login page link will not open in the
browser. In this case, you can copy the link to a browser and login to authenticate
your remote session. To sign in, use a web browser to open the page
https://microsoft.com/devicelogin and enter the device code to authenticate.
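A typical command for this step, listing the keys of the storage account; both placeholder names are explained below:
$ az storage account keys list --resource-group <resource-group-name> --account-name <storage-account-name>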
Replace resource-group-name with the name of your Microsoft Azure resource group and storage-
account-name with the name of your Microsoft Azure storage account.
NOTE
You can list the available resources using the following command:
$ az resource list
Additional resources
After you have created your customized VHD image, you can manually upload it to the Microsoft Azure
cloud.
Prerequisites
Your system must be set up for uploading Microsoft Azure VHD images. See Preparing to
upload Microsoft Azure VHD images.
You must have a Microsoft Azure VHD image created by RHEL image builder.
In the GUI, use the Azure Disk Image (.vhd) image type.
Procedure
1. Push the image to Microsoft Azure and create an instance from it:
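A sketch of the upload, assuming placeholder names for the storage account, container, and VHD file:
$ az storage blob upload --account-name <storage-account-name> --container-name <container-name> --file <image-name>.vhd --name <image-name>.vhd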
2. After the upload to the Microsoft Azure Blob storage completes, create a Microsoft Azure
image from it:
NOTE
Because the images that you create with RHEL image builder generate hybrid
images that support both the V1 (BIOS) and V2 (UEFI) instance types, you
can specify the --hyper-v-generation argument. The default instance type is V1.
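A sketch of the image creation, assuming placeholder resource names and the blob URL of the uploaded VHD; the --hyper-v-generation option is shown only to illustrate the preceding note:
$ az image create --resource-group <resource-group-name> --name <image-name> --os-type linux --source <blob-url> --hyper-v-generation V2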
Verification
1. Create an instance either with the Microsoft Azure portal, or a command similar to the following:
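A sketch of such a command, assuming placeholder names and the azure-user administrator account mentioned in the next step:
$ az vm create --resource-group <resource-group-name> --name <vm-name> --image <image-name> --admin-username azure-user --generate-ssh-keys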
2. Use your private key via SSH to access the resulting instance. Log in as azure-user. This
username was set in the previous step.
Additional Resources
Composing an image for the .vhd format fails (Red Hat Knowledgebase)
1.5.2.3. Creating and automatically uploading VHD images to Microsoft Azure cloud
You can create .vhd images by using RHEL image builder that will be automatically uploaded to a Blob
Storage of the Microsoft Azure Cloud service provider.
Prerequisites
You have access to the RHEL image builder interface of the RHEL web console.
You created a blueprint. See Creating a RHEL image builder blueprint in the web console
interface.
Procedure
1. In the RHEL image builder dashboard, select the blueprint you want to use.
a. Select Microsoft Azure (.vhd) from the Type drop-down menu list.
b. Check the Upload to Azure checkbox to upload your image to the Microsoft Azure Cloud.
i. Your Storage account name. You can find it on the Storage account page, in the
Microsoft Azure portal.
ii. Your Storage access key: You can find it on the Access Key Storage page.
ii. The Storage container. It is the blob container to which you will upload the image. Find
it under the Blob service section, in the Microsoft Azure portal.
5. On the Review page, click Create. The RHEL image builder and upload processes start.
Access the image you pushed into Microsoft Azure Cloud.
7. In the search bar, type "storage account" and click Storage accounts from the list.
8. On the search bar, type "Images" and select the first entry under Services. You are redirected
to the Image dashboard.
10. Find the container you created. Inside the container is the .vhd file you created and pushed by
using RHEL image builder.
Verification
a. In the search bar, type images account and click Images from the list.
b. Click +Create.
c. From the dropdown list, choose the resource group you used earlier.
g. Under Storage Blob, click Browse and click through the storage accounts and container
until you reach your VHD file.
j. Click Review + Create and then Create. Wait a few moments for the image creation.
a. Click Go to resource.
e. Click Review + Create and then Create. You can see the deployment progress.
After the deployment finishes, click the virtual machine name to retrieve the public IP
address of the instance to connect by using SSH.
Additional resources
Help + support
1.5.2.4. Uploading VMDK images and creating a RHEL virtual machine in vSphere
With RHEL image builder, you can create customized VMware vSphere system images, either in the
Open virtualization format (.ova) or in the Virtual disk (.vmdk) format. You can upload these images to
the VMware vSphere client. You can upload the .vmdk or .ova image to VMware vSphere by using the
govc import.vmdk CLI tool. The .vmdk image you create has the cloud-init package installed, and you
can use it to provision users by using user data, for example.
NOTE
Uploading vmdk images by using the VMware vSphere GUI is not supported.
Prerequisites
You created a VMware vSphere image either in the .ova or .vmdk format by using RHEL image
builder and downloaded it to your host system.
You installed and configured the govc CLI tool, to be able to use the import.vmdk command.
Procedure
1. Configure the following values in the user environment with the GOVC environment variables:
GOVC_URL
GOVC_DATACENTER
GOVC_FOLDER
GOVC_DATASTORE
GOVC_RESOURCE_POOL
GOVC_NETWORK
2. Navigate to the directory where you downloaded your VMware vSphere image.
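A sketch of the upload with govc import.vmdk, assuming the image file name composer-api.vmdk and a datastore folder named foldername (the same names that the vm.create example below uses):
$ govc import.vmdk ./composer-api.vmdk foldername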
govc vm.create \
-net.adapter=vmxnet3 \
-m=4096 -c=2 -g=rhel8_64Guest \
-firmware=efi -disk="foldername/composer-api.vmdk" \
-disk.controller=scsi -on=false \
vmname
e. Use SSH to log in to the VM, using the username and password you specified in your
blueprint:
$ ssh admin@<ip_address_of_the_vm>
NOTE
If you copied the .vmdk image from your local host to the destination using
the govc datastore.upload command, using the resulting image is not
supported. There is no option to use the import.vmdk command in the
vSphere GUI and as a result, the vSphere GUI does not support the direct
upload. As a consequence, the .vmdk image is not usable from the vSphere
GUI.
1.5.2.5. Creating and automatically uploading VMDK images to vSphere using image builder
GUI
You can build VMware images by using the RHEL image builder GUI tool and automatically push the
images directly to your vSphere instance. This avoids the need to download the image file and push it
manually. The .vmdk image you create has the cloud-init package installed, and you can use it to provision
users by using user data, for example. To build .vmdk images by using RHEL image builder and push
them directly to a vSphere instance, follow the steps:
Prerequisites
You have created a blueprint. See Creating a RHEL image builder blueprint in the web console
interface.
Procedure
a. From the dropdown menu, select the Type: VMware vSphere (.vmdk).
b. Check the Upload to VMware checkbox to upload your image to the vSphere.
c. Optional: Set the size of the image you want to instantiate. The minimal default size is 2 GB.
d. Click Next.
4. In the Upload to VMware window, under Authentication, enter the following details:
5. In the Upload to VMware window, under Destination, enter the following details about the
image upload destination:
f. Click Next.
6. In the Review window, review the details of the image creation and click Finish.
You can click Back to modify any incorrect detail.
RHEL image builder adds the compose of a RHEL vSphere image to the queue, and creates and
uploads the image to the Cluster on the vSphere instance you specified.
NOTE
The image build and upload processes take a few minutes to complete.
After the process is complete, you can see the Image build complete status.
Verification
After the image upload completes successfully, you can create a Virtual Machine (VM) from the
image you uploaded and log in to it. To do so:
2. Search for the image in the Cluster on the vSphere instance you specified.
c. Select a computer resource: choose a destination computer resource for this operation.
f. Select a guest operating system: For example, select Linux and Red Hat Fedora (64-bit) .
g. Customize hardware: When creating a VM, on the Device Configuration button on the
upper right, delete the default New Hard Disk and use the drop-down to select an Existing
Hard Disk disk image:
h. Ready to complete: Review the details and click Finish to create the image.
b. Click the Start button from the panel. A new window appears, showing the VM image
loading.
d. You can verify if the packages you added to the blueprint are installed. For example:
Additional resources
With RHEL image builder, you can build a gce image, provide credentials for your user or GCP service
account, and then upload the gce image directly to the GCP environment.
1.5.3.1.1. Configuring and uploading a gce image to GCP by using the CLI
Set up a configuration file with credentials to upload your gce image to GCP by using the RHEL image
builder CLI.
WARNING
You cannot manually import a gce image to GCP, because the image will not boot.
You must use either gcloud or RHEL image builder to upload it.
Prerequisites
You have a valid Google account and credentials to upload your image to GCP. The credentials
can be from a user account or a service account. The account associated with the credentials
must have at least the following IAM roles assigned:
Procedure
1. Use a text editor to create a gcp-config.toml configuration file with the following content:
provider = "gcp"
[settings]
bucket = "GCP_BUCKET"
region = "GCP_STORAGE_REGION"
object = "OBJECT_KEY"
credentials = "GCP_CREDENTIALS"
OBJECT_KEY is the name of an intermediate storage object. It must not exist before the
upload, and it is deleted when the upload process is done. If the object name does not end
with .tar.gz, the extension is automatically added to the object name.
2. Retrieve the GCP_CREDENTIALS from the JSON file downloaded from GCP.
3. Create a compose with an additional image name and cloud provider profile:
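A sketch of the compose command, assuming a blueprint named BLUEPRINT-NAME, an image name of IMAGE_KEY, and the gcp-config.toml file created earlier:
# composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml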
The image build, upload, and cloud registration processes can take up to ten minutes to
complete.
Verification
Additional resources
1.5.3.1.2. How RHEL image builder sorts the authentication order of different GCP credentials
You can use several different types of credentials with RHEL image builder to authenticate with GCP. If
RHEL image builder configuration is set to authenticate with GCP using multiple sets of credentials, it
uses the credentials in the following order of preference:
3. Application Default Credentials from the Google GCP SDK library, which tries to
automatically find a way to authenticate by using the following options:
b. Application Default Credentials tries to authenticate by using the service account attached
to the resource that is running the code. For example, Google Compute Engine VM.
NOTE
You must use the GCP credentials to determine which GCP project to
upload the image to. Therefore, unless you want to upload all of your images
to the same GCP project, you always must specify the credentials in the
gcp-config.toml configuration file with the composer-cli command.
You can specify GCP authentication credentials in the upload target configuration gcp-config.toml file.
Use a Base64-encoded scheme of the Google account credentials JSON file to save time.
Procedure
1. Get the encoded content of the Google account credentials file with the path stored in
GOOGLE_APPLICATION_CREDENTIALS environment variable, by running the following
command:
$ base64 -w 0 "${GOOGLE_APPLICATION_CREDENTIALS}"
provider = "gcp"
[settings]
provider = "gcp"
[settings]
credentials = "GCP_CREDENTIALS"
You can configure GCP authentication credentials to be used for GCP globally for all image builds. This
way, if you want to import images to the same GCP project, you can use the same credentials for all
image uploads to GCP.
Procedure
[gcp]
credentials = "PATH_TO_GCP_ACCOUNT_CREDENTIALS"
With RHEL image builder, build customized images and automatically push them directly to your Oracle
Cloud Infrastructure (OCI) instance. Then, you can start an image instance from the OCI dashboard.
Prerequisites
Procedure
1. Open the RHEL image builder interface of the web console in a browser.
3. On the Details page, enter a name for the blueprint, and optionally, a description. Click Next.
4. On the Packages page, select the components and packages that you want to include in the
image. Click Next.
5. On the Customizations page, configure the customizations that you want for your blueprint.
Click Next.
7. To create an image, click Create Image. The Create image wizard opens.
a. From the "Select a blueprint" drop-down menu, select the blueprint you want.
b. From the "Image output type" drop-down menu, select Oracle Cloud Infrastructure
(.qcow2).
c. Check the "Upload OCI checkbox to upload your image to the OCI.
9. On the Upload to OCI - Authentication page, enter the following mandatory details:
a. User OCID: you can find it in the Console on the page showing the user’s details.
b. Private key
10. On the Upload to OCI - Destination page, enter the following mandatory details and click
Next.
b. OCI bucket
c. Bucket namespace
d. Bucket region
e. Bucket compartment
f. Bucket tenancy
RHEL image builder adds the compose of a RHEL .qcow2 image to the queue.
Verification
2. Select the Compartment you specified for the image and locate the image in the Import
image table.
Additional resources
With the RHEL image builder tool, you can create customized .qcow2 images that are suitable for
uploading to OpenStack cloud deployments, and starting instances there. RHEL image builder creates
images in the QCOW2 format, but with further changes specific to OpenStack.
WARNING
Do not mistake the generic QCOW2 image type output format you create by using
RHEL image builder with the OpenStack image type, which is also in the QCOW2
format, but contains further changes specific to OpenStack.
Prerequisites
Procedure
After the image build finishes, you can download the image.
c. From the Format dropdown list, select the QCOW2 - QEMU Emulator.
c. On the Details page, enter a name for the instance. Click Next.
d. On the Source page, select the name of the image you uploaded. Click Next.
e. On the Flavor page, select the machine resources that best fit your needs. Click Launch.
9. You can run the image instance using any mechanism (CLI or OpenStack web UI) from the
image. Use your private key via SSH to access the resulting instance. Log in as cloud-user.
1.5.6. Preparing and uploading customized RHEL images to the Alibaba Cloud
You can upload customized .ami images that you created by using RHEL image builder to the Alibaba
Cloud.
To deploy a customized RHEL image to the Alibaba Cloud, first verify the customized
image. The image needs a specific configuration to boot successfully, because Alibaba Cloud requires
custom images to meet certain requirements before you use them.
NOTE
RHEL image builder generates images that conform to Alibaba’s requirements. However,
Red Hat recommends also using the Alibaba image_check tool to verify the format
compliance of your image.
Prerequisites
You must have created an Alibaba image by using RHEL image builder.
Procedure
1. Connect to the system containing the image that you want to check by using the Alibaba
image_check tool.
$ curl -O https://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/attach/73848/cn_zh/1557459863884/image_check
# chmod +x image_check
# ./image_check
The tool verifies the system configuration and generates a report that is displayed on your
screen. The image_check tool saves this report in the same folder where the image compliance
tool is running.
Troubleshooting
If any of the Detection Items fail, follow the instructions in the terminal to correct it.
Additional resources
You can upload a customized AMI image you created by using RHEL image builder to the Object
Storage Service (OSS).
Prerequisites
Your system is set up for uploading Alibaba images. See Preparing for uploading images to
Alibaba.
Procedure
2. In the Bucket menu on the left, select the bucket to which you want to upload an image.
4. Click Upload. A dialog window opens on the right side. Configure the following:
Upload To: Choose to upload the file to the Current directory or to a Specified directory.
5. Click Upload.
7. Click Open.
Additional resources
Upload an object.
Importing images.
To import a customized Alibaba RHEL image that you created by using RHEL image builder to the
Elastic Compute Service (ECS), follow the steps:
Prerequisites
Your system is set up for uploading Alibaba images. See Preparing for uploading images to
Alibaba.
You have uploaded the image to Object Storage Service (OSS). See Uploading images to
Alibaba.
Procedure
ii. On the upper right side, click Import Image. A dialog window opens.
iii. Confirm that you have set up the correct region where the image is located. Enter the
following information:
b. Image Name
c. Operating System
e. System Architecture
h. Image Description
3. Click the Details link on the right for the appropriate image.
A window appears on the right side of the screen, showing image details. The OSS object
address is in the URL box.
4. Click OK.
NOTE
The importing process time can vary depending on the image size.
Additional resources
Upload an object.
You can create instances of a customized RHEL image by using the Alibaba ECS Console.
Prerequisites
You have successfully imported your image to ECS Console. See Importing images to Alibaba .
Procedure
3. In the upper-right corner, click Create Instance. You are redirected to a new window.
4. Complete all the required information. See Creating an instance by using the wizard for more
details.
NOTE
You can see the option Create Order instead of Create Instance, depending on
your subscription.
As a result, you have an active instance ready for deployment from the Alibaba ECS Console.
Additional resources
CHAPTER 5. SECURING RHEL DURING AND RIGHT AFTER INSTALLATION
To ensure separation and protection of data on bare-metal installations, create separate partitions for
the /boot, /, /home, /tmp, and /var/tmp/ directories:
/boot
This partition is the first partition that is read by the system during boot up. The boot loader and
kernel images that are used to boot your system into Red Hat Enterprise Linux 8 are stored in this
partition. This partition should not be encrypted. If this partition is included in / and that partition is
encrypted or otherwise becomes unavailable, then your system is not able to boot.
/home
When user data (/home) is stored in / instead of in a separate partition, the partition can fill up,
causing the operating system to become unstable. Also, when upgrading your system to the next
version of Red Hat Enterprise Linux 8, it is a lot easier when you can keep your data in the /home
partition, as it is not overwritten during installation. If the root partition (/) becomes corrupt, your
data could be lost forever. By using a separate partition, there is slightly more protection against data
loss. You can also target this partition for frequent backups.
/tmp and /var/tmp/
Both the /tmp and /var/tmp/ directories are used to store data that does not need to be stored for a
long period of time. However, if a lot of data floods one of these directories it can consume all of your
storage space. If this happens and these directories are stored within / then your system could
become unstable and crash. For this reason, moving these directories into their own partitions is a
good idea.
For virtual machines or cloud instances, the separate /boot, /home, /tmp, and /var/tmp partitions are
optional because you can increase the virtual disk size and the / partition if it begins to fill up. Set up
monitoring to regularly check the / partition usage so that it does not fill up before you increase the
virtual disk size accordingly.
NOTE
During the installation process, you have an option to encrypt partitions. You must supply
a passphrase. This passphrase serves as a key to unlock the bulk encryption key, which is
used to secure the partition’s data.
When installing a potentially vulnerable operating system, always limit exposure only to the closest
necessary network zone. The safest choice is the “no network” zone, which means to leave your machine
disconnected during the installation process. In some cases, a LAN or intranet connection is sufficient
while the Internet connection is the riskiest. To follow the best security practices, choose the closest
zone with your repository while installing Red Hat Enterprise Linux 8 from a network.
After installation, update your system to apply the latest security fixes:
# yum update
Even though the firewall service, firewalld, is automatically enabled with the installation of
Red Hat Enterprise Linux, it might be explicitly disabled, for example, in the Kickstart
configuration. In such a case, re-enable the firewall.
To start firewalld, enter the following commands as root:
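The standard commands to start and enable the firewall are:
# systemctl start firewalld
# systemctl enable firewalld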
To enhance security, disable services you do not need. For example, if no printers are installed
on your computer, disable the cups service by using the following command:
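For example; disabling the service prevents it from starting at boot:
# systemctl disable cups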
Prerequisites
Procedure
2. In the Overview tab find the System information field and click View hardware details.
4. In the CPU Security Toggles table, turn on the Disable simultaneous multithreading (nosmt)
option.
Additional resources
DEFAULT
The default system-wide cryptographic policy level offers secure settings for current threat models.
It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and
Diffie-Hellman parameters are accepted if they are at least 2048 bits long.
LEGACY
Ensures maximum compatibility with Red Hat Enterprise Linux 5 and earlier; it is less secure due to an
increased attack surface. In addition to the DEFAULT level algorithms and protocols, it includes
support for the TLS 1.0 and 1.1 protocols. The algorithms DSA, 3DES, and RC4 are allowed, while RSA
keys and Diffie-Hellman parameters are accepted if they are at least 1023 bits long.
FUTURE
A stricter forward-looking security level intended for testing a possible future policy. This policy does
not allow the use of SHA-1 in signature algorithms. It allows the TLS 1.2 and 1.3 protocols, as well as
the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are
at least 3072 bits long. If your system communicates on the public internet, you might face
interoperability problems.
IMPORTANT
Because a cryptographic key used by a certificate on the Customer Portal API does
not meet the requirements by the FUTURE system-wide cryptographic policy, the
redhat-support-tool utility does not work with this policy level at the moment.
To work around this problem, use the DEFAULT cryptographic policy while connecting
to the Customer Portal API.
FIPS
Conforms with the FIPS 140 requirements. The fips-mode-setup tool, which switches the RHEL
system into FIPS mode, uses this policy internally. Switching to the FIPS policy does not guarantee
compliance with the FIPS 140 standard. You also must re-generate all cryptographic keys after you
set the system to FIPS mode. This is not possible in many scenarios.
RHEL also provides the FIPS:OSPP system-wide subpolicy, which contains further restrictions for
cryptographic algorithms required by the Common Criteria (CC) certification. The system becomes
less interoperable after you set this subpolicy. For example, you cannot use RSA and DH keys
shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also
prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you
cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP,
communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD
accounts might not be able to authenticate.
NOTE
Your system is not CC-compliant after you set the FIPS:OSPP cryptographic
subpolicy. The only correct way to make your RHEL system compliant with the CC
standard is by following the guidance provided in the cc-config package. See the
Common Criteria section in the Compliance Activities and Government Standards
Knowledgebase article for a list of certified RHEL versions, validation reports, and
links to CC guides.
Red Hat continuously adjusts all policy levels so that all libraries provide secure defaults, except when
using the LEGACY policy. Even though the LEGACY profile does not provide secure defaults, it does
not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or
acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux.
Such changes reflect new security standards and new security research. If you must ensure
interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, you should
opt out of the system-wide cryptographic policies for components that interact with that system, or
re-enable specific algorithms by using custom cryptographic policies.
The specific algorithms and ciphers described as allowed in the policy levels are available only if an
application supports them:
Table 6.1. Cipher suites and protocols enabled in the cryptographic policies
         LEGACY   DEFAULT   FIPS   FUTURE
IKEv1    no       no        no     no
3DES     yes      no        no     no
RC4      yes      no        no     no
DSA      yes      no        no     no
[a] You can use only Diffie-Hellman groups defined in RFC 7919 and RFC 3526.
[b] CBC ciphers are disabled for TLS. In a non-TLS scenario, AES-128-CBC is disabled but AES-256-CBC is
enabled. To disable also AES-256-CBC, apply a custom subpolicy.
Additional resources
Prerequisites
Procedure
$ update-crypto-policies --show
DEFAULT
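Then switch to the policy you want; the typical form of the command, run as root, is:
# update-crypto-policies --set <POLICY>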
Replace <POLICY> with the policy or subpolicy you want to set, for example FUTURE,
LEGACY or FIPS:OSPP.
# reboot
Verification
$ update-crypto-policies --show
<POLICY>
Additional resources
WARNING
Switching to the LEGACY policy level results in a less secure system and
applications.
Procedure
1. To switch the system-wide cryptographic policy to the LEGACY level, enter the following
command as root:
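A minimal example of that command:
# update-crypto-policies --set LEGACY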
Additional resources
For the list of available cryptographic policy levels, see the update-crypto-policies(8) man
page on your system.
For defining custom cryptographic policies, see the Custom Policies section in the update-
crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-
policies(7) man page on your system.
DEFAULT:SHA1
The DEFAULT policy with the SHA-1 algorithm enabled.
LEGACY:AD-SUPPORT
The LEGACY policy with less secure settings that improve interoperability for Active Directory
services.
FIPS:OSPP
The FIPS policy with further restrictions required by the Common Criteria for Information
Technology Security Evaluation standard.
WARNING
Note that your system is not CC-compliant after you set the FIPS:OSPP
cryptographic subpolicy. The only correct way to make your RHEL system compliant
with the CC standard is by following the guidance provided in the cc-config
package. See the Common Criteria section in the Compliance Activities and
Government Standards Knowledgebase article for a list of certified RHEL versions,
validation reports, and links to CC guides hosted at the National Information
Assurance Partnership (NIAP) website.
Prerequisites
You have root privileges or permissions to enter administrative commands with sudo.
Procedure
2. In the Configuration card of the Overview page, click your current policy value next to Crypto
policy.
3. In the Change crypto policy dialog window, click on the policy you want to start using on your
system.
Verification
After the restart, log back in to the web console, and check that the Crypto policy value
corresponds to the one you selected.
Alternatively, you can enter the update-crypto-policies --show command to display the current
system-wide cryptographic policy in your terminal.
You can customize the cryptographic settings used by your application, preferably by configuring supported
cipher suites and protocols directly in the application.
You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends
directory and replace it with your customized cryptographic settings. This configuration prevents the
use of system-wide cryptographic policies for applications that use the excluded back end.
Furthermore, this modification is not supported by Red Hat.
wget
To customize cryptographic settings used by the wget network downloader, use --secure-protocol and
--ciphers options. For example:
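An illustrative combination of both options; the protocol value and cipher string are placeholders to adapt to your requirements:
$ wget --secure-protocol=TLSv1_3 --ciphers="SECURE128" https://example.com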
See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information.
curl
To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of
ciphers as a value. For example:
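An illustrative cipher list; adapt the ciphers to your requirements:
$ curl https://example.com --ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384"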
Firefox
Even though you cannot opt out of system-wide cryptographic policies in the Firefox web browser, you
can further restrict supported ciphers and TLS versions in Firefox’s Configuration Editor. Type
about:config in the address bar and change the value of the security.tls.version.min option as
required. Setting security.tls.version.min to 1 allows TLS 1.0 as the minimum required,
security.tls.version.min 2 enables TLS 1.1, and so on.
OpenSSH
To opt out of the system-wide cryptographic policies for your OpenSSH server, uncomment the line
with the CRYPTO_POLICY= variable in the /etc/sysconfig/sshd file. After this change, values that you
specify in the Ciphers, MACs, KexAlgorithms, and GSSAPIKexAlgorithms sections in the
/etc/ssh/sshd_config file are not overridden.
To opt out of system-wide cryptographic policies for your OpenSSH client, perform one of the following
tasks:
For a given user, override the global ssh_config with a user-specific configuration in the
~/.ssh/config file.
For the entire system, specify the cryptographic policy in a drop-in configuration file located in
the /etc/ssh/ssh_config.d/ directory, with a two-digit number prefix smaller than 5, so that it
lexicographically precedes the 05-redhat.conf file, and with a .conf suffix, for example, 04-
crypto-policy-override.conf.
Libreswan
See the Configuring IPsec connections that opt out of the system-wide crypto policies in the Securing
networks document for detailed information.
Additional resources
You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or
define such a policy from scratch.
The concept of scoped policies allows enabling different sets of algorithms for different back ends. You
can limit each configuration directive to specific protocols, libraries, or services.
Furthermore, directives can use asterisks for specifying multiple values using wildcards.
The /etc/crypto-policies/state/CURRENT.pol file lists all settings in the currently applied system-wide
cryptographic policy after wildcard expansion. To make your cryptographic policy more strict, consider
using values listed in the /usr/share/crypto-policies/policies/FUTURE.pol file.
NOTE
Customization of system-wide cryptographic policies is available from RHEL 8.2. You can
use the concept of scoped policies and the option of using wildcards in RHEL 8.5 and
newer.
Procedure
# cd /etc/crypto-policies/policies/modules/
# touch MYCRYPTO-1.pmod
# touch SCOPES-AND-WILDCARDS.pmod
3. Open the policy modules in a text editor of your choice and insert options that modify the
system-wide cryptographic policy, for example:
# vi MYCRYPTO-1.pmod
min_rsa_size = 3072
hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512
# vi SCOPES-AND-WILDCARDS.pmod
# Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK)
cipher@TLS = -CHACHA20-POLY1305
# Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH)
group@SSH = FFDHE-1024+
# Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH)
cipher@SSH = -*-CBC
5. Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level:
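With the module names used in this example, the documented syntax for applying subpolicies on top of a base policy is:
# update-crypto-policies --set DEFAULT:MYCRYPTO-1:SCOPES-AND-WILDCARDS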
6. To make your cryptographic settings effective for already running services and applications,
restart the system:
# reboot
Verification
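For example, confirm that the adjusted policy is active; the output shown assumes the module names used above:
$ update-crypto-policies --show
DEFAULT:MYCRYPTO-1:SCOPES-AND-WILDCARDS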
Additional resources
Crypto Policy Definition Format section in the crypto-policies(7) man page on your system
How to customize crypto policies in RHEL 8.2 Red Hat blog article
IMPORTANT
The NO-SHA1 policy module disables the SHA-1 hash function only in signatures and not
elsewhere. In particular, the NO-SHA1 module still allows the use of SHA-1 with hash-
based message authentication codes (HMAC). This is because HMAC security properties
do not rely on the collision resistance of the corresponding hash function, and therefore
the recent attacks on SHA-1 have a significantly lower impact on the use of SHA-1 for
HMAC.
If your scenario requires disabling a specific key exchange (KEX) algorithm combination, for example,
diffie-hellman-group-exchange-sha1, but you still want to use both the relevant KEX and the algorithm
in other combinations, see Steps to disable the diffie-hellman-group1-sha1 algorithm in SSH for
instructions on opting out of system-wide crypto-policies for SSH and configuring SSH directly.
NOTE
The module for disabling SHA-1 is available from RHEL 8.3. Customization of system-
wide cryptographic policies is available from RHEL 8.2.
Procedure
1. Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level:
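Given the NO-SHA1 module shipped with the crypto-policies packages, the typical command is:
# update-crypto-policies --set DEFAULT:NO-SHA1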
2. To make your cryptographic settings effective for already running services and applications,
restart the system:
# reboot
Additional resources
Crypto Policy Definition Format section in the crypto-policies(7) man page on your system
Procedure
# cd /etc/crypto-policies/policies/
# touch MYPOLICY.pol
# cp /usr/share/crypto-policies/policies/DEFAULT.pol /etc/crypto-policies/policies/MYPOLICY.pol
2. Edit the file with your custom cryptographic policy in a text editor of your choice to fit your
requirements, for example:
# vi /etc/crypto-policies/policies/MYPOLICY.pol
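To switch to the custom policy after editing it, the documented command form, given the MYPOLICY.pol file name used above, is:
# update-crypto-policies --set MYPOLICY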
4. To make your cryptographic settings effective for already running services and applications,
restart the system:
# reboot
Additional resources
Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy
Definition Format section in the crypto-policies(7) man page on your system
Enhanced security: stronger encryption standards require longer key lengths and more secure
algorithms.
Compliance with high-security standards: for example, in healthcare, telco, and finance, the data
sensitivity is high, and the availability of strong cryptography is critical.
Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future
regulations, or adopting long-term security strategies.
WARNING
Legacy systems or software might not support the more modern and stricter
algorithms and protocols enforced by the FUTURE policy. For example, older
systems might not support TLS 1.3 or larger key sizes. This could lead to
compatibility problems.
Also, using strong algorithms usually increases the computational workload, which
could negatively affect your system performance.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure cryptographic policies
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure the FUTURE cryptographic security policy on the managed node
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        crypto_policies_policy: FUTURE
        crypto_policies_reboot_ok: true
crypto_policies_policy: FUTURE
Configures the required cryptographic policy (FUTURE) on the managed node. It can be
either the base policy or a base policy with some sub-policies. The specified base policy and
sub-policies have to be available on the managed node. The default value is null. It means
that the configuration is not changed and the crypto_policies RHEL system role will only
collect the Ansible facts.
crypto_policies_reboot_ok: true
Causes the system to reboot after the cryptographic policy change to make sure all of the
services and applications will read the new configuration files. The default value is false.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.crypto_policies/README.md file on the control node.
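Before you run the playbook, you can validate its syntax; a typical check looks like this:
$ ansible-playbook --syntax-check ~/playbook.yml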
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
WARNING
Note that your system is not CC-compliant after you set the FIPS:OSPP
cryptographic subpolicy. The only correct way to make your RHEL system compliant
with the CC standard is by following the guidance provided in the cc-config
package. See the Common Criteria section in the Compliance Activities and
Government Standards Knowledgebase article for a list of certified RHEL versions,
validation reports, and links to CC guides hosted at the National Information
Assurance Partnership (NIAP) website.
Verification
1. On the control node, create another playbook named, for example, verify_playbook.yml:
---
- name: Verification
  hosts: managed-node-01.example.com
  tasks:
    - name: Verify active cryptographic policy
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies
    - name: Display the currently active cryptographic policy
      ansible.builtin.debug:
        var: crypto_policies_active
crypto_policies_active
An exported Ansible fact that contains the currently active policy name in the format as
accepted by the crypto_policies_policy variable.
$ ansible-playbook ~/verify_playbook.yml
TASK [debug] **************************
ok: [host] => {
"crypto_policies_active": "FUTURE"
}
The crypto_policies_active variable shows the active policy on the managed node.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file
/usr/share/doc/rhel-system-roles/crypto_policies/ directory
PKCS #11 introduces the cryptographic token, an object that presents each hardware or software device
to applications in a unified manner. Therefore, applications view devices such as smart cards, which are
typically used by persons, and hardware security modules, which are typically used by computers, as
PKCS #11 cryptographic tokens.
A PKCS #11 token can store various object types including a certificate; a data object; and a public,
private, or secret key. These objects are uniquely identifiable through the PKCS #11 Uniform Resource
Identifier (URI) scheme.
A PKCS #11 URI is a standard way to identify a specific object in a PKCS #11 module according to the
object attributes. This enables you to configure all libraries and applications with the same configuration
string in the form of a URI.
RHEL provides the OpenSC PKCS #11 driver for smart cards by default. However, hardware tokens and
HSMs can have their own PKCS #11 modules that do not have their counterpart in the system. You can
register such PKCS #11 modules with the p11-kit tool, which acts as a wrapper over the registered smart-
card drivers in the system.
You can add your own PKCS #11 module to the system by creating a new text file in the
/etc/pkcs11/modules/ directory. For example, the OpenSC configuration file in p11-kit looks as follows:
$ cat /usr/share/p11-kit/modules/opensc.module
module: opensc-pkcs11.so
Additional resources
Prerequisites
On the client side, the opensc package is installed and the pcscd service is running.
Procedure
1. List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save
the output to the keys.pub file:
2. Transfer the public key to the remote server. Use the ssh-copy-id command with the keys.pub
file created in the previous step:
3. Connect to <ssh-server-example.com> by using the ECDSA key. You can use just a subset of the
URI, which uniquely references your key, for example:
Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is
registered to the p11-kit tool, you can simplify the previous command:
If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy
module. This can reduce the amount of typing required:
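For illustration, the commands for the previous steps typically take the following form; the key id %01 and the OpenSC module path are examples that depend on your smart card:
$ ssh-keygen -D pkcs11: > keys.pub
$ ssh-copy-id -f -i keys.pub <username>@<ssh-server-example.com>
$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" <ssh-server-example.com>
$ ssh -i "pkcs11:id=%01" <ssh-server-example.com>
$ ssh -i pkcs11: <ssh-server-example.com>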
4. Optional: You can use the same URI string in the ~/.ssh/config file to make the configuration
permanent:
$ cat ~/.ssh/config
IdentityFile "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so"
$ ssh <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $
The ssh client utility now automatically uses this URI and the key from the smart card.
Additional resources
p11-kit(8), opensc.conf(5), pcscd(8), ssh(1), and ssh-keygen(1) man pages on your system
The Firefox web browser automatically loads the p11-kit-proxy PKCS #11 module. This means
that every supported smart card in the system is automatically detected. For using TLS client
authentication, no additional setup is required and keys and certificates from a smart card are
automatically used when a server requests them.
If your application uses the GnuTLS or NSS library, it already supports PKCS #11 URIs. Also,
applications that rely on the OpenSSL library can access cryptographic hardware modules,
including smart cards, through the pkcs11 engine provided by the openssl-pkcs11 package.
Applications that require working with private keys on smart cards and that do not use NSS,
GnuTLS, nor OpenSSL can use the p11-kit API directly to work with cryptographic hardware
modules, including smart cards, rather than using the PKCS #11 API of specific PKCS #11
modules.
With the wget network downloader, you can specify PKCS #11 URIs instead of paths to
locally stored private keys and certificates. This might simplify creation of scripts for tasks that
require safely stored private keys and certificates. For example:
You can also specify PKCS #11 URI when using the curl tool:
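A sketch of such invocations; the softhsm token name, the %01 object id, the PIN value, and the URL are placeholders:
$ wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' <https://example.com/>
$ curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' <https://example.com/>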
Additional resources
For secure communication in the form of the HTTPS protocol, the Apache HTTP server (httpd) uses
the OpenSSL library. OpenSSL does not support PKCS #11 natively. To use HSMs, you have to install the
openssl-pkcs11 package, which provides access to PKCS #11 modules through the engine interface.
You can use a PKCS #11 URI instead of a regular file name to specify a server key and a certificate in the
/etc/httpd/conf.d/ssl.conf configuration file, for example:
SSLCertificateFile "pkcs11:id=%01;token=softhsm;type=cert"
SSLCertificateKeyFile "pkcs11:id=%01;token=softhsm;type=private?pin-value=111111"
Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server,
including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file
are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html file.
Because Nginx also uses OpenSSL for cryptographic operations, support for PKCS #11 must go
through the openssl-pkcs11 engine. Nginx currently supports only loading private keys from an HSM,
and a certificate must be provided separately as a regular file. Modify the ssl_certificate and
ssl_certificate_key options in the server section of the /etc/nginx/nginx.conf configuration file:
ssl_certificate /path/to/cert.pem;
ssl_certificate_key "engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111";
Note that the engine:pkcs11: prefix is needed for the PKCS #11 URI in the Nginx configuration file.
This is because the other pkcs11 prefix refers to the engine name.
Certificate files are treated depending on the subdirectory they are installed to. For example, trust
anchors belong to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/
directory.
Additional resources
Prerequisites
Procedure
1. To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the
system, copy the certificate file to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-
trust/source/anchors/ directory, for example:
# cp ~/certificate-trust-examples/Cert-trust-test-ca.pem /usr/share/pki/ca-trust-
source/anchors/
2. To update the system-wide trust store configuration, use the update-ca-trust command:
# update-ca-trust extract
NOTE
Even though the Firefox browser can use an added certificate without a prior execution
of update-ca-trust, enter the update-ca-trust command after every CA change. Also
note that browsers, such as Firefox, Chromium, and GNOME Web, cache files, and you
might have to clear your browser’s cache or restart your browser to load the current
system certificate configuration.
Additional resources
To list, extract, add, remove, or change trust anchors, use the trust command. To see the built-
in help for this command, enter it without any arguments or with the --help directive:
$ trust
usage: trust command <args>...
To list all system trust anchors and certificates, use the trust list command:
$ trust list
pkcs11:id=%d2%87%b4%e3%df%37%27%93%55%f6%56%ea%81%e5%36%cc%8c%1e%3
f%bd;type=cert
type: certificate
label: ACCVRAIZ1
trust: anchor
category: authority
pkcs11:id=%a6%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%
ac%50;type=cert
type: certificate
label: ACEDICOM Root
trust: anchor
category: authority
...
To store a trust anchor into the system-wide truststore, use the trust anchor sub-command
and specify a path to a certificate. Replace <path.to/certificate.crt> by a path to your certificate
and its file name:
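For example:
# trust anchor <path.to/certificate.crt>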
Additional resources
All sub-commands of the trust command offer a detailed built-in help, for example:
Additional resources
SCAP Workbench
The scap-workbench graphical utility is designed to perform configuration and vulnerability scans
on a single local or remote system. You can also use it to generate security reports based on these
scans and evaluations.
OpenSCAP
The OpenSCAP library, with the accompanying oscap command-line utility, is designed to perform
configuration and vulnerability scans on a local system, to validate configuration compliance content,
and to generate reports and guides based on these scans and evaluations.
IMPORTANT
To perform automated compliance audits on multiple systems remotely, you can use the OpenSCAP
solution for Red Hat Satellite.
Additional resources
Red Hat Security Demos: Creating Customized Security Policy Content to Automate Security
Compliance
Red Hat Security Demos: Defend Yourself with RHEL Security Technologies
Red Hat Enterprise Linux security auditing capabilities are based on the Security Content Automation
Protocol (SCAP) standard. SCAP is a multi-purpose framework of specifications that supports
automated configuration, vulnerability and patch checking, technical control compliance activities, and
security measurement.
SCAP specifications create an ecosystem where the format of security content is well-known and
standardized although the implementation of the scanner or policy editor is not mandated. This enables
organizations to build their security policy (SCAP content) once, no matter how many security vendors
they employ.
The Open Vulnerability Assessment Language (OVAL) is the essential and oldest component of SCAP.
Unlike other tools and custom scripts, OVAL describes a required state of resources in a declarative
manner. OVAL code is never executed directly but only through an OVAL interpreter tool called a scanner.
The declarative nature of OVAL ensures that the state of the assessed system is not accidentally modified.
Like all other SCAP components, OVAL is based on XML. The SCAP standard defines several document
formats. Each of them includes a different kind of information and serves a different purpose.
Red Hat Product Security helps customers evaluate and manage risk by tracking and investigating all
security issues affecting Red Hat customers. It provides timely and concise patches and security
advisories on the Red Hat Customer Portal. Red Hat creates and supports OVAL patch definitions,
providing machine-readable versions of our security advisories.
Because of differences between platforms, versions, and other factors, Red Hat Product Security
qualitative severity ratings of vulnerabilities do not directly align with the Common Vulnerability Scoring
System (CVSS) baseline ratings provided by third parties. Therefore, we recommend that you use the
RHSA OVAL definitions instead of those provided by third parties.
The RHSA OVAL definitions are available individually and as a complete package, and are updated within
an hour of a new security advisory being made available on the Red Hat Customer Portal.
Each OVAL patch definition maps one-to-one to a Red Hat Security Advisory (RHSA). Because an
RHSA can contain fixes for multiple vulnerabilities, each vulnerability is listed separately by its Common
Vulnerabilities and Exposures (CVE) name and has a link to its entry in our public bug database.
The RHSA OVAL definitions are designed to check for vulnerable versions of RPM packages installed on
a system. It is possible to extend these definitions to include further checks, for example, to find out if
the packages are being used in a vulnerable configuration. These definitions are designed to cover
software and updates shipped by Red Hat. Additional definitions are required to detect the patch status
of third-party software.
NOTE
The Red Hat Insights for Red Hat Enterprise Linux compliance service helps IT security
and compliance administrators to assess, monitor, and report on the security policy
compliance of Red Hat Enterprise Linux systems. You can also create and manage your
SCAP security policies entirely within the compliance service UI.
Additional resources
Prerequisites
Procedure
2. Scan the system for vulnerabilities and save results to the vulnerability.html file:
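A sketch of the download and scan commands, assuming the RHEL 8 OVAL v2 feed; the URL and file names may differ on your system:
# wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml
# oscap oval eval --report vulnerability.html rhel-8.oval.xml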
Verification
Additional resources
Prerequisites
The openscap-utils and bzip2 packages are installed on the system you use for scanning.
Procedure
2. Scan a remote system for vulnerabilities and save the results to a file:
Replace:
<username>@<hostname> with the user name and host name of the remote system.
<port> with the port number through which you can access the remote system, for example,
22.
<scan-report.html> with the file name where oscap saves the scan results.
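With these placeholders, an illustrative oscap-ssh invocation is (the OVAL file name assumes the RHEL 8 feed):
# oscap-ssh <username>@<hostname> <port> oval eval --report <scan-report.html> rhel-8.oval.xml.bz2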
Additional resources
oscap-ssh(8)
Red Hat recommends you follow the Security Content Automation Protocol (SCAP) content provided
in the SCAP Security Guide package because it is in line with Red Hat best practices for affected
components.
The SCAP Security Guide package provides content which conforms to the SCAP 1.2 and SCAP 1.3
standards. The openscap scanner utility is compatible with both SCAP 1.2 and SCAP 1.3 content
provided in the SCAP Security Guide package.
The SCAP Security Guide suite provides profiles for several platforms in a form of data stream
documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules.
Each rule specifies the applicability and requirements for compliance. RHEL provides several profiles for
compliance with security policies. In addition to the industry standard, Red Hat data streams also contain
information for remediation of failed rules.
Data stream
├── xccdf
| ├── benchmark
| ├── profile
| | ├──rule reference
| | └──variable
| ├── rule
| ├── human readable data
| ├── oval reference
├── oval ├── ocil reference
├── ocil ├── cpe reference
└── cpe └── remediation
A profile is a set of rules based on a security policy, such as OSPP, PCI-DSS, and Health Insurance
Portability and Accountability Act (HIPAA). This enables you to audit the system in an automated way
for compliance with security standards.
You can modify (tailor) a profile to customize certain rules, for example, password length. For more
information about profile tailoring, see Customizing a security profile with SCAP Workbench .
Pass
The scan did not find any conflicts with this rule.
Fail
The scan found a conflict with this rule.
Not checked
OpenSCAP does not perform an automatic evaluation of this rule. Check whether your system
conforms to this rule manually.
Not applicable
This rule does not apply to the current configuration.
Not selected
This rule is not part of the profile. OpenSCAP does not evaluate this rule and does not display these
rules in the results.
Error
The scan encountered an error. For additional information, you can enter the oscap command with
the --verbose DEVEL option. File a support case on the Red Hat customer portal or open a ticket in
the RHEL project in Red Hat Jira .
Unknown
The scan encountered an unexpected situation. For additional information, you can enter the oscap
command with the --verbose DEVEL option. File a support case on the Red Hat customer portal or
open a ticket in the RHEL project in Red Hat Jira .
Prerequisites
Procedure
1. List all available files with security compliance profiles provided by the SCAP Security Guide
project:
$ ls /usr/share/xml/scap/ssg/content/
ssg-firefox-cpe-dictionary.xml ssg-rhel6-ocil.xml
ssg-firefox-cpe-oval.xml ssg-rhel6-oval.xml
…
ssg-rhel6-ds-1.2.xml ssg-rhel8-oval.xml
ssg-rhel8-ds.xml ssg-rhel8-xccdf.xml
…
2. Display detailed information about a selected data stream using the oscap info subcommand.
XML files containing data streams are indicated by the -ds string in their names. In the Profiles
section, you can find a list of available profiles and their IDs:
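For example, a typical invocation using one of the data stream files listed in step 1:
$ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml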
3. Select a profile from the data stream file and display additional details about the selected
profile. To do so, use oscap info with the --profile option followed by the last section of the ID
displayed in the output of the previous command. For example, the ID of the HIPAA profile is
xccdf_org.ssgproject.content_profile_hipaa, and the value for the --profile option is hipaa:
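An illustrative command for the HIPAA profile:
$ oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml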
Profile
Title: Health Insurance Portability and Accountability Act (HIPAA)
Description: The HIPAA Security Rule establishes U.S. national standards to protect
individuals’ electronic personal health information that is created, received, used, or
maintained by a covered entity.
…
Additional resources
Prerequisites
You know the ID of the profile within the baseline with which the system should comply. To find
the ID, see the Viewing profiles for configuration compliance section.
Procedure
1. Scan the local system for compliance with the selected profile and save the scan results to a file:
Replace:
<scan-report.html> with the file name where oscap saves the scan results.
<profileID> with the profile ID with which the system should comply, for example, hipaa.
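With these placeholders, the step 1 command typically takes the following form; the data stream path assumes the RHEL 8 content shipped in the scap-security-guide package:
$ sudo oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml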
2. Optional: Scan a remote system for compliance with the selected profile and save the scan
results to a file:
Replace:
<username>@<hostname> with the user name and host name of the remote system.
<port> with the port number through which you can access the remote system.
<scan-report.html> with the file name where oscap saves the scan results.
<profileID> with the profile ID with which the system should comply, for example, hipaa.
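With these placeholders, an illustrative remote scan command is:
$ oscap-ssh <username>@<hostname> <port> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml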
Additional resources
WARNING
If not used carefully, running the system evaluation with the Remediate option
enabled might render the system non-functional. Red Hat does not provide any
automated method to revert changes made by security-hardening remediations.
Remediations are supported on RHEL systems in the default configuration. If your
system has been altered after the installation, running remediation might not make
it compliant with the required security profile.
Prerequisites
Procedure
1. Remediate the system by using the oscap command with the --remediate option:
Replace <profileID> with the profile ID with which the system should comply, for example,
hipaa.
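An illustrative remediation command; the data stream path assumes the RHEL 8 content location:
# oscap xccdf eval --profile <profileID> --remediate /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml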
Verification
1. Evaluate compliance of the system with the profile, and save the scan results to a file:
Replace:
<scan-report.html> with the file name where oscap saves the scan results.
<profileID> with the profile ID with which the system should comply, for example, hipaa.
Additional resources
Complementing the DISA benchmark using the SSG content Knowledgebase article
WARNING
If not used carefully, running the system evaluation with the Remediate option
enabled might render the system non-functional. Red Hat does not provide any
automated method to revert changes made by security-hardening remediations.
Remediations are supported on RHEL systems in the default configuration. If your
system has been altered after the installation, running remediation might not make
it compliant with the required security profile.
Prerequisites
The ansible-core package is installed. See the Ansible Installation Guide for more information.
RHEL 8.6 or later is installed. For more information about installing RHEL, see Interactively
installing RHEL from installation media.
NOTE
In RHEL 8.5 and earlier versions, Ansible packages were provided through Ansible
Engine instead of Ansible Core, and with a different level of support. Do not use
Ansible Engine because the packages might not be compatible with Ansible
automation content in RHEL 8.6 and later. For more information, see Scope of
support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and
later AppStream repositories.
Procedure
Verification
1. Evaluate the compliance of the system with the HIPAA profile, and save the scan results to a
file:
Replace <scan-report.html> with the file name where oscap saves the scan results.
Additional resources
Ansible Documentation
NOTE
In RHEL 8.6, Ansible Engine is replaced by the ansible-core package, which contains only
built-in modules. Note that many Ansible remediations use modules from the community
and Portable Operating System Interface (POSIX) collections, which are not included in
the built-in modules. In this case, you can use Bash remediations as a substitute for
Ansible remediations. The Red Hat Connector in RHEL 8.6 includes the Ansible modules
necessary for the remediation playbooks to function with Ansible Core.
Prerequisites
Procedure
2. Find the value of the result ID in the file with the results:
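A sketch of the whole sequence, assuming results were saved to <hipaa-results.xml> in step 1; file names and the result ID are placeholders:
# oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
# oscap info <hipaa-results.xml>
# oscap xccdf generate fix --fix-type ansible --result-id <result-id> --output <hipaa-remediations.yml> <hipaa-results.xml>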
4. Review the generated file, which contains the Ansible remediations for rules that failed during
the scan performed in step 1. After reviewing this generated file, you can apply it by using the
ansible-playbook <hipaa-remediations.yml> command.
Verification
In a text editor of your choice, review that the generated <hipaa-remediations.yml> file
contains rules that failed in the scan performed in step 1.
Additional resources
Ansible Documentation
Prerequisites
Procedure
1. Use the oscap command to scan the system and to save the results to an XML file. In the
following example, oscap evaluates the system against the hipaa profile:
2. Find the value of the result ID in the file with the results:
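A sketch of the commands for these steps; file names and the result ID are placeholders:
# oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
# oscap info <hipaa-results.xml>
# oscap xccdf generate fix --fix-type bash --result-id <result-id> --output <hipaa-remediations.sh> <hipaa-results.xml>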
4. The <hipaa-remediations.sh> file contains remediations for rules that failed during the scan
performed in step 1. After reviewing this generated file, you can apply it with the ./<hipaa-
remediations.sh> command when you are in the same directory as this file.
Verification
In a text editor of your choice, review that the <hipaa-remediations.sh> file contains rules that
failed in the scan performed in step 1.
Additional resources
Prerequisites
Procedure
1. To run SCAP Workbench from the GNOME Classic desktop environment, press the Super
key to enter the Activities Overview, type scap-workbench, and then press Enter.
Alternatively, use:
$ scap-workbench &
Open Other Content in the File menu, and search the respective XCCDF, SCAP RPM, or
data stream file.
3. You can allow automatic correction of the system configuration by selecting the Remediate
check box. With this option enabled, SCAP Workbench attempts to change the system
configuration in accordance with the security rules applied by the policy. This process should fix
the related checks that fail during the system scan.
WARNING
If not used carefully, running the system evaluation with the Remediate
option enabled might render the system non-functional. Red Hat does not
provide any automated method to revert changes made by security-
hardening remediations. Remediations are supported on RHEL systems in
the default configuration. If your system has been altered after the
installation, running remediation might not make it compliant with the
required security profile.
4. Scan your system with the selected profile by clicking the Scan button.
5. To store the scan results in form of an XCCDF, ARF, or HTML file, click the Save Results
combo box. Choose the HTML Report option to generate the scan report in human-readable
format. The XCCDF and ARF (data stream) formats are suitable for further automatic
processing. You can repeatedly choose all three options.
6. To export results-based remediations to a file, use the Generate remediation role pop-up
menu.
The following procedure demonstrates the use of SCAP Workbench for customizing (tailoring) a
profile. You can also save the tailored profile for use with the oscap command-line utility.
Prerequisites
Procedure
1. Run SCAP Workbench, and select the profile to customize by using either Open content from
SCAP Security Guide or Open Other Content in the File menu.
2. To adjust the selected security profile according to your needs, click the Customize button.
This opens the new Customization window that enables you to modify the currently selected
profile without changing the original data stream file. Choose a new profile ID.
3. Find a rule to modify using either the tree structure with rules organized into logical groups or
the Search field.
4. Include or exclude rules using check boxes in the tree structure, or modify values in rules where
applicable.
Save a customization file separately by using Save Customization Only in the File menu.
Save all security content at once by Save All in the File menu.
If you select the Into a directory option, SCAP Workbench saves both the data stream file
and the customization file to the specified location. You can use this as a backup solution.
By selecting the As RPM option, you can instruct SCAP Workbench to create an RPM
package containing the data stream file and the customization file. This is useful for
distributing the security content to systems that cannot be scanned remotely, and for
delivering the content for further processing.
NOTE
Because SCAP Workbench does not support results-based remediations for tailored
profiles, use the exported remediations with the oscap command-line utility.
Deploy customized SCAP policies with Satellite 6.x (Red Hat Knowledgebase)
NOTE
The oscap-podman command is available from RHEL 8.2. For RHEL 8.1 and 8.0, use the
workaround described in the Using OpenSCAP for scanning containers in RHEL 8
Knowledgebase article.
Prerequisites
Procedure
# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.access.redhat.com/ubi8/ubi latest 096cae65a207 7 weeks ago 239 MB
3. Scan the container or the container image for vulnerabilities and save results to the
vulnerability.html file:
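For example, using the image ID from the previous output and assuming you downloaded the RHEL 8 OVAL definitions as rhel-8.oval.xml:
# oscap-podman 096cae65a207 oval eval --report vulnerability.html rhel-8.oval.xml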
Note that the oscap-podman command requires root privileges, and the ID of a container is the
first argument.
Verification
Additional resources
For more information, see the oscap-podman(8) and oscap(8) man pages.
NOTE
The oscap-podman command is available from RHEL 8.2. For RHEL 8.1 and 8.0, use the
workaround described in the Using OpenSCAP for scanning containers in RHEL 8
Knowledgebase article.
Prerequisites
Procedure
2. Evaluate the compliance of the container or container image with a profile and save the scan
results into a file:
Replace:
<scan-report.html> with the file name where oscap saves the scan results
<profileID> with the profile ID with which the system should comply, for example, hipaa,
ospp, or pci-dss
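With these placeholders and the container or image ID as the first argument, an illustrative oscap-podman command is:
# oscap-podman <ID> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml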
Verification
NOTE
The rules marked as notapplicable apply only to bare-metal and virtualized systems and
not to containers or container images.
Additional resources
/usr/share/doc/scap-security-guide/ directory.
Prerequisites
Procedure
# aide --init
Start timestamp: 2024-07-08 10:39:23 -0400 (AIDE 0.16)
AIDE initialized database at /var/lib/aide/aide.db.new.gz
---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------
/var/lib/aide/aide.db.new.gz
…
SHA512 : mZaWoGzL2m6ZcyyZ/AXTIowliEXWSZqx
IFYImY4f7id4u+Bq8WeuSE2jasZur/A4
FPBFaBkoCFHdoE/FW/V94Q==
3. Optional: In the default configuration, the aide --init command checks just a set of directories
and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE
database, and to change their watched parameters, edit /etc/aide.conf accordingly.
4. To start using the database, remove the .new substring from the initial database file name:
# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
5. Optional: To change the location of the AIDE database, edit the /etc/aide.conf file and modify
the DBDIR value. For additional security, store the database, configuration, and the
/usr/sbin/aide binary file in a secure location such as a read-only media.
Prerequisites
AIDE is properly installed and its database is initialized. See Installing AIDE
Procedure
# aide --check
Start timestamp: 2024-07-08 10:43:46 -0400 (AIDE 0.16)
AIDE found differences between database and filesystem!!
Summary:
Total number of entries: 55856
Added entries: 0
Removed entries: 0
Changed entries: 1
---------------------------------------------------
Changed entries:
---------------------------------------------------
---------------------------------------------------
Detailed information about changes:
---------------------------------------------------
File: /root/.viminfo
SELinux : system_u:object_r:admin_home_t:s | unconfined_u:object_r:admin_home
0 | _t:s0
…
2. At a minimum, configure the system to run AIDE weekly. Optimally, run AIDE daily. For example,
to schedule a daily execution of AIDE at 04:05 a.m. by using the cron command, add the
following line to the /etc/crontab file:
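For example, a crontab entry of the following form runs the AIDE check daily at 04:05 (the schedule is an illustration):
05 4 * * * root /usr/sbin/aide --check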
Additional resources
Prerequisites
AIDE is properly installed and its database is initialized. See Installing AIDE
Procedure
# aide --update
2. To start using the updated database for integrity checks, remove the .new substring from the
file name.
What
  AIDE: AIDE is a utility that creates a database of files and directories on the system. This database serves for checking file integrity and detecting intrusions.
  IMA: IMA detects if a file is altered by checking file measurements (hash values) compared to previously stored extended attributes.
How
  AIDE: AIDE uses rules to compare the integrity state of the files and directories.
  IMA: IMA uses file hash values to detect the intrusion.
Why
  AIDE: Detection - AIDE detects if a file is modified by verifying the rules.
  IMA: Detection and Prevention - IMA detects and prevents an attack by replacing the extended attribute of a file.
Usage
  AIDE: AIDE detects a threat when the file or directory is modified.
  IMA: IMA detects a threat when someone tries to alter the entire file.
Extension
  AIDE: AIDE checks the integrity of files and directories on the local system.
  IMA: IMA ensures security on the local and remote systems.
Red Hat Enterprise Linux uses LUKS to perform block device encryption. By default, the option to
encrypt the block device is unchecked during the installation. If you select the option to encrypt your
disk, the system prompts you for a passphrase every time you boot the computer. This passphrase
unlocks the bulk encryption key that decrypts your partition. If you want to modify the default partition
table, you can select the partitions that you want to encrypt. This is set in the partition table settings.
Ciphers
The default cipher used for LUKS is aes-xts-plain64. The default key size for LUKS is 512 bits. The
default key size for LUKS with Anaconda XTS mode is 512 bits. The following are the available ciphers:
Twofish
Serpent
LUKS encrypts entire block devices and is therefore well-suited for protecting contents of
mobile devices such as removable storage media or laptop disk drives.
The underlying contents of the encrypted block device are arbitrary, which makes it useful for
encrypting swap devices. This can also be useful with certain databases that use specially
formatted block devices for data storage.
LUKS devices contain multiple key slots, which means you can add backup keys or passphrases.
IMPORTANT
Disk-encryption solutions such as LUKS protect the data only when your system
is off. After the system is on and LUKS has decrypted the disk, the files on that
disk are available to anyone who has access to them.
Scenarios that require multiple users to have distinct access keys to the same
device. The LUKS1 format provides eight key slots and LUKS2 provides up to 32
key slots.
Additional resources
The LUKS2 format enables future updates of various parts without a need to modify binary structures.
Internally it uses JSON text format for metadata, provides redundancy of metadata, detects metadata
corruption, and automatically repairs from a metadata copy.
IMPORTANT
Do not use LUKS2 in systems that support only LUKS1 because LUKS2 and LUKS1 use
different commands to encrypt the disk. Using the wrong command for a LUKS version
might cause data loss.
For LUKS1 devices, re-encryption is performed by the separate cryptsetup-reencrypt utility.
Online re-encryption
The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example,
you do not have to unmount the file system on the device to perform the following tasks:
Conversion
In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the
following scenarios:
A LUKS1 device is marked as being used by a Policy-Based Decryption (PBD) Clevis solution.
The cryptsetup tool does not convert the device when some luksmeta metadata are detected.
A device is active. The device must be in an inactive state before any conversion is possible.
checksum
The default mode. It balances data protection and performance.
This mode stores individual checksums of the sectors in the re-encryption area, which the recovery
process can detect for the sectors that were re-encrypted by LUKS2. The mode requires that the
block device sector write is atomic.
journal
The safest mode but also the slowest. Because this mode journals the re-encryption area in the binary
area, LUKS2 writes the data twice.
none
The none mode prioritizes performance and provides no data protection. It protects the data only
against safe process termination, such as the SIGTERM signal or the user pressing Ctrl+C key. Any
unexpected system failure or application failure might result in data corruption.
If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in
one of the following ways:
Automatically
Performing any one of the following actions triggers the automatic recovery action during the
next LUKS2 device open action:
Manually
By using the cryptsetup repair /dev/sdx command on the LUKS2 device.
Additional resources
Prerequisites
WARNING
You might lose your data during the encryption process due to a hardware,
kernel, or human failure. Ensure that you have a reliable backup before you
start encrypting the data.
Procedure
1. Unmount all file systems on the device that you plan to encrypt, for example:
# umount /dev/mapper/vg00-lv00
2. Make free space for storing a LUKS header. Use one of the following options that suits your
scenario:
In the case of encrypting a logical volume, you can extend the logical volume without
resizing the file system. For example:
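An illustrative command that adds space for the LUKS header (the 32 MiB size is an example; the LUKS2 header typically requires 16 MiB):
# lvextend -L+32M /dev/mapper/vg00-lv00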
Shrink the file system on the device. You can use the resize2fs utility for the ext2, ext3, or
ext4 file systems. Note that you cannot shrink the XFS file system.
a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325
b. Open /etc/crypttab in a text editor of your choice and add a device in this file:
$ vi /etc/crypttab
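A minimal entry, assuming the mapping name lv00_encrypted and the LUKS UUID shown earlier; adjust both values to your device:
lv00_encrypted UUID=a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none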
$ dracut -f --regenerate-all
a. Find the file system’s UUID of the active LUKS block device:
$ blkid -p /dev/mapper/lv00_encrypted
/dev/mapper/lv00-encrypted: UUID="37bc2492-d8fa-4969-9d9b-bb64d3685aa9"
BLOCK_SIZE="4096" TYPE="xfs" USAGE="filesystem"
b. Open /etc/fstab in a text editor of your choice and add a device in this file, for example:
$ vi /etc/fstab
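An illustrative entry that mounts the decrypted device by its file system UUID; the /home mount point and options are assumptions:
UUID=37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0 0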
Verification
Data segments:
0: crypt
offset: 33554432 [bytes]
length: (whole device)
cipher: aes-xts-plain64
[...]
Additional resources
9.13.5. Encrypting existing data on a block device using LUKS2 with a detached
header
You can encrypt existing data on a block device without creating free space for storing a LUKS header.
The header is stored in a detached location, which also serves as an additional layer of security. The
procedure uses the LUKS2 encryption format.
Prerequisites
WARNING
You might lose your data during the encryption process due to a hardware,
kernel, or human failure. Ensure that you have a reliable backup before you
start encrypting the data.
Procedure
# umount /dev/nvme0n1p1
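A sketch of the initialization command for this procedure; the mapping name nvme_encrypted is an assumption, and /home/header is the detached header file:
# cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/nvme0n1p1 nvme_encrypted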
WARNING!
========
Header file does not exist, do you want to create it?
Replace /home/header with a path to the file with a detached LUKS header. The detached
LUKS header has to be accessible to unlock the encrypted device later.
Verification
1. Verify if the existing data on a block device using LUKS2 with a detached header is encrypted:
Data segments:
0: crypt
offset: 0 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 512 [bytes]
[...]
Additional resources
Prerequisites
A blank block device. You can use commands such as lsblk to check that there is no real data on that
device, for example, a file system.
Procedure
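1. Format the partition as an encrypted LUKS device, for example:
# cryptsetup luksFormat /dev/nvme0n1p1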
WARNING!
========
This will overwrite data on /dev/nvme0n1p1 irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/nvme0n1p1:
Verify passphrase:
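2. Open the encrypted device, providing a mapping name of your choice, for example:
# cryptsetup open /dev/nvme0n1p1 <device_mapped_name>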
This unlocks the partition and maps it to a new device by using the device mapper. To not
overwrite the encrypted data, this command alerts the kernel that the device is an encrypted
device and addressed through LUKS by using the /dev/mapper/device_mapped_name path.
3. Create a file system to write encrypted data to the partition, which must be accessed through
the device mapped name:
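For example; the ext4 file system type is an illustration:
# mkfs -t ext4 /dev/mapper/<device_mapped_name>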
Verification
Data segments:
0: crypt
offset: 16777216 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 512 [bytes]
[...]
Additional resources
Prerequisites
Procedure
2. Click Storage.
3. In the Storage table, click the menu button, ⋮, next to the storage device you want to encrypt.
9. Click Format.
Prerequisites
Procedure
2. Click Storage
4. On the disk page, scroll to the Keys section and click the edit button.
6. Click Save
9.13.9. Creating a LUKS2 encrypted volume by using the storage RHEL system role
You can use the storage role to create and configure a volume encrypted with LUKS by running an
Ansible playbook.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
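1. Store your sensitive data in an encrypted Ansible vault:
a. Create the vault file, for example:
$ ansible-vault create vault.yml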
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
luks_password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Create and configure a volume encrypted with LUKS
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            fs_label: <label>
            mount_point: /mnt/data
            encryption: true
            encryption_password: "{{ luks_password }}"
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
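One way to verify the volume from the control node is to query LUKS metadata on the managed node with ad hoc commands, for example; the /dev/sdb device matches the playbook above:
# ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb'
# ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb'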
4e4e7970-1822-470e-b55a-e91efe5d0f5c
Data segments:
0: crypt
offset: 16777216 [bytes]
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Ansible vault
PBD allows combining different unlocking methods into a policy, which makes it possible to unlock the
same volume in different ways. The current implementation of the PBD in RHEL consists of the Clevis
framework and plug-ins called pins. Each pin provides a separate unlocking capability. Currently, the
following pins are available:
tang
Allows unlocking volumes using a network server.
tpm2
Allows unlocking volumes using a TPM2 policy.
sss
Allows deploying high-availability systems using the Shamir’s Secret Sharing (SSS) cryptographic
scheme.
Figure 9.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used
for LUKS2 volumes.
Tang is a server for binding data to network presence. It makes a system containing your data available
when the system is bound to a certain secure network. Tang is stateless and does not require TLS or
authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has
knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any
identifying information from the client.
Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated
unlocking of LUKS volumes. The clevis package provides the client side of the feature.
A Clevis pin is a plug-in into the Clevis framework. One of such pins is a plug-in that implements
interactions with the NBDE server — Tang.
Clevis and Tang are generic client and server components that provide network-bound encryption. In
RHEL, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage
volumes to accomplish Network-Bound Disk Encryption.
Both client- and server-side components use the José library to perform encryption and decryption
operations.
When you begin provisioning NBDE, the Clevis pin for Tang server gets a list of the Tang server’s
advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang’s public keys can
be distributed out of band so that clients can operate without access to the Tang server. This mode is
called offline provisioning.
The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong
encryption key. Once the data is encrypted using this key, the key is discarded. The Clevis client should
store the state produced by this provisioning operation in a convenient location. This process of
encrypting data is the provisioning step.
LUKS version 2 (LUKS2) is the default disk-encryption format in RHEL, and therefore the provisioning
state for NBDE is stored as a token in the LUKS2 header. The luksmeta package is used to store the
provisioning state for NBDE only on volumes encrypted with LUKS1.
The Clevis pin for Tang supports both LUKS1 and LUKS2 without the need to specify the version. Clevis
can encrypt plain-text files, but you have to use the cryptsetup tool for encrypting block devices. See
Encrypting block devices using LUKS for more information.
When the client is ready to access its data, it loads the metadata produced in the provisioning step and it
responds to recover the encryption key. This process is the recovery step.
In NBDE, Clevis binds a LUKS volume using a pin so that it can be automatically unlocked. After
successful completion of the binding process, the disk can be unlocked using the provided Dracut
unlocker.
NOTE
If the kdump kernel crash dumping mechanism is set to save the content of the system
memory to a LUKS-encrypted device, you are prompted for entering a password during
the second kernel boot.
Additional resources
How to set up Network-Bound Disk Encryption with multiple LUKS devices (Clevis + Tang
unlocking) Knowledgebase article
Prerequisites
Procedure
1. To install the tang package and its dependencies, enter the following command as root:
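# yum install tang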
2. Pick an unoccupied port, for example, 7500/tcp, and allow the tangd service to bind to that
port:
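If SELinux is in enforcing mode, label the new port with the tangd_port_t type first, for example:
# semanage port -a -t tangd_port_t -p tcp 7500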
Note that a port can be used only by one service at a time, and thus an attempt to use an
already occupied port results in the ValueError: Port already defined error message.
# firewall-cmd --add-port=7500/tcp
# firewall-cmd --runtime-to-permanent
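Create an override file for the tangd.socket unit; the systemctl edit command opens it in an editor:
# systemctl edit tangd.socket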
6. In the following editor screen, which opens an empty override.conf file located in the
/etc/systemd/system/tangd.socket.d/ directory, change the default port for the Tang server
from 80 to the previously picked number by adding the following lines:
[Socket]
ListenStream=
ListenStream=7500
IMPORTANT
Insert the previous code snippet between the lines starting with # Anything
between here and # Lines below this, otherwise the system discards your
changes.
7. Save the changes by pressing Ctrl+O and Enter. Exit the editor by pressing Ctrl+X.
# systemctl daemon-reload
Because tangd uses the systemd socket activation mechanism, the server starts as soon as the
first connection comes in. A new set of cryptographic keys is automatically generated at the first
start. To perform cryptographic operations such as manual key generation, use the jose utility.
Verification
On your NBDE client, verify that your Tang server works correctly by using the following
command. The command must return the identical message you pass for encryption and
decryption:
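For example, a round trip through the Tang pin; replace the URL with your server and port:
$ echo test | clevis encrypt tang '{"url":"http://<tang-server>:7500"}' -y | clevis decrypt
test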
Additional resources
Alternatively, you can rotate Tang keys by using the nbde_server RHEL system role. See Using the
nbde_server system role for setting up multiple Tang servers for more information.
Prerequisites
Note that clevis luks list, clevis luks report, and clevis luks regen have been introduced in
RHEL 8.2.
Procedure
1. Rename all keys in the /var/db/tang key database directory to have a leading . to hide them
from advertisement. Note that the file names in the following example differ from the unique file
names in the key database directory of your Tang server:
# cd /var/db/tang
# ls -l
-rw-r--r--. 1 root root 349 Feb 7 14:55 UV6dqXSwe1bRKG3KbJmdiR020hY.jwk
-rw-r--r--. 1 root root 354 Feb 7 14:55 y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk
# mv UV6dqXSwe1bRKG3KbJmdiR020hY.jwk .UV6dqXSwe1bRKG3KbJmdiR020hY.jwk
# mv y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk .y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk
2. Check that you renamed and therefore hid all keys from the Tang server advertisement:
# ls -l
total 0
3. Generate new keys using the /usr/libexec/tangd-keygen command in /var/db/tang on the Tang
server:
# /usr/libexec/tangd-keygen /var/db/tang
# ls /var/db/tang
3ZWS6-cDrCG61UPJS2BMmPU4I54.jwk zyLuX6hijUy_PSeUEFDi7hi38.jwk
4. Check that your Tang server advertises the signing key from the new key pair, for example:
# tang-show-keys 7500
3ZWS6-cDrCG61UPJS2BMmPU4I54
5. On your NBDE clients, use the clevis luks report command to check whether the keys advertised by
the Tang server remain the same. You can identify slots with the relevant binding using the
clevis luks list command, for example:
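An illustrative sequence; the device and slot number are placeholders:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv:port"}'
# clevis luks report -d /dev/sda2 -s 1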
6. To regenerate LUKS metadata for the new keys, either press y at the prompt of the previous
command, or use the clevis luks regen command:
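For example:
# clevis luks regen -d /dev/sda2 -s 1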
7. When you are sure that all old clients use the new keys, you can remove the old keys from the
Tang server, for example:
# cd /var/db/tang
# rm .*.jwk
WARNING
Removing the old keys while clients are still using them can result in data loss. If you
accidentally remove such keys, use the clevis luks regen command on the clients,
and provide your LUKS password manually.
Additional resources
9.14.4. Configuring automated unlocking by using a Tang key in the web console
You can configure automated unlocking of a LUKS-encrypted storage device using a key provided by a
Tang server.
Prerequisites
A Tang server is available. See Deploying a Tang server with SELinux in enforcing mode for
details.
You have root privileges or permissions to enter administrative commands with sudo.
Procedure
2. Switch to administrative access, provide your credentials, and click Storage. In the Storage
table, click the disk that contains an encrypted volume you plan to add to unlock automatically.
3. In the following page with details of the selected disk, click + in the Keys section to add a Tang
key:
4. Select Tang keyserver as Key source, provide the address of your Tang server, and a password
that unlocks the LUKS-encrypted device. Click Add to confirm:
The following dialog window provides a command to verify that the key hash matches.
5. In a terminal on the Tang server, use the tang-show-keys command to display the key hash for
comparison. In this example, the Tang server is running on the port 7500:
# tang-show-keys 7500
x100_1k6GPiDOaMlL3WbpCjHOy9ul1bSfdhI3M08wO0
6. Click Trust key when the key hashes in the web console and in the output of previously listed
commands are the same:
7. In RHEL 8.8 and later, after you select an encrypted root file system and a Tang server, you can
skip adding the rd.neednet=1 parameter to the kernel command line, installing the clevis-
dracut package, and regenerating an initial RAM disk ( initrd). For non-root file systems, the
web console now enables the remote-cryptsetup.target and clevis-luks-akspass.path
systemd units, installs the clevis-systemd package, and adds the _netdev parameter to the
fstab and crypttab configuration files.
Verification
1. Check that the newly added Tang key is now listed in the Keys section with the Keyserver type:
2. Verify that the bindings are available for the early boot, for example:
Additional resources
The following commands demonstrate the basic functionality provided by Clevis on examples containing
plain-text files. You can also use them for troubleshooting your NBDE or Clevis+TPM deployments.
To check that a Clevis encryption client binds to a Tang server, use the clevis encrypt tang
sub-command:
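An illustrative invocation; the file names are placeholders, and the command asks you to trust the advertised signing keys:
$ clevis encrypt tang '{"url":"http://tang.srv:port"}' < input-plain.txt > secret.jwe
The advertisement contains the following signing keys: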
_OsIk0T-E2l6qjfdDiwVmidoZjA
Change the http://tang.srv:port URL in the previous example to match the URL of the server
where tang is installed. The secret.jwe output file contains your encrypted cipher text in the
JWE format. This cipher text is read from the input-plain.txt input file.
Use the advertisement in the adv.jws file for any following tasks, such as encryption of files or
messages:
To decrypt data, use the clevis decrypt command and provide the cipher text (JWE):
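For example:
$ clevis decrypt < secret.jwe > output-plain.txt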
To encrypt using a TPM 2.0 chip, use the clevis encrypt tpm2 sub-command with the only
argument in form of the JSON configuration object:
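A minimal example that uses the default configuration:
$ clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe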
To choose a different hierarchy, hash, and key algorithms, specify configuration properties, for
example:
To decrypt the data, provide the ciphertext in the JSON Web Encryption (JWE) format:
The pin also supports sealing data to a Platform Configuration Register (PCR) state. That way, the data can be unsealed only if the PCR hash values match the policy used when sealing.
For example, to seal the data to PCRs with index 0 and 7 for the SHA-256 bank:
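A command of the following form can be used, with the pcr_bank and pcr_ids properties and illustrative file names:
$ clevis encrypt tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}' < input-plain.txt > secret.jwe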
WARNING
Hashes in PCRs can be rewritten, after which you can no longer unlock your encrypted volume. For this reason, add a strong passphrase that enables you to unlock the encrypted volume manually even when a value in a PCR changes.
If the system cannot automatically unlock your encrypted volume after an upgrade
of the shim-x64 package, follow the steps in the Clevis TPM2 no longer decrypts
LUKS devices after a restart KCS article.
Additional resources
clevis, clevis decrypt, and clevis encrypt tang commands without any arguments show the
built-in CLI help, for example:
Prerequisites
Procedure
2. Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 12G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 11G 0 part
└─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt
├─rhel-root 253:0 0 9.8G 0 lvm /
└─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]
3. Bind the volume to a Tang server using the clevis luks bind command:
_OsIk0T-E2l6qjfdDiwVmidoZjA
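For example, a command of the following form can be used; the Tang URL is illustrative, and the thumbprint shown above is the signing key that you are asked to trust:
# clevis luks bind -d /dev/sda2 tang '{"url":"http://tang.srv"}'
The advertisement contains the following signing keys:
_OsIk0T-E2l6qjfdDiwVmidoZjA
Do you wish to trust these keys? [ynYN] y
Enter existing LUKS password: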
a. Creates a new key with the same entropy as the LUKS master key.
c. Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-
default LUKS1 header is used.
NOTE
The binding procedure assumes that there is at least one free LUKS password
slot. The clevis luks bind command takes one of the slots.
The volume can now be unlocked with your existing password as well as with the Clevis policy.
4. To enable the early boot system to process the disk binding, use the dracut tool on an already
installed system. In RHEL, Clevis produces a generic initrd (initial RAM disk) without host-
specific configuration options and does not automatically add parameters such as rd.neednet=1
to the kernel command line. If your configuration relies on a Tang pin that requires the network during early boot, use the --hostonly-cmdline argument so that dracut adds rd.neednet=1 when it detects a Tang binding:
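A typical sequence, sketched here under the assumption that the clevis-dracut package provides the required dracut module, installs the package and rebuilds the initrd:
# yum install clevis-dracut
# dracut -fv --regenerate-all --hostonly-cmdline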
c. Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory, and add the
hostonly_cmdline=yes option to the file. Then, you can use dracut without --hostonly-
cmdline, for example:
d. You can also ensure that networking for a Tang pin is available during early boot by using
the grubby tool on the system where Clevis is installed:
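For example, a grubby command of the following form can be used; applying the parameter to all installed kernels is an assumption:
# grubby --update-kernel=ALL --args="rd.neednet=1"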
Verification
1. To verify that the Clevis JWE object is successfully placed in a LUKS header, use the clevis luks list command:
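For example (the Tang URL and the slot number are illustrative):
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv:port"}'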
2. Check that the bindings are available for the early boot, for example:
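One possible check, assuming the Clevis dracut module is installed, is to inspect the initial RAM disk:
# lsinitrd | grep clevis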
Additional resources
Looking forward to Linux network configuration in the initial ramdisk (initrd) (Red Hat Enable
Sysadmin)
Prerequisites
The NBDE client is configured for automated unlocking of encrypted volumes by the Tang
server.
For details, see Configuring NBDE clients for automated unlocking of LUKS-encrypted volumes .
Procedure
1. You can provide your static network configuration as a value for the kernel-cmdline option in a
dracut command, for example:
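For example, a dracut command of the following form can be used with the same static addresses as in the .conf example below:
# dracut -fv --regenerate-all --kernel-cmdline "ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100"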
2. Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory with the static network
information and then, regenerate the initial RAM disk image:
# cat /etc/dracut.conf.d/static_ip.conf
kernel_cmdline="ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none
nameserver=192.0.2.100"
# dracut -fv --regenerate-all
Prerequisites
Procedure
2. Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 12G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 11G 0 part
└─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt
├─rhel-root 253:0 0 9.8G 0 lvm /
└─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]
3. Bind the volume to a TPM 2.0 device using the clevis luks bind command, for example:
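For example, with the default TPM 2.0 configuration expressed as an empty JSON object:
# clevis luks bind -d /dev/sda2 tpm2 '{}'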
a. Creates a new key with the same entropy as the LUKS master key.
c. Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-
default LUKS1 header is used.
NOTE
The binding procedure assumes that there is at least one free LUKS
password slot. The clevis luks bind command takes one of the slots.
Alternatively, if you want to seal data to specific Platform Configuration Registers (PCR)
states, add the pcr_bank and pcr_ids values to the clevis luks bind command, for
example:
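For example, to seal to PCRs 0 and 7 of the SHA-256 bank:
# clevis luks bind -d /dev/sda2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}'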
IMPORTANT
Because the data can be unsealed only if the PCR hash values match the policy used when sealing, and because the hashes can be rewritten, add a strong passphrase that enables you to unlock the encrypted volume manually when a value in a PCR changes.
4. The volume can now be unlocked with your existing password as well as with the Clevis policy.
5. To enable the early boot system to process the disk binding, use the dracut tool on an already
installed system:
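For example, rebuild the initial RAM disk for all installed kernels:
# dracut -fv --regenerate-all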
Verification
1. To verify that the Clevis JWE object is successfully placed in a LUKS header, use the clevis
luks list command:
Additional resources
IMPORTANT
The recommended way to remove a Clevis pin from a LUKS-encrypted volume is through
the clevis luks unbind command. The removal procedure using clevis luks unbind
consists of only one step and works for both LUKS1 and LUKS2 volumes. The following
example command removes the metadata created by the binding step and wipes key slot 1 on the /dev/sda2 device:
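For example, for the device and key slot mentioned above:
# clevis luks unbind -d /dev/sda2 -s 1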
Prerequisites
Procedure
1. Check which LUKS version is used to encrypt the volume, for example /dev/sda2, and identify the slot and the token that are bound to Clevis:
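For example, abbreviated cryptsetup luksDump output for a LUKS2 device with the Clevis token 0 in key slot 1 might look as follows; the exact layout is an illustration:
# cryptsetup luksDump /dev/sda2
LUKS header information
Version:        2
...
Tokens:
  0: clevis
        Keyslot:  1
...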
In the previous example, the Clevis token is identified by 0 and the associated key slot is 1.
3. If your device is encrypted by LUKS1, which is indicated by the Version: 1 string in the output of
the cryptsetup luksDump command, perform this additional step with the luksmeta wipe
command:
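For example, for the slot identified in the previous output:
# luksmeta wipe -d /dev/sda2 -s 1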
Additional resources
Procedure
1. Instruct Kickstart to partition the disk such that LUKS encryption is enabled for all mount points other than /boot, with a temporary password. The password is temporary for this step of the enrollment process.
Note that OSPP-compliant systems require a more complex configuration, for example:
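A minimal sketch of such a partitioning scheme with a temporary passphrase follows; an OSPP-compliant layout typically also places directories such as /home and /var on separate encrypted logical volumes:
part /boot --fstype="xfs" --ondisk=vda --size=256
part / --fstype="xfs" --ondisk=vda --grow --encrypted --passphrase=temppass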
2. Install the related Clevis packages by listing them in the %packages section:
%packages
clevis-dracut
clevis-luks
clevis-systemd
%end
3. Optional: To ensure that you can unlock the encrypted volume manually when required, add a
strong passphrase before you remove the temporary passphrase. See the How to add a
passphrase, key, or keyfile to an existing LUKS device article for more information.
4. Call clevis luks bind to perform binding in the %post section. Afterward, remove the
temporary password:
%post
clevis luks bind -y -k - -d /dev/vda2 \
tang '{"url":"http://tang.srv"}' <<< "temppass"
cryptsetup luksRemoveKey /dev/vda2 <<< "temppass"
dracut -fv --regenerate-all
%end
If your configuration relies on a Tang pin that requires network during early boot or you use
NBDE clients with static IP configurations, you have to modify the dracut command as
described in Configuring manual enrollment of LUKS-encrypted volumes .
Note that the -y option for the clevis luks bind command is available from RHEL 8.3. In RHEL 8.2 and earlier, replace -y with -f in the clevis luks bind command and download the advertisement from the Tang server:
%post
curl -sfg http://tang.srv/adv -o adv.jws
clevis luks bind -f -k - -d /dev/vda2 \
tang '{"url":"http://tang.srv","adv":"adv.jws"}' <<< "temppass"
cryptsetup luksRemoveKey /dev/vda2 <<< "temppass"
dracut -fv --regenerate-all
%end
WARNING
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
Additional resources
Procedure
2. Reboot the system, and then perform the binding step using the clevis luks bind command as
described in Configuring manual enrollment of LUKS-encrypted volumes , for example:
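For example (the device name /dev/sdb1 is illustrative):
# clevis luks bind -d /dev/sdb1 tang '{"url":"http://tang.srv"}'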
3. The LUKS-encrypted removable device can now be unlocked automatically in your GNOME desktop session. The device bound to a Clevis policy can also be unlocked by the clevis luks unlock command:
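For example (the device name is again illustrative):
# clevis luks unlock -d /dev/sdb1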
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
Additional resources
Clevis provides an implementation of SSS. It creates a key and divides it into a number of pieces. Each piece is encrypted by using a different pin, which can even be SSS recursively. Additionally, you define the threshold t. If an NBDE deployment decrypts at least t pieces, it recovers the encryption key and the decryption process succeeds. When Clevis detects fewer parts than specified by the threshold, it prints an error message.
{
  "t":1,
  "pins":{
    "tang":[
      {
        "url":"http://tang1.srv"
      },
      {
        "url":"http://tang2.srv"
      }
    ]
  }
}
In this configuration, the SSS threshold t is set to 1 and the clevis luks bind command successfully reconstructs the secret if at least one of the two listed Tang servers is available.
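For example, a binding that uses this configuration might look as follows (the device name is illustrative):
# clevis luks bind -d /dev/sda1 sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}'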
The configuration scheme with the SSS threshold 't' set to '2' is now:
{
  "t":2,
  "pins":{
    "tang":[
      {
        "url":"http://tang1.srv"
      }
    ],
    "tpm2":{
      "pcr_ids":"0,7"
    }
  }
}
Additional resources
tang(8) (section High Availability), clevis(1) (section Shamir’s Secret Sharing), and clevis-
encrypt-sss(1) man pages on your system
This is not a limitation of Clevis but a design principle of LUKS. If your scenario requires having encrypted
root volumes in a cloud, perform the installation process (usually using Kickstart) for each instance of
Red Hat Enterprise Linux in the cloud as well. The images cannot be shared without also sharing a LUKS
master key.
To deploy automated unlocking in a virtualized environment, use systems such as lorax or virt-install
together with a Kickstart file (see Configuring automated enrollment of LUKS-encrypted volumes using
Kickstart) or another automated provisioning tool to ensure that each encrypted VM has a unique
master key.
Additional resources
Deploying automatically-enrollable encrypted images in a cloud environment can provide a unique set
of challenges. As in other virtualization environments, it is recommended to reduce the number of instances started from a single image to avoid sharing the LUKS master key.
Therefore, the best practice is to create customized images that are not shared in any public repository
and that provide a base for the deployment of a limited amount of instances. The exact number of
instances to create should be defined by the deployment's security policies and based on the risk tolerance
associated with the LUKS master key attack vector.
To build LUKS-enabled automated deployments, systems such as Lorax or virt-install together with a
Kickstart file should be used to ensure master key uniqueness during the image building process.
Cloud environments enable two Tang server deployment options which we consider here. First, the Tang
server can be deployed within the cloud environment itself. Second, the Tang server can be deployed
outside of the cloud on independent infrastructure with a VPN link between the two infrastructures.
Deploying Tang natively in the cloud does allow for easy deployment. However, given that it shares
infrastructure with the data persistence layer of ciphertext of other systems, it may be possible for both
the Tang server’s private key and the Clevis metadata to be stored on the same physical disk. Access to
this physical disk permits a full compromise of the ciphertext data.
IMPORTANT
Always maintain a physical separation between the location where the data is stored and
the system where Tang is running. This separation between the cloud and the Tang
server ensures that the Tang server’s private key cannot be accidentally combined with
the Clevis metadata. It also provides local control of the Tang server if the cloud
infrastructure is at risk.
Prerequisites
The podman package and its dependencies are installed on the system.
You have logged in on the registry.redhat.io container catalog using the podman login
registry.redhat.io command. See Red Hat Container Registry Authentication for more
information.
The Clevis client is installed on systems containing LUKS-encrypted volumes that you want to
automatically unlock by using a Tang server.
Procedure
2. Run the container, specify its port, and specify the path to the Tang keys. The following example runs the tang container, specifies port 7500, and indicates that the Tang keys are in the /var/db/tang directory.
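For example, assuming the Tang container image from the Red Hat registry (the exact image name can differ):
# podman run -d -p 7500:7500 -v /var/db/tang:/var/db/tang:Z registry.redhat.io/rhel8/tang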
Note that Tang uses port 80 by default but this may collide with other services such as the
Apache HTTP server.
3. Optional: For increased security, rotate the Tang keys periodically. You can use the tangd-
rotate-keys script, for example:
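For example, a rotation run might look as follows, assuming the script operates on the key directory inside the container and the same image as above is used:
# podman run --rm -v /var/db/tang:/var/db/tang:Z registry.redhat.io/rhel8/tang tangd-rotate-keys -v -d /var/db/tang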
Verification
On a system that contains LUKS-encrypted volumes for automated unlocking by the presence
of the Tang server, check that the Clevis client can encrypt and decrypt a plain-text message
using Tang:
x1AIpc6WmnCU-CabD8_4q18vDuw
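For example, a round trip of the following form can be used; the thumbprint above is the signing key shown in the trust prompt:
# echo test | clevis encrypt tang '{"url":"http://localhost:7500"}' | clevis decrypt
The advertisement contains the following signing keys:
x1AIpc6WmnCU-CabD8_4q18vDuw
Do you wish to trust these keys? [ynYN] y
test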
The previous example command shows the test string at the end of its output when a Tang
server is available on the localhost URL and communicates through port 7500.
Additional resources
For more details on automated unlocking of LUKS-encrypted volumes using Clevis and Tang,
see the Configuring automated unlocking of encrypted volumes using policy-based decryption
chapter.
9.14.16.1. Using the nbde_server RHEL system role for setting up multiple Tang servers
By using the nbde_server system role, you can deploy and manage a Tang server as part of an
automated disk encryption solution. This role supports the following features:
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploy a Tang server
  hosts: tang.server.example.com
  tasks:
    - name: Install and configure periodic key rotation
      ansible.builtin.include_role:
        name: rhel-system-roles.nbde_server
      vars:
        nbde_server_rotate_keys: yes
        nbde_server_manage_firewall: true
        nbde_server_manage_selinux: true
This example playbook ensures the deployment of your Tang server and a key rotation.
nbde_server_manage_firewall: true
Use the firewall system role to manage ports used by the nbde_server role.
nbde_server_manage_selinux: true
Use the selinux system role to manage ports used by the nbde_server role.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.nbde_server/README.md file on the control node.
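The note below refers to validating the playbook; a typical syntax check is:
$ ansible-playbook --syntax-check ~/playbook.yml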
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
On your NBDE client, verify that your Tang server works correctly by using the following
command. The command must return the identical message you pass for encryption and
decryption:
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_server/README.md file
/usr/share/doc/rhel-system-roles/nbde_server/ directory
9.14.16.2. Setting up Clevis clients with DHCP by using the nbde_client RHEL system role
The nbde_client system role enables you to deploy multiple Clevis clients in an automated way.
This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers, that is, Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it.
After removing the passphrase, you can unlock the volume only using NBDE. This is useful when a volume
is initially encrypted using a temporary key or password that you should remove after you provision the
system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not
find any of these valid, it attempts to retrieve a passphrase from an existing binding.
Policy-Based Decryption (PBD) defines a binding as a mapping of a device to a slot. This means that
you can have multiple bindings for the same device. The default slot is slot 1.
NOTE
The nbde_client system role supports only Tang bindings. Therefore, you cannot use it
for TPM2 bindings.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure clients for unlocking of encrypted volumes by Tang servers
  hosts: managed-node-01.example.com
  tasks:
    - name: Create NBDE client bindings
      ansible.builtin.include_role:
        name: rhel-system-roles.nbde_client
      vars:
        nbde_client_bindings:
          - device: /dev/rhel/root
            encryption_key_src: /etc/luks/keyfile
            nbde_client_early_boot: true
            state: present
            servers:
              - http://server1.example.com
              - http://server2.example.com
          - device: /dev/rhel/swap
            encryption_key_src: /etc/luks/keyfile
            servers:
              - http://server1.example.com
              - http://server2.example.com
This example playbook configures Clevis clients for automated unlocking of two LUKS-
encrypted volumes when at least one of two Tang servers is available.
state: present
The values of state indicate the configuration after you run the playbook. Use the present value for either creating a new binding or updating an existing one. Unlike the clevis luks bind command, you can also use state: present to overwrite an existing binding in its device slot. The absent value removes a specified binding.
nbde_client_early_boot: true
The nbde_client role ensures that networking for a Tang pin is available during early boot by
default. If your scenario requires disabling this feature, add the nbde_client_early_boot: false variable to your playbook.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.nbde_client/README.md file on the control node.
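As in the previous procedure, the note below refers to a playbook syntax check, for example:
$ ansible-playbook --syntax-check ~/playbook.yml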
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
1. On your NBDE client, check that the encrypted volumes that should be automatically unlocked by your Tang servers contain the corresponding information in their LUKS pins:
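For example, for one of the devices from the example playbook (slot numbers are illustrative):
# clevis luks list -d /dev/rhel/root
1: tang '{"url":"http://server1.example.com"}'
2: tang '{"url":"http://server2.example.com"}'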
2. If you do not use the nbde_client_early_boot: false variable, verify that the bindings are
available for the early boot, for example:
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md file
/usr/share/doc/rhel-system-roles/nbde_client/ directory
9.14.16.3. Setting up static-IP Clevis clients by using the nbde_client RHEL system role
The nbde_client RHEL system role supports only scenarios with Dynamic Host Configuration Protocol
(DHCP). On an NBDE client with static IP configuration, you must pass your network configuration as a
kernel boot parameter.
Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to
which Ansible assigns static IP addresses during early boot. In this case, you can use variables in the
playbook and provide the settings in an external file. As a result, you need only one playbook and one file
with the settings.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a file with the network settings of your hosts, for example, static-ip-settings-
clients.yml, and add the values you want to dynamically assign to the hosts:
clients:
  managed-node-01.example.com:
    ip_v4: 192.0.2.1
    gateway_v4: 192.0.2.254
    netmask_v4: 255.255.255.0
    interface: enp1s0
  managed-node-02.example.com:
    ip_v4: 192.0.2.2
    gateway_v4: 192.0.2.254
    netmask_v4: 255.255.255.0
    interface: enp1s0
2. Create a playbook file, for example, ~/playbook.yml, with the following content:
- name: Configure a Clevis client with static IP address during early boot
  ansible.builtin.include_role:
    name: rhel-system-roles.bootloader
  vars:
    bootloader_settings:
      - kernel: ALL
        options:
          - name: ip
            value: "{{ clients[inventory_hostname]['ip_v4'] }}::{{ clients[inventory_hostname]['gateway_v4'] }}:{{ clients[inventory_hostname]['netmask_v4'] }}::{{ clients[inventory_hostname]['interface'] }}:none"
This playbook reads certain values dynamically for each host listed in the ~/static-ip-settings-
clients.yml file.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
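Again, the note below refers to a syntax check of the playbook, for example:
$ ansible-playbook --syntax-check ~/playbook.yml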
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md file
/usr/share/doc/rhel-system-roles/nbde_client/ directory
Looking forward to Linux network configuration in the initial ramdisk (initrd) (Red Hat Enable
Sysadmin)
CHAPTER 10. USING SELINUX
Security Enhanced Linux (SELinux) implements Mandatory Access Control (MAC). Every process and
system resource has a special security label called an SELinux context. An SELinux context, sometimes
referred to as an SELinux label, is an identifier which abstracts away the system-level details and focuses
on the security properties of the entity. Not only does this provide a consistent way of referencing
objects in the SELinux policy, but it also removes any ambiguity that can be found in other identification
methods. For example, a file can have multiple valid path names on a system that makes use of bind
mounts.
The SELinux policy uses these contexts in a series of rules which define how processes can interact with
each other and the various system resources. By default, the policy does not allow any interaction unless
a rule explicitly grants access.
NOTE
Remember that SELinux policy rules are checked after DAC rules. SELinux policy rules
are not used if DAC rules deny access first, which means that no SELinux denial is logged
if the traditional DAC rules prevent the access.
SELinux contexts have several fields: user, role, type, and security level. The SELinux type information is
perhaps the most important when it comes to the SELinux policy, as the most common policy rule which
defines the allowed interactions between processes and system resources uses SELinux types and not
the full SELinux context. SELinux types end with _t. For example, the type name for the web server is
httpd_t. The type context for files and directories normally found in /var/www/html/ is
httpd_sys_content_t. The type context for files and directories normally found in /tmp and /var/tmp/ is tmp_t. The type context for web server ports is http_port_t.
There is a policy rule that permits Apache (the web server process running as httpd_t) to access files
and directories with a context normally found in /var/www/html/ and other web server directories
(httpd_sys_content_t). There is no allow rule in the policy for files normally found in /tmp and /var/tmp/,
so access is not permitted. With SELinux, even if Apache is compromised, and a malicious script gains
access, it is still not able to access the /tmp directory.
Figure 10.1. An example of how SELinux can help to run Apache and MariaDB in a secure way.
As the previous figure shows, SELinux allows the Apache process running as httpd_t to access the /var/www/html/ directory and denies the same process access to the /data/mysql/ directory because
there is no allow rule for the httpd_t and mysqld_db_t type contexts. On the other hand, the MariaDB
process running as mysqld_t is able to access the /data/mysql/ directory and SELinux also correctly
denies the process with the mysqld_t type to access the /var/www/html/ directory labeled as
httpd_sys_content_t.
Additional resources
selinux(8) man page and man pages listed by the apropos selinux command.
Man pages listed by the man -k _selinux command when the selinux-policy-doc package is
installed.
The SELinux Coloring Book helps you to better understand SELinux basic concepts.
All processes and files are labeled. SELinux policy rules define how processes interact with files,
as well as how processes interact with each other. Access is only allowed if an SELinux policy
rule exists that specifically allows it.
SELinux provides fine-grained access control. Stepping beyond traditional UNIX permissions
that are controlled at user discretion and based on Linux user and group IDs, SELinux access
decisions are based on all available information, such as an SELinux user, role, type, and,
optionally, a security level.
SELinux can mitigate privilege escalation attacks. Processes run in domains, and are therefore
separated from each other. SELinux policy rules define how processes access files and other
processes. If a process is compromised, the attacker only has access to the normal functions of
that process, and to files the process has been configured to have access to. For example, if the
Apache HTTP Server is compromised, an attacker cannot use that process to read files in user
home directories, unless a specific SELinux policy rule was added or configured to allow such
access.
SELinux can enforce data confidentiality and integrity, and can protect processes from
untrusted inputs.
SELinux is designed to enhance existing security solutions, not replace antivirus software, secure
passwords, firewalls, or other security systems. Even when running SELinux, it is important to continue to
follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords,
and firewalls.
The default action is deny. If an SELinux policy rule does not exist to allow access, such as for a
process opening a file, access is denied.
SELinux can confine Linux users. A number of confined SELinux users exist in the SELinux
policy. Linux users can be mapped to confined SELinux users to take advantage of the security
rules and mechanisms applied to them. For example, mapping a Linux user to the SELinux user_u user results in a Linux user that, unless configured otherwise, is not able to run set user ID (setuid) applications, such as sudo and su.
Increased process and data separation. The concept of SELinux domains allows defining which
processes can access certain files and directories. For example, when running SELinux, unless
otherwise configured, an attacker cannot compromise a Samba server, and then use that Samba
server as an attack vector to read and write to files used by other processes, such as MariaDB
databases.
SELinux helps mitigate the damage made by configuration mistakes. Domain Name System
(DNS) servers often replicate information between each other in a zone transfer. Attackers can
use zone transfers to update DNS servers with false information. When running the Berkeley
Internet Name Domain (BIND) as a DNS server in RHEL, even if an administrator forgets to limit
which servers can perform a zone transfer, the default SELinux policy prevents updates to zone files [1] that use zone transfers, by the BIND named daemon itself, and by other processes.
Without SELinux, an attacker can misuse a path traversal vulnerability on an Apache web server and access files and directories stored on the file system by using special elements such
as ../. If an attacker attempts an attack on a server running with SELinux in enforcing mode,
SELinux denies access to files that the httpd process must not access. SELinux cannot block
this type of attack completely but it effectively mitigates it.
The deny_ptrace SELinux boolean and SELinux in enforcing mode protect systems from the
PTRACE_TRACEME vulnerability (CVE-2019-13272). Such a configuration prevents scenarios in which an attacker could gain root privileges.
Additional resources
SELinux as a security pillar of an operating system - Real-world benefits and examples (Red Hat
Knowledgebase)
SELinux decisions, such as allowing or disallowing access, are cached. This cache is known as the Access
Vector Cache (AVC). When using these cached decisions, SELinux policy rules need to be checked less,
which increases performance. Remember that SELinux policy rules have no effect if DAC rules deny
access first. Raw audit messages are logged to the /var/log/audit/audit.log and they start with the
type=AVC string.
In RHEL 8, system services are controlled by the systemd daemon; systemd starts and stops all
services, and users and processes communicate with systemd using the systemctl utility. The systemd
daemon can consult the SELinux policy and check the label of the calling process and the label of the
unit file that the caller tries to manage, and then ask SELinux whether or not the caller is allowed the
access. This approach strengthens access control to critical system capabilities, which include starting
and stopping system services.
The systemd daemon also works as an SELinux Access Manager. It retrieves the label of the process
running systemctl or the process that sent a D-Bus message to systemd. The daemon then looks up
the label of the unit file that the process wanted to configure. Finally, systemd can retrieve information
from the kernel if the SELinux policy allows the specific access between the process label and the unit
file label. This means a compromised application that needs to interact with systemd for a specific
service can now be confined by SELinux. Policy writers can also use these fine-grained controls to
confine administrators.
If a process is sending a D-Bus message to another process and if the SELinux policy does not allow the
D-Bus communication of these two processes, then the system prints a USER_AVC denial message,
and the D-Bus communication times out. Note that the D-Bus communication between two processes
works bidirectionally.
IMPORTANT
To avoid incorrect SELinux labeling and subsequent problems, ensure that you start
services using a systemctl start command.
Enforcing mode is the default, and recommended, mode of operation; in enforcing mode
SELinux operates normally, enforcing the loaded security policy on the entire system.
In permissive mode, the system acts as if SELinux is enforcing the loaded security policy,
including labeling objects and emitting access denial entries in the logs, but it does not actually
deny any operations. While not recommended for production systems, permissive mode can be
helpful for SELinux policy development and debugging.
Disabled mode is strongly discouraged; not only does the system avoid enforcing the SELinux
policy, it also avoids labeling any persistent objects such as files, making it difficult to enable
SELinux in the future.
Use the setenforce utility to change between enforcing and permissive mode. Changes made with
setenforce do not persist across reboots. To change to enforcing mode, enter the setenforce 1
command as the Linux root user. To change to permissive mode, enter the setenforce 0 command. Use
the getenforce utility to view the current SELinux mode:
# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive
# setenforce 1
# getenforce
Enforcing
In Red Hat Enterprise Linux, you can set individual domains to permissive mode while the system runs in
enforcing mode. For example, to make the httpd_t domain permissive:
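For example, with the semanage utility:
# semanage permissive -a httpd_t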
Note that permissive domains are a powerful tool that can compromise security of your system. Red Hat
recommends using permissive domains with caution, for example, when debugging a specific scenario.
Use the getenforce or sestatus commands to check in which mode SELinux is running. The getenforce
command returns Enforcing, Permissive, or Disabled.
The sestatus command returns the SELinux status and the SELinux policy being used:
$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
WARNING
When systems run SELinux in permissive mode, users and processes might label
various file-system objects incorrectly. File-system objects created while SELinux is
disabled are not labeled at all. This behavior causes problems when changing to
enforcing mode because SELinux relies on correct labels of file-system objects.
To prevent incorrectly labeled and unlabeled files from causing problems, SELinux
automatically relabels file systems when changing from the disabled state to
permissive or enforcing mode. Use the fixfiles -F onboot command as root to
create the /.autorelabel file containing the -F option to ensure that files are
relabeled upon next reboot.
Before rebooting the system for relabeling, make sure the system will boot in
permissive mode, for example by using the enforcing=0 kernel option. This
prevents the system from failing to boot in case the system contains unlabeled files
required by systemd before launching the selinux-autorelabel service. For more
information, see RHBZ#2021835.
Prerequisites
Procedure
1. Open the /etc/selinux/config file in a text editor of your choice, for example:
# vi /etc/selinux/config
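Set the SELINUX option accordingly; the relevant line in /etc/selinux/config then reads:
SELINUX=permissive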
# reboot
Verification
1. After the system restarts, confirm that the getenforce command returns Permissive:
$ getenforce
Permissive
Prerequisites
Procedure
1. Open the /etc/selinux/config file in a text editor of your choice, for example:
# vi /etc/selinux/config
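Set the SELINUX option accordingly; the relevant line in /etc/selinux/config then reads:
SELINUX=enforcing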
# reboot
On the next boot, SELinux relabels all the files and directories within the system and adds
SELinux context for files and directories that were created when SELinux was disabled.
Verification
1. After the system restarts, confirm that the getenforce command returns Enforcing:
$ getenforce
Enforcing
Troubleshooting
After changing to enforcing mode, SELinux may deny some actions because of incorrect or missing
SELinux policy rules.
To view what actions SELinux denies, enter the following command as root:
If SELinux is active and the Audit daemon (auditd) is not running on your system, then search
for certain SELinux messages in the output of the dmesg command:
When systems run SELinux in permissive mode, users and processes might label various file-system
objects incorrectly. File-system objects created while SELinux is disabled are not labeled at all. This
behavior causes problems when changing to enforcing mode because SELinux relies on correct labels of
file-system objects.
To prevent incorrectly labeled and unlabeled files from causing problems, SELinux automatically
relabels file systems when changing from the disabled state to permissive or enforcing mode.
WARNING
Before rebooting the system for relabeling, make sure the system will boot in
permissive mode, for example by using the enforcing=0 kernel option. This
prevents the system from failing to boot in case the system contains unlabeled files
required by systemd before launching the selinux-autorelabel service. For more
information, see RHBZ#2021835.
Procedure
1. Enable SELinux in permissive mode. For more information, see Changing to permissive mode .
# reboot
3. Check for SELinux denial messages. For more information, see Identifying SELinux denials.
# fixfiles -F onboot
WARNING
By default, autorelabel uses as many threads in parallel as the system has available CPU cores.
To use only a single thread during automatic relabeling, use the fixfiles -T 1 onboot command.
5. If there are no denials, switch to enforcing mode. For more information, see Changing SELinux
modes at boot time.
Verification
1. After the system restarts, confirm that the getenforce command returns Enforcing:
$ getenforce
Enforcing
Next steps
To run custom applications with SELinux in enforcing mode, choose one of the following scenarios:
Write a new policy for your application. See the Writing a custom SELinux policy section for
more information.
Additional resources
Do not disable SELinux except in specific scenarios, such as performance-sensitive systems where the
weakened security does not impose significant risks.
IMPORTANT
Prerequisites
$ rpm -q grubby
grubby-<version>
Procedure
1. Configure your boot loader to add selinux=0 to the kernel command line:
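For example, with grubby; applying the parameter to all installed kernels is an assumption:
$ sudo grubby --update-kernel ALL --args selinux=0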
$ reboot
Verification
After the reboot, confirm that the getenforce command returns Disabled:
$ getenforce
Disabled
Alternative method
In RHEL 8, you can still use the deprecated method for disabling SELinux by using the
SELINUX=disabled option in the /etc/selinux/config file. This results in the kernel booting with SELinux enabled and switching to disabled mode later in the boot process. Consequently, memory leaks and race
conditions might occur that cause kernel panics. To use this method:
1. Open the /etc/selinux/config file in a text editor of your choice, for example:
# vi /etc/selinux/config
# reboot
enforcing=0
Setting this parameter causes the system to start in permissive mode, which is useful when
troubleshooting issues. Using permissive mode might be the only option to detect a problem if your
file system is too corrupted. Moreover, in permissive mode, the system continues to create the labels
correctly. The AVC messages that are created in this mode can be different than in enforcing mode.
In permissive mode, only the first denial from a series of the same denials is reported. However, in
enforcing mode, you might get a denial related to reading a directory, and an application stops. In
permissive mode, you get the same AVC message, but the application continues reading files in the
directory and you get an AVC for each denial in addition.
selinux=0
This parameter causes the kernel to not load any part of the SELinux infrastructure. The init scripts
notice that the system booted with the selinux=0 parameter and touch the /.autorelabel file. This
causes the system to automatically relabel the next time you boot with SELinux enabled.
IMPORTANT
autorelabel=1
This parameter forces the system to relabel similarly to the following commands:
# touch /.autorelabel
# reboot
If a file system contains a large amount of mislabeled objects, start the system in permissive mode to
make the autorelabel process successful.
Additional resources
For additional SELinux-related kernel boot parameters, such as checkreqprot, see the
/usr/share/doc/kernel-doc-<KERNEL_VER>/Documentation/admin-guide/kernel-
parameters.txt file installed with the kernel-doc package. Replace the <KERNEL_VER> string
with the version number of the installed kernel, for example:
Procedure
1. When your scenario is blocked by SELinux, the /var/log/audit/audit.log file is the first place to
check for more information about a denial. To query Audit logs, use the ausearch tool. Because
the SELinux decisions, such as allowing or disallowing access, are cached and this cache is known
as the Access Vector Cache (AVC), use the AVC and USER_AVC values for the message type
parameter, for example:
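For example, limited to recent events:
# ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent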
If there are no matches, check whether the Audit daemon is running. If it is not, start auditd, repeat the denied scenario, and check the Audit log again.
2. In case auditd is running, but there are no matches in the output of ausearch, check messages
provided by the systemd Journal:
# journalctl -t setroubleshoot
3. If SELinux is active and the Audit daemon is not running on your system, then search for certain
SELinux messages in the output of the dmesg command:
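For example, by filtering for Audit record types; the type numbers used here, 1300 for syscall records and 1400 for AVC records, are an assumption:
# dmesg | grep -i -e type=1300 -e type=1400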
4. Even after the previous three checks, it is still possible that you have not found anything. In this
case, AVC denials can be silenced because of dontaudit rules.
To temporarily disable dontaudit rules, allowing all denials to be logged:
# semodule -DB
After re-running your denied scenario and finding denial messages using the previous steps, the
following command enables dontaudit rules in the policy again:
# semodule -B
5. If you apply all four previous steps, and the problem still remains unidentified, consider if
SELinux really blocks your scenario:
# setenforce 0
$ getenforce
Permissive
If the problem still occurs, something different than SELinux is blocking your scenario.
Prerequisites
Procedure
1. List more details about a logged denial using the sealert command, for example:
$ sealert -l "*"
SELinux is preventing /usr/bin/passwd from write access on the file
/root/test.
If you want to ignore passwd trying to write access the test file,
because you believe it should not need this access.
Then you should report this as a bug.
You can generate a local policy module to dontaudit this access.
Do
# ausearch -x /usr/bin/passwd --raw | audit2allow -D -M my-passwd
# semodule -X 300 -i my-passwd.pp
...
Hash: passwd,passwd_t,admin_home_t,file,write
2. If the output obtained in the previous step does not contain clear suggestions:
Enable full-path auditing to see full paths to accessed objects and to make additional Linux
Audit event fields visible:
# rm -f /var/lib/setroubleshoot/setroubleshoot.xml
Repeat step 1.
After you finish the process, disable full-path auditing:
3. If sealert returns only catchall suggestions or suggests adding a new rule using the audit2allow
tool, match your problem with examples listed and explained in SELinux denials in the Audit log .
Additional resources
Be careful when the tool suggests using the audit2allow tool for configuration changes. You should not
use audit2allow to generate a local policy module as your first option when you see an SELinux denial.
Troubleshooting should start with a check for a labeling problem. The second most common case is that you have changed a process configuration but forgot to tell SELinux about it.
Labeling problems
A common cause of labeling problems is when a non-standard directory is used for a service. For
example, instead of using /var/www/html/ for a website, an administrator might want to use
/srv/myweb/. On Red Hat Enterprise Linux, the /srv directory is labeled with the var_t type. Files and
directories created in /srv inherit this type. Also, newly-created objects in top-level directories, such as
/myserver, can be labeled with the default_t type. SELinux prevents the Apache HTTP Server ( httpd)
from accessing both of these types. To allow access, SELinux must know that the files in /srv/myweb/
are to be accessible by httpd:
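The semanage command that the following paragraph describes typically has this form:
# semanage fcontext -a -t httpd_sys_content_t "/srv/myweb(/.*)?"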
This semanage command adds the context for the /srv/myweb/ directory and all files and directories
under it to the SELinux file-context configuration. The semanage utility does not change the context.
As root, use the restorecon utility to apply the changes:
# restorecon -R -v /srv/myweb
Incorrect context
The matchpathcon utility checks the context of a file path and compares it to the default label for that
path. The following example demonstrates the use of matchpathcon on a directory that contains
incorrectly labeled files:
$ matchpathcon -V /var/www/html/*
/var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be
system_u:object_r:httpd_sys_content_t:s0
/var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be
system_u:object_r:httpd_sys_content_t:s0
In this example, the index.html and page1.html files are labeled with the user_home_t type. This type
is used for files in user home directories. Using the mv command to move files from your home directory
may result in files being labeled with the user_home_t type. This type should not exist outside of home
directories. Use the restorecon utility to restore such files to their correct type:
# restorecon -v /var/www/html/index.html
restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0-
>system_u:object_r:httpd_sys_content_t:s0
To restore the context for all files under a directory, use the -R option:
# restorecon -R -v /var/www/html/
restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0-
>system_u:object_r:httpd_sys_content_t:s0
restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0-
>system_u:object_r:httpd_sys_content_t:s0
For example, to allow the Apache HTTP Server to communicate with MariaDB, enable the
httpd_can_network_connect_db boolean:
# setsebool -P httpd_can_network_connect_db on
Note that the -P option makes the setting persistent across reboots of the system.
If access is denied for a particular service, use the getsebool and grep utilities to see if any booleans
are available to allow access. For example, use the getsebool -a | grep ftp command to search for FTP
related booleans:
To get a list of booleans and to find out if they are enabled or disabled, use the getsebool -a command.
To get a list of booleans including their meaning, and to find out if they are enabled or disabled, install
the selinux-policy-devel package and use the semanage boolean -l command as root.
Port numbers
Depending on policy configuration, services can only be allowed to run on certain port numbers.
Attempting to change the port a service runs on without changing policy may result in the service failing
to start. For example, run the semanage port -l | grep http command as root to list http related ports:
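Abbreviated example output of such a listing, showing only the line discussed below:
# semanage port -l | grep http
http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443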
The http_port_t port type defines the ports Apache HTTP Server can listen on, which in this case, are
TCP ports 80, 443, 488, 8008, 8009, and 8443. If an administrator configures httpd.conf so that httpd
listens on port 9876 (Listen 9876), but policy is not updated to reflect this, the following command fails:
To allow httpd to listen on a port that is not listed for the http_port_t port type, use the semanage port
command to assign a different label to the port:
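For the port 9876 example above:
# semanage port -a -t http_port_t -p tcp 9876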
The -a option adds a new record; the -t option defines a type; and the -p option defines a protocol. The
last argument is the port number to add.
For these situations, after access is denied, use the audit2allow utility to create a custom policy module
to allow access. You can report missing rules in the SELinux policy in Red Hat Bugzilla. For Red Hat
Enterprise Linux 8, create bugs against the Red Hat Enterprise Linux 8 product, and select the
selinux-policy component. Include the output of the audit2allow -w -a and audit2allow -a commands
in such bug reports.
If an application asks for major security privileges, it could be a signal that the application is
compromised. Use intrusion detection tools to inspect such suspicious behavior.
The Solution Engine on the Red Hat Customer Portal can also provide guidance in the form of an article
containing a possible solution for the same or very similar problem you have. Select the relevant product
and version and use SELinux-related keywords, such as selinux or avc, together with the name of your
blocked service or application, for example: selinux samba.
WARNING
Use only rules provided by Red Hat. Red Hat does not support creating SELinux
policy modules with custom rules, because this falls outside of the Production
Support Scope of Coverage. If you are not an expert, contact your Red Hat sales
representative and request consulting services.
Prerequisites
Procedure
# vim <local_module>.cil
To keep your local modules better organized, use the local_ prefix in the names of local SELinux
policy modules.
2. Insert the custom rules from a Known Issue or a Red Hat Solution.
IMPORTANT
Do not write your own rules. Use only the rules provided in a specific Known Issue
or Red Hat Solution.
For example, to implement the SELinux denies cups-lpd read access to cups.sock in RHEL
solution, insert the following rule:
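The rule in question, in the CIL syntax quoted later in this section:
(allow cupsd_lpd_t cupsd_var_run_t (sock_file (read)))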
The example solution has been fixed permanently for RHEL in RHBA-2021:4420. Therefore,
the parts of this procedure specific to this solution have no effect on updated RHEL 8 and 9
systems, and are included only as examples of syntax.
You can use either of the two SELinux rule syntaxes, Common Intermediate Language
(CIL) and m4. For example, (allow cupsd_lpd_t cupsd_var_run_t (sock_file (read))) in
CIL is equivalent to the following in m4:
require {
    type cupsd_var_run_t;
    type cupsd_lpd_t;
    class sock_file read;
}

allow cupsd_lpd_t cupsd_var_run_t:sock_file read;
# semodule -i <local_module>.cil
If you want to remove a local policy module which you created by using semodule -i, refer to the
module name without the .cil suffix. To remove a local policy module, use semodule -r
<local_module>.
Verification
Because local modules have priority 400, you can filter them from the list also by using that
value, for example, by using the semodule -lfull | grep -v ^100 command.
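The explanation below refers to a rule of the following generic form:
allow <SOURCENAME> <TARGETNAME>:<CLASSNAME> { <P1> <P2> };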
Where <SOURCENAME> is the source SELinux type, <TARGETNAME> is the target SELinux
type, <CLASSNAME> is the security class or object class name, and <P1> and <P2> are the
specific permissions of the rule.
For example, for the SELinux denies cups-lpd read access to cups.sock in RHEL solution:
b. Check the SELinux context of the process listed in the output of the previous command:
4. Verify that the service does not cause any SELinux denials:
Additional resources
To list only SELinux-related records, use the ausearch command with the message type parameter set
to AVC and AVC_USER at a minimum, for example:
# ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR
An SELinux denial entry in the Audit log file can look as follows:
avc: denied - the action performed by SELinux and recorded in Access Vector Cache (AVC)
pid=6591 - the process identifier of the subject that tried to perform the denied action
comm="httpd" - the name of the command that was used to invoke the analyzed process
nfs_t - the SELinux type of the object affected by the process action
SELinux denied the httpd process with PID 6591 and the httpd_t type to read from a directory with the
nfs_t type.
The following SELinux denial message occurs when the Apache HTTP Server attempts to access a
directory labeled with a type for the Samba suite:
{ getattr } - the getattr entry indicates the source process was trying to read the target file’s
status information. This occurs before reading files. SELinux denies this action because the
process accesses the file and it does not have an appropriate label. Commonly seen permissions
include getattr, read, and write.
path="/var/www/html/file1" - the path to the object (target) the process attempted to access.
SELinux denied the httpd process with PID 2465 to access the /var/www/html/file1 file with the
samba_share_t type, which is not accessible to processes running in the httpd_t domain unless
configured otherwise.
Additional resources
What is SELinux trying to tell me? The 4 key causes of SELinux errors (Fedora People)
[1] Text files that include DNS information, such as hostname to IP address mappings.
CHAPTER 11. CONFIGURING IP NETWORKING WITH IFCFG FILES
IMPORTANT
In a future major RHEL release, the keyfile format will be default. Consider using the
keyfile format if you want to manually create and manage configuration files. For details,
see NetworkManager connection profiles in keyfile format .
Procedure
To configure an interface with static network settings using ifcfg files, for an interface with the
name enp1s0, create a file with the name ifcfg-enp1s0 in the /etc/sysconfig/network-scripts/
directory that contains:
DEVICE=enp1s0
BOOTPROTO=none
ONBOOT=yes
PREFIX=24
IPADDR=192.0.2.1
GATEWAY=192.0.2.254
DEVICE=enp1s0
BOOTPROTO=none
ONBOOT=yes
IPV6INIT=yes
IPV6ADDR=2001:db8:1::2/64
Additional resources
Procedure
1. To configure an interface named em1 with dynamic network settings using ifcfg files, create a
file with the name ifcfg-em1 in the /etc/sysconfig/network-scripts/ directory that contains:
DEVICE=em1
BOOTPROTO=dhcp
ONBOOT=yes
To send a different host name to the DHCP server, add the following line to the ifcfg file:
DHCP_HOSTNAME=hostname
To send a different fully qualified domain name (FQDN) to the DHCP server, add the following line to the ifcfg file:
DHCP_FQDN=fully.qualified.domain.name
NOTE
You can use only one of these settings. If you specify both DHCP_HOSTNAME
and DHCP_FQDN, only DHCP_FQDN is used.
3. To configure an interface to use particular DNS servers, add the following lines to the ifcfg file:
PEERDNS=no
DNS1=ip-address
DNS2=ip-address
where ip-address is the address of a DNS server. This causes the network service to update /etc/resolv.conf with the specified DNS servers. Only one DNS server address is necessary; the other is optional.
Prerequisite
Procedure
1. Edit the ifcfg file in the /etc/sysconfig/network-scripts/ directory that you want to limit to
certain users, and add:
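For example, a directive of the following form restricts activation of the connection to the listed users; the key name and the user name are illustrative and should be verified for your NetworkManager ifcfg plugin version:
USERS="joesec"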
CHAPTER 12. GETTING STARTED WITH IPVLAN
L2 mode
In IPVLAN L2 mode, virtual devices receive and respond to address resolution protocol (ARP)
requests. The netfilter framework runs only inside the container that owns the virtual device. No
netfilter chains are executed in the default namespace on the containerized traffic. Using L2
mode provides good performance, but less control over the network traffic.
L3 mode
In L3 mode, virtual devices process only L3 traffic and above. Virtual devices do not respond to ARP requests, and users must configure the neighbour entries for the IPVLAN IP addresses on
the relevant peers manually. The egress traffic of a relevant container is landed on the netfilter
POSTROUTING and OUTPUT chains in the default namespace while the ingress traffic is
threaded in the same way as L2 mode. Using L3 mode provides good control but decreases the
network traffic performance.
L3S mode
In L3S mode, virtual devices process traffic in the same way as in L3 mode, except that both egress and ingress traffic of a relevant container lands on the netfilter chain in the default namespace.
L3S mode behaves in a similar way to L3 mode but provides greater control of the network.
NOTE
The IPVLAN virtual device does not receive broadcast and multicast traffic in L3
and L3S modes.
MACVLAN:
Uses a unique MAC address for each MACVLAN device. Note that if a switch reaches the
maximum number of MAC addresses it can store in its MAC table, connectivity can be lost.
Netfilter rules for a global namespace cannot affect traffic to or from a MACVLAN device in a
child namespace.
IPVLAN:
Uses a single MAC address, which does not limit the number of IPVLAN devices.
It is possible to control traffic to or from an IPVLAN device in L3 mode and L3S mode.
Procedure
Note that a network interface controller (NIC) is a hardware component that connects a
computer to a network.
2. To assign an IPv4 or IPv6 address to the interface, enter the following command:
3. In case of configuring an IPVLAN device in L3 mode or L3S mode, make the following setups:
a. Configure the neighbor setup for the remote peer on the remote host:
where MAC_address is the MAC address of the real NIC on which the IPVLAN device is based.
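The commands for the preceding steps are not shown in this extract. A minimal sketch using the iproute2 utilities; the parent NIC enp0s31f6, the device name my_ipvlan, and the addresses are illustrative:
# ip link add link enp0s31f6 name my_ipvlan type ipvlan mode l3
# ip addr add dev my_ipvlan 192.0.2.1/24
On the remote peer, add a static neighbor entry that maps the IPVLAN IP address to the MAC address of the real NIC:
# ip neigh add dev <remote_interface> 192.0.2.1 lladdr <MAC_address>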
5. To check if the IPVLAN device is active, execute the following command on the remote host:
# ping IP_address
CHAPTER 13. REUSING THE SAME IP ADDRESS ON DIFFERENT INTERFACES
One benefit of VRF over partitioning on layer 2 is that routing scales better considering the number of
peers involved.
Red Hat Enterprise Linux uses a virtual vrf device for each VRF domain and adds routes to a VRF
domain by adding existing network devices to a VRF device. Addresses and routes previously attached
to the original device are moved inside the VRF domain.
IMPORTANT
To enable remote peers to contact both VRF interfaces while reusing the same IP
address, the network interfaces must belong to different broadcasting domains. A
broadcast domain in a network is a set of nodes, which receive broadcast traffic sent by
any of them. In most configurations, all nodes connected to the same switch belong to
the same broadcasting domain.
Prerequisites
Procedure
a. Create a connection for the VRF device and assign it to a routing table. For example, to
create a VRF device named vrf0 that is assigned to the 1001 routing table:
# nmcli connection add type vrf ifname vrf0 con-name vrf0 table 1001 ipv4.method
disabled ipv6.method disabled
c. Assign a network device to the VRF just created. For example, to add the enp1s0 Ethernet
device to the vrf0 VRF device and assign an IP address and the subnet mask to enp1s0,
enter:
# nmcli connection add type ethernet con-name vrf.enp1s0 ifname enp1s0 master
vrf0 ipv4.method manual ipv4.address 192.0.2.1/24
a. Create the VRF device and assign it to a routing table. For example, to create a VRF device
named vrf1 that is assigned to the 1002 routing table, enter:
# nmcli connection add type vrf ifname vrf1 con-name vrf1 table 1002 ipv4.method
disabled ipv6.method disabled
c. Assign a network device to the VRF just created. For example, to add the enp7s0 Ethernet
device to the vrf1 VRF device and assign an IP address and the subnet mask to enp7s0,
enter:
# nmcli connection add type ethernet con-name vrf.enp7s0 ifname enp7s0 master
vrf1 ipv4.method manual ipv4.address 192.0.2.1/24
IMPORTANT
To enable remote peers to contact both VRF interfaces while reusing the same IP
address, the network interfaces must belong to different broadcasting domains. A
broadcast domain in a network is a set of nodes which receive broadcast traffic sent by
any of them. In most configurations, all nodes connected to the same switch belong to
the same broadcasting domain.
Prerequisites
200
CHAPTER 13. REUSING THE SAME IP ADDRESS ON DIFFERENT INTERFACES
Procedure
a. Create the VRF device and assign it to a routing table. For example, to create a VRF device
named blue that is assigned to the 1001 routing table:
c. Assign a network device to the VRF device. For example, to add the enp1s0 Ethernet
device to the blue VRF device:
e. Assign an IP address and subnet mask to the enp1s0 device. For example, to set it to
192.0.2.1/24:
a. Create the VRF device and assign it to a routing table. For example, to create a VRF device
named red that is assigned to the 1002 routing table:
c. Assign a network device to the VRF device. For example, to add the enp7s0 Ethernet
device to the red VRF device:
e. Assign the same IP address and subnet mask to the enp7s0 device as you used for enp1s0
in the blue VRF domain:
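The commands for the steps of this procedure are not shown in this extract. A minimal sketch using iproute2, reusing the device names, routing tables, and addresses from the surrounding text:
# ip link add dev blue type vrf table 1001
# ip link set dev blue up
# ip link set dev enp1s0 master blue
# ip link set dev enp1s0 up
# ip addr add dev enp1s0 192.0.2.1/24
# ip link add dev red type vrf table 1002
# ip link set dev red up
# ip link set dev enp7s0 master red
# ip link set dev enp7s0 up
# ip addr add dev enp7s0 192.0.2.1/24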
CHAPTER 14. SECURING NETWORKS
To preserve previously generated key pairs after you reinstall the system, back up the ~/.ssh/ directory
before you create new keys. After reinstalling, copy it back to your home directory. You can do this for all
users on your system, including root.
Prerequisites
You are logged in as a user who wants to connect to the OpenSSH server by using keys.
Procedure
$ ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/home/<username>/.ssh/id_ecdsa):
Enter passphrase (empty for no passphrase): <password>
Enter same passphrase again: <password>
Your identification has been saved in /home/<username>/.ssh/id_ecdsa.
Your public key has been saved in /home/<username>/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:Q/x+qms4j7PCQ0qFd09iZEFHA+SqwBKRNaU72oZfaCI
<username>@<localhost.example.com>
The key's randomart image is:
+---[ECDSA 256]---+
|.oo..o=++ |
|.. o .oo . |
|. .. o. o |
|....o.+... |
|o.oo.o +S . |
|.=.+. .o |
|E.*+. . . . |
|.=..+ +.. o |
| . oo*+o. |
+----[SHA256]-----+
You can also generate an RSA key pair by using the ssh-keygen command without any
parameter or an Ed25519 key pair by entering the ssh-keygen -t ed25519 command. Note that
the Ed25519 algorithm is not FIPS-140-compliant, and OpenSSH does not work with Ed25519
keys in FIPS mode.
$ ssh-copy-id <username>@<ssh-server-example.com>
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are
already installed
<username>@<ssh-server-example.com>'s password:
…
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '<username>@<ssh-server-example.com>'" and
check to make sure that only the key(s) you wanted were added.
If you do not use the ssh-agent program in your session, the previous command copies the
most recently modified ~/.ssh/id*.pub public key if it is not yet installed. To specify another
public-key file or to prioritize keys in files over keys cached in memory by ssh-agent, use the
ssh-copy-id command with the -i option.
Verification
Additional resources
Prerequisites
Procedure
# vi /etc/ssh/sshd_config
PasswordAuthentication no
3. On a system other than a new default installation, check that the PubkeyAuthentication
parameter is either not set or set to yes.
# setsebool -P use_nfs_home_dirs 1
6. If you are connected remotely, not using console or out-of-band access, test the key-based
login process before disabling password authentication.
Additional resources
Prerequisites
You have a remote host with the SSH daemon running and reachable through the network.
You know the IP address or hostname and credentials to log in to the remote host.
You have generated an SSH key pair with a passphrase and transferred the public key to the
remote machine.
See the Generating SSH key pairs section for details.
Procedure
1. Add the command for automatically starting ssh-agent in your session to the ~/.bashrc file:
$ vi ~/.bashrc
eval $(ssh-agent)
2. Add the following line to the ~/.ssh/config file:
AddKeysToAgent yes
With this option and ssh-agent started in your session, the agent prompts for a password only
the first time that you connect to a host.
Verification
Log in to a host that uses the public key corresponding to the private key cached in the agent,
for example:
$ ssh <example.user>@<ssh-server-example.com>
Prerequisites
On the client side, the opensc package is installed and the pcscd service is running.
Procedure
1. List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save
the output to the keys.pub file:
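The command for this step is not shown in this extract. A sketch, assuming the OpenSC PKCS #11 module is installed at the path that also appears later in this section:
$ ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so > keys.pub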
2. Transfer the public key to the remote server. Use the ssh-copy-id command with the keys.pub
file created in the previous step:
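A sketch of that command; the -f option is used because only the public-key file, not the private key, is available locally:
$ ssh-copy-id -f -i keys.pub <username>@<ssh-server-example.com>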
3. Connect to <ssh-server-example.com> by using the ECDSA key. You can use just a subset of the
URI, which uniquely references your key, for example:
Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is
registered to the p11-kit tool, you can simplify the previous command:
If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy
module. This can reduce the amount of typing required:
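Sketches of the three forms of the ssh command described above, from the full PKCS #11 URI down to the shortest form; the id=%01 value is illustrative:
$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" <ssh-server-example.com>
$ ssh -i "pkcs11:id=%01" <ssh-server-example.com>
$ ssh -i pkcs11: <ssh-server-example.com>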
4. Optional: You can use the same URI string in the ~/.ssh/config file to make the configuration
permanent:
$ cat ~/.ssh/config
IdentityFile "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so"
$ ssh <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $
The ssh client utility now automatically uses this URI and the key from the smart card.
Additional resources
p11-kit(8), opensc.conf(5), pcscd(8), ssh(1), and ssh-keygen(1) man pages on your system
The TLS protocol sits between an application protocol layer and a reliable transport layer, such as
TCP/IP. It is independent of the application protocol and can thus be layered underneath many different
protocols, for example HTTP, FTP, and SMTP.
SSL v2: Do not use. Has serious security vulnerabilities. Removed from the core crypto libraries
since RHEL 7.
SSL v3: Do not use. Has serious security vulnerabilities. Removed from the core crypto libraries
since RHEL 8.
TLS 1.0: Not recommended to use. Has known issues that cannot be mitigated in a way that
guarantees interoperability, and does not support modern cipher suites. In RHEL 8,
enabled only in the LEGACY system-wide cryptographic policy profile.
TLS 1.1: Use for interoperability purposes where needed. Does not support modern cipher suites.
In RHEL 8, enabled only in the LEGACY policy.
TLS 1.2: Supports the modern AEAD cipher suites. This version is enabled in all system-wide
crypto policies, but optional parts of this protocol contain vulnerabilities and TLS 1.2 also
allows outdated algorithms.
TLS 1.3: Recommended version. TLS 1.3 removes known problematic options, provides
additional privacy by encrypting more of the negotiation handshake, and can be faster
thanks to the use of more efficient modern cryptographic algorithms. TLS 1.3 is also
enabled in all system-wide cryptographic policies.
Additional resources
The default settings provided by libraries included in RHEL 8 are secure enough for most deployments.
The TLS implementations use secure algorithms where possible while not preventing connections from
or to legacy clients or servers. Apply hardened settings in environments with strict security requirements
where legacy clients or servers that do not support secure algorithms or protocols are not expected or
allowed to connect.
The most straightforward way to harden your TLS configuration is switching the system-wide
cryptographic policy level to FUTURE using the update-crypto-policies --set FUTURE command.
WARNING
Algorithms disabled for the LEGACY cryptographic policy do not conform to Red
Hat’s vision of RHEL 8 security, and their security properties are not reliable.
Consider moving away from using these algorithms instead of re-enabling them. If
you do decide to re-enable them, for example for interoperability with old
hardware, treat them as insecure and apply extra protection measures, such as
isolating their network interactions to separate network segments. Do not use them
across public networks.
If you decide not to follow the RHEL system-wide crypto policies, or if you create custom cryptographic
policies tailored to your setup, use the following recommendations for preferred protocols, cipher suites,
and key lengths in your custom configuration:
14.2.2.1. Protocols
The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to
include support for older versions of TLS, allow your systems to negotiate connections using at least
TLS version 1.2.
Note that even though RHEL 8 supports TLS version 1.3, not all features of this protocol are fully
supported by RHEL 8 components. For example, the 0-RTT (Zero Round Trip Time) feature, which
reduces connection latency, is not yet fully supported by the Apache web server.
Modern, more secure cipher suites should be preferred to old, insecure ones. Always disable the use of
eNULL and aNULL cipher suites, which do not offer any encryption or authentication at all. If at all
possible, cipher suites based on RC4 or HMAC-MD5, which have serious shortcomings, should also be
disabled. The same applies to the so-called export cipher suites, which have been intentionally made
weaker, and thus are easy to break.
While not immediately insecure, cipher suites that offer less than 128 bits of security should not be
considered because of their short useful life. Algorithms that use 128 bits of security or more can be
expected to be unbreakable for at least several years, and are thus strongly recommended. Note that
while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security.
Always prefer cipher suites that support (perfect) forward secrecy (PFS), which ensures the
confidentiality of encrypted data even in case the server key is compromised. This rules out the fast
RSA key exchange, but allows for the use of ECDHE and DHE. Of the two, ECDHE is the faster and
therefore the preferred choice.
You should also prefer AEAD ciphers, such as AES-GCM, over CBC-mode ciphers as they are not
vulnerable to padding oracle attacks. Additionally, in many cases, AES-GCM is faster than AES in CBC
mode, especially when the hardware has cryptographic accelerators for AES.
Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even
faster than a pure RSA key exchange. To provide support for legacy clients, you can install two pairs of
certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for
legacy ones).
When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which
is sufficiently large for true 128 bits of security.
WARNING
The security of your system is only as strong as the weakest link in the chain. For
example, a strong cipher alone does not guarantee good security. The keys and the
certificates are just as important, as well as the hash functions and keys used by the
Certification Authority (CA) to sign your keys.
Additional resources
If you want to harden your TLS-related configuration with your customized cryptographic settings, you
can use the cryptographic configuration options described in this section to override the system-wide
crypto policies only to the minimum required extent.
Regardless of the configuration you choose to use, always ensure that your server application enforces
server-side cipher order, so that the cipher suite to be used is determined by the order you configure.
The Apache HTTP Server can use both OpenSSL and NSS libraries for its TLS needs. RHEL 8
provides the mod_ssl functionality through eponymous packages:
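The installation command is not shown in this extract; a sketch:
# yum install mod_ssl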
The mod_ssl package installs the /etc/httpd/conf.d/ssl.conf configuration file, which can be used to
modify the TLS-related settings of the Apache HTTP Server.
Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server,
including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file
are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html file. Examples of various
settings are described in the /usr/share/httpd/manual/ssl/ssl_howto.html file.
When modifying the settings in the /etc/httpd/conf.d/ssl.conf configuration file, be sure to consider the
following three directives at the minimum:
SSLProtocol
Use this directive to specify the version of TLS or SSL you want to allow.
SSLCipherSuite
Use this directive to specify your preferred cipher suite or disable the ones you want to disallow.
SSLHonorCipherOrder
Uncomment and set this directive to on to ensure that the connecting clients adhere to the order of
ciphers you specified.
For example, to use only the TLS 1.2 and 1.3 protocols:
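The directive for this example is not shown in this extract. A minimal sketch for /etc/httpd/conf.d/ssl.conf:
SSLProtocol -all +TLSv1.2 +TLSv1.3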
See the Configuring TLS encryption on an Apache HTTP Server chapter in the Deploying different
types of servers document for more information.
14.2.3.2. Configuring the Nginx HTTP and proxy server to use TLS
To enable TLS 1.3 support in Nginx, add the TLSv1.3 value to the ssl_protocols option in the server
section of the /etc/nginx/nginx.conf configuration file:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
....
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers
....
}
See the Adding TLS encryption to an Nginx web server chapter in the Deploying different types of
servers document for more information.
To configure your installation of the Dovecot mail server to use TLS, modify the
/etc/dovecot/conf.d/10-ssl.conf configuration file. You can find an explanation of some of the basic
configuration directives available in that file in the
/usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt file, which is installed along with the
standard installation of Dovecot.
ssl_protocols
Use this directive to specify the version of TLS or SSL you want to allow or disable.
ssl_cipher_list
Use this directive to specify your preferred cipher suites or disable the ones you want to disallow.
ssl_prefer_server_ciphers
Uncomment and set this directive to yes to ensure that the connecting clients adhere to the order of
ciphers you specified.
For example, the following line in /etc/dovecot/conf.d/10-ssl.conf allows only TLS 1.1 and later:
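The configuration line itself is missing from this extract. A sketch using the ssl_protocols directive named above; note that recent Dovecot releases express the same restriction with the ssl_min_protocol directive instead:
ssl_protocols = !SSLv3 !TLSv1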
Additional resources
Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport
Layer Security (DTLS).
The IPsec protocol for a VPN is configured using the Internet Key Exchange (IKE) protocol. The terms
IPsec and IKE are used interchangeably. An IPsec VPN is also called an IKE VPN, IKEv2 VPN, XAUTH
VPN, Cisco VPN or IKE/IPsec VPN. A variant of an IPsec VPN that also uses the Layer 2 Tunneling
Protocol (L2TP) is usually called an L2TP/IPsec VPN, which requires the xl2tpd package provided by the
optional repository.
Libreswan is an open-source, user-space IKE implementation. IKE v1 and v2 are implemented as a user-
level daemon. The IKE protocol is also encrypted. The IPsec protocol is implemented by the Linux kernel,
and Libreswan configures the kernel to add and remove VPN tunnel configurations.
The IKE protocol uses UDP ports 500 and 4500. The IPsec protocol consists of two protocols:
Encapsulated Security Payload (ESP) and Authentication Header (AH).
The AH protocol is not recommended for use. Users of AH are recommended to migrate to ESP with null
encryption.
IPsec can operate in tunnel mode or in transport mode.
You can configure the kernel with IPsec without IKE. This is called manual keying. You can also configure
manual keying using the ip xfrm commands, however, this is strongly discouraged for security reasons.
Libreswan communicates with the Linux kernel using the Netlink interface. The kernel performs packet
encryption and decryption.
Libreswan uses the Network Security Services (NSS) cryptographic library. NSS is certified for use with
the Federal Information Processing Standard (FIPS) Publication 140-2.
IMPORTANT
IKE/IPsec VPNs, implemented by Libreswan and the Linux kernel, are the only VPN
technology recommended for use in RHEL. Do not use any other VPN technology
without understanding the risks of doing so.
In RHEL, Libreswan follows system-wide cryptographic policies by default. This ensures that
Libreswan uses secure settings for current threat models including IKEv2 as a default protocol. See
Using system-wide crypto policies for more information.
Libreswan does not use the terms "source" and "destination" or "server" and "client" because IKE/IPsec
are peer to peer protocols. Instead, it uses the terms "left" and "right" to refer to end points (the hosts).
This also allows you to use the same configuration on both end points in most cases. However,
administrators usually choose to always use "left" for the local host and "right" for the remote host.
The leftid and rightid options serve as identification of the respective hosts in the authentication
process. See the ipsec.conf(5) man page for more information.
You can generate a raw RSA key on a host using the ipsec newhostkey command. You can list
generated keys by using the ipsec showhostkey command. The leftrsasigkey= line is required for
connection configurations that use CKA ID keys. Use the authby=rsasig connection option for raw RSA
keys.
X.509 certificates
X.509 certificates are commonly used for large-scale deployments with hosts that connect to a common
IPsec gateway. A central certificate authority (CA) signs RSA certificates for hosts or users. This central
CA is responsible for relaying trust, including the revocations of individual hosts or users.
For example, you can generate X.509 certificates using the openssl command and the NSS certutil
command. Because Libreswan reads user certificates from the NSS database using the certificates'
nickname in the leftcert= configuration option, provide a nickname when you create a certificate.
If you use a custom CA certificate, you must import it to the Network Security Services (NSS) database.
You can import any certificate in the PKCS #12 format to the Libreswan NSS database by using the
ipsec import command.
WARNING
Use the authby=rsasig connection option for authentication based on X.509 certificates using RSA
with SHA-1 and SHA-2. You can further limit it for ECDSA digital signatures using SHA-2 by setting
authby= to ecdsa and RSA Probabilistic Signature Scheme (RSASSA-PSS) digital signatures based
authentication with SHA-2 through authby=rsa-sha2. The default value is authby=rsasig,ecdsa.
The certificates and the authby= signature methods should match. This increases interoperability and
preserves authentication in one digital signature system.
NULL authentication
NULL authentication is used to gain mesh encryption without authentication. It protects against passive
attacks but not against active attacks. However, because IKEv2 allows asymmetric authentication
methods, NULL authentication can also be used for internet-scale opportunistic IPsec. In this model,
clients authenticate the server, but servers do not authenticate the client. This model is similar to secure
websites using TLS. Use authby=null for NULL authentication.
Using IKEv1 with pre-shared keys protects against quantum attackers. The redesign of IKEv2 does not
offer this protection natively. Libreswan offers the use of a Post-quantum Pre-shared Key (PPK) to
protect IKEv2 connections against quantum attacks.
To enable optional PPK support, add ppk=yes to the connection definition. To require PPK, add
ppk=insist. Then, each client can be given a PPK ID with a secret value that is communicated out-of-
band (and preferably quantum-safe). The PPKs should be very strong in randomness and not based on
dictionary words. The PPK ID and PPK data are stored in the ipsec.secrets file, for example:
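The example entry is missing from this extract. A sketch of an ipsec.secrets entry, assuming Libreswan's PPKS syntax; the host IDs, the PPK ID, and the secret string are illustrative:
@west @east : PPKS "user1" "thestringismeanttobearandomstr"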
The PPKS option refers to static PPKs. This experimental function uses one-time-pad-based Dynamic
PPKs. Upon each connection, a new part of the one-time pad is used as the PPK. When used, that part
of the dynamic PPK inside the file is overwritten with zeros to prevent re-use. If there is no more one-
time-pad material left, the connection fails. See the ipsec.secrets(5) man page for more information.
WARNING
Prerequisites
Procedure
2. If you are re-installing Libreswan, remove its old database files and create a new database:
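The commands for this step are not shown in this extract; a sketch:
# rm /etc/ipsec.d/*db
# ipsec initnss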
3. Start the ipsec service, and enable the service to be started automatically on boot:
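A sketch of the command for this step:
# systemctl enable ipsec --now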
4. Configure the firewall to allow the 500/UDP and 4500/UDP ports for the IKE, ESP, and AH protocols by
adding the ipsec service:
# firewall-cmd --add-service="ipsec"
# firewall-cmd --runtime-to-permanent
Prerequisites
Procedure
# ipsec newhostkey
2. The previous step returned the generated key’s ckaid. Use that ckaid with the following
command on left, for example:
The output of the previous command generated the leftrsasigkey= line required for the
configuration. Do the same on the second host (right):
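The commands referenced in this step are not shown in this extract. A sketch, assuming the CKA IDs printed by ipsec newhostkey on each host:
# ipsec showhostkey --left --ckaid <ckaid_of_left_host>
# ipsec showhostkey --right --ckaid <ckaid_of_right_host>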
3. In the /etc/ipsec.d/ directory, create a new my_host-to-host.conf file. Write the RSA host keys
from the output of the ipsec showhostkey commands in the previous step to the new file. For
example:
conn mytunnel
leftid=@west
left=192.1.2.23
leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
rightid=@east
right=192.1.2.45
rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
authby=rsasig
7. To automatically start the tunnel when the ipsec service is started, add the following line to the
connection definition:
auto=start
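The remaining steps that load and start the connection are not shown in this extract. A sketch of the typical commands, assuming the connection name mytunnel from the example above:
# ipsec auto --add mytunnel
# ipsec auto --up mytunnel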
The configuration of the site-to-site VPN only differs from the host-to-host VPN in that one or more
networks or subnets must be specified in the configuration file.
Prerequisites
Procedure
1. Copy the file with the configuration of your host-to-host VPN to a new file, for example:
# cp /etc/ipsec.d/my_host-to-host.conf /etc/ipsec.d/my_site-to-site.conf
2. Add the subnet configuration to the file created in the previous step, for example:
conn mysubnet
also=mytunnel
leftsubnet=192.0.1.0/24
rightsubnet=192.0.2.0/24
auto=start
conn mysubnet6
also=mytunnel
leftsubnet=2001:db8:0:1::/64
rightsubnet=2001:db8:0:2::/64
auto=start
# the following part of the configuration file is the same for both host-to-host and site-to-site
connections:
conn mytunnel
leftid=@west
left=192.1.2.23
leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
rightid=@east
right=192.1.2.45
rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
authby=rsasig
The following example shows a configuration for IKEv2 and avoids using the IKEv1 XAUTH protocol.
On the server:
conn roadwarriors
ikev2=insist
# support (roaming) MOBIKE clients (RFC 4555)
mobike=yes
fragmentation=yes
left=1.2.3.4
# if access to the LAN is given, enable this, otherwise use 0.0.0.0/0
# leftsubnet=10.10.0.0/16
leftsubnet=0.0.0.0/0
leftcert=gw.example.com
leftid=%fromcert
leftxauthserver=yes
leftmodecfgserver=yes
right=%any
# trust our own Certificate Agency
rightca=%same
# pick an IP address pool to assign to remote users
# 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT
rightaddresspool=100.64.13.100-100.64.13.254
# if you want remote clients to use some local DNS zones and servers
modecfgdns="1.2.3.4, 5.6.7.8"
modecfgdomains="internal.company.com, corp"
rightxauthclient=yes
rightmodecfgclient=yes
authby=rsasig
# optionally, run the client X.509 ID through pam to allow or deny client
# pam-authorize=yes
# load connection, do not initiate
auto=add
# kill vanished roadwarriors
dpddelay=1m
dpdtimeout=5m
dpdaction=clear
On the mobile client, the road warrior’s device, use a slight variation of the previous configuration:
conn to-vpn-server
ikev2=insist
# pick up our dynamic IP
left=%defaultroute
leftsubnet=0.0.0.0/0
leftcert=myname.example.com
leftid=%fromcert
leftmodecfgclient=yes
# right can also be a DNS hostname
right=1.2.3.4
# if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0
# rightsubnet=10.10.0.0/16
rightsubnet=0.0.0.0/0
fragmentation=yes
# trust our own Certificate Agency
rightca=%same
authby=rsasig
# allow narrowing to the server’s suggested assigned IP and remote subnet
narrowing=yes
# support (roaming) MOBIKE clients (RFC 4555)
mobike=yes
# initiate connection
auto=start
A mesh configuration can require IPsec (private), use IPsec when possible (private-or-clear), or not use IPsec at all (clear).
Authentication between the nodes can be based on X.509 certificates or on DNS Security Extensions
(DNSSEC).
You can use any regular IKEv2 authentication method for opportunistic IPsec, because these
connections are regular Libreswan configurations, except that opportunistic IPsec is defined by the
right=%opportunisticgroup entry. A common authentication method is for hosts to authenticate each
other based on X.509 certificates using a commonly shared certification authority (CA). Cloud
deployments typically issue certificates for each node in the cloud as part of the standard procedure.
IMPORTANT
Do not use PreSharedKey (PSK) authentication because one compromised host would
result in group PSK secret being compromised as well.
You can use NULL authentication to deploy encryption between nodes without authentication, which
protects only against passive attackers.
The following procedure uses X.509 certificates. You can generate these certificates by using any kind
of CA management system, such as the Dogtag Certificate System. Dogtag assumes that the
certificates for each node are available in the PKCS #12 format (.p12 files), which contain the private
key, the node certificate, and the Root CA certificate used to validate other nodes' X.509 certificates.
Each node has an identical configuration with the exception of its X.509 certificate. This allows for
adding new nodes without reconfiguring any of the existing nodes in the network. The PKCS #12 files
require a "friendly name", for which we use the name "node" so that the configuration files referencing
the friendly name can be identical for all nodes.
Prerequisites
1. If you already have an old NSS database, remove the old database files, and then initialize a new database:
# ipsec initnss
Procedure
1. On each node, import PKCS #12 files. This step requires the password used to generate the
PKCS #12 files:
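A sketch of the import command, using the ipsec import command mentioned earlier in this chapter; the file name is illustrative:
# ipsec import nodeXXXX.p12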
2. Create the following three connection definitions for the IPsec required (private), IPsec
optional (private-or-clear), and No IPsec (clear) profiles:
# cat /etc/ipsec.d/mesh.conf
conn clear
auto=ondemand 1
type=passthrough
authby=never
left=%defaultroute
right=%group
conn private
auto=ondemand
type=transport
authby=rsasig
failureshunt=drop
negotiationshunt=drop
ikev2=insist
left=%defaultroute
leftcert=nodeXXXX
leftid=%fromcert 2
rightid=%fromcert
right=%opportunisticgroup
conn private-or-clear
auto=ondemand
type=transport
authby=rsasig
failureshunt=passthrough
negotiationshunt=passthrough
# left
left=%defaultroute
leftcert=nodeXXXX 3
leftid=%fromcert
leftrsasigkey=%cert
# right
rightrsasigkey=%cert
rightid=%fromcert
right=%opportunisticgroup
You can use the ondemand connection option with opportunistic IPsec to initiate the IPsec
connection, or for explicitly configured connections that do not need to be active all the time. This
option sets up a trap XFRM policy in the kernel, enabling the IPsec connection to begin when it
receives the first packet that matches that policy.
You can effectively configure and manage your IPsec connections, whether you use Opportunistic
IPsec or explicitly configured connections, by using the following options:
Loads the connection configuration and prepares it for responding to remote initiations.
Additionally, it immediately initiates a connection to the remote peer. You can use this option
for permanent and always active connections.
2 The leftid and rightid variables identify the right and the left channel of the IPsec tunnel
connection. You can use these variables to obtain the value of the local IP address or the subject
DN of the local certificate, if you have configured one.
3 The leftcert variable defines the nickname of the NSS database that you want to use.
3. Add the IP address of the network to the corresponding category. For example, if all nodes
reside in the 10.15.0.0/16 network, and all nodes must use IPsec encryption:
4. To allow certain nodes, for example, 10.15.34.0/24, to work with and without IPsec, add
those nodes to the private-or-clear group:
5. To add a host that is not capable of using IPsec, for example 10.15.1.2, to the clear group,
use:
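The commands for steps 3 to 5 are not shown in this extract. A sketch, assuming the group files in the /etc/ipsec.d/policies directory referenced below are plain lists of CIDR entries:
# echo "10.15.0.0/16" >> /etc/ipsec.d/policies/private
# echo "10.15.34.0/24" >> /etc/ipsec.d/policies/private-or-clear
# echo "10.15.1.2/32" >> /etc/ipsec.d/policies/clear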
You can create the files in the /etc/ipsec.d/policies directory from a template for each
new node, or you can provision them by using Puppet or Ansible.
Note that every node has the same list of exceptions or different traffic flow expectations.
Two nodes, therefore, might not be able to communicate because one requires IPsec and
the other cannot use IPsec.
Verification
# ping <nodeYYY>
# certutil -L -d sql:/etc/ipsec.d
west u,u,u
ca CT,,
# ipsec trafficstatus
006 #2: "private#10.15.0.0/16"[1] ...<nodeYYY>, type=ESP, add_time=1691399301,
inBytes=512, outBytes=512, maxBytes=2^63B, id='C=US, ST=NC, O=Example
Organization, CN=east'
Additional resources
For more information about the authby variable, see 6.2. Authentication methods in Libreswan .
Prerequisites
Procedure
3. Start the ipsec service, and enable the service to be started automatically on boot:
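As in the earlier installation procedure, a sketch of the command for this step:
# systemctl enable ipsec --now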
4. Configure the firewall to allow the 500/UDP and 4500/UDP ports for the IKE, ESP, and AH protocols by
adding the ipsec service:
# firewall-cmd --add-service="ipsec"
# firewall-cmd --runtime-to-permanent
# fips-mode-setup --enable
# reboot
Verification
2. Alternatively, check entries for the ipsec unit in the systemd journal:
$ journalctl -u ipsec
...
Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Product: YES
Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Kernel: YES
Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Mode: YES
# ipsec pluto --selftest 2>&1 | grep ESP | grep FIPS | sed "s/^.*FIPS//"
{256,192,*128} aes_ccm, aes_ccm_c
{256,192,*128} aes_ccm_b
{256,192,*128} aes_ccm_a
[*192] 3des
Additional resources
NOTE
In the previous releases of RHEL up to version 6.6, you had to protect the IPsec NSS
database with a password to meet the FIPS 140-2 requirements because the NSS
cryptographic libraries were certified for the FIPS 140-2 Level 2 standard. In RHEL 8,
NIST certified NSS to Level 1 of this standard, and this status does not require password
protection for the database.
Prerequisites
Procedure
# certutil -N -d sql:/etc/ipsec.d
Enter Password or Pin for "NSS Certificate DB":
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.
2. Create the /etc/ipsec.d/nsspassword file that contains the password you have set in the
previous step, for example:
# cat /etc/ipsec.d/nsspassword
NSS Certificate DB:_<password>_
<token_1>:<password1>
<token_2>:<password2>
The default NSS software token is NSS Certificate DB. If your system is running in FIPS mode,
the name of the token is NSS FIPS 140-2 Certificate DB.
3. Depending on your scenario, either start or restart the ipsec service after you finish creating the
nsspassword file:
Verification
1. Check that the ipsec service is running after you have added a non-empty password to its NSS
database:
2. Check that the Journal log contains entries that confirm a successful initialization:
# journalctl -u ipsec
...
pluto[6214]: Initializing NSS using read-write database "sql:/etc/ipsec.d"
pluto[6214]: NSS Password from file "/etc/ipsec.d/nsspassword" for token "NSS Certificate
DB" with length 20 passed to NSS
pluto[6214]: NSS crypto library initialized
...
Additional resources
FIPS 140-2 and FIPS 140-3 in the Compliance Activities and Government Standards
Knowledgebase article.
Prerequisites
Procedure
1. Add the following option to the /etc/ipsec.conf file in the config setup section:
listen-tcp=yes
2. To use TCP encapsulation as a fallback option when the first attempt over UDP fails, add the
following two options to the client’s connection definition:
enable-tcp=fallback
tcp-remoteport=4500
Alternatively, if you know that UDP is permanently blocked, use the following options in the
client’s connection configuration:
enable-tcp=yes
tcp-remoteport=4500
Additional resources
Prerequisites
Procedure
1. Edit the Libreswan configuration file in the /etc/ipsec.d/ directory of the connection that should
use automatic detection of ESP hardware offload support.
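The statement to add is not shown in this extract. A sketch, under the assumption that automatic detection is requested by setting the nic-offload option to auto:
conn example
    ...
    nic-offload=auto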
Verification
1. Display the tx_ipsec and rx_ipsec counters of the Ethernet device the IPsec connection uses:
2. Send traffic through the IPsec tunnel. For example, ping a remote IP address:
# ping -c 5 remote_ip_address
3. Display the tx_ipsec and rx_ipsec counters of the Ethernet device again:
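The counter display referenced in steps 1 and 3 is not shown in this extract. A sketch, assuming a driver that exposes per-device IPsec counters and the illustrative device name enp1s0:
# ethtool -S enp1s0 | egrep "_ipsec"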
Additional resources
Prerequisites
All network cards in the bond support ESP hardware offload. Use the ethtool -k
<interface_name> | grep "esp-hw-offload" command to verify whether each bond port
supports this feature.
The bond uses the active-backup mode. The bonding driver does not support any other modes
for this feature.
Procedure
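The command for this step is not shown in this extract. A sketch, under the assumption that NetworkManager's ethtool.feature-esp-hw-offload property is used and the connection profile is named bond0:
# nmcli connection modify bond0 ethtool.feature-esp-hw-offload on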
This command enables ESP hardware offload support on the bond0 connection.
3. Edit the Libreswan configuration file in the /etc/ipsec.d/ directory of the connection that should
use ESP hardware offload, and append the nic-offload=yes statement to the connection entry:
conn example
...
nic-offload=yes
Verification
The verification methods depend on various aspects, such as the kernel version and driver. For example,
certain drivers provide counters, but their names can vary. See the documentation of your network
driver for details.
The following verification steps work for the ixgbe driver on Red Hat Enterprise Linux 8:
3. Send traffic through the IPsec tunnel. For example, ping a remote IP address:
# ping -c 5 remote_ip_address
4. Display the tx_ipsec and rx_ipsec counters of the active port again:
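The commands for identifying the active port and displaying its counters (steps 1, 2, and 4) are not shown in this extract. A sketch, assuming the bond is named bond0:
# grep "Currently Active Slave" /proc/net/bonding/bond0
# ethtool -S <active_port> | egrep "_ipsec"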
Additional resources
14.3.13. Configuring VPN connections with IPsec by using RHEL system roles
With the vpn system role, you can configure VPN connections on RHEL systems by using Red Hat
Ansible Automation Platform. You can use it to set up host-to-host, network-to-network, VPN Remote
Access Server, and mesh configurations.
For host-to-host connections, the role sets up a VPN tunnel between each pair of hosts in the list of
vpn_connections using the default parameters, including generating keys as needed. Alternatively, you
can configure it to create an opportunistic mesh configuration between all hosts listed. The role assumes
that the names of the hosts under hosts are the same as the names of the hosts used in the Ansible
inventory, and that you can use those names to configure the tunnels.
NOTE
The vpn RHEL system role currently supports only Libreswan, which is an IPsec
implementation, as the VPN provider.
14.3.13.1. Creating a host-to-host VPN with IPsec by using the vpn RHEL system role
You can use the vpn system role to configure host-to-host connections by running an Ansible playbook
on the control node, which configures all managed nodes listed in an inventory file.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
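The playbook content is not shown in this extract. A minimal sketch that applies the rhel-system-roles.vpn role to two managed nodes; the play name and host names are illustrative:
---
- name: Configure a host-to-host VPN
  hosts: managed-node-01.example.com,managed-node-02.example.com
  roles:
    - rhel-system-roles.vpn
  vars:
    vpn_manage_firewall: true
    vpn_manage_selinux: true
    vpn_connections:
      - hosts:
          managed-node-01.example.com:
          managed-node-02.example.com: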
To configure connections from managed hosts to external hosts that are not listed in the
inventory file, add the following section to the vpn_connections list of hosts:
vpn_connections:
- hosts:
managed-node-01.example.com:
<external_node>:
hostname: <IP_address_or_hostname>
NOTE
The connections are configured only on the managed nodes and not on the
external node.
2. Optional: You can specify multiple VPN connections for the managed nodes by using additional
sections within vpn_connections, for example, a control plane and a data plane:
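The validation step that the following note refers to is not shown in this extract; a sketch of the usual command:
$ ansible-playbook --syntax-check ~/playbook.yml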
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Replace <connection_name> with the name of the connection from this node, for example
managed_node1-to-managed_node2.
NOTE
By default, the role generates a descriptive name for each connection it creates
from the perspective of each system. For example, when creating a connection
between managed_node1 and managed_node2, the descriptive name of this
connection on managed_node1 is managed_node1-to-managed_node2 but
on managed_node2 the connection is named managed_node2-to-
managed_node1.
3. Optional: If a connection does not successfully load, manually add the connection by entering
the following command. This provides more specific information indicating why the connection
failed to establish:
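A sketch of the command for this step, reusing the connection name placeholder from the verification step above:
# ipsec auto --add <connection_name>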
NOTE
Any errors that may occur during the process of loading and starting the
connection are reported in the /var/log/pluto.log file. Because these logs are
hard to parse, manually add the connection to obtain log messages from the
standard output instead.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.vpn/README.md file
/usr/share/doc/rhel-system-roles/vpn/ directory
14.3.13.2. Creating an opportunistic mesh VPN connection with IPsec by using the vpn RHEL
system role
You can use the vpn system role to configure an opportunistic mesh VPN connection that uses
certificates for authentication by running an Ansible playbook on the control node, which will configure
all the managed nodes listed in an inventory file.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The IPsec Network Security Services (NSS) crypto library in the /etc/ipsec.d/ directory contains
the necessary certificates.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
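The playbook content is not shown in this extract. A sketch of an opportunistic mesh play, under the assumption that the role exposes opportunistic, auth_method, and policies variables for this purpose; the policy list mirrors the description below, and the play name and host names are illustrative:
---
- name: Configure an opportunistic mesh VPN
  hosts: managed-node-01.example.com,managed-node-02.example.com
  roles:
    - rhel-system-roles.vpn
  vars:
    vpn_manage_firewall: true
    vpn_manage_selinux: true
    vpn_connections:
      - opportunistic: true
        auth_method: cert
        policies:
          - policy: private
            cidr: default
          - policy: private
            cidr: 192.0.2.0/24
          - policy: clear
            cidr: 192.0.2.7/32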
In this example procedure, the control node, which is the system from which you will run the
Ansible playbook, shares the same classless inter-domain routing (CIDR) number as both of the
managed nodes (192.0.2.0/24) and has the IP address 192.0.2.7. Therefore, the control node
falls under the private policy which is automatically created for CIDR 192.0.2.0/24.
To prevent SSH connection loss during the play, a clear policy for the control node is included in
the list of policies. Note that there is also an item in the policies list where the CIDR is equal to
default. This is because this playbook overrides the rule from the default policy to make it
private instead of private-or-clear.
Because vpn_manage_firewall and vpn_manage_selinux are both set to true, the vpn role
uses the firewall and selinux roles to manage the ports used by the vpn role.
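As in the previous procedure, the syntax validation referenced by the following note can be sketched as:
$ ansible-playbook --syntax-check ~/playbook.yml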
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.vpn/README.md file
/usr/share/doc/rhel-system-roles/vpn/ directory
14.3.14. Configuring IPsec connections that opt out of the system-wide crypto
policies
The RHEL system-wide cryptographic policies create a special connection called %default. This
connection contains the default values for the ikev2, esp, and ike options. However, you can override
the default values by specifying the mentioned option in the connection configuration file.
For example, the following configuration allows connections that use IKEv1 with AES and SHA-1 or SHA-
2, and IPsec (ESP) with either AES-GCM or AES-CBC:
conn MyExample
...
ikev2=never
ike=aes-sha2,aes-sha1;modp2048
esp=aes_gcm,aes-sha2,aes-sha1
...
Note that AES-GCM is available for IPsec (ESP) and for IKEv2, but not for IKEv1.
include /etc/crypto-policies/back-ends/libreswan.config
Additional resources
# ipsec trafficstatus
006 #8: "vpn.example.com"[1] 192.0.2.1, type=ESP, add_time=1595296930, inBytes=5999,
outBytes=3231, id='@vpn.example.com', lease=100.64.13.5/32
If the output is empty or does not show an entry with the connection name, the tunnel is broken.
Firewall-related problems
The most common problem is that a firewall on one of the IPsec endpoints or on a router between the
endpoints is dropping all Internet Key Exchange (IKE) packets.
For IKEv2, an output similar to the following example indicates a problem with a firewall:
Because the IKE protocol, which is used to set up IPsec, is encrypted, you can troubleshoot only a limited
subset of problems using the tcpdump tool. If a firewall is dropping IKE or IPsec packets, you can try to
find the cause using the tcpdump utility. However, tcpdump cannot diagnose other problems with IPsec
VPN connections.
To capture the negotiation of the VPN and all encrypted data on the eth0 interface:
# tcpdump -i eth0 -n -n esp or udp port 500 or udp port 4500 or tcp port 4500
VPN connections require that the endpoints have matching IKE algorithms, IPsec algorithms, and IP
address ranges. If a mismatch occurs, the connection fails. If you identify a mismatch by using one of the
following methods, fix it by aligning algorithms, protocols, or policies.
If the remote endpoint is not running IKE/IPsec, you can see an ICMP packet indicating it. For
example:
A mismatched IKE version could also result in the remote endpoint dropping the request
without a response. This looks identical to a firewall dropping all IKE packets.
Example of mismatched IP address ranges for IKEv2 (called Traffic Selectors - TS):
When using PreSharedKeys (PSK) in IKEv1, if both sides do not put in the same PSK, the entire
IKE message becomes unreadable:
To work around the problem, reduce MTU size by adding the mtu=1400 option to the tunnel
configuration file.
Alternatively, for TCP connections, enable an iptables rule that changes the MSS value:
If the previous command does not solve the problem in your scenario, directly specify a lower size in the
set-mss parameter:
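The rules for these two variants are not shown in this extract. Sketches, with the MSS value in the second rule chosen only as an illustration:
# iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380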
conn myvpn
left=172.16.0.1
leftsubnet=10.0.2.0/24
right=172.16.0.2
rightsubnet=192.168.0.0/16
…
If the system on address 10.0.2.33 sends a packet to 192.168.0.1, then the router translates the source
10.0.2.33 to 172.16.0.1 before it applies the IPsec encryption.
Then, the packet with the source address 10.0.2.33 no longer matches the conn myvpn configuration,
and IPsec does not encrypt this packet.
To solve this problem, insert rules that exclude NAT for target IPsec subnet ranges on the router, in this
example:
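The rule itself is not shown in this extract. A sketch that makes the router skip NAT for traffic between the example subnets before masquerading is applied:
# iptables -t nat -I POSTROUTING -s 10.0.2.0/24 -d 192.168.0.0/16 -j RETURN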
$ cat /proc/net/xfrm_stat
XfrmInError 0
XfrmInBufferError 0
...
Any non-zero value in the output of the previous command indicates a problem. If you encounter this
problem, open a new support case, and attach the output of the previous command along with the
corresponding IKE logs.
Libreswan logs
Libreswan logs using the syslog protocol by default. You can use the journalctl command to find log
entries related to IPsec. Because the pluto IKE daemon sends the corresponding entries to the log,
search for the "pluto" keyword, for example:
$ journalctl -f -u ipsec
If the default level of logging does not reveal your configuration problem, enable debug logs by adding
the plutodebug=all option to the config setup section in the /etc/ipsec.conf file.
Note that debug logging produces a lot of entries, and it is possible that either the journald or syslogd
service rate-limits the syslog messages. To ensure that you have complete logs, redirect the logging to a
file by editing /etc/ipsec.conf and adding logfile=/var/log/pluto.log to the config setup section.
Additional resources
Prerequisites
Procedure
1. Press the Super key, type Settings, and press Enter to open the control-center application.
4. Select VPN.
5. Select the Identity menu entry to see the basic configuration options:
General
Authentication
Type
IKEv1 (XAUTH) - client is authenticated by user name and password, or a pre-shared key
(PSK).
The following configuration settings are available under the Advanced section:
WARNING
Identification
Phase1 Algorithms — corresponds to the ike Libreswan parameter — enter the algorithms
to be used to authenticate and set up an encrypted channel.
Phase2 Algorithms — corresponds to the esp Libreswan parameter — enter the algorithms
to be used for the IPsec negotiations.
Check the Disable PFS field to turn off Perfect Forward Secrecy (PFS) to ensure
compatibility with old servers that do not support PFS.
Phase1 Lifetime — corresponds to the ikelifetime Libreswan parameter — how long the key
used to encrypt the traffic will be valid.
Automatic (DHCP) — Choose this option if the network you are connecting to uses a DHCP
server to assign dynamic IP addresses.
Link-Local Only — Choose this option if the network you are connecting to does not have a
DHCP server and you do not want to assign IP addresses manually. Random addresses will
be assigned as per RFC 3927 with prefix 169.254/16.
In the DNS section, when Automatic is ON, switch it to OFF to enter the IP addresses of the
DNS servers you want to use, separating the addresses with commas.
Routes
Note that in the Routes section, when Automatic is ON, routes from DHCP are used, but
you can also add additional static routes. When OFF, only static routes are used.
Gateway — The IP address of the gateway leading to the remote network or host entered
above.
Metric — A network cost, a preference value to give to this route. Lower values will be
preferred over higher values.
Use this connection only for resources on its network
Select this check box to prevent the connection from becoming the default route. Selecting
this option means that only traffic specifically destined for routes learned automatically over
the connection or entered here manually is routed over the connection.
7. To configure IPv6 settings in a VPN connection, select the IPv6 menu entry:
IPv6 Method
Automatic, DHCP only — Choose this option to not use RA, but request information from
DHCPv6 directly to create a stateful configuration.
Link-Local Only — Choose this option if the network you are connecting to does not have a
DHCP server and you do not want to assign IP addresses manually. Random addresses will
be assigned as per RFC 4862 with prefix FE80::0.
8. Once you have finished editing the VPN connection, click the Add button to customize the
configuration or the Apply button to save it for an existing connection.
Additional resources
nm-settings-libreswan(5)
Prerequisites
The certificate is imported into the IPsec network security services (NSS) database.
Procedure
$ nm-connection-editor
3. Select the IPsec based VPN connection type, and click Create.
a. Enter the host name or IP address of the VPN gateway into the Gateway field, and select
an authentication type. Based on the authentication type, you must enter different
additional information:
IKEv2 (Certificate) authenticates the client by using a certificate, which is more secure.
This setting requires the nickname of the certificate in the IPsec NSS database.
IKEv1 (XAUTH) authenticates the user by using a user name and password (pre-shared
key). This setting requires that you enter the following values:
User name
Password
Group name
Secret
b. If the remote server specifies a local identifier for the IKE exchange, enter the exact string
in the Remote ID field. If the remote server runs Libreswan, this value is set in the server's
leftid parameter.
c. Optional: Configure additional settings by clicking the Advanced button. You can configure
the following settings:
Identification
Security
Connectivity
5. On the IPv4 Settings tab, select the IP assignment method and, optionally, set additional static
addresses, DNS servers, search domains, and routes.
7. Close nm-connection-editor.
NOTE
When you add a new connection by clicking the + button, NetworkManager creates a new
configuration file for that connection and then opens the same dialog that is used for
editing an existing connection. The difference between these dialogs is that an existing
connection profile has a Details menu entry.
Additional resources
/usr/share/doc/libreswan-version/ directory.
MACsec encrypts and authenticates all traffic in LANs, by default with the GCM-AES-128 algorithm, and
uses a pre-shared key to establish the connection between the participant hosts. To change the pre-
shared key, you must update the NM configuration on all network hosts that use MACsec.
A MACsec connection uses an Ethernet device, such as an Ethernet network card, VLAN, or tunnel
device, as a parent. You can either set an IP configuration only on the MACsec device to communicate
with other hosts only by using the encrypted connection, or you can also set an IP configuration on the
parent device. In the latter case, you can use the parent device to communicate with other hosts using an
unencrypted connection and the MACsec device for encrypted connections.
MACsec does not require any special hardware. For example, you can use any switch, except if you want
to encrypt traffic only between a host and a switch. In this scenario, the switch must also support
MACsec.
In other words, you can configure MACsec for two common scenarios:
Host-to-host
IMPORTANT
You can use MACsec only between hosts that are in the same physical or virtual LAN.
Additional resources
You can use the nmcli utility to configure Ethernet interfaces to use MACsec. For example, you can
create a MACsec connection between two hosts that are connected over Ethernet.
Procedure
Create the connectivity association key (CAK) and connectivity-association key name
(CKN) for the pre-shared key:
Use the CAK and CKN generated in the previous step in the macsec.mka-cak and
macsec.mka-ckn parameters. The values must be the same on every host in the MACsec-
protected network.
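The commands for these steps are not shown in this extract. A sketch, assuming the keys are random hexadecimal strings (16 bytes for the CAK, 32 bytes for the CKN) and that enp1s0 is an illustrative parent Ethernet device:
# dd if=/dev/urandom count=16 bs=1 status=none | hexdump -e '1/2 "%04x"'
# dd if=/dev/urandom count=32 bs=1 status=none | hexdump -e '1/2 "%04x"'
# nmcli connection add type macsec con-name macsec0 ifname macsec0 \
  macsec.parent enp1s0 macsec.mode psk \
  macsec.mka-cak <CAK> macsec.mka-ckn <CKN>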
a. Configure the IPv4 settings. For example, to set a static IPv4 address, network mask,
default gateway, and DNS server to the macsec0 connection, enter:
b. Configure the IPv6 settings. For example, to set a static IPv6 address, network mask,
default gateway, and DNS server to the macsec0 connection, enter:
Verification
# ip macsec show
4. Display individual counters for each type of protection: integrity-only (encrypt off) and
encryption (encrypt on)
# ip -s macsec show
Additional resources
firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a
D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the
necessity to restart the firewall daemon each time the rules are changed.
firewalld uses the concepts of zones and services that simplify traffic management. Zones are
predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed
depends on the network your computer is connected to and the security level this network is assigned.
Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a
specific service and they apply within a zone.
Services use one or more ports or addresses for network communication. Firewalls filter communication
based on ports. To allow network traffic for a service, its ports must be open. firewalld blocks all traffic
on ports that are not explicitly set as open. Some zones, such as trusted, allow all traffic by default.
Note that firewalld with the nftables backend does not support passing custom nftables rules to firewalld
by using the --direct option.
firewalld: Use the firewalld utility for simple firewall use cases. The utility is easy to use and
covers the typical use cases for these scenarios.
nftables: Use the nftables utility to set up complex and performance-critical firewalls, such as
for a whole network.
iptables: The iptables utility on Red Hat Enterprise Linux uses the nf_tables kernel API instead
of the legacy back end. The nf_tables API provides backward compatibility so that scripts that
use iptables commands still work on Red Hat Enterprise Linux. For new firewall scripts, Red Hat
recommends using nftables.
IMPORTANT
NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with
the following utilities:
NetworkManager
firewall-config utility
firewall-cmd utility
The RHEL web console, firewall-config, and firewall-cmd can only edit the appropriate
NetworkManager configuration files. If you change the zone of the interface using the web console,
firewall-cmd, or firewall-config, the request is forwarded to NetworkManager and is not handled by
firewalld.
The /usr/lib/firewalld/zones/ directory stores the predefined zones, and you can instantly apply them to
any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after
they are modified. The default settings of the predefined zones are as follows:
block
Suitable for: Any incoming network connections are rejected with an icmp-host-prohibited
message for IPv4 and icmp6-adm-prohibited for IPv6.
dmz
Suitable for: Computers in your DMZ that are publicly-accessible with limited access to your
internal network.
drop
Suitable for: Any incoming network packets are dropped without any notification.
external
Suitable for: External networks with masquerading enabled, especially for routers. Situations
when you do not trust the other computers on the network.
home
Suitable for: Home environment where you mostly trust the other computers on the network.
internal
Suitable for: Internal networks where you mostly trust the other computers on the network.
public
Suitable for: Public areas where you do not trust other computers on the network.
trusted
Suitable for: All network connections are accepted.
work
Suitable for: Work environment where you mostly trust the other computers on the network.
One of these zones is set as the default zone. When interface connections are added to
NetworkManager, they are assigned to the default zone. On installation, the default zone in firewalld is
the public zone. You can change the default zone.
NOTE
Make network zone names self-explanatory to help users understand them quickly.
To avoid any security problems, review the default zone configuration and disable any unnecessary
services according to your needs and risk assessments.
Additional resources
Incoming traffic
Outgoing traffic
Forward traffic
Firewall policies use the concept of firewall zones. Each zone is associated with a specific set of firewall
rules that determine the traffic allowed. Policies apply firewall rules in a stateful, unidirectional manner.
This means you only consider one direction of the traffic. The traffic return path is implicitly allowed due
to stateful filtering of firewalld.
Policies are associated with an ingress zone and an egress zone. The ingress zone is where the traffic
originated (received). The egress zone is where the traffic leaves (sent).
The firewall rules defined in a policy can reference the firewall zones to apply consistent configurations
across multiple network interfaces.
Firewall rules typically define certain criteria based on various attributes. The attributes can include:
Source IP addresses
Destination IP addresses
Ports
Network interfaces
The firewalld utility organizes the firewall rules into zones (such as public, internal, and others) and
policies. Each zone has its own set of rules that determine the level of traffic freedom for network
interfaces associated with a particular zone.
The following example shows a configuration that allows one service (SSH) and one port range, for both
the TCP and UDP protocols:
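A minimal sketch of such a zone configuration file (the zone name and the port range are illustrative assumptions):
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>example</short>
  <service name="ssh"/>
  <port protocol="tcp" port="1025-65535"/>
  <port protocol="udp" port="1025-65535"/>
</zone>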
Additional resources
Local port
Network protocol
A service simplifies packet filtering and saves you time because it achieves several tasks at once. For
example, firewalld can perform the following tasks at once:
Open a port
Service configuration options and generic file information are described in the firewalld.service(5) man
page on your system. The services are specified by means of individual XML configuration files, which
are named in the following format: service-name.xml. Protocol names are preferred over service or
application names in firewalld.
Use utilities:
Additional resources
You can strengthen your network security by modifying the firewall settings and associating a specific
network interface or connection with a particular firewall zone. By defining granular rules and restrictions
for a zone, you can control inbound and outbound traffic based on your intended security levels.
Prerequisites
Procedure
# firewall-cmd --get-zones
The firewall-cmd --get-zones command displays all zones that are available on the system, but
it does not show any details for particular zones. To see more detailed information for all zones,
use the firewall-cmd --list-all-zones command.
3. Modify firewall settings for the chosen zone. For example, to allow the SSH service and remove
the ftp service:
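For example (a sketch; replace <your_chosen_zone> with the zone you selected):
# firewall-cmd --zone=<your_chosen_zone> --add-service=ssh
# firewall-cmd --zone=<your_chosen_zone> --remove-service=ftp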
# firewall-cmd --get-active-zones
Assigning a network interface to a zone is more suitable for applying consistent firewall
settings to all traffic on a particular interface (physical or virtual).
The firewall-cmd command, when used with the --permanent option, often involves
updating NetworkManager connection profiles to make changes to the firewall
configuration permanent. This integration between firewalld and NetworkManager ensures
consistent network and firewall settings.
Verification
The command output displays all zone settings including the assigned services, network
interface, and network connections (sources).
System administrators assign a zone to a networking interface in its configuration files. If an interface is
not assigned to a specific zone, it is assigned to the default zone. After each restart of the firewalld
service, firewalld loads the settings for the default zone and makes it active. Note that settings for all
other zones are preserved and ready to be used.
Prerequisites
Procedure
To set up the default zone:
# firewall-cmd --get-default-zone
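To change it, for example (a sketch; the zone name is a placeholder):
# firewall-cmd --set-default-zone <zone_name>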
NOTE
Following this procedure, the setting is a permanent setting, even without the --
permanent option.
It is possible to define different sets of rules for different zones and then change the settings quickly by
changing the zone for the interface that is being used. With multiple interfaces, a specific zone can be
set for each of them to distinguish traffic that is coming through them.
Procedure
To assign the zone to a specific interface:
# firewall-cmd --get-active-zones
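For example (a sketch; the zone and interface names are placeholders):
# firewall-cmd --zone=<zone_name> --change-interface=<interface_name> --permanent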
You can add a firewalld zone to a NetworkManager connection using the nmcli utility.
Procedure
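For example (a sketch; the profile and zone names are placeholders):
# nmcli connection modify <profile_name> connection.zone <zone_name>
# nmcli connection up <profile_name>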
If you cannot use the nmcli utility to modify a connection profile, you can manually edit the
corresponding file of the profile to assign a firewalld zone.
NOTE
Modifying the connection profile with the nmcli utility to assign a firewalld zone is more
efficient. For details, see Assigning a network interface to a zone .
Procedure
NetworkManager uses separate directories and file names for the different connection profile
formats:
Profiles in /etc/NetworkManager/system-
connections/<connection_name>.nmconnection files use the keyfile format.
If the file uses the keyfile format, append zone=<name> to the [connection] section of the
/etc/NetworkManager/system-connections/<connection_name>.nmconnection file:
[connection]
...
zone=internal
If the file uses the ifcfg format, append ZONE=<name> to the /etc/sysconfig/network-
scripts/ifcfg-<interface_name> file:
ZONE=internal
Verification
When the connection is managed by NetworkManager, it must be aware of a zone that it uses. For
every network connection profile, a zone can be specified, which provides the flexibility of various
firewall settings according to the location of the computer with portable devices. Thus, zones and
settings can be specified for different locations, such as company or home.
Procedure
ZONE=zone_name
To use custom zones, create a new zone and use it just like a predefined zone. New zones require the --
permanent option, otherwise the command does not work.
Prerequisites
Procedure
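For example, to create a new zone (a sketch; the zone name is a placeholder):
# firewall-cmd --permanent --new-zone=<zone_name>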
# firewall-cmd --reload
The command applies recent changes to the firewall configuration without interrupting network
services that are already running.
Verification
You can apply predefined and existing firewall zones on a particular interface or a range of IP addresses
through the RHEL web console.
Prerequisites
Procedure
2. Click Networking.
If you do not see the Edit rules and zones button, log in to the web console with the
administrator privileges.
5. In the Add zone dialog box, select a zone from the Trust level options.
The web console displays all zones predefined in the firewalld service.
6. In the Interfaces part, select an interface or interfaces on which the selected zone is applied.
7. In the Allowed Addresses part, you can select whether the zone is applied on:
the whole subnet
a range of IP addresses, entered in one of the following formats:
192.168.1.0
192.168.1.0/24
192.168.1.0/24, 192.168.1.0
Verification
You can disable a firewall zone in your firewall configuration by using the web console.
Prerequisites
Procedure
2. Click Networking.
If you do not see the Edit rules and zones button, log in to the web console with the
administrator privileges.
5. Click Delete.
The zone is now disabled and the interface does not include opened services and ports which were
configured in the zone.
14.5.7.10. Using zone targets to set default behavior for incoming traffic
For every zone, you can set a default behavior that handles incoming traffic that is not further specified.
Such behavior is defined by setting the target of the zone. There are four options:
ACCEPT: Accepts all incoming packets except those disallowed by specific rules.
REJECT: Rejects all incoming packets except those allowed by specific rules. When firewalld
rejects packets, the source machine is informed about the rejection.
DROP: Drops all incoming packets except those allowed by specific rules. When firewalld drops
packets, the source machine is not informed about the packet drop.
default: Similar behavior as for REJECT, but with special meanings in certain scenarios.
Prerequisites
Procedure
To set a target for a zone:
1. List the information for the specific zone to see the default target:
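For example (a sketch; the zone name is a placeholder, and --set-target requires the permanent configuration):
# firewall-cmd --zone=<zone_name> --list-all
# firewall-cmd --zone=<zone_name> --permanent --set-target=<ACCEPT|REJECT|DROP>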
Additional resources
The firewalld package installs a large number of predefined service files and you can add more or
customize them. You can then use these service definitions to open or close ports for services without
knowing the protocol and port numbers they use.
The most straightforward method to control traffic is to add a predefined service to firewalld. This
opens all necessary ports and modifies other settings according to the service definition file.
Prerequisites
Procedure
# firewall-cmd --list-services
ssh dhcpv6-client
The command lists the services that are enabled in the default zone.
# firewall-cmd --get-services
RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc
bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6
dhcpv6-client dns docker-registry ...
The command displays a list of available services for the default zone.
# firewall-cmd --add-service=<service_name>
# firewall-cmd --runtime-to-permanent
The command applies these runtime changes to the permanent configuration of the firewall. By
default, it applies these changes to the configuration of the default zone.
Verification
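For example (a sketch of the command this verification refers to):
# firewall-cmd --list-all --permanent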
The command displays complete configuration with the permanent firewall rules of the default
firewall zone (public).
# firewall-cmd --check-config
success
If the permanent configuration is invalid, the command returns an error with further details:
# firewall-cmd --check-config
Error: INVALID_PROTOCOL: 'public.xml': 'tcpx' not from {'tcp'|'udp'|'sctp'|'dccp'}
You can also manually inspect the permanent configuration files to verify the settings. The main
configuration file is /etc/firewalld/firewalld.conf. The zone-specific configuration files are in the
/etc/firewalld/zones/ directory and the policies are in the /etc/firewalld/policies/ directory.
You can control the network traffic with predefined services using a graphical user interface. The
Firewall Configuration application provides an accessible and user-friendly alternative to the command-
line utilities.
Prerequisites
Procedure
a. Start the firewall-config utility and select the network zone whose services are to be
configured.
b. Select the Zones tab and then the Services tab below.
c. Select the checkbox for each type of service you want to trust or clear the checkbox to
block a service in the selected zone.
2. To edit a service:
b. Select Permanent from the menu labeled Configuration. Additional icons and menu
buttons appear at the bottom of the Services window.
The Ports, Protocols, and Source Port tabs enable adding, changing, and removing ports, protocols,
and source ports for the selected service. The Modules tab is for configuring Netfilter helper modules.
The Destination tab enables limiting traffic to a particular destination address and Internet Protocol
(IPv4 or IPv6).
NOTE
Verification
You can also start the graphical firewall configuration utility using the command-line, by
entering the firewall-config command.
The Firewall Configuration window opens. Note that this command can be run as a normal user, but you
are prompted for an administrator password occasionally.
By default, services are added to the default firewall zone. If you use more firewall zones on more
network interfaces, you must select a zone first and then add the service with port.
The RHEL 8 web console displays predefined firewalld services and you can add them to active firewall
zones.
IMPORTANT
The web console does not allow generic firewalld rules which are not listed in the web
console.
Prerequisites
Procedure
2. Click Networking.
If you do not see the Edit rules and zones button, log in to the web console with the
administrator privileges.
4. In the Firewall section, select a zone for which you want to add the service and click Add
Services.
5. In the Add Services dialog box, find the service you want to enable on the firewall.
At this point, the RHEL 8 web console displays the service in the zone’s list of Services.
You can configure custom ports for services through the RHEL web console.
Prerequisites
Procedure
2. Click Networking.
If you do not see the Edit rules and zones button, log in to the web console with the
administrative privileges.
4. In the Firewall section, select a zone for which you want to configure a custom port and click
Add Services.
5. In the Add services dialog box, click on the Custom Ports radio button.
6. In the TCP and UDP fields, add the ports. You can add individual port numbers or service
aliases, for example 8080 or http.
NOTE
You can add multiple values into each field. Values must be separated by a comma
and without a space, for example: 8080,8081,http
7. After adding the port number in the TCP field, the UDP field, or both, verify the service name in
the Name field.
The Name field displays the name of the service for which this port is reserved. You can rewrite
the name if you are sure that this port is free to use and no server needs to communicate on this
port.
8. In the Name field, add a name for the service including defined ports.
To verify the settings, go to the Firewall page and find the service in the list of zone’s Services.
Ports are logical services that enable an operating system to receive and distinguish network traffic and
forward it to system services. The system services are represented by a daemon that listens on the port
and waits for any traffic coming to this port.
Normally, system services listen on standard ports that are reserved for them. The httpd daemon, for
example, listens on port 80. However, system administrators can directly specify the port number
instead of the service name.
You can use the firewalld service to configure access to a secure web server for hosting your data.
Prerequisites
Procedure
# firewall-cmd --get-active-zones
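For example, to allow the https service in a zone (a sketch; the zone name is a placeholder):
# firewall-cmd --zone=<zone_name> --add-service=https --permanent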
# firewall-cmd --reload
Verification
When an open port is no longer needed, you can use the firewalld utility to close it.
IMPORTANT
Close all unnecessary ports to reduce the potential attack surface and minimize the risk
of unauthorized access or exploitation of vulnerabilities.
Procedure
# firewall-cmd --list-ports
By default, this command lists the ports that are enabled in the default zone.
NOTE
This command lists only the ports that are opened as ports. It does not show open
ports that were enabled as part of a service. To see both, consider using the
--list-all option instead of --list-ports.
2. Remove the port from the list of allowed ports to close it for the incoming traffic:
# firewall-cmd --remove-port=port-number/port-type
This command removes a port from a zone. If you do not specify a zone, it will remove the port
from the default zone.
# firewall-cmd --runtime-to-permanent
Without specifying a zone, this command applies runtime changes to the permanent
configuration of the default zone.
Verification
1. List the active zones and choose the zone you want to inspect:
# firewall-cmd --get-active-zones
2. List the currently open ports in the selected zone to check if the unused or unnecessary ports
are closed:
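For example (a sketch; the zone name is a placeholder):
# firewall-cmd --zone=<zone_name> --list-ports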
As a result, you can, for example, enhance your system defenses, ensure data privacy, or optimize network
resources.
IMPORTANT
Enabling panic mode stops all networking traffic. For this reason, it should be used only
when you have the physical access to the machine or if you are logged in using a serial
console.
Procedure
# firewall-cmd --panic-on
2. Switching off panic mode reverts the firewall to its permanent settings. To switch panic mode
off, enter:
# firewall-cmd --panic-off
Verification
# firewall-cmd --query-panic
To permit traffic through the firewall using a certain protocol, you can use the GUI.
Prerequisites
Procedure
1. Start the firewall-config tool and select the network zone whose settings you want to change.
2. Select the Protocols tab and click the Add button on the right-hand side. The Protocol window
opens.
3. Either select a protocol from the list or select the Other Protocol check box and enter the
protocol in the field.
Matching by source address takes precedence over matching by interface name. When you add a source
to a zone, the firewall will prioritize the source-based rules for incoming traffic over interface-based
rules. This means that if incoming traffic matches a source address specified for a particular zone, the
zone associated with that source address will determine how the traffic is handled, regardless of the
interface through which it arrives. On the other hand, interface-based rules are generally a fallback for
traffic that does not match specific source-based rules. These rules apply to traffic for which the source
is not explicitly associated with a zone. This allows you to define a default behavior for traffic that does
not have a specific source-defined zone.
To route incoming traffic into a specific zone, add the source to that zone. The source can be an IP
address or an IP mask in the classless inter-domain routing (CIDR) notation.
NOTE
In case you add multiple zones with an overlapping network range, they are ordered
alphanumerically by zone name and only the first one is considered.
# firewall-cmd --add-source=<source>
The following procedure allows all incoming traffic from 192.168.2.15 in the trusted zone:
Procedure
# firewall-cmd --get-zones
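For example (a sketch matching the source address and zone mentioned above):
# firewall-cmd --zone=trusted --add-source=192.168.2.15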
# firewall-cmd --runtime-to-permanent
When you remove a source from a zone, the traffic which originates from the source is no longer
directed through the rules specified for that source. Instead, the traffic falls back to the rules and
settings of the zone associated with the interface from which it originates, or goes to the default zone.
Procedure
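For example, to remove a source from a zone (a sketch; the zone name and source are placeholders):
# firewall-cmd --zone=<zone_name> --remove-source=<source_ip>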
# firewall-cmd --runtime-to-permanent
By removing a source port, you disable sorting of the traffic based on the port of origin.
Procedure
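For example (a sketch; the zone, port, and protocol are placeholders):
# firewall-cmd --zone=<zone_name> --remove-source-port=<port_number>/<protocol>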
14.5.9.4. Using zones and sources to allow a service for only a specific domain
To allow traffic from a specific network to use a service on a machine, use zones and source. The
following procedure allows only HTTP traffic from the 192.0.2.0/24 network while any other traffic is
blocked.
WARNING
When you configure this scenario, use a zone that has the default target. Using a
zone that has the target set to ACCEPT is a security risk, because for traffic from
192.0.2.0/24, all network connections would be accepted.
Procedure
# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
2. Add the IP range to the internal zone to route the traffic originating from the source through
the zone:
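For example (a sketch matching the network and service used in this scenario; the http service must also be allowed in the zone):
# firewall-cmd --zone=internal --add-source=192.0.2.0/24
# firewall-cmd --zone=internal --add-service=http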
# firewall-cmd --runtime-to-permanent
Verification
Check that the internal zone is active and that the service is allowed in it:
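For example (a sketch of the command whose output is shown below):
# firewall-cmd --zone=internal --list-all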
icmp-block-inversion: no
interfaces:
sources: 192.0.2.0/24
services: cockpit dhcpv6-client mdns samba-client ssh http
...
Additional resources
The policy objects feature provides forward and output filtering in firewalld. You can use firewalld to
filter traffic between different zones to allow access to locally hosted VMs to connect the host.
Policy objects allow the user to attach firewalld’s primitives such as services, ports, and rich rules to the
policy. You can apply the policy objects to traffic that passes between zones in a stateful and
unidirectional manner.
HOST and ANY are the symbolic zones used in the ingress and egress zone lists.
The HOST symbolic zone allows policies for the traffic originating from or has a destination to
the host running firewalld.
The ANY symbolic zone applies policy to all the current and future zones. ANY symbolic zone
acts as a wildcard for all zones.
Multiple policies can apply to the same set of traffic; therefore, use priorities to create an
order of precedence for the policies that may be applied.
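For example, a sketch of setting priorities on two policies (the policy names are assumptions; the values match the explanation below):
# firewall-cmd --permanent --policy policy1 --set-priority -500
# firewall-cmd --permanent --policy policy2 --set-priority -100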
In the above example, -500 is a lower priority value but has higher precedence. Thus, -500 executes
before -100.
Lower numerical priority values have higher precedence and are applied first.
14.5.10.3. Using policy objects to filter traffic between locally hosted containers and a
network physically connected to the host
The policy objects feature allows users to filter traffic between Podman and firewalld zones.
NOTE
Red Hat recommends blocking all traffic by default and opening the selective services
needed for the Podman utility.
Procedure
2. Block all traffic from Podman to other zones and allow only necessary services on Podman:
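A sketch of such a configuration, assuming a policy named podmanToAny and that the Podman interfaces are in a zone named podman (both names are assumptions):
# firewall-cmd --permanent --new-policy podmanToAny
# firewall-cmd --permanent --policy podmanToAny --set-target REJECT
# firewall-cmd --permanent --policy podmanToAny --add-service=dns
# firewall-cmd --permanent --policy podmanToAny --add-ingress-zone=podman
# firewall-cmd --permanent --policy podmanToAny --add-egress-zone=ANY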
Setting the egress zone to ANY means that you filter from Podman to other zones. If you want
to filter to the host, then set the egress zone to HOST.
Verification
You can specify --set-target options for policies. The following targets are available:
CONTINUE (default) - packets will be subject to rules in following policies and zones.
ACCEPT - packets are accepted and are not subject to rules in following policies and zones.
DROP - packets are dropped without any notification to the source.
REJECT - packets are rejected and the source is notified about the rejection.
Verification
If your web server runs in a DMZ with private IP addresses, you can configure destination network
address translation (DNAT) to enable clients on the internet to connect to this web server. In this case,
the host name of the web server resolves to the public IP address of the router. When a client
establishes a connection to a defined port on the router, the router forwards the packets to the internal
web server.
Prerequisites
The DNS server resolves the host name of the web server to the router’s IP address.
The private IP address and port number that you want to forward
The destination IP address and port of the web server where you want to redirect the
packets
Procedure
The policies, as opposed to zones, allow packet filtering for input, output, and forwarded traffic.
This is important, because forwarding traffic to endpoints on locally run web servers, containers,
or virtual machines requires such capability.
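For example, to create such a policy (a sketch; the policy name <example_policy> matches the verification output below):
# firewall-cmd --permanent --new-policy <example_policy>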
2. Configure symbolic zones for the ingress and egress traffic to also enable the router itself to
connect to its local IP address and forward this traffic:
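For example (a sketch using the policy created in the previous step):
# firewall-cmd --permanent --policy <example_policy> --add-ingress-zone=HOST
# firewall-cmd --permanent --policy <example_policy> --add-egress-zone=ANY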
The --add-ingress-zone=HOST option refers to packets generated locally and transmitted out
of the local host. The --add-egress-zone=ANY option refers to traffic moving to any zone.
The rich rule forwards TCP traffic from port 443 on the IP address of the router (192.0.2.1) to
port 443 of the IP address of the web server (192.51.100.20).
# firewall-cmd --reload
success
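One way to create the configuration file referenced below (the file name and kernel parameter match the following steps):
# echo "net.ipv4.conf.all.route_localnet=1" > /etc/sysctl.d/90-enable-route-localnet.conf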
The command persistently configures the route_localnet kernel parameter and ensures
that the setting is preserved after the system reboots.
# sysctl -p /etc/sysctl.d/90-enable-route-localnet.conf
The sysctl command is useful for applying on-the-fly changes, however the configuration
will not persist across system reboots.
Verification
1. Connect to the IP address of the router and to the port that you have forwarded to the web
server:
# curl https://192.0.2.1:443
# sysctl net.ipv4.conf.all.route_localnet
net.ipv4.conf.all.route_localnet = 1
3. Verify that <example_policy> is active and contains the settings you need, especially the
source IP address and port, protocol to be used, and the destination IP address and port:
# firewall-cmd --info-policy=<example_policy>
example_policy (active)
priority: -1
target: CONTINUE
ingress-zones: HOST
egress-zones: ANY
services:
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" destination address="192.0.2.1" forward-port port="443" protocol="tcp" to-
port="443" to-addr="192.51.100.20"
Additional resources
Masquerading
Redirect
Masquerading
Use one of these NAT types to change the source IP address of packets. For example, Internet
Service Providers (ISPs) do not route private IP ranges, such as 10.0.0.0/8. If you use private IP
ranges in your network and users should be able to reach servers on the internet, map the source IP
address of packets from these ranges to a public IP address.
Masquerading automatically uses the IP address of the outgoing interface. Therefore, use
masquerading if the outgoing interface uses a dynamic IP address.
You can enable IP masquerading on your system. IP masquerading hides individual machines behind a
gateway when accessing the internet.
Procedure
1. To check if IP masquerading is enabled (for example, for the external zone), enter the following
command as root:
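For example (a sketch for the external zone):
# firewall-cmd --zone=external --query-masquerade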
The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If
zone is omitted, the default zone will be used.
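2. To enable IP masquerading, for example in the external zone (a sketch):
# firewall-cmd --zone=external --add-masquerade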
3. To make this setting persistent, pass the --permanent option to the command.
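To disable IP masquerading again (a sketch):
# firewall-cmd --zone=external --remove-masquerade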
To make this setting permanent, pass the --permanent option to the command.
You can use destination network address translation (DNAT) to direct incoming traffic from one
destination address and port to another. Typically, this is useful for redirecting incoming requests from
an external network interface to specific internal servers or services.
Prerequisites
Procedure
Add the following content to the /etc/sysctl.d/90-enable-IP-forwarding.conf file:
net.ipv4.ip_forward=1
This setting enables IP forwarding in the kernel. It makes the internal RHEL server act as a router
and forward packets from network to network.
# sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf
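A sketch of the DNAT rule described below (the zone, ports, and address match the verification output in this section):
# firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=198.51.100.10 --permanent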
The previous command defines a DNAT rule with the following settings:
--zone=public - The firewall zone for which you configure the DNAT rule. You can adjust
this to whatever zone you need.
--add-forward-port - The option that indicates you are adding a port-forwarding rule.
--permanent - The option that makes the DNAT rule persistent across reboots.
# firewall-cmd --reload
Verification
Verify the DNAT rule for the firewall zone that you used:
# cat /etc/firewalld/zones/public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
<short>Public</short>
<description>For use in public areas. You do not trust the other computers on networks to
not harm your computer. Only selected incoming connections are accepted.</description>
<service name="ssh"/>
<service name="dhcpv6-client"/>
<service name="cockpit"/>
<forward-port port="80" protocol="tcp" to-port="8080" to-addr="198.51.100.10"/>
<forward/>
</zone>
Additional resources
14.5.11.4. Redirecting traffic from a non-standard port to make the web service accessible
on a standard port
You can use the redirect mechanism to make the web service that internally runs on a non-standard port
accessible without requiring users to specify the port in the URL. As a result, the URLs are simpler and
provide better browsing experience, while a non-standard port is still used internally or for specific
requirements.
Prerequisites
Procedure
Add the following content to the /etc/sysctl.d/90-enable-IP-forwarding.conf file:
net.ipv4.ip_forward=1
# sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf
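A sketch of the redirect rule described below (the zone and ports match the verification output in this section):
# firewall-cmd --zone=public --add-forward-port=port=8080:proto=tcp:toport=80 --permanent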
The previous command defines the NAT redirect rule with the following settings:
--zone=public - The firewall zone, for which you configure the rule. You can adjust this to
whatever zone you need.
--permanent - The option that makes the rule persist across reboots.
# firewall-cmd --reload
Verification
Verify the redirect rule for the firewall zone that you used:
# firewall-cmd --list-forward-ports
port=8080:proto=tcp:toport=80:toaddr=
# cat /etc/firewalld/zones/public.xml
Additional resources
You can use the ICMP messages, especially echo-request and echo-reply, to reveal information about a
network and misuse such information for various kinds of fraudulent activities. Therefore, firewalld
enables controlling the ICMP requests to protect your network information.
You can use ICMP filtering to define which ICMP types and codes you want the firewall to permit or
deny from reaching your system. ICMP types and codes are specific categories and subcategories of
ICMP messages.
Security enhancement - Block potentially harmful ICMP types and codes to reduce your attack
surface.
Network performance - Permit only necessary ICMP types to optimize network performance
and prevent potential network congestion caused by excessive ICMP traffic.
Prerequisites
Procedure
# firewall-cmd --get-icmptypes
address-unreachable bad-header beyond-scope communication-prohibited destination-unreachable ...
From this predefined list, select which ICMP types and codes to allow or block.
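For example, to allow echo requests (a sketch):
# firewall-cmd --permanent --remove-icmp-block=echo-request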
The command removes any existing blocking rules for the echo requests ICMP type.
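For example, to block redirect messages (a sketch):
# firewall-cmd --permanent --add-icmp-block=redirect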
The command ensures that the redirect messages ICMP type is blocked by the firewall.
# firewall-cmd --reload
Verification
# firewall-cmd --list-icmp-blocks
redirect
The command output displays the ICMP types and codes that you allowed or blocked.
Additional resources
IP sets are a RHEL feature for grouping of IP addresses and networks into sets to achieve more flexible
and efficient firewall rule management.
The IP sets are valuable in scenarios when you need to, for example, block or allow large lists of IP addresses with a single rule.
WARNING
Red Hat recommends using the firewall-cmd command to create and manage IP
sets.
You can make near real-time updates to flexibly allow specific IP addresses or ranges in the IP sets even
in unpredictable conditions. These updates can be triggered by various events, such as detection of
security threats or changes in the network behavior. Typically, such a solution leverages automation to
reduce manual effort and improve security by responding quickly to the situation.
Prerequisites
Procedure
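A sketch of creating the IP set (the name and type match the verification output below):
# firewall-cmd --permanent --new-ipset=allowlist --type=hash:ip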
The new IP set called allowlist contains IP addresses that you want your firewall to allow.
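A sketch of adding an entry to the set (the address matches the verification output below):
# firewall-cmd --permanent --ipset=allowlist --add-entry=198.51.100.10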
This configuration updates the allowlist IP set with a newly added IP address that is allowed to
pass network traffic by your firewall.
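A sketch of referencing the IP set as a source in a zone (the public zone matches the verification output below):
# firewall-cmd --permanent --zone=public --add-source=ipset:allowlist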
Without this rule, the IP set would not have any impact on network traffic. The default firewall
policy would prevail.
# firewall-cmd --reload
Verification
# firewall-cmd --get-ipsets
allowlist
# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: enp0s1
sources: ipset:allowlist
services: cockpit dhcpv6-client ssh
ports:
protocols:
...
The sources section of the command-line output provides insights to what origins of traffic
(hostnames, interfaces, IP sets, subnets, and others) are permitted or denied access to a
particular firewall zone. In this case, the IP addresses contained in the allowlist IP set are
allowed to pass traffic through the firewall for the public zone.
# cat /etc/firewalld/ipsets/allowlist.xml
<?xml version="1.0" encoding="utf-8"?>
<ipset type="hash:ip">
<entry>198.51.100.10</entry>
</ipset>
Next steps
Use a script or a security utility to fetch your threat intelligence feeds and update allowlist
accordingly in an automated fashion.
Additional resources
14.5.14.1. How the priority parameter organizes rules into different chains
You can set the priority parameter in a rich rule to any number between -32768 and 32767, and lower
numerical values have higher precedence.
The firewalld service organizes rules based on their priority value into different chains:
Priority lower than 0: the rule is redirected into a chain with the _pre suffix.
Priority higher than 0: the rule is redirected into a chain with the _post suffix.
Priority equals 0: based on the action, the rule is redirected into a chain with the _log, _deny, or
_allow suffix.
Inside these sub-chains, firewalld sorts the rules based on their priority value.
The following is an example of how to create a rich rule that uses the priority parameter to log all traffic
that is not allowed or denied by other rules. You can use this rule to flag unexpected traffic.
Procedure
Add a rich rule with a very low precedence to log all traffic that has not been matched by other
rules:
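A sketch of such a rule (the log prefix is an assumption; priority 32767 gives the rule the lowest precedence, and the limit matches the note below):
# firewall-cmd --add-rich-rule='rule priority=32767 log prefix="UNEXPECTED: " limit value="5/m"'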
The command additionally limits the number of log entries to 5 per minute.
Verification
Display the nftables rule that the command in the previous step created:
You can enable or disable the lockdown feature using the command line.
Procedure
# firewall-cmd --query-lockdown
Enabling lockdown:
# firewall-cmd --lockdown-on
Disabling lockdown:
# firewall-cmd --lockdown-off
The default allowlist configuration file contains the NetworkManager context and the default context
of libvirt. The user ID 0 is also on the list.
Following is an example allowlist configuration file enabling all commands for the firewall-cmd utility, for
a user called user whose user ID is 815:
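A hedged sketch of what such a file can look like (element names follow the firewalld lockdown allowlist format; the interpreter and command paths are assumptions):
<?xml version="1.0" encoding="utf-8"?>
<whitelist>
  <command name="/usr/libexec/platform-python -s /usr/bin/firewall-cmd*"/>
  <user id="815"/>
  <user name="user"/>
</whitelist>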
This example shows both user id and user name, but only one option is required. Python is the
interpreter and is prepended to the command line.
In Red Hat Enterprise Linux, all utilities are placed in the /usr/bin/ directory and the /bin/ directory is
symlinked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when entered
as root might resolve to /bin/firewall-cmd, /usr/bin/firewall-cmd can now be used. All new scripts
should use the new location. But be aware that if scripts that run as root are written to use the
/bin/firewall-cmd path, then that command path must be added in the allowlist in addition to the
/usr/bin/firewall-cmd path traditionally used only for non-root users.
The * at the end of the name attribute of a command means that all commands that start with this string
match. If the * is not there then the absolute command including arguments must match.
14.5.16.1. The difference between intra-zone forwarding and zones with the default target
set to ACCEPT
With intra-zone forwarding enabled, the traffic within a single firewalld zone can flow from one interface
or source to another interface or source. The zone specifies the trust level of interfaces and sources. If
the trust level is the same, the traffic stays inside the same zone.
NOTE
Enabling intra-zone forwarding in the default zone of firewalld applies only to the
interfaces and sources added to the current default zone.
firewalld uses different zones to manage incoming and outgoing traffic. Each zone has its own set of
rules and behaviors. For example, the trusted zone allows all forwarded traffic by default.
Other zones can have different default behaviors. In standard zones, forwarded traffic is typically
dropped by default when the target of the zone is set to default.
To control how the traffic is forwarded between different interfaces or sources within a zone, make sure
you understand and configure the target of the zone accordingly.
14.5.16.2. Using intra-zone forwarding to forward traffic between an Ethernet and Wi-Fi
network
You can use intra-zone forwarding to forward traffic between interfaces and sources within the same
firewalld zone. This feature brings the following benefits:
Seamless connectivity between wired and wireless devices (you can forward traffic between an
Ethernet network connected to enp1s0 and a Wi-Fi network connected to wlp0s20)
Shared resources that are accessible and used by multiple devices or users within a network
(such as printers, databases, network-attached storage, and others)
Procedure
2. Ensure that interfaces between which you want to enable intra-zone forwarding are assigned
only to the internal zone:
# firewall-cmd --get-active-zones
3. If the interface is currently assigned to a zone other than internal, reassign it:
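For example (a sketch; the interface name is a placeholder), and then enable forwarding inside the zone with the --add-forward option available in recent firewalld versions:
# firewall-cmd --zone=internal --change-interface=<interface_name> --permanent
# firewall-cmd --zone=internal --add-forward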
Verification
The following verification steps require that the nmap-ncat package is installed on both hosts.
1. Log in to a host that is on the same network as the enp1s0 interface of the host on which you
enabled zone forwarding.
4. Connect to the echo server running on the host that is in the same network as the enp1s0:
5. Type something and press Enter. Verify the text is sent back.
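A minimal sketch of such an echo test using ncat (the port number 12345 is an arbitrary assumption; run the first command on the remote host and the second on the connecting host):
# ncat -e /usr/bin/cat -l 12345
# ncat <remote_host_ip> 12345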
Additional resources
RHEL system roles is a set of content for the Ansible automation utility. Together with the
Ansible automation utility, this content provides a consistent configuration interface to remotely manage
multiple systems at once.
The rhel-system-roles package contains the rhel-system-roles.firewall RHEL system role. This role
was introduced for automated configurations of the firewalld service.
With the firewall RHEL system role you can configure many different firewalld parameters, for
example:
Zones
14.5.17.1. Resetting the firewalld settings by using the firewall RHEL system role
Over time, updates to your firewall configuration can accumulate to the point where they could lead to
unintended security risks. With the firewall RHEL system role, you can reset the firewalld settings to
their default state in an automated fashion. This way you can efficiently remove any unintentional or
insecure firewall rules.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Reset firewalld example
hosts: managed-node-01.example.com
tasks:
- name: Reset firewalld
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- previous: replaced
previous: replaced
Removes all existing user-defined settings and resets the firewalld settings to defaults. If
you combine the previous:replaced parameter with other settings, the firewall role
removes all existing settings before applying new ones.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.firewall/README.md file on the control node.
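Validate the playbook syntax, for example:
$ ansible-playbook --syntax-check ~/playbook.yml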
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Run this command on the control node to remotely check that all firewall configuration on your
managed node was reset to its default values:
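For example (a sketch; the ad hoc command runs firewall-cmd on the managed node):
# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'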
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
14.5.17.2. Forwarding incoming traffic in firewalld from one local port to a different local
port by using the firewall RHEL system role
You can use the firewall RHEL system role to remotely configure forwarding of incoming traffic from
one local port to a different local port.
For example, if you have an environment where multiple services co-exist on the same machine and
need the same default port, port conflicts are likely to occur. These conflicts can disrupt
services and cause downtime. With the firewall RHEL system role, you can efficiently forward traffic
to alternative ports to ensure that your services can run simultaneously without modification to their
configuration.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure firewalld
hosts: managed-node-01.example.com
tasks:
- name: Forward incoming traffic on port 8080 to 443
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- forward_port: 8080/tcp;443;
state: enabled
runtime: true
permanent: true
forward_port: 8080/tcp;443
Traffic coming to the local port 8080 using the TCP protocol is forwarded to the port 443.
runtime: true
Enables changes in the runtime configuration. The default is set to true.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.firewall/README.md file on the control node.
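Validate the playbook syntax, for example:
$ ansible-playbook --syntax-check ~/playbook.yml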
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
On the control node, run the following command to remotely check the forwarded-ports on your
managed node:
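For example (a sketch):
# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports'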
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
14.5.17.3. Configuring a firewalld DMZ zone by using the firewall RHEL system role
As a system administrator, you can use the firewall RHEL system role to configure a dmz zone on the
enp1s0 interface to permit HTTPS traffic to the zone. In this way, you enable external users to access
your web servers.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure firewalld
hosts: managed-node-01.example.com
tasks:
- name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- zone: dmz
interface: enp1s0
service: https
state: enabled
runtime: true
permanent: true
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.firewall/README.md file on the control node.
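Validate the playbook syntax, for example:
$ ansible-playbook --syntax-check ~/playbook.yml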
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
On the control node, run the following command to remotely check the information about the
dmz zone on your managed node:
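For example (a sketch):
# ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all'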
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
All rules applied atomically instead of fetching, updating, and storing a complete rule set
Support for debugging and tracing in the rule set (nftrace) and monitoring trace events (in the
nft tool)
The nftables framework uses tables to store chains. The chains contain individual rules for performing
actions. The nft utility replaces all tools from the previous packet-filtering frameworks. You can use the
libnftnl library for low-level interaction with nftables Netlink API through the libmnl library.
To display the effect of rule set changes, use the nft list ruleset command. Because these utilities add
tables, chains, rules, sets, and other objects to the nftables rule set, be aware that nftables rule-set
operations, such as the nft flush ruleset command, might affect rule sets installed using the iptables
command.
A table in nftables is a namespace that contains a collection of chains, rules, sets, and other objects.
Each table must have an address family assigned. The address family defines the packet types that this
table processes. You can set one of the following address families when you create a table:
ip: Matches only IPv4 packets. This is the default if you do not specify an address family.
ip6: Matches only IPv6 packets.
inet: Matches both IPv4 and IPv6 packets.
arp: Matches IPv4 address resolution protocol (ARP) packets.
bridge: Matches packets that traverse a bridge device.
netdev: Matches packets from ingress.
If you want to add a table, the format to use depends on your firewall script:
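For example, on the command line or in a native nft script file (the table name example_table is an assumption):
# nft add table inet example_table
table inet example_table {
}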
Tables consist of chains, which in turn are containers for rules. The following two chain types exist:
Base chain: You can use base chains as an entry point for packets from the networking stack.
Regular chain: You can use regular chains as a jump target to better organize rules.
If you want to add a base chain to a table, the format to use depends on your firewall script:
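For example (the table and chain names are assumptions); the first form is a shell command, the second the equivalent in a native nft script file:
# nft add chain inet example_table example_chain { type filter hook input priority 0 \; policy accept \; }
table inet example_table {
    chain example_chain {
        type filter hook input priority 0; policy accept;
    }
}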
To avoid that the shell interprets the semicolons as the end of the command, place the \ escape
character in front of the semicolons.
Both examples create base chains. To create a regular chain, do not set any parameters in the curly
brackets.
Chain types
The following are the chain types and an overview with which address families and hooks you can use
them:
nat (address families: ip, ip6, inet; hooks: prerouting, input, output, postrouting): Chains of this type
perform native address translation based on connection tracking entries. Only the first packet
traverses this chain type.
route (address families: ip, ip6; hooks: output): Accepted packets that traverse this chain type
cause a new route lookup if relevant parts of the IP header have changed.
Chain priorities
The priority parameter specifies the order in which packets traverse chains with the same hook value.
You can set this parameter to an integer value or use a standard priority name.
The following matrix is an overview of the standard priority names and their numeric values, and with
which address families and hooks you can use them:
Chain policies
The chain policy defines whether nftables should accept or drop packets if rules in this chain do not
specify any action. You can set one of the following policies in a chain:
accept (default)
drop
Rules define actions to perform on packets that pass a chain that contains this rule. If the rule also
contains matching expressions, nftables performs the actions only if all previous expressions apply.
If you want to add a rule to a chain, the format to use depends on your firewall script:
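For example, as a shell command (the table, chain, and port are assumptions):
# nft add rule inet example_table example_chain tcp dport 22 accept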
This shell command appends the new rule at the end of the chain. If you prefer to add a rule at
the beginning of the chain, use the nft insert command instead of nft add.
To manage an nftables firewall on the command line or in shell scripts, use the nft utility.
IMPORTANT
The commands in this procedure do not represent a typical workflow and are not
optimized. This procedure only demonstrates how to use nft commands to manage
tables, chains, and rules in general.
Procedure
1. Create a table named nftables_svc with the inet address family so that the table can process
both IPv4 and IPv6 packets:
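For example:
# nft add table inet nftables_svc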
2. Add a base chain named INPUT, that processes incoming network traffic, to the inet
nftables_svc table:
# nft add chain inet nftables_svc INPUT { type filter hook input priority filter \; policy
accept \; }
To avoid that the shell interprets the semicolons as the end of the command, escape the
semicolons using the \ character.
3. Add rules to the INPUT chain. For example, allow incoming TCP traffic on port 22 and 443, and,
as the last rule of the INPUT chain, reject other incoming traffic with an Internet Control
Message Protocol (ICMP) port unreachable message:
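For example (a sketch matching the ports and the reject behavior described in this step):
# nft add rule inet nftables_svc INPUT tcp dport 22 accept
# nft add rule inet nftables_svc INPUT tcp dport 443 accept
# nft add rule inet nftables_svc INPUT reject with icmpx type port-unreachable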
If you enter the nft add rule commands as shown, nft adds the rules in the same order to the
chain as you run the commands.
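4. Display the rule set including handles, for example:
# nft -a list table inet nftables_svc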
5. Insert a rule before the existing rule with handle 3. For example, to insert a rule that allows TCP
traffic on port 636, enter:
# nft insert rule inet nftables_svc INPUT position 3 tcp dport 636 accept
6. Append a rule after the existing rule with handle 3. For example, to insert a rule that allows TCP
traffic on port 80, enter:
# nft add rule inet nftables_svc INPUT position 3 tcp dport 80 accept
7. Display the rule set again with handles. Verify that the later added rules have been added to the
specified positions:
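8. Remove a rule by its handle from the previous output, for example (the handle number is a placeholder):
# nft delete rule inet nftables_svc INPUT handle <handle_number>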
9. Display the rule set, and verify that the removed rule is no longer present:
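10. Remove all remaining rules from the chain, for example:
# nft flush chain inet nftables_svc INPUT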
11. Display the rule set, and verify that the INPUT chain is empty:
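12. Delete the chain, for example:
# nft delete chain inet nftables_svc INPUT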
You can also use this command to delete chains that still contain rules.
13. Display the rule set, and verify that the INPUT chain has been deleted:
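14. Delete the table, for example:
# nft delete table inet nftables_svc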
You can also use this command to delete tables that still contain chains.
NOTE
To delete the entire rule set, use the nft flush ruleset command instead of
manually deleting all rules, chains, and tables in separate commands.
Additional resources
The following is a brief overview in which scenario you should use one of the following utilities:
firewalld: Use the firewalld utility for simple firewall use cases. The utility is easy to use and
covers the typical use cases for these scenarios.
nftables: Use the nftables utility to set up complex and performance-critical firewalls, such as
for a whole network.
iptables: The iptables utility on Red Hat Enterprise Linux uses the nf_tables kernel API instead
of the legacy back end. The nf_tables API provides backward compatibility so that scripts that
use iptables commands still work on Red Hat Enterprise Linux. For new firewall scripts, Red Hat
recommends using nftables.
IMPORTANT
Compared to the iptables framework, nftables offers a more modern, efficient, and flexible alternative.
There are several concepts and features that provide advanced capabilities and improvements over
iptables. These enhancements simplify the rule management and improve performance to make
nftables a modern alternative for complex and high-performance networking environments.
The hook point in the packet processing path, for example "input", "output", "forward"
This flexibility enables precise control over when and how the rules are applied to packets as they
pass through the network stack. A special case of a chain is the route chain, which is used to
influence the routing decisions made by the kernel, based on packet headers.
Enhancements in nftables can be introduced as new instructions for that virtual machine. This typically
requires a new kernel module and updates to the libnftnl library and the nft command-line utility.
Alternatively, you can introduce new features by combining existing instructions in innovative ways
without a need for kernel modifications. The syntax of nftables rules reflects the flexibility of the
underlying virtual machine. For example, the rule meta mark set tcp dport map { 22: 1, 80: 2 } sets a
packet’s firewall mark to 1 if the TCP destination port is 22, and to 2 if the port is 80. This demonstrates
how complex logic can be expressed concisely.
directly within nftables. Next, nftables natively supports matching packets based on multiple values
or ranges for any data type, which enhances its capability to handle complex filtering requirements.
With nftables you can manipulate any field within a packet.
In nftables, sets can be either named or anonymous. The named sets can be referenced by multiple
rules and modified dynamically. The anonymous sets are defined inline within a rule and are immutable.
Sets can contain elements that are combinations of different types, for example IP address and port
number pairs. This feature provides greater flexibility in matching complex criteria. To manage sets, the
kernel can select the most appropriate backend based on the specific requirements (performance,
memory efficiency, and others). Sets can also function as maps with key-value pairs. The value part can
be used as data points (values to write into packet headers), or as verdicts or chains to jump to. This
enables complex and dynamic rule behaviors, known as "verdict maps".
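A brief sketch of a named set and a rule that references it (the table, chain, set name, and addresses are assumptions; the table and chain must already exist):
# nft add set inet example_table allowed_hosts { type ipv4_addr \; }
# nft add element inet example_table allowed_hosts { 192.0.2.10, 192.0.2.20 }
# nft add rule inet example_table example_chain ip saddr @allowed_hosts accept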
Conditions in a rule are logically connected (with the AND operator) together, which means that all
conditions must be evaluated as "true" for the rule to match. If any condition fails, the evaluation moves
to the next rule.
Actions in nftables can be final, such as drop or accept, which stop further rule processing for the
packet. Non-terminal actions, such as counter log meta mark set 0x3, perform specific tasks (counting
packets, logging, setting a mark, and others), but allow subsequent rules to be evaluated.
Additional resources
Similar to the actively-maintained nftables framework, the deprecated iptables framework enables you
to perform a variety of packet filtering tasks, logging and auditing, NAT-related configuration tasks, and
more.
The iptables framework is structured into multiple tables, where each table is designed for a specific
purpose:
filter
The default table, ensures general packet filtering
nat
For Network Address Translation (NAT), includes altering the source and destination addresses of
packets
mangle
For specific packet alteration, enables you to do modification of packet headers for advanced routing
decisions
raw
For configurations that need to happen before connection tracking
These tables are implemented as separate kernel modules, where each table offers a fixed set of builtin
chains such as INPUT, OUTPUT, and FORWARD. A chain is a sequence of rules that packets are
evaluated against. These chains hook into specific points in the packet processing flow in the kernel. The
chains have the same names across different tables; however, their order of execution is determined by
their respective hook priorities. The priorities are managed internally by the kernel to make sure that the
rules are applied in the correct sequence.
Originally, iptables was designed to process IPv4 traffic. However, with the inception of the IPv6
protocol, the ip6tables utility needed to be introduced to provide comparable functionality (as
iptables) and enable users to create and manage firewall rules for IPv6 packets. With the same logic,
the arptables utility was created to process Address Resolution Protocol (ARP) and the ebtables utility
was developed to handle Ethernet bridging frames. These tools ensure that you can apply the packet
filtering abilities of iptables across various network protocols and provide comprehensive network
coverage.
To enhance the functionality of iptables, extensions were developed. The functionality
extensions are typically implemented as kernel modules that are paired with user-space dynamic shared
objects (DSOs). The extensions introduce "matches" and "targets" that you can use in firewall rules to
perform more sophisticated operations. Extensions can enable complex matches and targets. For
instance, you can match on or manipulate specific layer 4 protocol header values, perform rate-limiting,
enforce quotas, and so on. Some extensions are designed to address limitations in the default iptables
syntax, for example the "multiport" match extension. This extension allows a single rule to match
multiple, non-consecutive ports, which simplifies rule definitions and reduces the number of
individual rules required.
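For illustration, a single rule using the multiport match might look as follows (a sketch; the port numbers are assumptions):
# iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT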
An ipset is a special kind of functionality extension to iptables. It is a kernel-level data structure that is
used together with iptables to create collections of IP addresses, port numbers, and other network-
related elements that you can match against packets. These sets significantly streamline, optimize, and
accelerate the process of writing and managing firewall rules.
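A minimal sketch of how an ipset is used together with iptables (the set name and address are assumptions):
# ipset create allowlist hash:ip
# ipset add allowlist 192.0.2.1
# iptables -A INPUT -m set --match-set allowlist src -j ACCEPT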
Additional resources
Prerequisites
Procedure
# iptables-save >/root/iptables.dump
# ip6tables-save >/root/ip6tables.dump
# iptables-restore-translate -f /root/iptables.dump > /etc/nftables/ruleset-migrated-
from-iptables.nft
# ip6tables-restore-translate -f /root/ip6tables.dump > /etc/nftables/ruleset-migrated-
from-ip6tables.nft
4. To enable the nftables service to load the generated files, add the following to the
/etc/sysconfig/nftables.conf file:
include "/etc/nftables/ruleset-migrated-from-iptables.nft"
include "/etc/nftables/ruleset-migrated-from-ip6tables.nft"
If you used a custom script to load the iptables rules, ensure that the script no longer starts
automatically and reboot to flush all tables.
Verification
Additional resources
Red Hat Enterprise Linux provides the iptables-translate and ip6tables-translate utilities to convert an
iptables or ip6tables rule into the equivalent one for nftables.
Prerequisites
Procedure
Note that some extensions lack translation support. In these cases, the utility prints the
untranslated rule prefixed with the # sign, for example:
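A hypothetical example of such output; the CHECKSUM target is used here only as an illustration of an extension without a translation:
# iptables-translate -A INPUT -j CHECKSUM --checksum-fill
nft # -A INPUT -j CHECKSUM --checksum-fill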
Additional resources
iptables-translate --help
The nft command does not pre-create tables and chains. They exist only if a user created them
manually.
Add comments
Define variables
When you install the nftables package, Red Hat Enterprise Linux automatically creates *.nft scripts in
the /etc/nftables/ directory. These scripts contain commands that create tables and empty chains for
different purposes.
You can write scripts in the nftables scripting environment in the following formats:
In the same format in which the nft list ruleset command displays the rule set
In the same syntax as the nft commands, for example:
#!/usr/sbin/nft -f

# Create a table
add table inet example_table
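A minimal complete script in this syntax might look as follows (the chain and rule are assumptions added for illustration):
#!/usr/sbin/nft -f

# Remove all existing rules
flush ruleset

# Create a table, a base chain hooked to input, and a rule that accepts SSH
add table inet example_table
add chain inet example_table example_chain { type filter hook input priority 0 ; policy accept ; }
add rule inet example_table example_chain tcp dport 22 accept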
You can run an nftables script either by passing it to the nft utility or by executing the script directly.
Procedure
# nft -f /etc/nftables/<example_firewall_script>.nft
i. Ensure that the script starts with the following shebang sequence:
#!/usr/sbin/nft -f
IMPORTANT
If you omit the -f parameter, the nft utility does not read the script and
displays: Error: syntax error, unexpected newline, expecting string.
# /etc/nftables/<example_firewall_script>.nft
IMPORTANT
Even if nft executes the script successfully, incorrectly placed rules, missing parameters,
or other problems in the script can cause the firewall to behave in an unexpected way.
Additional resources
The nftables scripting environment interprets everything to the right of a # character to the end of a
line as a comment.
...
# Flush the rule set
flush ruleset
To define a variable in an nftables script, use the define keyword. You can store single values and
anonymous sets in a variable. For more complex scenarios, use sets or verdict maps.
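For example, to store a single value such as an interface name in a variable (enp1s0 here is an assumption):
define INET_DEV = enp1s0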
You can use the variable in the script by entering the $ sign followed by the variable name:
...
add rule inet example_table example_chain iifname $INET_DEV tcp dport ssh accept
...
You can use the variable in the script by writing the $ sign followed by the variable name:
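For example, a variable that holds an anonymous set of ports, and a rule that uses it (the port numbers are assumptions):
define ALLOWED_TCP_PORTS = { 22, 80, 443 }
add rule inet example_table example_chain tcp dport $ALLOWED_TCP_PORTS accept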
NOTE
Curly braces have special semantics when you use them in a rule because they indicate
that the variable represents a set.
Additional resources
In the nftables scripting environment, you can include other scripts by using the include statement.
If you specify only a file name without an absolute or relative path, nftables includes files from the
default search path, which is set to /etc on Red Hat Enterprise Linux.
include "example.nft"
To include all files ending with *.nft that are stored in the /etc/nftables/rulesets/ directory:
include "/etc/nftables/rulesets/*.nft"
Note that the include statement does not match files beginning with a dot.
Additional resources
The Include files section in the nft(8) man page on your system
The nftables systemd service loads firewall scripts that are included in the /etc/sysconfig/nftables.conf
file.
Prerequisites
Procedure
If you modified the *.nft scripts that were created in /etc/nftables/ with the installation of
the nftables package, uncomment the include statement for these scripts.
If you wrote new scripts, add include statements to include these scripts. For example, to
load the /etc/nftables/example.nft script when the nftables service starts, add:
include "/etc/nftables/_example_.nft"
2. Optional: Start the nftables service to load the firewall rules without rebooting the system:
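# systemctl start nftables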
Additional resources
Masquerading
Source NAT (SNAT)
Destination NAT (DNAT)
Redirect
IMPORTANT
You can only use real interface names in iifname and oifname parameters, and
alternative names (altname) are not supported.
Masquerading automatically uses the IP address of the outgoing interface. Therefore, use
masquerading if the outgoing interface uses a dynamic IP address.
SNAT sets the source IP address of packets to a specified IP and does not dynamically look
up the IP of the outgoing interface. Therefore, SNAT is faster than masquerading. Use SNAT
if the outgoing interface uses a fixed IP address.
Masquerading enables a router to dynamically change the source IP of packets sent through an
interface to the IP address of the interface. This means that if the interface gets a new IP assigned,
nftables automatically uses the new IP when replacing the source IP.
The following procedure replaces the source IP of packets leaving the host through the ens3 interface with the IP set on ens3.
Procedure
1. Create a table:
# nft add table nat
2. Add the prerouting and postrouting chains to the table:
# nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
# nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
IMPORTANT
Even if you do not add a rule to the prerouting chain, the nftables framework
requires this chain to match incoming packet replies.
Note that you must pass the -- option to the nft command to prevent the shell from interpreting
the negative priority value as an option of the nft command.
3. Add a rule to the postrouting chain that matches outgoing packets on the ens3 interface:
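For example, assuming the ens3 interface from this procedure:
# nft add rule nat postrouting oifname ens3 masquerade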
On a router, Source NAT (SNAT) enables you to change the IP of packets sent through an interface to a
specific IP address. The router then replaces the source IP of outgoing packets.
Procedure
1. Create a table:
# nft add table nat
2. Add the prerouting and postrouting chains to the table:
# nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
# nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
IMPORTANT
Even if you do not add a rule to the prerouting chain, the nftables framework
requires this chain to match incoming packet replies.
Note that you must pass the -- option to the nft command to prevent the shell from interpreting
the negative priority value as an option of the nft command.
3. Add a rule to the postrouting chain that replaces the source IP of outgoing packets through
ens3 with 192.0.2.1:
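For example:
# nft add rule nat postrouting oifname ens3 snat to 192.0.2.1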
Additional resources
Destination NAT (DNAT) enables you to redirect traffic on a router to a host that is not directly
accessible from the internet.
For example, with DNAT the router redirects incoming traffic sent to port 80 and 443 to a web server
with the IP address 192.0.2.1.
Procedure
1. Create a table:
# nft add table nat
2. Add the prerouting and postrouting chains to the table:
# nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
# nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
IMPORTANT
Even if you do not add a rule to the postrouting chain, the nftables framework
requires this chain to match outgoing packet replies.
Note that you must pass the -- option to the nft command to prevent the shell from interpreting
the negative priority value as an option of the nft command.
3. Add a rule to the prerouting chain that redirects incoming traffic to port 80 and 443 on the
ens3 interface of the router to the web server with the IP address 192.0.2.1:
# nft add rule nat prerouting iifname ens3 tcp dport { 80, 443 } dnat to 192.0.2.1
4. Depending on your environment, add either a SNAT or masquerading rule to change the source
address for packets returning from the web server to the sender:
a. If the ens3 interface uses a dynamic IP address, add a masquerading rule:
# nft add rule nat postrouting oifname ens3 masquerade
b. If the ens3 interface uses a static IP address, add a SNAT rule. For example, if the ens3
uses the 198.51.100.1 IP address:
# nft add rule nat postrouting oifname ens3 snat to 198.51.100.1
Additional resources
NAT types
The redirect feature is a special case of destination network address translation (DNAT) that redirects
packets to the local machine depending on the chain hook.
For example, you can redirect incoming and forwarded traffic sent to port 22 of the local host to port
2222.
Procedure
1. Create a table:
# nft add table nat
2. Add the prerouting chain to the table:
# nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
Note that you must pass the -- option to the nft command to prevent the shell from interpreting
the negative priority value as an option of the nft command.
3. Add a rule to the prerouting chain that redirects incoming traffic on port 22 to port 2222:
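For example:
# nft add rule nat prerouting tcp dport 22 redirect to :2222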
Additional resources
NAT types
The nftables utility uses the netfilter framework to provide network address translation (NAT) for
network traffic and provides the fastpath feature-based flowtable mechanism to accelerate packet
forwarding.
The flowtable mechanism avoids revisiting the routing table by bypassing the classic packet processing path.
Procedure
2. Add an example-flowtable flowtable with ingress hook and filter as a priority type:
This command adds a chain of filter type with the forward hook and filter priority.
4. Add a rule with established connection tracking state to offload example-flowtable flow:
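A sketch of the commands that these steps describe, assuming a table named example-table and the enp1s0 device (the device name is an assumption):
# nft add table inet example-table
# nft add flowtable inet example-table example-flowtable { hook ingress priority filter \; devices = { enp1s0 } \; }
# nft add chain inet example-table example-forwardchain { type filter hook forward priority filter \; }
# nft add rule inet example-table example-forwardchain ct state established flow add @example-flowtable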
Verification
chain example-forwardchain {
type filter hook forward priority filter; policy accept;
ct state established flow add @example-flowtable
}
}
Additional resources
An anonymous set contains comma-separated values enclosed in curly brackets, such as { 22, 80, 443 },
that you use directly in a rule. You can use anonymous sets also for IP addresses and any other match
criteria.
The drawback of anonymous sets is that if you want to change the set, you must replace the rule. For a
dynamic solution, use named sets as described in Using named sets in nftables .
Prerequisites
The example_chain chain and the example_table table in the inet family exist.
Procedure
1. For example, to add a rule to example_chain in example_table that allows incoming traffic to
port 22, 80, and 443:
# nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept
The nftables framework supports mutable named sets. A named set is a list or range of elements that
you can use in multiple rules within a table. Another benefit over anonymous sets is that you can update
a named set without replacing the rules that use the set.
When you create a named set, you must specify the type of elements the set contains. You can set the
following types:
ipv4_addr for a set that contains IPv4 addresses or ranges, such as 192.0.2.1 or 192.0.2.0/24.
ipv6_addr for a set that contains IPv6 addresses or ranges, such as 2001:db8:1::1 or
2001:db8:1::1/64.
ether_addr for a set that contains a list of media access control (MAC) addresses, such as
52:54:00:6b:66:42.
inet_proto for a set that contains a list of internet protocol types, such as tcp.
inet_service for a set that contains a list of internet services, such as ssh.
mark for a set that contains a list of packet marks. Packet marks can be any positive 32-bit
integer value (0 to 2147483647).
Prerequisites
Procedure
1. Create an empty set. The following examples create a set for IPv4 addresses:
# nft add set inet example_table example_set { type ipv4_addr \; flags interval \; }
IMPORTANT
To prevent the shell from interpreting the semicolons as the end of the
command, you must escape the semicolons with a backslash.
2. Optional: Create rules that use the set. For example, the following command adds a rule to the
example_chain in the example_table that will drop all packets from IPv4 addresses in
example_set.
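A sketch of the commands that this step and the following one describe, using the names from this procedure (the address range is an assumption):
# nft add rule inet example_table example_chain ip saddr @example_set drop
3. Add elements to the set. For example:
# nft add element inet example_table example_set { 192.0.2.0/24 }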
When you specify an IP address range, you can alternatively use the Classless Inter-Domain
Routing (CIDR) notation, such as 192.0.2.0/24 in the above example.
An anonymous map is a { match_criteria : action } statement that you use directly in a rule. The
statement can contain multiple comma-separated mappings.
The drawback of an anonymous map is that if you want to change the map, you must replace the rule.
For a dynamic solution, use named maps as described in Using named maps in nftables .
For example, you can use an anonymous map to route both TCP and UDP packets of the IPv4 and IPv6
protocol to different chains to count incoming TCP and UDP packets separately.
Procedure
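The first steps of this procedure create the table and the per-protocol counter chains that the rule below jumps to; a sketch, assuming the names used in this example:
# nft add table inet example_table
# nft add chain inet example_table tcp_packets
# nft add chain inet example_table udp_packets
# nft add rule inet example_table tcp_packets counter
# nft add rule inet example_table udp_packets counter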
6. Create a chain for incoming traffic. For example, to create a chain named incoming_traffic in
example_table that filters incoming traffic:
# nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \;
}
# nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump
tcp_packets, udp : jump udp_packets }
The anonymous map distinguishes the packets and sends them to the different counter chains
based on their protocol.
chain udp_packets {
counter packets 10 bytes 1559
}
chain incoming_traffic {
type filter hook input priority filter; policy accept;
ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }
}
}
The counters in the tcp_packets and udp_packets chain display both the number of received
packets and bytes.
The nftables framework supports named maps. You can use these maps in multiple rules within a table.
Another benefit over anonymous maps is that you can update a named map without replacing the rules
that use it.
When you create a named map, you must specify the type of elements:
ipv4_addr for a map whose match part contains an IPv4 address, such as 192.0.2.1.
ipv6_addr for a map whose match part contains an IPv6 address, such as 2001:db8:1::1.
ether_addr for a map whose match part contains a media access control (MAC) address, such
as 52:54:00:6b:66:42.
inet_proto for a map whose match part contains an internet protocol type, such as tcp.
inet_service for a map whose match part contains an internet service name or port number, such
as ssh or 22.
mark for a map whose match part contains a packet mark. A packet mark can be any positive
32-bit integer value (0 to 2147483647).
counter for a map whose match part contains a counter value. The counter value can be any
positive 64-bit integer value.
quota for a map whose match part contains a quota value. The quota value can be any positive
64-bit integer value.
For example, you can allow or drop incoming packets based on their source IP address. Using a named
map, you require only a single rule to configure this scenario while the IP addresses and actions are
dynamically stored in the map.
Procedure
1. Create a table. For example, to create a table named example_table that processes IPv4
packets:
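A sketch of the command this step describes:
# nft add table ip example_table
2. Create a chain. For example, to create a chain named example_chain in example_table: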
# nft add chain ip example_table example_chain { type filter hook input priority 0 \; }
IMPORTANT
To prevent the shell from interpreting the semicolons as the end of the
command, you must escape the semicolons with a backslash.
3. Create an empty map. For example, to create a map for IPv4 addresses:
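A sketch of the command this step describes, creating a map from IPv4 addresses to verdicts:
# nft add map ip example_table example_map { type ipv4_addr : verdict \; }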
4. Create rules that use the map. For example, the following command adds a rule to
example_chain in example_table that applies actions to IPv4 addresses which are both
defined in example_map:
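A sketch of this rule, followed by the element additions that the next sentence refers to:
# nft add rule ip example_table example_chain ip saddr vmap @example_map
5. Add IPv4 addresses and corresponding actions to the map. For example:
# nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }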
This example defines the mappings of IPv4 addresses to actions. In combination with the rule
created above, the firewall accepts packets from 192.0.2.1 and drops packets from 192.0.2.2.
6. Optional: Enhance the map by adding another IP address and action statement:
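For example (the address is an assumption):
# nft add element ip example_table example_map { 192.0.2.3 : accept }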
chain example_chain {
type filter hook input priority filter; policy accept;
ip saddr vmap @example_map
}
}
IMPORTANT
This example is only for demonstration purposes and describes a scenario with specific
requirements.
Firewall scripts highly depend on the network infrastructure and security requirements.
Use this example to learn the concepts of nftables firewalls when you write scripts for
your own environment.
The internet interface of the router has both a static IPv4 address (203.0.113.1) and IPv6
address (2001:db8:a::1) assigned.
The clients in the internal LAN use only private IPv4 addresses from the range 10.0.0.0/24.
Consequently, traffic from the LAN to the internet requires source network address translation
(SNAT).
The administrator PCs in the internal LAN use the IP addresses 10.0.0.100 and 10.0.0.200.
The DMZ uses public IP addresses from the ranges 198.51.100.0/24 and 2001:db8:b::/56.
The web server in the DMZ uses the IP addresses 198.51.100.5 and 2001:db8:b::5.
The router acts as a caching DNS server for hosts in the LAN and DMZ.
The following are the requirements for the nftables firewall in the example network:
The PCs of the administrators must be able to access the router and every server in the DMZ
using SSH.
By default, systemd logs kernel messages, such as for dropped packets, to the journal. Additionally, you
can configure the rsyslog service to log such entries to a separate file. To ensure that the log file does
not grow infinitely, configure a rotation policy.
Prerequisites
Procedure
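A sketch of the rsyslog rule this procedure relies on, assuming the nftables rules log dropped packets with an "nft drop" prefix and that the rule is stored in a hypothetical /etc/rsyslog.d/nftables.conf file:
:msg, startswith, "nft drop" -/var/log/nftables.log
& stop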
Using this configuration, the rsyslog service logs dropped packets to the /var/log/nftables.log
file instead of /var/log/messages.
/var/log/nftables.log {
size +10M
maxage 30
sharedscripts
postrotate
/usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true
endscript
}
The maxage 30 setting defines that logrotate removes rotated logs older than 30 days during
the next rotation operation.
Additional resources
This example is an nftables firewall script that runs on a RHEL router and protects the clients in an
internal LAN and a web server in a DMZ. For details about the network and the requirements for the
firewall used in the example, see Network conditions and Security requirements to the firewall script .
WARNING
This nftables firewall script is only for demonstration purposes. Do not use it
without adapting it to your environments and security requirements.
Prerequisites
Procedure
# IPv4 access from LAN and internet to the HTTPS server in the DMZ
iifname { $LAN_DEV, $INET_DEV } oifname $DMZ_DEV ip daddr 198.51.100.5 tcp dport
443 accept
include "/etc/nftables/firewall.nft"
Verification
2. Try to perform an access that the firewall prevents. For example, try to access the router using
SSH from the DMZ:
# ssh router.example.com
ssh: connect to host router.example.com port 22: Network is unreachable
For example, if your web server does not have a public IP address, you can set a port forwarding rule on
your firewall that forwards incoming packets on port 80 and 443 on the firewall to the web server. With
this firewall rule, users on the internet can access the web server using the IP or host name of the
firewall.
You can use nftables to forward packets. For example, you can forward incoming IPv4 packets on port
8022 to port 22 on the local system.
Procedure
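The prerouting chain below is added to a nat table; assuming that table does not exist yet, create it first:
# nft add table ip nat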
# nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \; }
NOTE
Pass the -- option to the nft command to prevent the shell from interpreting the
negative priority value as an option of the nft command.
3. Add a rule to the prerouting chain that redirects incoming packets on port 8022 to the local
port 22:
# nft add rule ip nat prerouting tcp dport 8022 redirect to :22
You can use a destination network address translation (DNAT) rule to forward incoming packets on a
local port to a remote host. This enables users on the internet to access a service that runs on a host
with a private IP address.
For example, you can forward incoming IPv4 packets on the local port 443 to the same port number on
the remote system with the 192.0.2.1 IP address.
Prerequisites
You are logged in as the root user on the system that should forward the packets.
Procedure
# nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \; }
# nft add chain ip nat postrouting { type nat hook postrouting priority 100 \; }
NOTE
Pass the -- option to the nft command to prevent the shell from interpreting the
negative priority value as an option of the nft command.
3. Add a rule to the prerouting chain that redirects incoming packets on port 443 to the same port
on 192.0.2.1:
# nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.1
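For the forwarded packets to reach the remote host and for the replies to return, the router typically also needs packet forwarding enabled and a source NAT or masquerading rule for the forwarded traffic. A sketch (the sysctl file name is an assumption):
# echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/95-IPv4-forwarding.conf
# sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf
# nft add rule ip nat postrouting ip daddr 192.0.2.1 masquerade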
By using the ct count parameter of the nft utility, you can limit the number of simultaneous connections
per IP address. For example, you can use this feature to configure that each source IP address can only
establish two parallel SSH connections to a host.
Procedure
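The chain and set below belong to an inet table named filter; assuming it does not already exist, create it first:
# nft add table inet filter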
# nft add chain inet filter input { type filter hook input priority 0 \; }
# nft add set inet filter limit-ssh { type ipv4_addr\; flags dynamic \;}
4. Add a rule to the input chain that allows only two simultaneous incoming connections to the
SSH port (22) from an IPv4 address and rejects all further connections from the same IP:
# nft add rule inet filter input tcp dport ssh ct state new add @limit-ssh { ip saddr ct
count over 2 } counter reject
Verification
1. Establish more than two new simultaneous SSH connections from the same IP address to the
host. Nftables refuses connections to the SSH port if two connections are already established.
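2. Display the contents of the set created above, for example:
# nft list set inet filter limit-ssh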
The elements entry displays addresses that currently match the rule. In this example, elements
lists IP addresses that have active connections to the SSH port. Note that the output does not
display the number of active connections or if connections were rejected.
14.6.9.2. Blocking IP addresses that attempt more than ten new incoming TCP connections
within one minute
You can temporarily block hosts that are establishing more than ten IPv4 TCP connections within one
minute.
Procedure
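The chain below is added to an ip table named filter; assuming it does not already exist, create it first:
# nft add table ip filter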
# nft add chain ip filter input { type filter hook input priority 0 \; }
3. Add a rule that drops all packets from source addresses that attempt to establish more than ten
TCP connections within one minute:
# nft add rule ip filter input ip protocol tcp ct state new, untracked meter ratemeter { ip
saddr timeout 5m limit rate over 10/minute } drop
The timeout 5m parameter defines that nftables automatically removes entries after five
minutes to prevent that the meter fills up with stale entries.
Verification
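For example, display the entries that the meter currently tracks (a sketch using the names from this procedure):
# nft list meter ip filter ratemeter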
For more information about a procedure that adds a counter to an existing rule, see Adding a
counter to an existing rule in Configuring and managing networking
Prerequisites
Procedure
1. Add a new rule with the counter parameter to the chain. The following example adds a rule with
a counter that allows TCP traffic on port 22 and counts the packets and traffic that match this
rule:
# nft add rule inet example_table example_chain tcp dport 22 counter accept
For more information about a procedure that adds a new rule with a counter, see Creating a rule
with the counter in Configuring and managing networking
Prerequisites
Procedure
2. Add the counter by replacing the rule with a version that includes the counter parameter. The following example
replaces the rule displayed in the previous step and adds a counter:
# nft replace rule inet example_table example_chain handle 4 tcp dport 22 counter
accept
The tracing feature in nftables in combination with the nft monitor command enables administrators to
display packets that match a rule. You can enable tracing for a rule and use it to monitor packets that
match this rule.
Prerequisites
Procedure
2. Add the tracing feature by replacing the rule with a version that includes the meta nftrace set 1 parameters. The
following example replaces the rule displayed in the previous step and enables tracing:
# nft replace rule inet example_table example_chain handle 4 tcp dport 22 meta nftrace
set 1 accept
3. Use the nft monitor command to display the tracing. The following example filters the output of
the command to display only entries that contain inet example_table example_chain:
WARNING
Depending on the number of rules with tracing enabled and the amount of
matching traffic, the nft monitor command can display a lot of output. Use
grep or other utilities to filter the output.
You can use the nft utility to back up the nftables rule set to a file.
Procedure
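In the native nft format (the file name matches the restore example later in this section):
# nft list ruleset > file.nft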
In JSON format:
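# nft -j list ruleset > file.json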
Procedure
If the file to restore is in the format produced by nft list ruleset or contains nft commands
directly:
# nft -f file.nft
# nft -j -f file.json
PART IV. DESIGN OF HARD DISK
CHAPTER 15. OVERVIEW OF AVAILABLE FILE SYSTEMS
The following sections describe the file systems that Red Hat Enterprise Linux 8 includes by default, and
recommendations on the most suitable file system for your application.
Disk or local FS (XFS): XFS is the default file system in RHEL. Red Hat recommends deploying XFS as your local file system unless there are specific reasons to do otherwise: for example, compatibility or corner cases around performance.
Network or client-and-server FS (NFS): Use NFS to share files between multiple systems on the same network.
For example, a local file system is the only choice for internal SATA or SAS disks, and is used when your
server has internal hardware RAID controllers with local drives. Local file systems are also the most
common file systems used on SAN attached storage when the device exported on the SAN is not
shared.
All local file systems are POSIX-compliant and are fully compatible with all supported Red Hat
Enterprise Linux releases. POSIX-compliant file systems provide support for a well-defined set of
system calls, such as read(), write(), and seek().
When considering a file system choice, choose a file system based on how large the file system needs to
be, what unique features it must have, and how it performs under your workload.
XFS
ext4
Reliability
Metadata journaling, which ensures file system integrity after a system crash by keeping a
record of file system operations that can be replayed when the system is restarted and the
file system remounted
Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
Allocation schemes
Extent-based allocation
Delayed allocation
Space pre-allocation
Other features
Online defragmentation
Extended attributes (xattr). This allows the system to associate several additional
name/value pairs per file.
Project or directory quotas. This allows quota restrictions over a directory tree.
Subsecond timestamps
Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a
relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also
performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS has a relatively low performance for single threaded, metadata-intensive workloads: for example, a
workload that creates or deletes large numbers of small files in a single thread.
The ext4 driver can read and write to ext2 and ext3 file systems, but the ext4 file system format is not
compatible with ext2 and ext3 drivers.
Extent-based metadata
Delayed allocation
Journal checksumming
The extent-based metadata and the delayed allocation features provide a more compact and efficient
way to track utilized space in a file system. These features improve file system performance and reduce
the space consumed by metadata. Delayed allocation allows the file system to postpone selection of the
permanent location for newly written user data until the data is flushed to disk. This enables higher
performance since it can allow for larger, more contiguous allocations, allowing the file system to make
decisions with much better information.
File system repair time using the fsck utility in ext4 is much faster than in ext2 and ext3. Some file
system repairs have demonstrated up to a six-fold increase in performance.
Running the quotacheck command on an XFS file system has no effect. The first time you turn on
quota accounting, XFS checks quotas automatically.
Certain applications cannot properly handle inode numbers larger than 2^32 on an XFS file system.
These applications might cause the failure of 32-bit stat calls with the EOVERFLOW return value.
Inode numbers exceed 2^32 under the following conditions:
If your application fails with large inode numbers, mount the XFS file system with the -o inode32
option to enforce inode numbers below 2^32. Note that using inode32 does not affect inodes that are
already allocated with 64-bit numbers.
IMPORTANT
Do not use the inode32 option unless a specific environment requires it. The inode32
option changes allocation behavior. As a consequence, the ENOSPC error might
occur if no space is available to allocate inodes in the lower disk blocks.
XFS
For large-scale deployments, use XFS, particularly when handling large files (hundreds of
megabytes) and high I/O concurrency. XFS performs optimally in environments with high bandwidth
(greater than 200MB/s) and more than 1000 IOPS. However, it consumes more CPU resources for
metadata operations compared to ext4 and does not support file system shrinking.
ext4
For smaller systems or environments with limited I/O bandwidth, ext4 might be a better fit. It
performs better in single-threaded, lower I/O workloads and environments with lower throughput
requirements. ext4 also supports offline shrinking, which can be beneficial if resizing the file system is
a requirement.
Benchmark your application’s performance on your target server and storage system to ensure the
selected file system meets your performance and scalability requirements.
Such file systems are built from one or more servers that export a set of file systems to one or more
clients. The client nodes do not have access to the underlying block storage, but rather interact with the
storage using a protocol that allows for better access control.
The most common client/server file system for RHEL customers is the NFS file system.
RHEL provides both an NFS server component to export a local file system over the network
and an NFS client to import these file systems.
RHEL also includes a CIFS client that supports the popular Microsoft SMB file servers for
Windows interoperability. The userspace Samba server provides Windows clients with a
Microsoft SMB service from a RHEL server.
Performance characteristics
Shared disk file systems do not always perform as well as local file systems running on the same
system due to the computational cost of the locking overhead. Shared disk file systems perform well
with workloads where each node writes almost exclusively to a particular set of files that are not
shared with other nodes or where a set of files is shared in an almost exclusively read-only manner
across a set of nodes. This results in a minimum of cross-node cache invalidation and can maximize
performance.
Setting up a shared disk file system is complex, and tuning an application to perform well on a shared
disk file system can be challenging.
Red Hat Enterprise Linux provides the GFS2 file system. GFS2 comes tightly integrated with
the Red Hat Enterprise Linux High Availability Add-On and the Resilient Storage Add-On.
Red Hat Enterprise Linux supports GFS2 on clusters that range in size from 2 to 16 nodes.
NFS-based network file systems are an extremely common and popular choice for
environments that provide NFS servers.
Network file systems can be deployed using very high-performance networking technologies
like Infiniband or 10 Gigabit Ethernet. This means that you should not turn to shared storage file
systems just to get raw bandwidth to your storage. If the speed of access is of prime
importance, then use NFS to export a local file system like XFS.
Shared storage file systems are not easy to set up or to maintain, so you should deploy them
only when you cannot provide your required availability with either local or network file systems.
A shared storage file system in a clustered environment helps reduce downtime by eliminating
the steps needed for unmounting and mounting that need to be done during a typical fail-over
scenario involving the relocation of a high-availability service.
Red Hat recommends that you use network file systems unless you have a specific use case for shared
storage file systems. Use shared storage file systems primarily for deployments that need to provide
high-availability services with minimum downtime and have stringent service-level requirements.
Red Hat Enterprise Linux 8 provides the Stratis volume manager as a Technology Preview.
Stratis uses XFS for the file system layer and integrates it with LVM, Device Mapper, and
other components.
Stratis was first released in Red Hat Enterprise Linux 8.0. It is conceived to fill the gap created when
Red Hat deprecated Btrfs. Stratis 1.0 is an intuitive, command line-based volume manager that can
perform significant storage management operations while hiding the complexity from the user:
Volume management
Pool creation
Snapshots
Stratis offers powerful features, but currently lacks certain capabilities of other offerings that it
might be compared to, such as Btrfs or ZFS. Most notably, it does not support CRCs with self healing.
CHAPTER 16. MOUNTING AN SMB SHARE
NOTE
In the context of SMB, you can find mentions about the Common Internet File System
(CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are
supported, and the kernel module and utilities involved in mounting SMB and CIFS shares
both use the name cifs.
Set and display Access Control Lists (ACL) in a security descriptor on SMB and CIFS shares
SMB 1
WARNING
The SMB1 protocol is deprecated due to known security issues, and is only
safe to use on a private network. The main reason that SMB1 is still
provided as a supported option is that currently it is the only SMB protocol
version that supports UNIX extensions. If you do not need to use UNIX
extensions on SMB, Red Hat strongly recommends using SMB2 or later.
SMB 2.0
SMB 2.1
SMB 3.0
SMB 3.1.1
NOTE
Depending on the protocol version, not all SMB features are implemented.
Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature.
These extensions are also supported by the cifs.ko kernel module. However, both Samba and the kernel
module support UNIX extensions only in the SMB 1 protocol.
Prerequisites
Procedure
1. Set the server min protocol parameter in the [global] section in the /etc/samba/smb.conf file
to NT1.
2. Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount
command. For example:
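A sketch of such a mount command (server, share, and user name are placeholders):
# mount -t cifs -o vers=1.0,username=<user_name> //<server_name>/<share_name> /mnt/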
By default, the kernel module uses SMB 2 or the highest later protocol version supported by the
server. Passing the -o vers=1.0 option to the mount command forces that the kernel module
uses the SMB 1 protocol that is required for using UNIX extensions.
Verification
# mount
...
//<server_name>/<share_name> on /mnt type cifs (...,unix,...)
If the unix entry is displayed in the list of mount options, UNIX extensions are enabled.
NOTE
Manually mounted shares are not mounted automatically again when you reboot the
system. To configure that Red Hat Enterprise Linux automatically mounts the share when
the system boots, see Mounting an SMB share automatically when the system boots .
Prerequisites
Procedure
Use the mount utility with the -t cifs parameter to mount an SMB share:
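For example (server, share, and user name are placeholders):
# mount -t cifs -o username=<user_name> //<server_name>/<share_name> /mnt/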
In the -o parameter, you can specify options that are used to mount the share. For details, see
the OPTIONS section in the mount.cifs(8) man page and Frequently used mount options .
Verification
# ls -l /mnt/
total 4
drwxr-xr-x. 2 root root 8748 Dec 4 16:27 test.txt
drwxr-xr-x. 17 root root 4096 Dec 4 07:43 Demo-Directory
Prerequisites
Procedure
1. Add an entry for the share to the /etc/fstab file. For example:
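A sketch of such an entry, assuming the /root/smb.cred credentials file described later in this chapter:
//<server_name>/<share_name> /mnt cifs credentials=/root/smb.cred 0 0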
IMPORTANT
To enable the system to mount a share automatically, you must store the user
name, password, and domain name in a credentials file. For details, see Creating
a credentials file to authenticate to an SMB share
In the fourth field of the row in the /etc/fstab, specify mount options, such as the path to the
credentials file. For details, see the OPTIONS section in the mount.cifs(8) man page and
Frequently used mount options .
Verification
# mount /mnt/
Prerequisites
Procedure
1. Create a file, such as /root/smb.cred, and specify the user name, password, and domain name
in that file:
username=user_name
password=password
domain=domain_name
2. Set the permissions to only allow the owner to access the file:
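For example:
# chmod 600 /root/smb.cred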
You can now pass the credentials=file_name mount option to the mount utility or use it in the
/etc/fstab file to mount the share without being prompted for the user name and password.
However, in certain situations, the administrator wants to mount a share automatically when the system
boots, but users should perform actions on the share’s content using their own credentials. The
multiuser mount options lets you configure this scenario.
IMPORTANT
To use the multiuser mount option, you must additionally set the sec mount option to a
security type that supports providing credentials in a non-interactive way, such as krb5 or
the ntlmssp option with a credentials file. For details, see Accessing a share as a user .
The root user mounts the share using the multiuser option and an account that has minimal access to
the contents of the share. Regular users can then provide their user name and password to the current
session’s kernel keyring using the cifscreds utility. If the user accesses the content of the mounted
share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount
the share.
Optionally, verify if the share was successfully mounted with the multiuser option.
Prerequisites
Procedure
To mount a share automatically with the multiuser option when the system boots:
1. Create the entry for the share in the /etc/fstab file. For example:
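A sketch of such an entry (server and share are placeholders; ntlmssp and the credentials file are assumptions consistent with the surrounding text), followed by the mount step:
//<server_name>/<share_name> /mnt cifs multiuser,sec=ntlmssp,credentials=/root/smb.cred 0 0
2. Mount the share: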
# mount /mnt/
If you do not want to mount the share automatically when the system boots, mount it manually by
passing -o multiuser,sec=security_type to the mount command. For details about mounting an SMB
share manually, see Manually mounting an SMB share .
Procedure
# mount
...
//server_name/share_name on /mnt type cifs (sec=ntlmssp,multiuser,...)
If the multiuser entry is displayed in the list of mount options, the feature is enabled.
When the user performs operations in the directory that contains the mounted SMB share, the server
applies the file system permissions for this user, instead of the one initially used when the share was
mounted.
NOTE
Multiple users can perform operations using their own credentials on the mounted share
at the same time.
How the connection will be established with the server. For example, which SMB protocol
version is used when connecting to the server.
How the share will be mounted into the local file system. For example, if the system overrides
the remote file and directory permissions to enable multiple local users to access the content
on the server.
To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount
command, separate them with commas. For example, see Mounting a share with the multiuser option .
Option Description
credentials=file_name Sets the path to the credentials file. See Authenticating to an SMB share
using a credentials file.
dir_mode=mode Sets the directory mode if the server does not support CIFS UNIX extensions.
file_mode=mode Sets the file mode if the server does not support CIFS UNIX extensions.
password=password Sets the password used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
seal Enables encryption support for connections using SMB 3.0 or a later
protocol version. Therefore, use seal together with the vers mount option
set to 3.0 or later. See the example in Manually mounting an SMB share.
sec=security_mode Sets the security mode, such as ntlmsspi, to enable NTLMv2 password
hashing with packet signing enabled. For a list of supported values, see the
option’s description in the mount.cifs(8) man page on your system.
If the server does not support the ntlmv2 security mode, use sec=ntlmssp,
which is the default.
For security reasons, do not use the insecure ntlm security mode.
username=user_name Sets the user name used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
vers=SMB_protocol_version Sets the SMB protocol version used for the communication with the server.
For a complete list, see the OPTIONS section in the mount.cifs(8) man page on your system.
CHAPTER 17. OVERVIEW OF PERSISTENT NAMING ATTRIBUTES
Traditionally, non-persistent names in the form of /dev/sd(major number)(minor number) are used on
Linux to refer to storage devices. The major and minor number range and associated sd names are
allocated for each device when it is detected. This means that the association between the major and
minor number range and associated sd names can change if the order of device detection changes.
The parallelization of the system boot process detects storage devices in a different order with
each system boot.
A disk fails to power up or respond to the SCSI controller. This results in it not being detected by
the normal device probe. The disk is not accessible to the system and subsequent devices will
have their major and minor number range, including the associated sd names shifted down. For
example, if a disk normally referred to as sdb is not detected, a disk that is normally referred to
as sdc would instead appear as sdb.
A SCSI controller (host bus adapter, or HBA) fails to initialize, causing all disks connected to that
HBA to not be detected. Any disks connected to subsequently probed HBAs are assigned
different major and minor number ranges, and different associated sd names.
The order of driver initialization changes if different types of HBAs are present in the system.
This causes the disks connected to those HBAs to be detected in a different order. This might
also occur if HBAs are moved to different PCI slots on the system.
Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be
inaccessible at the time the storage devices are probed, due to a storage array or intervening
switch being powered off, for example. This might occur when a system reboots after a power
failure, if the storage array takes longer to come online than the system takes to boot. Although
some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to
WWPN mapping, this does not cause the major and minor number ranges, and the associated sd
names to be reserved; it only provides consistent SCSI target ID numbers.
These reasons make it undesirable to use the major and minor number range or the associated sd
names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong
device will be mounted and data corruption might result.
Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is
used, such as when errors are reported by a device. This is because the Linux kernel uses sd names (and
also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.
File system identifiers are tied to the file system itself, while device identifiers are linked to the physical
block device. Understanding the difference is important for proper storage management.
Label
Device identifiers
Device identifiers are tied to a block device: for example, a disk or a partition. If you rewrite the device,
such as by formatting it with the mkfs utility, the device keeps the attribute, because it is not stored in
the file system.
Partition UUID
Serial number
Recommendations
Some file systems, such as logical volumes, span multiple devices. Red Hat recommends
accessing these file systems using file system identifiers rather than device identifiers.
Their content
A unique identifier
Although udev naming attributes are persistent, in that they do not change on their own across system
reboots, some are also configurable.
/dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6
You can use the UUID to refer to the device in the /etc/fstab file using the following syntax:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
You can configure the UUID attribute when creating a file system, and you can also change it later on.
For example:
/dev/disk/by-label/Boot
You can use the label to refer to the device in the /etc/fstab file using the following syntax:
LABEL=Boot
You can configure the Label attribute when creating a file system, and you can also change it later on.
This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital
Product Data (page 0x83) or Unit Serial Number (page 0x80).
Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device
name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to
reference the data on the disk, even if the path to the device changes, and even when accessing the
device from different systems.
In addition to these persistent names provided by the system, you can also use udev rules to implement
persistent names of your own, mapped to the WWID of the storage.
/dev/disk/by-partuuid/4cd1448a-01 /dev/sda1
/dev/disk/by-partuuid/4cd1448a-02 /dev/sda2
/dev/disk/by-partuuid/4cd1448a-03 /dev/sda3
The Path attribute fails if any part of the hardware path (for example, the PCI ID, target port, or LUN
number) changes. The Path attribute is therefore unreliable. However, the Path attribute may be useful
in one of the following scenarios:
You need to identify a disk that you are planning to replace later.
If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM
Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as
/dev/mapper/3600508b400105df70000e00000ac0000.
The multipath -l command shows the mapping to the non-persistent identifiers:
Host:Channel:Target:LUN
/dev/sd name
major:minor number
[size=20G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:1 sdc 8:32 [active][undef]
\_ 6:0:1:1 sdg 8:96 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 5:0:0:1 sdb 8:16 [active][undef]
\_ 6:0:0:1 sdf 8:80 [active][undef]
DM Multipath automatically maintains the proper mapping of each WWID-based device name to its
corresponding /dev/sd name on the system. These names are persistent across path changes, and they
are consistent when accessing the device from different systems.
When the user_friendly_names feature of DM Multipath is used, the WWID is mapped to a name of the
form /dev/mapper/mpathN. By default, this mapping is maintained in the file /etc/multipath/bindings.
These mpathN names are persistent as long as that file is maintained.
IMPORTANT
If you use user_friendly_names, then additional steps are required to obtain consistent
names in a cluster.
It is possible that the device might not be accessible at the time the query is performed
because the udev mechanism might rely on the ability to query the storage device when the
udev rules are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI
or FCoE storage devices when the device is not located in the server chassis.
The kernel might send udev events at any time, causing the rules to be processed and possibly
causing the /dev/disk/by-*/ links to be removed if the device is not accessible.
There might be a delay between when the udev event is generated and when it is processed,
such as when a large number of devices are detected and the user-space udevd service takes
some amount of time to process the rules for each one. This might cause a delay between when
the kernel detects the device and when the /dev/disk/by-*/ names are available.
External programs such as blkid invoked by the rules might open the device for a brief period of
time, making the device inaccessible for other uses.
The device names managed by the udev mechanism in /dev/disk/ may change between major
releases, requiring you to update the links.
Procedure
To list the UUID and Label attributes, use the lsblk utility:
For example:
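A sketch, assuming the /dev/sda1 device:
$ lsblk --fs /dev/sda1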
To list the PARTUUID attribute, use the lsblk utility with the --output +PARTUUID option:
For example:
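Again assuming the /dev/sda1 device:
$ lsblk --output +PARTUUID /dev/sda1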
To list the WWID attribute, examine the targets of symbolic links in the /dev/disk/by-id/
directory. For example:
Example 17.6. Viewing the WWID of all storage devices on the system
$ file /dev/disk/by-id/*
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001
symbolic link to ../../sda
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1
symbolic link to ../../sda1
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part2
symbolic link to ../../sda2
/dev/disk/by-id/dm-name-rhel_rhel8-root
symbolic link to ../../dm-0
/dev/disk/by-id/dm-name-rhel_rhel8-swap
symbolic link to ../../dm-1
/dev/disk/by-id/dm-uuid-LVM-
QIWtEHtXGobe5bewlIUDivKOz5ofkgFhP0RMFsNyySVihqEl2cWWbR7MjXJolD6g
symbolic link to ../../dm-1
/dev/disk/by-id/dm-uuid-LVM-
QIWtEHtXGobe5bewlIUDivKOz5ofkgFhXqH2M45hD2H9nAf2qfWSrlRLhzfMyOKd
symbolic link to ../../dm-0
/dev/disk/by-id/lvm-pv-uuid-atlr2Y-vuMo-ueoH-CpMG-4JuH-AhEF-wu4QQm
symbolic link to ../../sda2
You can change the UUID or Label persistent naming attribute of a file system.
NOTE
Changing udev attributes happens in the background and might take a long time. The
udevadm settle command waits until the change is fully registered, which ensures that
your next command will be able to use the new attribute correctly.
Replace new-uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-
f61943f5ea50. You can generate a UUID using the uuidgen command.
Prerequisites
If you are modifying the attributes of an XFS file system, unmount it first.
Procedure
To change the UUID or Label attributes of an XFS file system, use the xfs_admin utility:
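A sketch, with placeholders for the new values and the device:
# xfs_admin -U <new-uuid> -L <new-label> <storage-device>
# udevadm settle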
To change the UUID or Label attributes of an ext4, ext3, or ext2 file system, use the tune2fs
utility:
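Similarly, with placeholders:
# tune2fs -U <new-uuid> -L <new-label> <storage-device>
# udevadm settle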
To change the UUID or Label attributes of a swap volume, use the swaplabel utility:
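A sketch, with placeholders for the new values and the swap device:
# swaplabel --uuid <new-uuid> --label <new-label> <swap-device>
# udevadm settle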
CHAPTER 18. GETTING STARTED WITH PARTITIONS
For an overview of the advantages and disadvantages to using partitions on block devices, see the
Red Hat Knowledgebase solution What are the advantages and disadvantages to using partitioning on
LUNs, either directly or with LVM in between?.
WARNING
Formatting a block device with a partition table deletes all data stored on the
device.
Procedure
# parted block-device
# (parted) print
If the device already contains partitions, they will be deleted in the following steps.
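The step that creates the new partition table is shown here as a sketch; gpt is an assumption, replace it with msdos for an MBR table:
# (parted) mklabel gpt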
352
CHAPTER 18. GETTING STARTED WITH PARTITIONS
# (parted) print
# (parted) quit
Additional resources
Procedure
1. Start the parted utility. For example, the following output lists the device /dev/sda:
# parted /dev/sda
# (parted) print
For a detailed description of the print command output, see the following:
353
Red Hat Enterprise Linux 8 System Design Guide
Additional resources
NOTE
Prerequisites
If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition
Table (GPT).
Procedure
# parted block-device
2. View the current partition table to determine if there is enough free space:
# (parted) print
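A sketch of the mkpart command that the following replacements refer to:
# (parted) mkpart part-type name fs-type start end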
Replace part-type with primary, logical, or extended. This applies only to the MBR
partition table.
Replace name with an arbitrary partition name. This is required for GPT partition tables.
Replace fs-type with xfs, ext2, ext3, ext4, fat16, fat32, hfs, hfs+, linux-swap, ntfs, or
reiserfs. The fs-type parameter is optional. Note that the parted utility does not create the
file system on the partition.
Replace start and end with the sizes that determine the starting and ending points of the
partition, counting from the beginning of the disk. You can use size suffixes, such as 512MiB,
20GiB, or 1.5TiB. The default size is in megabytes.
To create a primary partition from 1024MiB until 2048MiB on an MBR table, use:
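For example:
# (parted) mkpart primary 1024MiB 2048MiB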
4. View the partition table to confirm that the created partition is in the partition table with the
correct partition type, file system type, and size:
# (parted) print
# (parted) quit
# udevadm settle
# cat /proc/partitions
Additional resources
You can set a partition type or flag, using the fdisk utility.
Prerequisites
Procedure
# fdisk block-device
2. View the current partition table to determine the minor partition number:
You can see the current partition type in the Type column and its corresponding type ID in the
Id column.
3. Enter the partition type command and select a partition using its minor number:
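An interactive session similar to the following sketch changes the type; the partition number and the 8e (Linux LVM) type code are illustrative:
Command (m for help): t
Partition number (1,2, default 2): 2
Hex code (type L to list all codes): 8e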
Prerequisites
If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition
Table (GPT).
If you want to shrink the partition, first shrink the file system so that it is not larger than the
resized partition.
NOTE
Procedure
# parted block-device
# (parted) print
The location of the existing partition and its new ending point after resizing.
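The resize command itself takes the following form, using the placeholders described below:
# (parted) resizepart 1 2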
Replace 1 with the minor number of the partition that you are resizing.
Replace 2 with the size that determines the new ending point of the resized partition,
counting from the beginning of the disk. You can use size suffixes, such as 512MiB, 20GiB,
or 1.5TiB. The default size is in megabytes.
4. View the partition table to confirm that the resized partition is in the partition table with the
correct size:
# (parted) print
# (parted) quit
# cat /proc/partitions
7. Optional: If you extended the partition, extend the file system on it as well.
Additional resources
WARNING
Procedure
# parted block-device
Replace block-device with the path to the device where you want to remove a partition: for
example, /dev/sda.
2. View the current partition table to determine the minor number of the partition to remove:
(parted) print
(parted) rm minor-number
Replace minor-number with the minor number of the partition you want to remove.
4. Verify that you have removed the partition from the partition table:
(parted) print
(parted) quit
# cat /proc/partitions
7. Remove the partition from the /etc/fstab file, if it is present. Find the line that declares the
removed partition, and remove it from the file.
8. Regenerate mount units so that your system registers the new /etc/fstab configuration:
# systemctl daemon-reload
9. If you have deleted a swap partition or removed pieces of LVM, remove all references to the
partition from the kernel command line:
a. List active kernel options and see if any option references the removed partition:
# grubby --info=ALL
10. To register the changes in the early boot system, rebuild the initramfs file system:
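The command itself is not shown here; rebuilding the initramfs is typically done with dracut, for example:
# dracut --force --regenerate-all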
Additional resources
Reliability
Metadata journaling, which ensures file system integrity after a system crash by keeping a
record of file system operations that can be replayed when the system is restarted and the
file system remounted
Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
Allocation schemes
Extent-based allocation
Delayed allocation
Space pre-allocation
Other features
Online defragmentation
Extended attributes (xattr). This allows the system to associate several additional
name/value pairs per file.
Project or directory quotas. This allows quota restrictions over a directory tree.
Subsecond timestamps
Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a
relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also
performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS has a relatively low performance for single threaded, metadata-intensive workloads: for example, a
workload that creates or deletes large numbers of small files in a single thread.
On Linux, UNIX, and similar operating systems, file systems on different partitions and removable
devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount
point) in the directory tree, and then detached again. While a file system is mounted on a directory, the
original content of the directory is not accessible.
Note that Linux does not prevent you from mounting a file system to a directory with a file system
already attached to it.
When you mount a file system using the mount command without all required information, that is
without the device name, the target directory, or the file system type, the mount utility reads the
content of the /etc/fstab file to check if the given file system is listed there. The /etc/fstab file contains
a list of device names and the directories in which the selected file systems are set to be mounted as
well as the file system type and mount options. Therefore, when mounting a file system that is specified
in /etc/fstab, the following command syntax is sufficient:
# mount directory
# mount device
Additional resources
Procedure
$ findmnt
To limit the listed file systems only to a certain file system type, add the --types option:
For example:
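A possible invocation, using xfs as the illustrative file system type:
$ findmnt --types xfs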
Additional resources
Prerequisites
Verify that no file system is already mounted on your chosen mount point:
$ findmnt mount-point
Procedure
2. If mount cannot recognize the file system type automatically, specify it using the --types
option:
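A possible invocation, with an illustrative device and mount point:
# mount --types ext4 /dev/sdb1 /mnt/data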
Additional resources
Procedure
For example, to move the file system mounted in the /mnt/userdirs/ directory to the /home/
mount point:
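The command for this step is not shown; the move is done with the --move option of the mount utility:
# mount --move /mnt/userdirs /home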
$ findmnt
$ ls old-directory
$ ls new-directory
Additional resources
Procedure
1. Try unmounting the file system using either of the following commands:
By mount point:
# umount mount-point
By device:
# umount device
If the command fails with an error similar to the following, it means that the file system is in use
because a process is using resources on it:
2. If the file system is in use, use the fuser utility to determine which processes are accessing it.
For example:
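One way to do this, assuming the flash-drive mount point shown in the output below:
# fuser --mount /run/media/user/FlashDrive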
/run/media/user/FlashDrive: 18351
Afterwards, stop the processes using the file system and try unmounting it again.
NOTE
You can also unmount a file system, and the RHEL system stops using it. Unmounting
the file system enables you to delete, remove, or reformat devices.
Prerequisites
If you want to unmount a file system, ensure that the system does not use any file, service, or
application stored in the partition.
Procedure
3. In the Storage table, select a volume from which you want to delete the partition.
4. In the GPT partitions section, click the menu button, ⋮ next to the partition whose file system
you want to mount or unmount.
The following table lists the most common options of the mount utility. You can apply these mount
options using the following syntax:
Option Description
async Enables asynchronous input and output operations on the file system.
auto Enables the file system to be mounted automatically using the mount -a
command.
exec Allows the execution of binary files on the particular file system.
noauto Default behavior disables the automatic mount of the file system using the
mount -a command.
noexec Disallows the execution of binary files on the particular file system.
nouser Disallows an ordinary user (that is, other than root) to mount and unmount the file
system.
user Allows an ordinary user (that is, other than root) to mount and unmount the file
system.
CHAPTER 21. SHARING A MOUNT ON MULTIPLE MOUNT POINTS
private
This type does not receive or forward any propagation events.
When you mount another file system under either the duplicate or the original mount point, it is not
reflected in the other.
shared
This type creates an exact replica of a given mount point.
When a mount point is marked as a shared mount, any mount within the original mount point is
reflected in it, and vice versa.
slave
This type creates a limited duplicate of a given mount point.
When a mount point is marked as a slave mount, any mount within the original mount point is
reflected in it, but no mount within a slave mount is reflected in its original.
unbindable
This type prevents the given mount point from being duplicated whatsoever.
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
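The commands for this and the following steps are not shown here; a minimal sketch, assuming the /media and /mnt directories used in the verification steps below, is:
# mount --bind /media /media
# mount --make-private /media
# mount --bind /media /mnt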
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rprivate option instead of --make-private.
4. It is now possible to verify that /media and /mnt share content but none of the mounts within
/media appear in /mnt. For example, if the CD-ROM drive contains non-empty media and
the /media/cdrom/ directory exists, use:
5. It is also possible to verify that file systems mounted in the /mnt directory are not reflected
in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
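The commands are not shown here; a minimal sketch, using the /media directory referenced in the later steps:
# mount --bind /media /media
# mount --make-shared /media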
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rshared option instead of --make-shared.
To make the /media and /mnt directories share the same content:
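The duplicate itself can then be created with:
# mount --bind /media /mnt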
4. It is now possible to verify that a mount within /media also appears in /mnt. For example, if
the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use:
5. Similarly, it is possible to verify that any file system mounted in the /mnt directory is
reflected in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1
device is plugged in and the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
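The commands for this and the following steps are not shown here; a minimal sketch, assuming the /media and /mnt directories used below, first marks the original as shared, then creates the duplicate and marks it as slave:
# mount --bind /media /media
# mount --make-shared /media
# mount --bind /media /mnt
# mount --make-slave /mnt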
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rshared option instead of --make-shared.
This example shows how to get the content of the /media directory to appear in /mnt as well, but
without any mounts in the /mnt directory to be reflected in /media.
4. Verify that a mount within /media also appears in /mnt. For example, if the CD-ROM drive
contains non-empty media and the /media/cdrom/ directory exists, use:
5. Also verify that file systems mounted in the /mnt directory are not reflected in /media. For
example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and
the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
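The commands are not shown here; a minimal sketch, using /media as the illustrative mount point:
# mount --bind /media /media
# mount --make-unbindable /media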
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-runbindable option instead of --make-unbindable.
Any subsequent attempt to make a duplicate of this mount fails with the following error:
Additional resources
1. The block device identified by a persistent attribute or a path in the /dev directory.
4. Mount options for the file system, which includes the defaults option to mount the partition at
boot time with default options. The mount option field also recognizes the systemd mount unit
options in the x-systemd.option format.
NOTE
The systemd-fstab-generator dynamically converts the entries from the /etc/fstab file
to systemd mount units. systemd automatically mounts LVM volumes from /etc/fstab
during manual activation unless the systemd mount unit is masked.
The systemd service automatically generates mount units from entries in /etc/fstab.
Additional resources
Procedure
For example:
3. As root, edit the /etc/fstab file and add a line for the file system, identified by the UUID.
For example:
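An entry of the following form would be typical; the UUID, mount point, and file system type are illustrative:
UUID=1cdfbc07-1c90-4984-b5ec-f61943f5ea50 /mount/point xfs defaults 0 0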
4. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
5. Try mounting the file system to verify that the configuration works:
# mount mount-point
Additional resources
One drawback of permanent mounting using the /etc/fstab configuration is that, regardless of how
infrequently a user accesses the mounted file system, the system must dedicate resources to keep the
mounted file system in place. This might affect system performance when, for example, the system is
maintaining NFS mounts to many systems at one time.
An alternative to /etc/fstab is to use the kernel-based autofs service. It consists of the following
components:
Additional resources
All on-demand mount points must be configured in the master map. Mount point, host name, exported
directory, and options can all be specified in a set of files (or other supported network sources) rather
than configuring them manually for each host.
The master map file lists mount points controlled by autofs, and their corresponding configuration files
or network sources known as automount maps. The format of the master map is as follows:
mount-point
The autofs mount point; for example, /mnt/data.
map-file
The map source file, which contains a list of mount points and the file system location from which
those mount points should be mounted.
options
If supplied, these apply to all entries in the given map, if they do not themselves have options
specified.
/mnt/data /etc/auto.data
Map files
Map files configure the properties of individual on-demand mount points.
The automounter creates the directories if they do not exist. If the directories existed before the
automounter started, the automounter will not remove them when it exits. If a timeout is specified,
the directory is automatically unmounted if the directory is not accessed for the timeout period.
The general format of maps is similar to the master map. However, the options field appears between
the mount point and the location instead of at the end of the entry as in the master map:
mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or
the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-
point) can be followed by a space separated list of offset directories (subdirectory names each
beginning with /) making them what is known as a multi-mount entry.
options
When supplied, these options are appended to the master map entry options, if any, or used instead
of the master map options if the configuration entry append_options is set to no.
location
This refers to the file system location such as a local file system path (preceded with the Sun map
format escape character : for map names beginning with /), an NFS file system or other valid file
system location.
The first column in the map file indicates the autofs mount point: sales and payroll from the server
called personnel. The second column indicates the options for the autofs mount. The third column
indicates the source of the mount.
Following the given configuration, the autofs mount points will be /home/payroll and /home/sales.
The -fstype= option is often omitted and is not needed if the file system is NFS, including mounts for
NFSv4 if the system default is NFSv4 for NFS mounts.
Using the given configuration, if a process requires access to an autofs unmounted directory such as
/home/payroll/2006/July.sxc, the autofs service automatically mounts the directory.
However, Red Hat recommends using the simpler autofs format described in the previous sections.
Additional resources
/usr/share/doc/autofs/README.amd-maps file
Prerequisites
Procedure
1. Create a map file for the on-demand mount point, located at /etc/auto.identifier. Replace
identifier with a name that identifies the mount point.
2. In the map file, enter the mount point, options, and location fields as described in The autofs
configuration files section.
3. Register the map file in the master map file, as described in The autofs configuration files
section.
4. Allow the service to re-read the configuration, so it can manage the newly configured autofs
mount:
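This is typically done by reloading the autofs service:
# systemctl reload autofs.service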
# ls automounted-directory
Prerequisites
Procedure
1. Specify the mount point and location of the map file by editing the /etc/auto.master file on a
server on which you need to mount user home directories. To do so, add the following line into
the /etc/auto.master file:
/home /etc/auto.home
2. Create a map file with the name of /etc/auto.home on a server on which you need to mount
user home directories, and edit the file with the following parameters:
* -fstype=nfs,rw,sync host.example.com:/home/&
You can skip the fstype parameter, as it is nfs by default. For more information, see the autofs(5)
man page on your system.
Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following
directive:
+auto.master
/home auto.home
beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&
BROWSE_MODE="yes"
Procedure
This section describes examples of mounting home directories from a different server and of
augmenting auto.home with only selected entries.
Given the preceding conditions, let’s assume that the client system needs to override the NIS map
auto.home and mount home directories from a different server.
In this case, the client needs to use the following /etc/auto.master map:
/home /etc/auto.home
+auto.master
* host.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, the /home directory
contains the content of /etc/auto.home instead of the NIS auto.home map.
Alternatively, to augment the site-wide auto.home map with just a few entries:
1. Create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS
auto.home map. Then the /etc/auto.home file map looks similar to:
mydir someserver:/export/mydir
+auto.home
2. With these NIS auto.home map conditions, listing the content of the /home directory
outputs:
$ ls /home
This last example works as expected because autofs does not include the contents of a file map of
the same name as the one it is reading. As such, autofs moves on to the next map source in the
nsswitch configuration.
Prerequisites
LDAP client libraries must be installed on all systems configured to retrieve automounter maps
from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed
automatically as a dependency of the autofs package.
Procedure
1. To configure LDAP access, modify the /etc/openldap/ldap.conf file. Ensure that the BASE,
URI, and schema options are set appropriately for your site.
2. The most recently established schema for storing automount maps in LDAP is described by the
rfc2307bis draft. To use this schema, set it in the /etc/autofs.conf configuration file by
removing the comment characters from the schema definition. For example:
Example 23.6. Setting autofs configuration
DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"
3. Ensure that all other schema entries are commented in the configuration. The automountKey
attribute of the rfc2307bis schema replaces the cn attribute of the rfc2307 schema. Following
is an example of an LDAP Data Interchange Format (LDIF) configuration:
Example 23.7. LDIF Configuration
# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master
# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home
# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&
Additional resources
Procedure
1. Add desired fstab entry as documented in Persistently mounting file systems . For example:
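An entry along these lines, reusing the device UUID shown later in this chapter (device, mount point, and type are illustrative):
/dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688 /mount/point xfs defaults 0 0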
2. Add x-systemd.automount to the options field of entry created in the previous step.
3. Load newly created units so that your system registers the new configuration:
# systemctl daemon-reload
Verification
# ls /mount/point
Additional resources
Managing systemd
Procedure
mount-point.mount
[Mount]
What=/dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688
Where=/mount/point
Type=xfs
2. Create a unit file with the same name as the mount unit, but with extension .automount.
3. Open the file and create an [Automount] section. Set the Where= option to the mount path:
[Automount]
Where=/mount/point
[Install]
WantedBy=multi-user.target
4. Load newly created units so that your system registers the new configuration:
# systemctl daemon-reload
Verification
# ls /mount/point
Additional resources
Managing systemd
Procedure
1. Edit the /etc/autofs.conf file to specify the schema attributes that autofs searches for:
#
# Other common LDAP naming
#
map_object_class = "automountMap"
entry_object_class = "automount"
map_attribute = "automountMapName"
entry_attribute = "automountKey"
value_attribute = "automountInformation"
NOTE
You can write the attribute names in either lowercase or uppercase in the
/etc/autofs.conf file.
2. Optional: Specify the LDAP configuration. There are two ways to do this. The simplest is to let
the automount service discover the LDAP server and locations on its own:
ldap_uri = "ldap:///dc=example,dc=com"
This option requires DNS to contain SRV records for the discoverable servers.
Alternatively, explicitly set which LDAP server to use and the base DN for LDAP searches:
ldap_uri = "ldap://ipa.example.com"
search_base = "cn=location,cn=automount,dc=example,dc=com"
3. Edit the /etc/autofs_ldap_auth.conf file so that autofs allows client authentication with the IdM
LDAP server.
Set the principal to the Kerberos host principal for the IdM LDAP server,
host/FQDN@REALM. The principal name is used to connect to the IdM directory as part of
GSS client authentication.
<autofs_ldap_sasl_conf
usetls="no"
tlsrequired="no"
authrequired="yes"
authtype="GSSAPI"
clientprinc="host/[email protected]"
/>
For more information about host principal, see Using canonicalized DNS host names in IdM .
Prerequisites
Procedure
# vim /etc/sssd/sssd.conf
[sssd]
domains = ldap
services = nss,pam,autofs
3. Create a new [autofs] section. You can leave this blank, because the default settings for an
autofs service work with most infrastructures.
[nss]
[pam]
[sudo]
[autofs]
[ssh]
[pac]
For more information, see the sssd.conf man page on your system.
4. Optional: Set a search base for the autofs entries. By default, this is the LDAP search base, but
a subtree can be specified in the ldap_autofs_search_base parameter.
[domain/EXAMPLE]
ldap_search_base = "dc=example,dc=com"
ldap_autofs_search_base = "ou=automount,dc=example,dc=com"
6. Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount
configuration:
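The relevant line should look similar to the following, with sss listed as a source:
automount: sss files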
8. Test the configuration by listing a user’s /home directory, assuming there is a master map entry
for /home:
# ls /home/userName
If this does not mount the remote file system, check the /var/log/messages file for errors. If
necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the logging
parameter to debug.
CHAPTER 25. SETTING READ-ONLY PERMISSIONS FOR THE ROOT FILE SYSTEM
The default set of such files and directories is read from the /etc/rwtab file. Note that the readonly-root
package is required to have this file present in your system.
dirs /var/cache/man
dirs /var/gdm
<content truncated>
empty /tmp
empty /var/cache/foomatic
<content truncated>
files /etc/adjtime
files /etc/ntp.conf
<content truncated>
copy-method path
In this syntax:
Replace copy-method with one of the keywords specifying how the file or directory is copied to
tmpfs.
The /etc/rwtab file recognizes the following ways in which a file or directory can be copied to tmpfs:
empty
An empty path is copied to tmpfs. For example:
empty /tmp
dirs
A directory tree is copied to tmpfs, empty. For example:
dirs /var/run
files
files /etc/resolv.conf
Procedure
1. In the /etc/sysconfig/readonly-root file, set the READONLY option to yes to mount the file
systems as read-only:
READONLY=yes
5. If you need to add files and directories to be mounted with write permissions in the tmpfs file
system, create a text file in the /etc/rwtab.d/ directory and put the configuration there.
For example, to mount the /etc/example/file file with write permissions, add this line to the
/etc/rwtab.d/example file:
files /etc/example/file
IMPORTANT
Changes made to files and directories in tmpfs do not persist across boots.
Troubleshooting
If you mount the root file system with read-only permissions by mistake, you can remount it with
read-and-write permissions again using the following command:
# mount -o remount,rw /
CHAPTER 26. MANAGING STORAGE DEVICES
IMPORTANT
Stratis is a Technology Preview feature only. Technology Preview features are not
supported with Red Hat production service level agreements (SLAs) and might not be
functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process. For more
information about the support scope of Red Hat Technology Preview features, see
https://access.redhat.com/support/offerings/techpreview.
Stratis is a local storage management system that supports advanced storage features. The central
concept of Stratis is a storage pool. This pool is created from one or more local disks or partitions, and
file systems are created from the pool.
Thin provisioning
Tiering
Encryption
Additional resources
Stratis website
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Supported devices
Stratis pools have been tested to work on these types of block devices:
LUKS
MD RAID
DM Multipath
iSCSI
NVMe devices
Unsupported devices
Because Stratis contains a thin-provisioning layer, Red Hat does not recommend placing a Stratis pool
on block devices that are already thinly-provisioned.
Procedure
1. Install packages that provide the Stratis service and command-line utilities:
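A typical installation installs the daemon and the CLI, then enables the service (package and service names as shipped in RHEL 8):
# yum install stratisd stratis-cli
# systemctl enable --now stratisd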
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition
device for creating the Stratis pool.
For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z
NOTE
Procedure
1. Erase any file system, partition table, or RAID signatures that exist on each block device that
you want to use in the Stratis pool:
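A wipefs invocation of the following form does this:
# wipefs --all block-device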
where block-device is the path to the block device; for example, /dev/sdb.
2. Create the new unencrypted Stratis pool on the selected block device:
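A command of the following form creates the pool (pool name illustrative):
# stratis pool create my-pool block-device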
NOTE
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
NOTE
Procedure
2. Click Storage.
5. In the Create Stratis pool dialog box, enter a name for the Stratis pool.
6. Select the Block devices from which you want to create the Stratis pool.
7. Optional: If you want to specify the maximum size for each file system that is created in the pool,
select Manage filesystem sizes.
8. Click Create.
Verification
Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
To secure your data, you can create an encrypted Stratis pool from one or more block devices.
When you create an encrypted Stratis pool, the kernel keyring is used as the primary encryption
mechanism. After subsequent system reboots this kernel keyring is used to unlock the encrypted Stratis
pool.
When creating an encrypted Stratis pool from one or more block devices, note the following:
Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
Each Stratis pool can either have a unique key or share the same key with other pools. These
keys are stored in the kernel keyring.
The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It
is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted.
Prerequisites
Stratis v2.1.0 or later is installed. For more information, see Installing Stratis.
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
The block devices on which you are creating a Stratis pool are at least 1GB in size each.
On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition
device for creating the Stratis pool.
For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z.
Procedure
1. Erase any file system, partition table, or RAID signatures that exist on each block device that
you want to use in the Stratis pool:
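A wipefs invocation of the following form does this:
# wipefs --all block-device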
where block-device is the path to the block device; for example, /dev/sdb.
2. If you have not created a key set already, run the following command and follow the prompts to
create a key set to use for the encryption.
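A command of the following form prompts for the passphrase and stores it in the kernel keyring:
# stratis key set --capture-key key-description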
where key-description is a reference to the key that gets created in the kernel keyring.
3. Create the encrypted Stratis pool and specify the key description to use for the encryption. You
can also specify the key path using the --keyfile-path option instead of using the key-
description option.
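A command of the following form does this, using the placeholders described below:
# stratis pool create --key-desc key-description my-pool block-device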
where
key-description
References the key that exists in the kernel keyring, which you created in the previous step.
my-pool
Specifies the name of the new Stratis pool.
block-device
Specifies the path to an empty or wiped block device.
NOTE
When creating an encrypted Stratis pool from one or more block devices, note the following:
Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
Each Stratis pool can either have a unique key or share the same key with other pools. These
keys are stored in the kernel keyring.
The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It
is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted.
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
Procedure
2. Click Storage.
5. In the Create Stratis pool dialog box, enter a name for the Stratis pool.
6. Select the Block devices from which you want to create the Stratis pool.
7. Select the type of encryption. You can use a passphrase, a Tang keyserver, or both:
Passphrase:
i. Enter a passphrase.
Tang keyserver:
i. Enter the keyserver address. For more information, see Deploying a Tang server with
SELinux in enforcing mode.
8. Optional: If you want to specify the maximum size for each file system that is created in the pool,
select Manage filesystem sizes.
9. Click Create.
Verification
Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
Prerequisites
Stratis is installed.
The web console detects and installs Stratis by default. However, for manually installing Stratis,
see Installing Stratis.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool you want to rename.
4. On the Stratis pool page, click edit next to the Name field.
6. Click Rename.
If you enable overprovisioning, an API signal notifies you when your storage has been fully allocated. The
notification warns the user that when all the remaining pool space fills up, Stratis has no space left to
extend to.
Prerequisites
Procedure
To set up the pool correctly, you have two possibilities:
If set to "yes", you enable overprovisioning to the pool. This means that the sum of the
logical sizes of the Stratis filesystems, supported by the pool, can exceed the amount of
available data space.
Verification
2. Check if there is an indication of the pool overprovisioning mode flag in the stratis pool list
output. The " ~ " is a math symbol for "NOT", so ~Op means no-overprovisioning.
Additional resources
NOTE
Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove
the primary kernel keyring encryption.
Prerequisites
Stratis v2.3.0 or later is installed. For more information, see Installing Stratis.
You have created an encrypted Stratis pool, and you have the key description of the key that
was used for the encryption. For more information, see Creating an encrypted Stratis pool .
You can connect to the Tang server. For more information, see Deploying a Tang server with
SELinux in enforcing mode
Procedure
where
my-pool
Specifies the name of the encrypted Stratis pool.
tang-server
Specifies the IP address or URL of the Tang server.
Additional resources
Prerequisites
Stratis v2.3.0 or later is installed. For more information, see Installing Stratis.
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
where
my-pool
Specifies the name of the encrypted Stratis pool.
key-description
References the key that exists in the kernel keyring, which was generated when you created
the encrypted Stratis pool.
Prerequisites
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
1. Re-create the key set using the same key description that was used previously:
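This is done with the same stratis key set invocation used when the pool was created, for example:
# stratis key set --capture-key key-description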
where key-description references the key that exists in the kernel keyring, which was generated
when you created the encrypted Stratis pool.
Prerequisites
Stratis v2.3.0 or later is installed on your system. For more information, see Installing Stratis.
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
where
my-pool specifies the name of the Stratis pool you want to unbind.
Additional resources
The stopped state is recorded in the pool’s metadata. These pools do not start on the following boot,
until the pool receives a start command.
Prerequisites
You have created either an unencrypted or an encrypted Stratis pool. See Creating an
unencrypted Stratis pool
Procedure
Use the following command to start the Stratis pool. The --unlock-method option specifies the
method of unlocking the pool if it is encrypted:
Alternatively, use the following command to stop the Stratis pool. This tears down the storage
stack but leaves all metadata intact:
Verification
Use the following command to list all not previously started pools. If the UUID is specified, the
command prints detailed information about the pool corresponding to the UUID:
Prerequisites
You have created a Stratis pool. See Creating an unencrypted Stratis pool
Procedure
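The command takes the general form sketched below, using the placeholders described next; the availability of the --size option depends on the installed stratis-cli version:
# stratis filesystem create --size number-and-unit my-pool my-fs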
where
number-and-unit
Specifies the size of a file system. The specification format must follow the standard size
specification format for input, that is B, KiB, MiB, GiB, TiB or PiB.
my-pool
Specifies the name of the Stratis pool.
my-fs
Specifies an arbitrary name for the file system.
For example:
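Using illustrative values:
# stratis filesystem create --size 10GiB my-pool my-fs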
Verification
List file systems within the pool to check if the Stratis filesystem is created:
Additional resources
26.1.17. Creating a file system on a Stratis pool by using the web console
You can use the web console to create a file system on an existing Stratis pool.
Prerequisites
Procedure
2. Click Storage.
3. Click the Stratis pool on which you want to create a file system.
4. On the Stratis pool page, scroll to the Stratis filesystems section and click Create new
filesystem.
5. In the Create filesystem dialog box, enter a Name for the file system.
8. In the At boot drop-down menu, select when you want to mount your file system.
If you want to create and mount the file system, click Create and mount.
If you want to only create the file system, click Create only.
Verification
The new file system is visible on the Stratis pool page under the Stratis filesystems tab.
Prerequisites
You have created a Stratis file system. For more information, see Creating a Stratis filesystem .
Procedure
To mount the file system, use the entries that Stratis maintains in the /dev/stratis/ directory:
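For example (mount point illustrative):
# mount /dev/stratis/my-pool/my-fs mount-point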
The file system is now mounted on the mount-point directory and ready to use.
Additional resources
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
As root, edit the /etc/fstab file and add a line to set up non-root filesystems:
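An entry of the following form is typical; the x-systemd.requires option ensures that stratisd is started before the mount is attempted (names are illustrative):
/dev/stratis/my-pool/my-fs mount-point xfs defaults,x-systemd.requires=stratisd.service 0 0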
Additional resources
IMPORTANT
Stratis is a Technology Preview feature only. Technology Preview features are not
supported with Red Hat production service level agreements (SLAs) and might not be
functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process. For more
information about the support scope of Red Hat Technology Preview features, see
https://access.redhat.com/support/offerings/techpreview.
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Prerequisites
The block devices that you are adding to the Stratis pool are not in use and not mounted.
The block devices that you are adding to the Stratis pool are at least 1 GiB in size each.
Procedure
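The command is not shown here; adding a device to the data tier of a pool takes the following form (names illustrative):
# stratis pool add-data my-pool block-device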
Additional resources
26.2.3. Adding a block device to a Stratis pool by using the web console
You can use the web console to add a block device to an existing Stratis pool. You can also add caches
as a block device.
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool to which you want to add a block device.
5. In the Add block devices dialog box, select the Tier, whether you want to add a block device as
data or cache.
6. Optional: If you are adding the block device to a Stratis pool that is encrypted with a passphrase,
then you must enter the passphrase.
7. Under Block devices, select the devices you want to add to the pool.
8. Click Add.
IMPORTANT
Stratis is a Technology Preview feature only. Technology Preview features are not
supported with Red Hat production service level agreements (SLAs) and might not be
functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process. For more
information about the support scope of Red Hat Technology Preview features, see
https://access.redhat.com/support/offerings/techpreview.
Standard Linux utilities such as df report the size of the XFS file system layer on Stratis, which is 1 TiB.
This is not useful information, because the actual storage usage of Stratis is less due to thin provisioning,
and also because Stratis automatically grows the file system when the XFS layer is close to full.
IMPORTANT
Regularly monitor the amount of data written to your Stratis file systems, which is
reported as the Total Physical Used value. Make sure it does not exceed the Total Physical
Size value.
Additional resources
Prerequisites
Procedure
To display information about all block devices used for Stratis on your system:
# stratis blockdev
# stratis pool
# stratis filesystem
Additional resources
Prerequisites
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool you want to view.
The Stratis pool page displays all the information about the pool and the file systems that you
created in the pool.
IMPORTANT
Stratis is a Technology Preview feature only. Technology Preview features are not
supported with Red Hat production service level agreements (SLAs) and might not be
functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process. For more
information about the support scope of Red Hat Technology Preview features, see
https://access.redhat.com/support/offerings/techpreview.
A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than
the file system it was created from.
A file system does not have to be mounted to create a snapshot from it.
Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the
XFS log.
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
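The command is not shown here; creating a snapshot takes the following form (names illustrative):
# stratis filesystem snapshot my-pool my-fs my-fs-snapshot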
Additional resources
Prerequisites
Procedure
To access the snapshot, mount it as a regular file system from the /dev/stratis/my-pool/
directory:
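For example (mount point illustrative):
# mount /dev/stratis/my-pool/my-fs-snapshot mount-point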
Additional resources
Prerequisites
Procedure
1. Optional: Back up the current state of the file system to be able to access it later:
# umount /dev/stratis/my-pool/my-fs
# stratis filesystem destroy my-pool my-fs
3. Create a copy of the snapshot under the name of the original file system:
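Assuming the names used elsewhere in this chapter, this is another snapshot operation:
# stratis filesystem snapshot my-pool my-fs-snapshot my-fs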
4. Mount the snapshot, which is now accessible with the same name as the original file system:
The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot.
Additional resources
Prerequisites
Procedure
# umount /dev/stratis/my-pool/my-fs-snapshot
Additional resources
IMPORTANT
Stratis is a Technology Preview feature only. Technology Preview features are not
supported with Red Hat production service level agreements (SLAs) and might not be
functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process. For more
information about the support scope of Red Hat Technology Preview features, see
https://access.redhat.com/support/offerings/techpreview.
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
# umount /dev/stratis/my-pool/my-fs
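The destroy command itself is not shown here; it follows the same form used earlier in this chapter:
# stratis filesystem destroy my-pool my-fs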
Additional resources
26.5.3. Deleting a file system from a Stratis pool by using the web console
You can use the web console to delete a file system from an existing Stratis pool.
NOTE
Deleting a Stratis pool file system erases all the data it contains.
Prerequisites
Stratis is installed.
The web console detects and installs Stratis by default. However, for manually installing Stratis,
see Installing Stratis.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool from which you want to delete a file system.
4. On the Stratis pool page, scroll to the Stratis filesystems section and click the menu button ⋮
next to the file system you want to delete.
Prerequisites
Procedure
# umount /dev/stratis/my-pool/my-fs-1 \
/dev/stratis/my-pool/my-fs-2 \
/dev/stratis/my-pool/my-fs-n
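The remaining commands are not shown here; a sketch, assuming the file system and pool names above, destroys the file systems and then the pool:
# stratis filesystem destroy my-pool my-fs-1
# stratis filesystem destroy my-pool my-fs-2
# stratis filesystem destroy my-pool my-fs-n
# stratis pool destroy my-pool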
Additional resources
NOTE
Prerequisites
Procedure
2. Click Storage.
3. In the Storage table, click the menu button, ⋮, next to the Stratis pool you want to delete.
Swap space is located on hard drives, which have a slower access time than physical memory. Swap
space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions
and swap files.
In years past, the recommended amount of swap space increased linearly with the amount of RAM in the
system. However, modern systems often include hundreds of gigabytes of RAM. As a consequence,
recommended swap space is considered a function of system memory workload, not system memory.
The following recommendations are especially important on systems with low memory, such as 1 GB or
less. Failure to allocate sufficient swap space on these systems can cause issues, such as instability or
even render the installed system unbootable.
Amount of RAM in the system Recommended swap space Recommended swap space if
allowing for hibernation
For border values such as 2 GB, 8 GB, or 64 GB of system RAM, choose swap size based on your needs
or preference. If your system resources allow for it, increasing the swap space can lead to better
performance.
Note that distributing swap space over multiple storage devices also improves swap space performance,
particularly on systems with fast drives, controllers, and interfaces.
IMPORTANT
File systems and LVM2 volumes assigned as swap space should not be in use when being
modified. Any attempts to modify swap fail if a system process or the kernel is using swap
space. Use the free and cat /proc/swaps commands to verify how much and where swap
is in use.
Resizing swap space requires temporarily removing it from the system. This can be
problematic if running applications rely on the additional swap space and might run into
low-memory situations. Preferably, perform swap resizing from rescue mode, see Debug
boot options. When prompted to mount the file system, select Skip.
Prerequisites
Procedure
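The first step, creating the logical volume, is not shown; an illustrative invocation (the volume group, name, and size are examples):
# lvcreate -L 2G -n LogVol02 VolGroup00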
# mkswap /dev/VolGroup00/LogVol02
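The /etc/fstab entry that activates the volume at boot is also not shown; it would look similar to:
/dev/VolGroup00/LogVol02 none swap defaults 0 0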
4. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
# swapon -v /dev/VolGroup00/LogVol02
Verification
To test if the swap logical volume was successfully created and activated, inspect active swap
space by using the following command:
# cat /proc/swaps
total used free shared buff/cache available
Mem: 30Gi 1.2Gi 28Gi 12Mi 994Mi 28Gi
Swap: 22Gi 0B 22Gi
# free -h
total used free shared buff/cache available
Mem: 30Gi 1.2Gi 28Gi 12Mi 995Mi 28Gi
Swap: 17Gi 0B 17Gi
Prerequisites
Procedure
1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the
number of blocks. For example, a 64 MB swap file is 65536 blocks of 1024 bytes.
Replace 65536 with the value equal to the required block count.
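The file-creation step is not shown; a typical sequence creates the file and restricts its permissions:
# dd if=/dev/zero of=/swapfile bs=1024 count=65536
# chmod 0600 /swapfile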
# mkswap /swapfile
5. Edit the /etc/fstab file with the following entries to enable the swap file at boot time:
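An entry similar to the following:
/swapfile none swap defaults 0 0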
The next time the system boots, it activates the new swap file.
6. Regenerate mount units so that your system registers the new /etc/fstab configuration:
# systemctl daemon-reload
# swapon /swapfile
Verification
To test if the new swap file was successfully created and activated, inspect active swap space by
using the following command:
$ cat /proc/swaps
$ free -h
Prerequisites
Procedure
# swapoff -v /dev/VolGroup00/LogVol01
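The resize step is not shown; extending by an illustrative 2 GB:
# lvextend -L +2G /dev/VolGroup00/LogVol01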
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01
Verification
To test if the swap logical volume was successfully extended and activated, inspect active swap
space:
# cat /proc/swaps
Filename Type Size Used Priority
/dev/dm-1 partition 16322556 0 -2
/dev/dm-4 partition 7340028 0 -3
# free -h
total used free shared buff/cache available
Mem: 30Gi 1.2Gi 28Gi 12Mi 994Mi 28Gi
Swap: 22Gi 0B 22Gi
Procedure
# swapoff -v /dev/VolGroup00/LogVol01
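The resize step is not shown; reducing by an illustrative 512 MB:
# lvreduce -L -512M /dev/VolGroup00/LogVol01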
# wipefs -a /dev/VolGroup00/LogVol01
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01
Verification
To test if the swap logical volume was successfully reduced, inspect active swap space by using
the following command:
$ cat /proc/swaps
$ free -h
You can remove an LVM2 logical volume for swap. Assuming /dev/VolGroup00/LogVol02 is the swap
volume you want to remove.
Procedure
# swapoff -v /dev/VolGroup00/LogVol02
# lvremove /dev/VolGroup00/LogVol02
# systemctl daemon-reload
Verification
Test if the logical volume was successfully removed, inspect active swap space by using the
following command:
$ cat /proc/swaps
$ free -h
Procedure
# swapoff -v /swapfile
3. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
# rm /swapfile
To manage LVM and local file systems (FS) by using Ansible, you can use the storage role, which is one
of the RHEL system roles available in RHEL 8.
Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
For more information about RHEL system roles and how to apply them, see Introduction to
RHEL system roles.
26.7.1. Creating an XFS file system on a block device by using the storage RHEL
system role
The example Ansible playbook applies the storage role to create an XFS file system on a block device
using the default parameters.
NOTE
The storage role can create a file system only on an unpartitioned, whole disk or a logical
volume (LV). It cannot create the file system on a partition.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.
To create the file system on an LV, provide the LVM setup under the disks: attribute,
including the enclosing volume group. For details, see Creating or resizing a logical volume
by using the storage RHEL system role.
Do not provide the path to the LV device.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.2. Persistently mounting a file system by using the storage RHEL system role
The example Ansible playbook applies the storage role to immediately and persistently mount an XFS file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
mount_point: /mnt/data
mount_user: somebody
mount_group: somegroup
mount_mode: 0755
This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.
If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.3. Creating or resizing a logical volume by using the storage RHEL system role
If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is
resized if the size does not match what is specified in the playbook.
If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that
logical volume is not using the space in the logical volume that is being reduced.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Create logical volume
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_pools:
- name: myvg
disks:
- sda
- sdb
- sdc
volumes:
- name: mylv
size: 2G
fs_type: ext4
mount_point: /mnt/data
size: <size>
You must specify the size by using units (for example, GiB) or percentage (for example,
60%).
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify that the specified volume has been created or resized to the requested size:
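A sketch of one way to verify from the control node, assuming the inventory host name and the volume group name used in the example playbook:
$ ansible managed-node-01.example.com -m command -a 'lvs myvg'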
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.4. Enabling online block discard by using the storage RHEL system role
You can mount an XFS file system with the online block discard option to automatically discard unused
blocks.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Enable online block discard
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
mount_point: /mnt/data
mount_options: discard
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
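One way to check that the discard option is active on the managed node, assuming the mount point from the example playbook (the check itself is an illustrative sketch, not part of the original procedure):
# findmnt -o TARGET,OPTIONS /mnt/data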
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.5. Creating and mounting an Ext4 file system by using the storage RHEL system
role
The example Ansible playbook applies the storage role to create and mount an Ext4 file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext4
fs_label: label-name
mount_point: /mnt/data
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.6. Creating and mounting an Ext3 file system by using the storage RHEL system
role
The example Ansible playbook applies the storage role to create and mount an Ext3 file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: all
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext3
fs_label: label-name
mount_point: /mnt/data
mount_user: somebody
mount_group: somegroup
mount_mode: 0755
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.7. Creating a swap volume by using the storage RHEL system role
This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device by using the default parameters.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a disk device with swap
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: swap_fs
type: disk
disks:
- /dev/sdb
size: 15 GiB
fs_type: swap
The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.8. Configuring a RAID volume by using the storage RHEL system role
With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible
Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a
RAID volume to suit your requirements.
WARNING
Device names might change in certain circumstances, for example, when you add a
new disk to a system. Therefore, to prevent data loss, use persistent naming
attributes in the playbook. For more information about persistent naming attributes,
see Overview of persistent naming attributes.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Create a RAID on sdd, sde, sdf, and sdg
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_volumes:
- name: data
type: raid
disks: [sdd, sde, sdf, sdg]
raid_level: raid0
raid_chunk_size: 32 KiB
mount_point: /mnt/data
state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
26.7.9. Configuring an LVM pool with RAID by using the storage RHEL system role
With the storage system role, you can configure an LVM pool with RAID on RHEL by using Red Hat
Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to
configure an LVM pool with RAID.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Configure LVM pool with RAID
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_pools:
- name: my_pool
type: lvm
disks: [sdh, sdi]
raid_level: raid1
volumes:
- name: my_volume
size: "1 GiB"
mount_point: "/mnt/app/shared"
fs_type: xfs
state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
26.7.10. Configuring a stripe size for RAID LVM volumes by using the storage RHEL
system role
With the storage system role, you can configure a stripe size for RAID LVM volumes on RHEL by using
Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters
to configure an LVM pool with RAID.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Configure stripe size for RAID LVM volumes
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_pools:
- name: my_pool
type: lvm
disks: [sdh, sdi]
volumes:
- name: my_volume
size: "1 GiB"
mount_point: "/mnt/app/shared"
fs_type: xfs
raid_level: raid0
raid_stripe_size: "256 KiB"
state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
26.7.11. Configuring an LVM-VDO volume by using the storage RHEL system role
You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled
compression and deduplication.
NOTE
Because the storage system role uses LVM-VDO, only one volume can be created per pool.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
tasks:
- name: Create LVM-VDO volume under volume group 'myvg'
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_pools:
- name: myvg
disks:
- /dev/sdb
volumes:
- name: mylv1
compression: true
deduplication: true
vdo_pool_size: 10 GiB
size: 30 GiB
mount_point: /mnt/app/shared
vdo_pool_size: <size>
The actual size that the volume takes on the device. You can specify the size in human-
readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes.
size: <size>
The virtual size of VDO volume.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
26.7.12. Creating a LUKS2 encrypted volume by using the storage RHEL system role
You can use the storage role to create and configure a volume encrypted with LUKS by running an
Ansible playbook.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
luks_password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: Create and configure a volume encrypted with LUKS
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
fs_label: <label>
mount_point: /mnt/data
encryption: true
encryption_password: "{{ luks_password }}"
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
1. Find the luksUUID value of the encrypted volume, for example by running the following command on the managed node (the device name is illustrative):
# cryptsetup luksUUID /dev/sdb
4e4e7970-1822-470e-b55a-e91efe5d0f5c
2. Display the LUKS header of the device to verify the encryption parameters, for example:
# cryptsetup luksDump /dev/sdb
...
Data segments:
  0: crypt
        offset: 16777216 [bytes]
        length: (whole device)
        cipher: aes-xts-plain64
        sector: 512 [bytes]
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Ansible vault
26.7.13. Creating shared LVM devices using the storage RHEL system role
You can use the storage RHEL system role to create shared LVM devices if you want your multiple
systems to access the same storage at the same time.
Resource sharing
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
lvmlockd is configured on the managed node. For more information, see Configuring LVM to
share SAN disks among multiple machines.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
hosts: managed-node-01.example.com
become: true
tasks:
- name: Create shared LVM device
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_pools:
- name: vg1
disks: /dev/vdb
type: lvm
shared: true
state: present
volumes:
- name: lv1
size: 4g
mount_point: /opt/test1
storage_safe_mode: false
storage_use_partitions: true
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
CHAPTER 27. DEDUPLICATING AND COMPRESSING STORAGE
When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1
logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it
as 10 TB of logical storage.
For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical
to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage.
In either case, you can simply put a file system on top of the logical device presented by VDO and then
use it directly or as part of a distributed cloud storage architecture.
Because VDO is thinly provisioned, the file system and applications only see the logical space in use and
are not aware of the actual physical space available. Use scripting to monitor the actual available space
and generate an alert if use exceeds a threshold: for example, when the VDO volume is 80% full.
Because VDO exposes its deduplicated storage as a standard Linux block device, you can use it with
standard file systems, iSCSI and FC target drivers, or as unified storage.
NOTE
Deployment of VDO volumes on top of Ceph RADOS Block Device (RBD) is currently
supported. However, the deployment of Red Hat Ceph Storage cluster components on
top of VDO volumes is currently not supported.
KVM
You can deploy VDO on a KVM server configured with Direct Attached Storage.
File systems
You can create file systems on top of VDO and expose them to NFS or CIFS users with the NFS server
or Samba.
When creating a VDO volume on iSCSI, you can place the VDO volume above or below the iSCSI layer.
Although there are many considerations to be made, some guidelines are provided here to help you
select the method that best suits your environment.
When placing the VDO volume on the iSCSI server (target) below the iSCSI layer:
The VDO volume is transparent to the initiator, similar to other iSCSI LUNs. Hiding the thin
provisioning and space savings from the client makes the appearance of the LUN easier to
monitor and maintain.
There is decreased network traffic because there are no VDO metadata reads or writes, and
read verification for the dedupe advice does not occur across the network.
The memory and CPU resources being used on the iSCSI target can result in better
performance. For example, the ability to host an increased number of hypervisors because the
volume reduction is happening on the iSCSI target.
If the client implements encryption on the initiator and there is a VDO volume below the target,
you will not realize any space savings.
When placing the VDO volume on the iSCSI client (initiator) above the iSCSI layer:
There is a potential for lower traffic across the network in async mode when achieving high rates of space savings.
You can directly view and control the space savings and monitor usage.
If you want to encrypt the data, for example, using dm-crypt, you can implement VDO on top of
the crypt and take advantage of space efficiency.
LVM
On more feature-rich systems, you can use LVM to provide multiple logical unit numbers (LUNs) that
are all backed by the same deduplicated storage pool.
The VDO target is registered as a physical volume so that it can be managed by LVM. Multiple logical volumes (LV1 to LV4) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified block or file access to the underlying deduplicated storage pool.
Deduplicated unified storage design enables multiple file systems to collectively use the same deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot, copy-on-write, and shrink or grow features, all on top of VDO.
Encryption
Device Mapper (DM) mechanisms such as DM Crypt are compatible with VDO. Encrypting VDO volumes
helps ensure data security, and any file systems above VDO are still deduplicated.
IMPORTANT
Applying the encryption layer above VDO results in little if any data deduplication.
Encryption makes duplicate blocks different before VDO can deduplicate them.
kvdo
A kernel module that loads into the Linux Device Mapper layer provides a deduplicated, compressed,
and thinly provisioned block storage volume.
The kvdo module exposes a block device. You can access this block device directly for block storage
or present it through a Linux file system, such as XFS or ext4.
When kvdo receives a request to read a logical block of data from a VDO volume, it maps the
requested logical block to the underlying physical block and then reads and returns the requested
data.
When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the
request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these
conditions is true, kvdo updates its block map and acknowledges the request. Otherwise, VDO
processes and optimizes the data.
uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the
volume and analyzes data for duplicates. For each new piece of data, UDS quickly determines if that
piece is identical to any previously stored piece of data. If the index finds a match, the storage system
can then internally reference the existing item to avoid storing the same information more than once.
The UDS index runs inside the kernel as the uds kernel module.
Physical size
This is the same size as the underlying block device. VDO uses this storage for user data, VDO metadata, and the UDS index.
Logical Size
This is the provisioned size that the VDO volume presents to applications. It is usually larger than the
available physical size. If you do not specify the --vdoLogicalSize option, VDO provisions the logical volume at a 1:1 ratio with the physical volume. For example, if a VDO volume is put on top of a 20
GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The
remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage
to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO
volume.
VDO currently supports any logical size up to 254 times the size of the physical volume with an
absolute maximum logical size of 4PB.
The VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device.
Additional resources
For more information about how much storage VDO metadata requires on block devices of
different sizes, see Section 27.1.6.4, “Examples of VDO requirements by physical size” .
The default slab size is 2 GB to facilitate evaluating VDO on smaller test systems. A single VDO volume
can have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed
physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB.
VDO always reserves at least one entire slab for metadata, and therefore, the reserved slab cannot be
used for storing user data.
Physical volume size    Recommended slab size
10–99 GB                1 GB
100 GB – 1 TB           2 GB
2–256 TB                32 GB
The minimal disk usage for a VDO volume that uses the default settings of a 2 GB slab size and a 0.25 GB dense index is approximately 4.7 GB. This provides slightly less than 2 GB of physical data to write at 0% deduplication or compression.
Here, the minimal disk usage is the sum of the default slab size and dense index.
You can control the slab size by providing the --vdosettings 'vdo_slab_size_mb=size-in-megabytes'
option to the lvcreate command.
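For example, a minimal sketch of an LVM-VDO creation that sets a 32 GB slab size; the volume group myvg, the volume names, and the sizes are illustrative assumptions:
# lvcreate --type vdo --name vdo1 --size 100G --virtualsize 1T \
  --vdosettings 'vdo_slab_size_mb=32768' myvg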
1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache
requires a minimum of 150MB RAM.
NOTE
The minimal disk usage for a VDO volume that uses the default settings of a 2 GB slab size and a 0.25 GB dense index is approximately 4.7 GB. This provides slightly less than 2 GB of physical data to write at 0% deduplication or compression.
Here, the minimal disk usage is the sum of the default slab size and dense index.
The UDS Sparse Indexing feature is the recommended mode for VDO. It relies on the temporal
locality of data and attempts to retain only the most relevant index entries in memory. With the
sparse index, UDS can maintain a deduplication window that is ten times larger than with dense, while
using the same amount of memory.
Although the sparse index provides the greatest coverage, the dense index provides more
deduplication advice. For most workloads, given the same amount of memory, the difference in
deduplication rates between dense and sparse indexes is negligible.
Additional resources
You can configure a VDO volume to use up to 256 TB of physical storage. Only a certain part of the
physical storage is usable to store data.
VDO requires storage for two types of VDO metadata and for the UDS index. Use the following
calculations to determine the usable size of a VDO-managed volume:
The first type of VDO metadata uses approximately 1 MB for each 4 GB of physical storage plus
an additional 1 MB per slab.
The second type of VDO metadata consumes approximately 1.25 MB for each 1 GB of logical
storage, rounded up to the nearest slab.
The amount of storage required for the UDS index depends on the type of index and the
amount of RAM allocated to the index. For each 1 GB of RAM, a dense UDS index uses 17 GB of
storage, and a sparse UDS index will use 170 GB of storage.
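For example, for a hypothetical 10 TB physical device with the default 2 GB slab size, a logical size of 30 TB, and a dense index using 1 GB of RAM (all values are illustrative):
The first type of metadata: 10 TB / 4 GB ≈ 2.5 GB, plus 1 MB × 5120 slabs = 5 GB, for about 7.5 GB in total.
The second type of metadata: 1.25 MB × 30,720 GB of logical storage ≈ 37.5 GB, rounded up to the nearest slab.
The UDS index: a dense index with 1 GB of RAM uses about 17 GB of storage.
The usable size is therefore approximately 10 TB − 62 GB ≈ 9.94 TB.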
Additional resources
Place storage layers either above or below the Virtual Data Optimizer (VDO) to fit your placement requirements.
A VDO volume is a thin-provisioned block device. You can prevent running out of physical space by
placing the volume above a storage layer that you can expand at a later time. Examples of such
expandable storage are Logical Volume Manager (LVM) volumes or Multiple Device Redundant Array of Inexpensive or Independent Disks (MD RAID) arrays.
You can place thick provisioned layers above VDO. There are two aspects of thick provisioned layers
that you must consider:
Writing new data to unused logical space on a thick device. When using VDO, or other thin-
provisioned storage, the device can report that it is out of space during this kind of write.
Overwriting used logical space on a thick device with new data. When using VDO, overwriting
data can also result in a report of the device being out of space.
These limitations affect all layers above the VDO layer. If you do not monitor the VDO device, you can
unexpectedly run out of physical space on the thick-provisioned volumes above VDO.
See the following examples of supported and unsupported VDO volume configurations.
Additional resources
For more information about stacking VDO with LVM layers, see the Stacking LVM volumes
article.
The following tables provide approximate system requirements of VDO based on the physical size of
the underlying volume. Each table lists requirements appropriate to the intended deployment, such as
primary storage or backup storage.
Physical size RAM usage: UDS RAM usage: VDO Disk usage Index type
Procedure
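Before creating volumes, the VDO software must be installed. A minimal sketch, assuming the standard RHEL 8 package names vdo and kmod-kvdo:
# yum install vdo kmod-kvdo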
Prerequisites
Use expandable storage as the backing block device. For more information, see Section 27.1.6.3,
“Placement of VDO in the storage stack”.
Procedure
In all the following steps, replace vdo-name with the identifier you want to use for your VDO volume; for example, vdo1. You must use a different name and device for each instance of VDO on the system.
1. Find a persistent name for the block device where you want to create the VDO volume. For
more information about persistent names, see Chapter 17, Overview of persistent naming
attributes.
If you use a non-persistent device name, then VDO might fail to start properly in the future if
the device name changes.
# vdo create \
--name=vdo-name \
--device=block-device \
--vdoLogicalSize=logical-size
Replace block-device with the persistent name of the block device where you want to
create the VDO volume. For example, /dev/disk/by-id/scsi-
3600508b1001c264ad2af21e903ad031f.
Replace logical-size with the amount of logical storage that the VDO volume should
present:
For active VMs or container storage, use logical size that is ten times the physical size
of your block device. For example, if your block device is 1TB in size, use 10T here.
For object storage, use logical size that is three times the physical size of your block
device. For example, if your block device is 1TB in size, use 3T here.
If the physical block device is larger than 16TiB, add the --vdoSlabSize=32G option to increase the slab size on the volume to 32GiB. Using the default slab size of 2GiB on block devices larger than 16TiB results in the vdo create command failing with an error.
For example, to create a VDO volume for container storage on a 1TB block device, you might
use:
# vdo create \
--name=vdo1 \
--device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f \
--vdoLogicalSize=10T
IMPORTANT
If a failure occurs when creating the VDO volume, remove the volume to clean up.
See Removing an unsuccessfully created VDO volume for details.
3. Create an XFS file system on top of the VDO volume:
# mkfs.xfs -K /dev/mapper/vdo-name
NOTE
The -K option prevents the formatting utility from discarding unused blocks on the device, which avoids sending many discard requests to the thinly provisioned VDO volume and speeds up formatting.
4. Use the following command to wait for the system to register the new device node:
# udevadm settle
Next steps
1. Mount the file system. See Section 27.1.9, “Mounting a VDO volume” for details.
2. Enable the discard feature for the file system on your VDO device. See Section 27.1.10,
“Enabling periodic block discard” for details.
Additional resources
Prerequisites
A VDO volume has been created on your system. For instructions, see Section 27.1.8, “Creating
a VDO volume”.
Procedure
To configure the file system to mount automatically at boot, add a line to the /etc/fstab file:
If the VDO volume is located on a block device that requires network, such as iSCSI, add the
_netdev mount option.
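A minimal /etc/fstab entry sketch, assuming the volume is named vdo1, is formatted with XFS, and is mounted at /mnt/vdo (all names are illustrative):
/dev/mapper/vdo1 /mnt/vdo xfs defaults 0 0
For a volume on an iSCSI or other network-backed device, you would append _netdev to the mount options field, for example defaults,_netdev.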
Additional resources
For iSCSI and other block devices requiring network, see the systemd.mount(5) man page for
information about the _netdev mount option.
Procedure
Verification
Prerequisites
Procedure
# vdostats --human-readable
Additional resources
Prerequisites
VDO utilizes physical, available physical, and logical size in the following ways:
Physical size
This is the same size as the underlying block device. VDO uses this storage for user data, VDO metadata, and the UDS index.
Logical Size
This is the provisioned size that the VDO volume presents to applications. It is usually larger than the
available physical size. If you do not specify the --vdoLogicalSize option, VDO provisions the logical volume at a 1:1 ratio with the physical volume. For example, if a VDO volume is put on top of a 20
GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The
remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage
to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO
volume.
VDO currently supports any logical size up to 254 times the size of the physical volume with an
absolute maximum logical size of 4PB.
The VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device.
Additional resources
For more information about how much storage VDO metadata requires on block devices of
different sizes, see Section 27.1.6.4, “Examples of VDO requirements by physical size” .
VDO is a thinly provisioned block storage target. The amount of physical space that a VDO volume uses
might differ from the size of the volume that is presented to users of the storage. You can make use of
this disparity to save on storage costs.
Out-of-space conditions
Take care to avoid unexpectedly running out of storage space if the data written does not achieve the expected rate of optimization.
Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual
storage), it becomes possible for file systems and applications to unexpectedly run out of space. For
that reason, storage systems using VDO must provide you with a way of monitoring the size of the free
pool on the VDO volume.
You can determine the size of this free pool by using the vdostats utility. The default output of this
utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example:
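A sketch of what the default output can look like; the device name and values are illustrative:
Device               1K-blocks   Used        Available   Use%   Space saving%
/dev/mapper/vdo1     211812352   105906176   105906176   50%    92%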
When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log.
NOTE
These warning messages appear only when the lvm2-monitor service is running. It is
enabled by default.
If the size of the free pool drops below a certain level, you can take action by:
Deleting data. This reclaims space whenever the deleted data is not duplicated. Deleting data
frees the space only after discards are issued.
IMPORTANT
With the discard mount option, the file systems can send these commands whenever a block is
deleted.
You can send the commands in a controlled manner by using utilities such as fstrim. These
utilities tell the file system to detect which logical blocks are unused and send the information to
the storage system in the form of a TRIM or DISCARD command.
The need to use TRIM or DISCARD on unused blocks is not unique to VDO. Any thinly provisioned
storage system has the same challenge.
This procedure describes how to obtain usage and efficiency information from a VDO volume.
Prerequisites
Procedure
# vdostats --human-readable
Additional resources
This procedure reclaims storage space on a VDO volume that hosts a file system.
VDO cannot reclaim space unless file systems communicate that blocks are free using the DISCARD,
TRIM, or UNMAP commands.
Procedure
If the file system on your VDO volume supports discard operations, enable them. See
Discarding unused blocks .
For file systems that do not use DISCARD, TRIM, or UNMAP, you can manually reclaim free
space. Store a file consisting of binary zeros to fill the free space and then delete that file.
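A minimal sketch of this manual approach, assuming the file system on the VDO volume is mounted at /mnt/vdo (the path is illustrative); dd exits with a no-space error once the free space is filled, after which removing the file returns the space:
# dd if=/dev/zero of=/mnt/vdo/zero-fill bs=1M
# rm /mnt/vdo/zero-fill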
This procedure reclaims storage space on a VDO volume that is used as a block storage target without a
file system.
Procedure
LVM supports the REQ_DISCARD command and forwards the requests to VDO at the
appropriate logical block addresses in order to free the space. If you use other volume
managers, they also need to support REQ_DISCARD, or equivalently, UNMAP for SCSI devices
or TRIM for ATA devices.
Additional resources
This procedure reclaims storage space on VDO volumes (or portions of volumes) that are provisioned to
hosts on a Fibre Channel storage fabric or an Ethernet network using SCSI target frameworks such as
LIO or SCST.
Procedure
SCSI initiators can use the UNMAP command to free space on thinly provisioned storage
targets, but the SCSI target framework needs to be configured to advertise support for this
command. This is typically done by enabling thin provisioning on these volumes.
Verify support for UNMAP on Linux-based SCSI initiators by running the following command:
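One way to run such a check, assuming the sg3_utils package is installed and /dev/sda is the device in question (both assumptions); the Block Limits VPD page contains the Maximum unmap LBA count field:
# sg_vpd --page=0xb0 /dev/sda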
In the output, verify that the Maximum unmap LBA count value is greater than zero.
During the system boot, the vdo systemd unit automatically starts all VDO devices that are configured
as activated.
The vdo systemd unit is installed and enabled by default when the vdo package is installed. This unit
automatically runs the vdo start --all command at system startup to bring up all activated VDO
volumes.
You can also create a VDO volume that does not start automatically by adding the --activate=disabled
option to the vdo create command.
1. The lower layer of LVM must start first. In most systems, starting this layer is configured automatically when the LVM package is installed.
2. The vdo systemd unit must then start to bring up the VDO volumes.
3. Finally, additional scripts must run in order to start LVM volumes or other services on top of the running VDO volumes.
The volume always writes around 1GiB for every 1GiB of the UDS index.
The volume additionally writes the amount of data equal to the block map cache size plus up to
8MiB per slab.
This procedure starts a given VDO volume or all VDO volumes on your system.
Procedure
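A minimal sketch, assuming an illustrative volume name my-vdo:
# vdo start --name=my-vdo
To start all activated VDO volumes at once:
# vdo start --all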
Additional resources
This procedure stops a given VDO volume or all VDO volumes on your system.
Procedure
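A minimal sketch, assuming an illustrative volume name my-vdo:
# vdo stop --name=my-vdo
To stop all running VDO volumes at once:
# vdo stop --all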
Additional resources
If restarted after an unclean shutdown, VDO performs a rebuild to verify the consistency of its
metadata and repairs it if necessary. See Section 27.2.5, “Recovering a VDO volume after an
unclean shutdown” for more information about the rebuild process.
During the system boot, the vdo systemd unit automatically starts all VDO devices that are configured
as activated.
The vdo systemd unit is installed and enabled by default when the vdo package is installed. This unit
automatically runs the vdo start --all command at system startup to bring up all activated VDO
volumes.
You can also create a VDO volume that does not start automatically by adding the --activate=disabled
option to the vdo create command.
1. The lower layer of LVM must start first. In most systems, starting this layer is configured automatically when the LVM package is installed.
2. The vdo systemd unit must then start to bring up the VDO volumes.
3. Finally, additional scripts must run in order to start LVM volumes or other services on top of the running VDO volumes.
The volume always writes around 1GiB for every 1GiB of the UDS index.
The volume additionally writes the amount of data equal to the block map cache size plus up to
8MiB per slab.
Procedure
Additional resources
Procedure
Additional resources
sync
When VDO is in sync mode, the layers above it assume that a write command writes data to
persistent storage. As a result, it is not necessary for the file system or application, for example, to
issue FLUSH or force unit access (FUA) requests to cause the data to become persistent at critical
points.
VDO must be set to sync mode only when the underlying storage guarantees that data is written to
persistent storage when the write command completes. That is, the storage must either have no
volatile write cache, or have a write through cache.
async
When VDO is in async mode, VDO does not guarantee that the data is written to persistent storage
when a write command is acknowledged. The file system or application must issue FLUSH or FUA
requests to ensure data persistence at critical points in each transaction.
VDO must be set to async mode if the underlying storage does not guarantee that data is written to
persistent storage when the write command completes; that is, when the storage has a volatile write
back cache.
async-unsafe
This mode has the same properties as async but it is not compliant with Atomicity, Consistency,
Isolation, Durability (ACID). Compared to async, async-unsafe has a better performance.
WARNING
Use async-unsafe mode only if your applications or file systems do not require ACID compliance, because this mode does not guarantee that acknowledged data survives a crash.
auto
The auto mode automatically selects sync or async based on the characteristics of each device.
This is the default option.
The write modes for VDO are sync and async. The following information describes the operations of these modes.
When kvdo is in sync mode and receives a write request:
1. It temporarily writes the data in the request to the allocated block and then acknowledges the request.
2. It then computes a signature of the block data and checks the VDO index for an entry with that signature.
3. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated block and does a byte-by-byte comparison of the two blocks to verify that they are identical.
4. If they are indeed identical, then kvdo updates its block map so that the logical block points to
the corresponding physical block and releases the allocated physical block.
5. If the VDO index did not contain an entry for the signature of the block being written, or the
indicated block does not actually contain the same data, kvdo updates its block map to make
the temporary physical block permanent.
When kvdo is in async mode and receives a write request:
1. It immediately acknowledges the request and allocates a physical block to hold the data.
2. It then attempts to deduplicate the block in the same manner as described above.
3. If the block turns out to be a duplicate, kvdo updates its block map and releases the allocated
block. Otherwise, it writes the data in the request to the allocated block and updates the block
map to make the physical block permanent.
This procedure lists the active write mode on a selected VDO volume.
Procedure
Use the following command to see the write mode used by a VDO volume:
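A minimal sketch, assuming an illustrative volume name my-vdo:
# vdo status --name=my-vdo
The output includes the following fields: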
The configured write policy, which is the option selected from sync, async, or auto
The write policy, which is the particular write mode that VDO applied, that is either sync or
async
This procedure determines if a block device has a volatile cache or not. You can use the information to
choose between the sync and async VDO write modes.
Procedure
1. Use either of the following methods to determine if a device has a writeback cache:
$ cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type'
write back
$ cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type'
None
Alternatively, you can find whether the above-mentioned devices have a write cache in the kernel boot log:
sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, does not support DPO or
FUA
sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA
Device sda indicates that it has a writeback cache. Use async mode for it.
Device sdb indicates that it does not have a writeback cache. Use sync mode for it.
You should configure VDO to use the sync write mode if the cache_type value is None or
write through.
This procedure sets a write mode for a VDO volume, either for an existing one or when creating a new
volume.
IMPORTANT
Using an incorrect write mode might result in data loss after a power failure, a system
crash, or any unexpected loss of contact with the disk.
Prerequisites
Determine which write mode is correct for your device. See Section 27.2.4.4, “Checking for a
volatile cache”.
Procedure
You can set a write mode either on an existing VDO volume or when creating a new volume:
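A minimal sketch of both approaches, assuming an illustrative volume name vdo1, device path, and policy values:
# vdo changeWritePolicy --writePolicy=async --name=vdo1
# vdo create --name=vdo1 --device=/dev/sdX --writePolicy=sync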
sync
When VDO is in sync mode, the layers above it assume that a write command writes data to
persistent storage. As a result, it is not necessary for the file system or application, for example, to
issue FLUSH or force unit access (FUA) requests to cause the data to become persistent at critical
points.
VDO must be set to sync mode only when the underlying storage guarantees that data is written to
persistent storage when the write command completes. That is, the storage must either have no
volatile write cache, or have a write through cache.
async
When VDO is in async mode, VDO does not guarantee that the data is written to persistent storage
when a write command is acknowledged. The file system or application must issue FLUSH or FUA
requests to ensure data persistence at critical points in each transaction.
VDO must be set to async mode if the underlying storage does not guarantee that data is written to
persistent storage when the write command completes; that is, when the storage has a volatile write
back cache.
async-unsafe
This mode has the same properties as async but it is not compliant with Atomicity, Consistency,
Isolation, Durability (ACID). Compared to async, async-unsafe has a better performance.
WARNING
Use async-unsafe mode only if your applications or file systems do not require ACID compliance, because this mode does not guarantee that acknowledged data survives a crash.
auto
The auto mode automatically selects sync or async based on the characteristics of each device.
This is the default option.
When a VDO volume restarts after an unclean shutdown, VDO performs the following actions:
It performs a rebuild to verify the consistency of its metadata.
It repairs the metadata if necessary.
VDO might rebuild different writes depending on the active write mode:
sync
If VDO was running on synchronous storage and the write policy was set to sync, all data written to the volume is fully recovered.
async
If the write policy was async, some writes might not be recovered if they were not made durable. Writes are made durable by sending VDO a FLUSH command or a write I/O tagged with the FUA (force unit access) flag. You can accomplish this from user mode by invoking a data integrity operation like fsync, fdatasync, sync, or umount.
In either mode, some writes that were either unacknowledged or not followed by a flush might also be
rebuilt.
If VDO cannot recover a VDO volume successfully, it places the volume in read-only operating mode
that persists across volume restarts. You need to fix the problem manually by forcing a rebuild.
Additional resources
For more information about automatic and manual recovery and VDO operating modes, see
Section 27.2.5.3, “VDO operating modes” .
This section describes the modes that indicate whether a VDO volume is operating normally or is
recovering from an error.
You can display the current operating mode of a VDO volume using the vdostats --verbose device
command. See the Operating mode attribute in the output.
normal
This is the default operating mode. VDO volumes are always in normal mode, unless either of the
following states forces a different mode. A newly created VDO volume starts in normal mode.
recovering
When a VDO volume does not save all of its metadata before shutting down, it automatically enters
recovering mode the next time that it starts up. The typical reasons for entering this mode are
sudden power loss or a problem from the underlying storage device.
In recovering mode, VDO is fixing the reference counts for each physical block of data on the
device. Recovery usually does not take very long. The time depends on how large the VDO volume is,
how fast the underlying storage device is, and how many other requests VDO is handling
simultaneously. The VDO volume functions normally with the following exceptions:
Initially, the amount of space available for write requests on the volume might be limited. As
more of the metadata is recovered, more free space becomes available.
Data written while the VDO volume is recovering might fail to deduplicate against data
written before the crash if that data is in a portion of the volume that has not yet been
recovered. VDO can compress data while recovering the volume. You can still read or
overwrite compressed blocks.
During an online recovery, certain statistics are unavailable: for example, blocks in use and
blocks free . These statistics become available when the rebuild is complete.
Response times for reads and writes might be slower than usual due to the ongoing recovery work.
You can safely shut down the VDO volume in recovering mode. If the recovery does not finish
before shutting down, the device enters recovering mode again the next time that it starts up.
The VDO volume automatically exits recovering mode and moves to normal mode when it has fixed
all the reference counts. No administrator action is necessary. For details, see Section 27.2.5.4,
“Recovering a VDO volume online”.
read-only
When a VDO volume encounters a fatal internal error, it enters read-only mode. Events that might
cause read-only mode include metadata corruption or the backing storage device becoming read-
only. This mode is an error state.
In read-only mode, data reads work normally but data writes always fail. The VDO volume stays in
read-only mode until an administrator fixes the problem.
You can safely shut down a VDO volume in read-only mode. The mode usually persists after the
VDO volume is restarted. In rare cases, the VDO volume is not able to record the read-only state to
the backing storage device. In these cases, VDO attempts to do a recovery instead.
Once a volume is in read-only mode, there is no guarantee that data on the volume has not been lost
or corrupted. In such cases, Red Hat recommends copying the data out of the read-only volume and
possibly restoring the volume from backup.
If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume
metadata so the volume can be brought back online and made available. The integrity of the rebuilt
data cannot be guaranteed. For details, see Section 27.2.5.5, “Forcing an offline rebuild of a VDO
volume metadata”.
This procedure performs an online recovery on a VDO volume to recover metadata after an unclean
shutdown.
Procedure
1. Start the VDO volume. The recovery runs automatically, and the volume exits recovering mode when it has fixed all the reference counts.
2. If you rely on volume statistics like blocks in use and blocks free, wait until they are available.
This procedure performs a forced offline rebuild of a VDO volume metadata to recover after an unclean
shutdown.
WARNING
Forcing a rebuild might cause data loss or corruption; the integrity of the rebuilt data cannot be guaranteed.
Prerequisites
Procedure
1. Check if the volume is in read-only mode. See the operating mode attribute in the command
output:
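A sketch of one way to check, using the vdostats utility mentioned earlier (the device path is illustrative):
# vdostats --verbose /dev/mapper/my-vdo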
If the volume is not in read-only mode, it is not necessary to force an offline rebuild. Perform an
online recovery as described in Section 27.2.5.4, “Recovering a VDO volume online” .
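2. If the volume is in read-only mode and the risk of data corruption is acceptable, a minimal sketch of forcing the rebuild (the volume name is illustrative):
# vdo stop --name=my-vdo
# vdo start --name=my-vdo --forceRebuild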
This procedure cleans up a VDO volume in an intermediate state. A volume is left in an intermediate
state if a failure occurs when creating the volume. This might happen when, for example:
Power fails
Procedure
To clean up, remove the unsuccessfully created volume with the --force option:
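A minimal sketch, assuming an illustrative volume name my-vdo:
# vdo remove --force --name=my-vdo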
The --force option is required because the administrator might have caused a conflict by
changing the system configuration since the volume was unsuccessfully created.
Without the --force option, the vdo remove command fails with the following message:
[...]
IMPORTANT
You cannot change the properties of the UDS index after creating the VDO volume.
VDO uses a block device as a backing store, which can include an aggregation of physical storage
consisting of one or more disks, partitions, or even flat files. When a storage management tool creates a
VDO volume, VDO reserves volume space for the UDS index and VDO volume. The UDS index and the
VDO volume interact together to provide deduplicated block storage.
kvdo
A kernel module that loads into the Linux Device Mapper layer provides a deduplicated, compressed,
and thinly provisioned block storage volume.
The kvdo module exposes a block device. You can access this block device directly for block storage
or present it through a Linux file system, such as XFS or ext4.
When kvdo receives a request to read a logical block of data from a VDO volume, it maps the
requested logical block to the underlying physical block and then reads and returns the requested
data.
When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the
request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these
conditions is true, kvdo updates its block map and acknowledges the request. Otherwise, VDO
processes and optimizes the data.
uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the
volume and analyzes data for duplicates. For each new piece of data, UDS quickly determines if that
piece is identical to any previously stored piece of data. If the index finds a match, the storage system
can then internally reference the existing item to avoid storing the same information more than once.
The UDS index runs inside the kernel as the uds kernel module.
VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they
are being stored.
The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly
determines if that piece is identical to any previously stored piece of data. If the index finds a match, the
storage system can then internally reference the existing item to avoid storing the same information
more than once.
The UDS index runs inside the kernel as the uds kernel module.
The deduplication window is the number of previously written blocks that the index remembers. The size
of the deduplication window is configurable. For a given window size, the index requires a specific
amount of RAM and a specific amount of disk space. The size of the window is usually determined by
specifying the size of the index memory using the --indexMem=size option. VDO then determines the
amount of disk space to use automatically.
The UDS index consists of two parts:
A compact representation used in memory that contains at most one entry per unique block.
An on-disk component that records the associated block names presented to the index as they occur, in order.
The on-disk component maintains a bounded history of data passed to UDS. UDS provides
deduplication advice for data that falls within this deduplication window, containing the names of the
most recently seen blocks. The deduplication window allows UDS to index data as efficiently as possible
while limiting the amount of memory required to index large data repositories. Despite the bounded
nature of the deduplication window, most datasets which have high levels of deduplication also exhibit a
high degree of temporal locality — in other words, most deduplication occurs among sets of blocks that
were written at about the same time. Furthermore, in general, data being written is more likely to
duplicate data that was recently written than data that was written a long time ago. Therefore, for a
given workload over a given time interval, deduplication rates will often be the same whether UDS
indexes only the most recent data or all the data.
Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in
the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced
storage costs from deduplication. Index size requirements are more closely related to the rate of data
ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion
rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among the
data written within the last month.
This section describes the recommended options to use with the UDS index, based on your intended use
case.
In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an
extremely efficient indexing data structure, requiring approximately one-tenth of a byte of RAM per
block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block.
The minimum configuration of this index uses 256 MB of RAM and approximately 25 GB of space on
disk.
To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to the vdo
create command. This configuration results in a deduplication window of 2.5 TB (meaning it will
remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate for
deduplicating storage pools that are up to 10 TB in size.
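For example, a vdo create command of the following form uses these options (the volume name my-vdo
and the device /dev/sdb are placeholders):
# vdo create --name=my-vdo --device=/dev/sdb --sparseIndex=enabled --indexMem=0.25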
The default configuration of the index, however, is to use a dense index. This index is considerably less
efficient (by a factor of 10) in RAM, but it has much lower (also by a factor of 10) minimum required disk
space, making it more convenient for evaluation in constrained environments.
In general, a deduplication window that is one quarter of the physical size of a VDO volume is a
recommended configuration. However, this is not an actual requirement. Even small deduplication
windows (compared to the amount of physical storage) can find significant amounts of duplicate data in
many use cases. Larger windows can also be used, but in most cases there is little additional benefit to
doing so.
Additional resources
Speak with your Red Hat Technical Account Manager representative for additional guidelines on
tuning this important system parameter.
Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple
copies of duplicate blocks.
Instead of writing the same data more than once, VDO detects each duplicate block and records it as a
reference to the original block. VDO maintains a mapping from logical block addresses, which are used
by the storage layer above VDO, to physical block addresses, which are used by the storage layer under
VDO.
After deduplication, multiple logical block addresses can be mapped to the same physical block address.
These are called shared blocks. Block sharing is invisible to users of the storage, who read and write
blocks as they would if VDO were not present.
When a shared block is overwritten, VDO allocates a new physical block for storing the new block data to
ensure that other logical block addresses that are mapped to the shared physical block are not modified.
This procedure restarts the associated UDS index and informs the VDO volume that deduplication is
active again.
NOTE
Procedure
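For example, a command of the following form restarts deduplication on an existing volume (the volume
name my-vdo is a placeholder):
# vdo enableDeduplication --name=my-vdo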
This procedure stops the associated UDS index and informs the VDO volume that deduplication is no
longer active.
Procedure
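For example, a command of the following form stops deduplication on an existing volume (the volume
name my-vdo is a placeholder):
# vdo disableDeduplication --name=my-vdo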
You can also disable deduplication when creating a new VDO volume by adding the --
deduplication=disabled option to the vdo create command.
In addition to block-level deduplication, VDO also provides inline block-level compression using the
HIOPS Compression™ technology.
While deduplication is the optimal solution for virtual machine environments and backup applications,
compression works very well with structured and unstructured file formats that do not typically exhibit
block-level redundancy, such as log files and databases.
Compression operates on blocks that have not been identified as duplicates. When VDO sees unique
data for the first time, it compresses the data. Subsequent copies of data that have already been stored
are deduplicated without requiring an additional compression step.
The compression feature is based on a parallelized packaging algorithm that enables it to handle many
compression operations at once. After first storing the block and responding to the requestor, a best-fit
packing algorithm finds multiple blocks that, when compressed, can fit into a single physical block. After
it is determined that a particular physical block is unlikely to hold additional compressed blocks, it is
written to storage and the uncompressed blocks are freed and reused.
By performing the compression and packaging operations after having already responded to the
requestor, using compression imposes a minimal latency penalty.
NOTE
Procedure
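For example, assuming this procedure starts compression on an existing volume, a command of the
following form applies (the volume name my-vdo is a placeholder):
# vdo enableCompression --name=my-vdo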
This procedure stops compression on a VDO volume to maximize performance or to speed processing
of data that is unlikely to compress.
Procedure
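For example, a command of the following form stops compression on an existing volume (the volume
name my-vdo is a placeholder):
# vdo disableCompression --name=my-vdo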
VDO utilizes physical, available physical, and logical size in the following ways:
Physical size
This is the same size as the underlying block device. VDO uses this storage for:
User data that is stored by applications under VDO
VDO metadata, such as the UDS index
Available physical size
This is the portion of the physical size that VDO is able to use for user data.
Logical size
This is the provisioned size that the VDO volume presents to applications. It is usually larger than the
available physical size. If the --vdoLogicalSize option is not specified, the logical volume is
provisioned at a 1:1 ratio. For example, if a VDO volume is put on top of a 20
GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The
remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage
to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO
volume.
VDO currently supports any logical size up to 254 times the size of the physical volume with an
absolute maximum logical size of 4PB.
When the VDO deduplicated storage target sits directly on top of the block device, the physical size of
the VDO volume is the same as the size of the underlying block device.
Additional resources
For more information about how much storage VDO metadata requires on block devices of
different sizes, see Section 27.1.6.4, “Examples of VDO requirements by physical size” .
VDO is a thinly provisioned block storage target. The amount of physical space that a VDO volume uses
might differ from the size of the volume that is presented to users of the storage. You can make use of
this disparity to save on storage costs.
Out-of-space conditions
Take care to avoid unexpectedly running out of storage space, if the data written does not achieve the
expected rate of optimization.
Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual
storage), it becomes possible for file systems and applications to unexpectedly run out of space. For
that reason, storage systems using VDO must provide you with a way of monitoring the size of the free
pool on the VDO volume.
You can determine the size of this free pool by using the vdostats utility. The default output of this
utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example:
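The output resembles the following (the device name and figures are illustrative):
Device               1K-blocks    Used         Available    Use%   Space saving%
/dev/mapper/my-vdo   211812352    105906176    105906176    50%    92%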
When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system
log, similar to the following:
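The warning resembles the following (the host name, volume name, and percentage are illustrative):
Oct  2 17:13:39 system lvm[13863]: Monitoring VDO pool my-vdo.
Oct  2 17:27:39 system lvm[13863]: WARNING: VDO pool my-vdo is now 80.69% full.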
NOTE
These warning messages appear only when the lvm2-monitor service is running. It is
enabled by default.
Deleting data. This reclaims space whenever the deleted data is not duplicated. Deleting data
frees the space only after discards are issued.
IMPORTANT
With the discard mount option, the file systems can send these commands whenever a block is
deleted.
You can send the commands in a controlled manner by using utilities such as fstrim. These
utilities tell the file system to detect which logical blocks are unused and send the information to
the storage system in the form of a TRIM or DISCARD command.
The need to use TRIM or DISCARD on unused blocks is not unique to VDO. Any thinly provisioned
storage system has the same challenge.
This procedure increases the logical size of a given VDO volume. It enables you to initially create VDO
volumes that have a logical size small enough to be safe from running out of space. After some period of
time, you can evaluate the actual rate of data reduction, and if sufficient, you can grow the logical size of
the VDO volume to take advantage of the space savings.
Procedure
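For example, a command of the following form grows the logical size (the volume name and the new
size are placeholders):
# vdo growLogical --name=my-vdo --vdoLogicalSize=new_logical_size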
When the logical size increases, VDO informs any devices or file systems on top of the volume
of the new size.
This procedure increases the amount of physical storage available to a VDO volume.
Prerequisites
The underlying block device has a larger capacity than the current physical size of the VDO
volume.
If it does not, you can attempt to increase the size of the device. The exact procedure depends
on the type of the device. For example, to resize an MBR or GPT partition, see the Resizing a
partition section in the Managing storage devices guide.
Procedure
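For example, a command of the following form grows the physical size to match the resized device (the
volume name my-vdo is a placeholder):
# vdo growPhysical --name=my-vdo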
This procedure removes a VDO volume and its associated UDS index.
Procedure
1. Unmount the file systems and stop the applications that are using the storage on the VDO
volume.
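2. Remove the VDO volume, for example with a command of the following form (the volume name
my-vdo is a placeholder):
# vdo remove --name=my-vdo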
This procedure cleans up a VDO volume in an intermediate state. A volume is left in an intermediate
state if a failure occurs when creating the volume. This might happen when, for example:
Power fails
Procedure
To clean up, remove the unsuccessfully created volume with the --force option:
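For example (the volume name my-vdo is a placeholder):
# vdo remove --force --name=my-vdo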
The --force option is required because the administrator might have caused a conflict by
changing the system configuration since the volume was unsuccessfully created.
Without the --force option, the vdo remove command fails with the following message:
[...]
A previous operation failed.
Recovery from the failure either failed or was interrupted.
Add '--force' to 'remove' to perform the following cleanup.
Steps to clean up VDO my-vdo:
umount -f /dev/mapper/my-vdo
udevadm settle
dmsetup remove my-vdo
vdo: ERROR - VDO volume my-vdo previous operation (create) is incomplete
Requirements
The block device underlying the file system must support physical discard operations.
Physical discard operations are supported if the value in the
/sys/block/<device>/queue/discard_max_bytes file is not zero.
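For example, to check a device named sdb:
# cat /sys/block/sdb/queue/discard_max_bytes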
Batch discard
Is triggered explicitly by the user and discards all unused blocks in the selected file systems.
Online discard
Is specified at mount time and triggers in real time without user intervention. Online discard
operations discard only blocks that are transitioning from the used to the free state.
Periodic discard
Are batch operations that are run regularly by a systemd service.
All types are supported by the XFS and ext4 file systems.
Recommendations
Red Hat recommends that you use batch or periodic discard.
Prerequisites
The block device underlying the file system supports physical discard operations.
Procedure
To perform a batch discard on a selected file system, run:
# fstrim mount-point
To perform a batch discard on all mounted file systems, run:
# fstrim --all
If you run the fstrim command on a device that does not support discard operations, or on a logical
device (LVM or MD) composed of multiple devices where any one of the devices does not support
discard operations, the following message is displayed:
# fstrim /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported
Additional resources
Procedure
When mounting a file system manually, add the -o discard mount option:
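For example (the device and mount point are placeholders):
# mount -o discard /dev/sdb1 /mnt/data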
When mounting a file system persistently, add the discard option to the mount entry in the
/etc/fstab file.
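For example, an /etc/fstab entry with the discard option can look as follows (the device, mount point,
and file system are placeholders):
/dev/sdb1    /mnt/data    ext4    defaults,discard    0 0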
Additional resources
Procedure
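For example, assuming this procedure enables periodic discard, you can enable and start the fstrim
timer provided by the util-linux package:
# systemctl enable --now fstrim.timer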
Verification
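For example, assuming the fstrim timer is used, verify that the timer is enabled and active:
# systemctl status fstrim.timer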
CHAPTER 28. AUDITING THE SYSTEM
The following list summarizes some of the information that Audit is capable of recording in its log files:
Association of an event with the identity of the user who triggered the event
All modifications to Audit configuration and attempts to access Audit log files
Include or exclude events based on user identity, subject and object labels, and other attributes
The use of the Audit system is also a requirement for a number of security-related certifications. Audit is
designed to meet or exceed the requirements of the following certifications or compliance guides:
Evaluated by National Information Assurance Partnership (NIAP) and Best Security Industries
(BSI)
Use Cases
NOTE
After a system call passes the exclude filter, it is sent through one of the aforementioned filters, which,
based on the Audit rule configuration, sends it to the Audit daemon for further processing.
The user-space Audit daemon collects the information from the kernel and creates entries in a log file.
Other Audit user-space utilities interact with the Audit daemon, the kernel Audit component, or the
Audit log files:
The auditctl Audit control utility interacts with the kernel Audit component to manage rules and
to control many settings and parameters of the event generation process.
The remaining Audit utilities take the contents of the Audit log files as input and generate
output based on user’s requirements. For example, the aureport utility generates a report of all
recorded events.
In RHEL 8, the Audit dispatcher daemon (audisp) functionality is integrated in the Audit daemon
(auditd). Configuration files of plugins for the interaction of real-time analytical programs with Audit
events are located in the /etc/audit/plugins.d/ directory by default.
log_file
The directory that holds the Audit log files (usually /var/log/audit/) should reside on a separate
mount point. This prevents other processes from consuming space in this directory and provides
accurate detection of the remaining space for the Audit daemon.
max_log_file
Specifies the maximum size of a single Audit log file, must be set to make full use of the available
space on the partition that holds the Audit log files. The max_log_file parameter specifies the
maximum file size in megabytes. The value given must be numeric.
max_log_file_action
Decides what action is taken once the limit set in max_log_file is reached, should be set to
keep_logs to prevent Audit log files from being overwritten.
space_left
Specifies the amount of free space left on the disk for which an action that is set in the
space_left_action parameter is triggered. Must be set to a number that gives the administrator
enough time to respond and free up disk space. The space_left value depends on the rate at which
the Audit log files are generated. If the value of space_left is specified as a whole number, it is
interpreted as an absolute size in megabytes (MiB). If the value is specified as a number between 1
and 99 followed by a percentage sign (for example, 5%), the Audit daemon calculates the absolute
size in megabytes based on the size of the file system containing log_file.
space_left_action
It is recommended to set the space_left_action parameter to email or exec with an appropriate
notification method.
admin_space_left
Specifies the absolute minimum amount of free space for which an action that is set in the
admin_space_left_action parameter is triggered, must be set to a value that leaves enough space
to log actions performed by the administrator. The numeric value for this parameter should be lower
than the number for space_left. You can also append a percent sign (for example, 1%) to the number
to have the audit daemon calculate the number based on the disk partition size.
admin_space_left_action
Should be set to single to put the system into single-user mode and allow the administrator to free
up some disk space.
disk_full_action
Specifies an action that is triggered when no free space is available on the partition that holds the
Audit log files, must be set to halt or single. This ensures that the system is either shut down or
operating in single-user mode when Audit can no longer log events.
disk_error_action
Specifies an action that is triggered in case an error is detected on the partition that holds the Audit
log files, must be set to syslog, single, or halt, depending on your local security policies regarding
the handling of hardware malfunctions.
flush
Should be set to incremental_async. It works in combination with the freq parameter, which
determines how many records can be sent to the disk before forcing a hard synchronization with the
hard drive. The freq parameter should be set to 100. These parameters assure that Audit event data
is synchronized with the log files on the disk while keeping good performance for bursts of activity.
The remaining configuration options should be set according to your local security policy.
You can temporarily disable auditd with the # auditctl -e 0 command and re-enable it with # auditctl -e
1.
You can perform other actions on auditd by using the service auditd <action> command, where
<action> can be one of the following:
stop
Stops auditd.
restart
Restarts auditd.
reload or force-reload
Reloads the configuration of auditd from the /etc/audit/auditd.conf file.
rotate
Rotates the log files in the /var/log/audit/ directory.
resume
Resumes logging of Audit events after it has been previously suspended, for example, when there is
not enough free space on the disk partition that holds the Audit log files.
condrestart or try-restart
Restarts auditd only if it is already running.
status
Displays the running status of auditd.
NOTE
The service command is the only way to correctly interact with the auditd daemon. You
need to use the service command so that the auid value is properly recorded. You can
use the systemctl command only for two actions: enable and status.
Add the following Audit rule to log every attempt to read or modify the /etc/ssh/sshd_config file:
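The rule takes a form similar to the following; the sshd_config key matches the key shown in the
example event below:
# auditctl -w /etc/ssh/sshd_config -p warx -k sshd_config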
If the auditd daemon is running, then running the following command, for example, creates a new event in the
Audit log file:
$ cat /etc/ssh/sshd_config
The above event consists of four records, which share the same time stamp and serial number. Records
always start with the type= keyword. Each record consists of several name=value pairs separated by
white space or a comma. A detailed analysis of the above event follows:
First Record
type=SYSCALL
The type field contains the type of the record. In this example, the SYSCALL value specifies that this
record was triggered by a system call to the kernel.
msg=audit(1364481363.243:24287):
The msg field records:
A time stamp and a unique ID of the record in the form audit(time_stamp:ID). Multiple
records can share the same time stamp and ID if they were generated as part of the same
Audit event. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on
1 January 1970.
arch=c000003e
The arch field contains information about the CPU architecture of the system. The value, c000003e,
is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use
the -i or --interpret option to automatically convert hexadecimal values into their human-readable
equivalents. The c000003e value is interpreted as x86_64.
syscall=2
The syscall field records the type of the system call that was sent to the kernel. The value, 2, can be
matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2
is the open system call. Note that the ausyscall utility allows you to convert system call numbers to
their human-readable equivalents. Use the ausyscall --dump command to display a listing of all
system calls along with their numbers. For more information, see the ausyscall(8) man page.
success=no
The success field records whether the system call recorded in that particular event succeeded or
failed. In this case, the call did not succeed.
exit=-13
The exit field contains a value that specifies the exit code returned by the system call. This value
varies depending on the system call. You can interpret the value to its human-readable equivalent with
the following command:
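For example, a command of the following form interprets the exit value (the -13 value is taken from this
example event):
# ausearch --interpret --exit -13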
Note that the previous example assumes that your Audit log contains an event that failed with exit
code -13.
euid=1000
The euid field records the effective user ID of the user who started the analyzed process.
suid=1000
The suid field records the set user ID of the user who started the analyzed process.
fsuid=1000
The fsuid field records the file system user ID of the user who started the analyzed process.
egid=1000
The egid field records the effective group ID of the user who started the analyzed process.
sgid=1000
The sgid field records the set group ID of the user who started the analyzed process.
fsgid=1000
The fsgid field records the file system group ID of the user who started the analyzed process.
tty=pts0
The tty field records the terminal from which the analyzed process was invoked.
ses=1
The ses field records the session ID of the session from which the analyzed process was invoked.
comm="cat"
The comm field records the command-line name of the command that was used to invoke the
analyzed process. In this case, the cat command was used to trigger this Audit event.
exe="/bin/cat"
The exe field records the path to the executable that was used to invoke the analyzed process.
subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
The subj field records the SELinux context with which the analyzed process was labeled at the time
of execution.
key="sshd_config"
The key field records the administrator-defined string associated with the rule that generated this
event in the Audit log.
Second Record
type=CWD
In the second record, the type field value is CWD — current working directory. This type is used to
record the working directory from which the process that invoked the system call specified in the
first record was executed.
The purpose of this record is to record the current process’s location in case a relative path winds up
being captured in the associated PATH record. This way the absolute path can be reconstructed.
msg=audit(1364481363.243:24287)
The msg field holds the same time stamp and ID value as the value in the first record. The time
stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970.
cwd="/home/user_name"
The cwd field contains the path to the directory in which the system call was invoked.
Third Record
type=PATH
In the third record, the type field value is PATH. An Audit event contains a PATH-type record for
every path that is passed to the system call as an argument. In this Audit event, only one path
(/etc/ssh/sshd_config) was used as an argument.
msg=audit(1364481363.243:24287):
The msg field holds the same time stamp and ID value as the value in the first and second record.
item=0
The item field indicates which item, of the total number of items referenced in the SYSCALL type
record, the current record is. This number is zero-based; a value of 0 means it is the first item.
name="/etc/ssh/sshd_config"
The name field records the path of the file or directory that was passed to the system call as an
argument. In this case, it was the /etc/ssh/sshd_config file.
inode=409248
The inode field contains the inode number associated with the file or directory recorded in this
event. The following command displays the file or directory that is associated with the 409248 inode
number:
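For example (the output corresponds to the file recorded in this event):
# find / -inum 409248 -print
/etc/ssh/sshd_config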
dev=fd:00
The dev field specifies the minor and major ID of the device that contains the file or directory
recorded in this event. In this case, the value represents the /dev/fd/0 device.
mode=0100600
The mode field records the file or directory permissions, encoded in numerical notation as returned
by the stat command in the st_mode field. See the stat(2) man page for more information. In this
case, 0100600 can be interpreted as -rw-------, meaning that only the root user has read and write
permissions to the /etc/ssh/sshd_config file.
ouid=0
The ouid field records the object owner’s user ID.
ogid=0
The ogid field records the object owner’s group ID.
rdev=00:00
The rdev field contains a recorded device identifier for special files only. In this case, it is not used as
the recorded file is a regular file.
obj=system_u:object_r:etc_t:s0
The obj field records the SELinux context with which the recorded file or directory was labeled at the
time of execution.
nametype=NORMAL
The nametype field records the intent of each path record’s operation in the context of a given
syscall.
cap_fp=none
The cap_fp field records data related to the setting of a permitted file system-based capability of
the file or directory object.
cap_fi=none
The cap_fi field records data related to the setting of an inherited file system-based capability of
the file or directory object.
cap_fe=0
The cap_fe field records the setting of the effective bit of the file system-based capability of the
file or directory object.
cap_fver=0
The cap_fver field records the version of the file system-based capability of the file or directory
object.
Fourth Record
type=PROCTITLE
The type field contains the type of the record. In this example, the PROCTITLE value specifies that
this record gives the full command-line that triggered this Audit event, triggered by a system call to
the kernel.
proctitle=636174002F6574632F7373682F737368645F636F6E666967
The proctitle field records the full command-line of the command that was used to invoke the
analyzed process. The field is encoded in hexadecimal notation to prevent the user from influencing the
Audit log parser. The text decodes to the command that triggered this Audit event. When searching
Audit records with the ausearch command, use the -i or --interpret option to automatically convert
hexadecimal values into their human-readable equivalents. The
636174002F6574632F7373682F737368645F636F6E666967 value is interpreted as cat
/etc/ssh/sshd_config.
The auditctl command enables you to control the basic functionality of the Audit system and to define
rules that decide which Audit events are logged.
File-system rules
1. To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file:
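For example (the passwd_changes key is illustrative):
# auditctl -w /etc/passwd -p wa -k passwd_changes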
2. To define a rule that logs all write access to, and every attribute change of, all the files in the
/etc/selinux/ directory:
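For example (the selinux_changes key is illustrative):
# auditctl -w /etc/selinux/ -p wa -k selinux_changes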
System-call rules
1. To define a rule that creates a log entry every time the adjtimex or settimeofday system calls
are used by a program, and the system uses the 64-bit architecture:
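For example (the time_change key is illustrative):
# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change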
2. To define a rule that creates a log entry every time a file is deleted or renamed by a system user
whose ID is 1000 or larger:
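For example (the delete key is illustrative):
# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete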
Note that the -F auid!=4294967295 option is used to exclude users whose login UID is not set.
Executable-file rules
To define a rule that logs all execution of the /bin/id program, execute the following command:
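For example (the execution_bin_id key is illustrative):
# auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id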
Additional resources
Note that the /etc/audit/audit.rules file is generated whenever the auditd service starts. Files in
/etc/audit/rules.d/ use the same auditctl command-line syntax to specify the rules. Empty lines and text
following a hash sign (#) are ignored.
Furthermore, you can use the auditctl command to read rules from a specified file using the -R option,
for example:
# auditctl -R /usr/share/audit/sample-rules/30-stig.rules
WARNING
The Audit sample rules in the sample-rules directory are not exhaustive nor up to
date because security standards are dynamic and subject to change. These rules are
provided only to demonstrate how Audit rules can be structured and written. They
do not ensure immediate compliance with the latest security standards. To bring
your system into compliance with the latest security standards according to specific
security guidelines, use the SCAP-based security compliance tools .
30-nispom.rules
Audit rule configuration that meets the requirements specified in the Information System Security
chapter of the National Industrial Security Program Operating Manual.
30-ospp-v42*.rules
Audit rule configuration that meets the requirements defined in the OSPP (Protection Profile for
General Purpose Operating Systems) profile version 4.2.
30-pci-dss-v31.rules
Audit rule configuration that meets the requirements set by Payment Card Industry Data Security
Standard (PCI DSS) v3.1.
30-stig.rules
Audit rule configuration that meets the requirements set by Security Technical Implementation
Guides (STIG).
To use these configuration files, copy them to the /etc/audit/rules.d/ directory and use the augenrules
--load command, for example:
# cd /usr/share/audit/sample-rules/
# cp 10-base-config.rules 30-stig.rules 31-privileged.rules 99-finalize.rules /etc/audit/rules.d/
# augenrules --load
You can order Audit rules using a numbering scheme. See the /usr/share/audit/sample-rules/README-
rules file for more information.
Additional resources
10
Kernel and auditctl configuration
20
Rules that could match general rules but you want a different match
30
Main rules
40
Optional rules
50
Server-specific rules
70
System local rules
90
Finalize (immutable)
The rules are not meant to be used all at once. They are pieces of a policy that should be thought out
and individual files copied to /etc/audit/rules.d/. For example, to set a system up in the STIG
configuration, copy rules 10-base-config, 30-stig, 31-privileged, and 99-finalize.
Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script
with the --load directive:
# augenrules --load
/sbin/augenrules: No change
No rules
enabled 1
failure 1
pid 742
rate_limit 0
...
Additional resources
Procedure
1. Copy the auditd.service file to the /etc/systemd/system/ directory:
# cp -f /usr/lib/systemd/system/auditd.service /etc/systemd/system/
2. Edit the /etc/systemd/system/auditd.service file in a text editor of your choice, for example:
# vi /etc/systemd/system/auditd.service
3. Comment out the line containing augenrules, and uncomment the line containing the auditctl -
R command:
#ExecStartPost=-/sbin/augenrules --load
ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
4. Reload the systemd daemon to apply the change to the auditd.service file:
# systemctl daemon-reload
Additional resources
dnf [2]
yum
pip
npm
cpan
gem
luarocks
By default, rpm already provides audit SOFTWARE_UPDATE events when it installs or updates a
package. You can list them by entering ausearch -m SOFTWARE_UPDATE on the command line.
In RHEL 8.5 and earlier versions, you can manually add rules to monitor utilities that install software into
a .rules file within the /etc/audit/rules.d/ directory.
NOTE
Pre-configured rule files cannot be used on systems with the ppc64le and aarch64
architectures.
Prerequisites
auditd is configured in accordance with the settings provided in Configuring auditd for a secure
environment .
Procedure
1. On RHEL 8.6 and later, copy the pre-configured rule file 44-installers.rules from the
/usr/share/audit/sample-rules/ directory to the /etc/audit/rules.d/ directory:
# cp /usr/share/audit/sample-rules/44-installers.rules /etc/audit/rules.d/
On RHEL 8.5 and earlier, create a new file in the /etc/audit/rules.d/ directory named 44-
installers.rules, and insert the following rules:
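The rules take the following form; the path and key values match the output shown in the verification
steps below, and rules for the other listed utilities follow the same pattern:
-a always,exit -F perm=x -F path=/usr/bin/dnf-3 -F key=software-installer
-a always,exit -F perm=x -F path=/usr/bin/yum -F key=software-installer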
You can add additional rules for other utilities that install software, for example pip and npm,
using the same syntax.
2. Load the audit rules:
# augenrules --load
Verification
# auditctl -l
-p x -w /usr/bin/dnf-3 -k software-installer
-p x -w /usr/bin/yum -k software-installer
-p x -w /usr/bin/pip -k software-installer
-p x -w /usr/bin/npm -k software-installer
-p x -w /usr/bin/cpan -k software-installer
-p x -w /usr/bin/gem -k software-installer
-p x -w /usr/bin/luarocks -k software-installer
3. Search the Audit log for recent installation events, for example:
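For example, assuming the software-installer key used above:
# ausearch -ts recent -k software-installer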
Prerequisites
auditd is configured in accordance with the settings provided in Configuring auditd for a secure
environment .
Procedure
To display user login times, use any one of the following commands:
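For example, search the Audit log for the USER_LOGIN message type; the -ts value recent is only one
possible time specification:
# ausearch -m USER_LOGIN -ts recent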
You can specify the date and time with the -ts option. If you do not use this option,
ausearch provides results from today, and if you omit time, ausearch provides results from
midnight.
You can use the -sv yes option to filter out successful login attempts and -sv no for
unsuccessful login attempts.
Pipe the raw output of the ausearch command into the aulast utility, which displays the output
in a format similar to the output of the last command. For example:
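A typical form of this pipeline is:
# ausearch --raw | aulast --stdin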
Display the list of login events by using the aureport command with the --login -i options.
# aureport --login -i
Login Report
============================================
# date time auid host term exe success event
============================================
1. 11/16/2021 13:11:30 root 10.40.192.190 ssh /usr/sbin/sshd yes 6920
2. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6925
3. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6930
4. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6935
5. 11/16/2021 13:11:33 root 10.40.192.190 ssh /usr/sbin/sshd yes 6940
6. 11/16/2021 13:11:33 root 10.40.192.190 /dev/pts/0 /usr/sbin/sshd yes 6945
Additional resources
[2] Because dnf is a symlink in RHEL, the path in the dnf Audit rule must include the target of the symlink. To
receive correct Audit events, modify the 44-installers.rules file by changing the path=/usr/bin/dnf path to
/usr/bin/dnf-3.
PART VI. DESIGN OF KERNEL
CHAPTER 29. THE LINUX KERNEL
The Red Hat kernel is a custom-built kernel based on the upstream Linux mainline kernel that
Red Hat engineers further develop and harden with a focus on stability and compatibility with the latest
technologies and hardware.
Before Red Hat releases a new kernel version, the kernel needs to pass a set of rigorous quality
assurance tests.
The Red Hat kernels are packaged in the RPM format so that they are easily upgraded and verified by
the YUM package manager.
WARNING
Kernels that are not compiled by Red Hat are not supported by Red Hat.
GPG signature
The GPG signature is used to verify the integrity of the package.
Header (package metadata)
The RPM package manager uses this metadata to determine package dependencies, where to install
files, and other information.
Payload
The payload is a cpio archive that contains files to install to the system.
There are two types of RPM packages. Both types share the file format and tooling, but have different
contents and serve different purposes:
Binary RPM
A binary RPM contains the binaries built from the sources and patches.
kernel-core
Provides the binary image of the kernel, all initramfs-related objects to bootstrap the system, and a
minimal number of kernel modules to ensure core functionality. This sub-package alone could be
used in virtualized and cloud environments to provide a Red Hat Enterprise Linux 8 kernel with a
quick boot time and a small disk size footprint.
kernel-modules
Provides the remaining kernel modules that are not present in kernel-core.
The small set of kernel sub-packages above aims to provide a reduced maintenance surface to system
administrators especially in virtualized and cloud environments.
kernel-modules-extra
Provides kernel modules for rare hardware. Loading of the module is disabled by default.
kernel-debug
Provides a kernel with many debugging options enabled for kernel diagnosis, at the expense of
reduced performance.
kernel-tools
Provides tools for manipulating the Linux kernel and supporting documentation.
kernel-devel
Provides the kernel headers and makefiles that are enough to build modules against the kernel
package.
kernel-abi-stablelists
Provides information pertaining to the RHEL kernel ABI, including a list of kernel symbols required by
external Linux kernel modules and a yum plug-in to aid enforcement.
kernel-headers
Includes the C header files that specify the interface between the Linux kernel and user-space
libraries and programs. The header files define structures and constants required for building most
standard programs.