Red Hat Enterprise Linux 7
Security Guide
Martin Prpič
Red Hat Engineering Content Services
[email protected]
Tomáš Čapek
Red Hat Engineering Content Services
[email protected]
Stephen Wadeley
Red Hat Engineering Content Services
[email protected]
Yoana Ruseva
Red Hat Engineering Content Services
[email protected]
Robert Krátký
Red Hat Engineering Content Services
[email protected]
Legal Notice
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported
License. If you distribute this document, or a modified version of it, you must provide attribution to Red
Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be
removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section
4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo,
and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or
endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or
trademarks/service marks of the OpenStack Foundation, in the United States and other countries and
are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
Table of Contents

Chapter 1. Overview of Security Topics
    1.1. What is Computer Security?
    1.2. Security Controls
    1.3. Vulnerability Assessment
    1.4. Security Threats
    1.5. Common Exploits and Attacks
Chapter 2. Security Tips for Installation
    2.1. Securing BIOS
    2.2. Partitioning the Disk
    2.3. Installing the Minimum Amount of Packages Required
    2.4. Post-installation Procedures
    2.5. Additional Resources
Chapter 3. Keeping Your System Up-to-Date
    3.1. Maintaining Installed Software
    3.2. Using the Red Hat Customer Portal
    3.3. Additional Resources
Chapter 4. Hardening Your System with Tools and Services
    4.1. Desktop Security
    4.2. Controlling Root Access
    4.3. Securing Services
    4.4. Securing Network Access
    4.5. Using Firewalls
    4.6. Securing DNS Traffic with DNSSEC
    4.7. Securing Virtual Private Networks (VPNs)
    4.8. Using OpenSSL
    4.9. Encryption
Chapter 5. System Auditing
    Use Cases
    5.1. Audit System Architecture
    5.2. Installing the audit Packages
    5.3. Configuring the audit Service
    5.4. Starting the audit Service
    5.5. Defining Audit Rules
    5.6. Understanding Audit Log Files
    5.7. Searching the Audit Log Files
    5.8. Creating Audit Reports
    5.9. Additional Resources
Chapter 6. Compliance and Vulnerability Scanning
    6.1. Security Compliance in Red Hat Enterprise Linux
    6.2. Defining Compliance Policy
    6.3. Using SCAP Workbench
    6.4. Using oscap
    6.5. Using OpenSCAP with Red Hat Satellite
    6.6. Practical Examples
    6.7. Additional Resources
Chapter 7. Federal Standards and Regulations
    7.1. Federal Information Processing Standard (FIPS)
    7.2. National Industrial Security Program Operating Manual (NISPOM)
    7.3. Payment Card Industry Data Security Standard (PCI DSS)
    7.4. Security Technical Implementation Guide
Encryption Standards
    A.1. Synchronous Encryption
    A.2. Public-key Encryption
Audit System Reference
    B.1. Audit Event Fields
    B.2. Audit Record Types
Revision History
Chapter 1. Overview of Security Topics
Unfortunately, many organizations (as well as individual users) regard security as more of an afterthought,
a process that is overlooked in favor of increased power, productivity, convenience, ease of use, and
budgetary concerns. Proper security implementation is often enacted postmortem — after an unauthorized
intrusion has already occurred. Taking the correct measures prior to connecting a site to an untrusted
network, such as the Internet, is an effective means of thwarting many attempts at intrusion.
Note
This document makes several references to files in the /lib directory. When using 64-bit systems,
some of the files mentioned may instead be located in /lib64.
Integrity — Information should not be altered in ways that render it incomplete or incorrect.
Unauthorized users should be restricted from the ability to modify or destroy sensitive information.
Availability — Information should be accessible to authorized users any time that it is needed.
Availability is a warranty that information can be obtained with an agreed-upon frequency and
timeliness. This is often measured in terms of percentages and agreed to formally in Service Level
Agreements (SLAs) used by network service providers and their enterprise clients.
Physical
Technical
Administrative
These three broad categories define the main objectives of proper security implementation. Within these
controls are sub-categories that further detail the controls and how to implement them.
Security guards
Picture IDs
Biometrics (includes fingerprint, voice, face, iris, handwriting, and other automated methods used to
recognize individuals)
Encryption
Smart cards
Network authentication
The expertise of the staff responsible for configuring, monitoring, and maintaining the technologies.
The ability to patch and update services and kernels quickly and efficiently.
Given the dynamic state of data systems and technologies, securing corporate resources can be quite
complex. Due to this complexity, it is often difficult to find expert resources for all of your systems. While it
is possible to have personnel knowledgeable in many areas of information security at a high level, it is
difficult to retain staff who are experts in more than a few subject areas. This is mainly because each
subject area of information security requires constant attention and focus. Information security does not
stand still.
A vulnerability assessment is an internal audit of your network and system security, the results of which
indicate the confidentiality, integrity, and availability of your network (as explained in Section 1.1.1,
“Standardizing Security”). Typically, vulnerability assessment starts with a reconnaissance phase, during
which important data regarding the target systems and resources is gathered. This phase leads to the
system readiness phase, whereby the target is essentially checked for all known vulnerabilities. The
readiness phase culminates in the reporting phase, where the findings are classified into categories of
high, medium, and low risk, and methods for improving the security (or mitigating the risk of vulnerability) of
the target are discussed.
If you were to perform a vulnerability assessment of your home, you would likely check each door to your
home to see if they are closed and locked. You would also check every window, making sure that they
close completely and latch correctly. This same concept applies to systems, networks, and electronic
data. Malicious users are the thieves and vandals of your data. Focus on their tools, mentality, and
motivations, and you can then react swiftly to their actions.
When performing an outside-looking-in vulnerability assessment, you are attempting to compromise your
systems from the outside. Being external to your company provides you with the cracker's viewpoint. You
see what a cracker sees — publicly-routable IP addresses, systems on your DMZ, external interfaces of
your firewall, and more. DMZ stands for "demilitarized zone", which corresponds to a computer or small
subnetwork that sits between a trusted internal network, such as a corporate private LAN, and an
untrusted external network, such as the public Internet. Typically, the DMZ contains devices accessible to
Internet traffic, such as Web (HTTP) servers, FTP servers, SMTP (e-mail) servers, and DNS servers.
When you perform an inside-looking-around vulnerability assessment, you are at an advantage since you
are internal and your status is elevated to trusted. This is the viewpoint you and your co-workers have
once logged on to your systems. You see print servers, file servers, databases, and other resources.
There are striking distinctions between the two types of vulnerability assessments. Being internal to your
company gives you more privileges than an outsider. In most organizations, security is configured to keep
intruders out. Very little is done to secure the internals of the organization (such as departmental firewalls,
user-level access controls, and authentication procedures for internal resources). Typically, there are
many more resources when looking around inside as most systems are internal to a company. Once you
are outside the company, your status is untrusted. The systems and resources available to you externally
are usually very limited.
Consider the difference between vulnerability assessments and penetration tests. Think of a vulnerability
assessment as the first step to a penetration test. The information gleaned from the assessment is used
for testing. Whereas the assessment is undertaken to check for holes and potential vulnerabilities, the
penetration testing actually attempts to exploit the findings.
Assessing network infrastructure is a dynamic process. Security, both information and physical, is
dynamic. Performing an assessment shows an overview, which can turn up false positives and false
negatives. A false positive is a result where the tool finds vulnerabilities that in reality do not exist. A
false negative is when the tool omits actual vulnerabilities.
Security administrators are only as good as the tools they use and the knowledge they retain. Take any of
the assessment tools currently available, run them against your system, and it is almost a guarantee that
there are some false positives. Whether by program fault or user error, the result is the same. The tool
may find false positives, or, even worse, false negatives.
Now that the difference between a vulnerability assessment and a penetration test is defined, take the
findings of the assessment and review them carefully before conducting a penetration test as part of your
new best practices approach.
Warning
Do not attempt to exploit vulnerabilities on production systems. Doing so can have adverse effects
on productivity and efficiency of your systems and network.
What is the target? Are we looking at one server, or are we looking at our entire network and everything
within the network? Are we external or internal to the company? The answers to these questions are
important as they help determine not only which tools to select but also the manner in which they are used.
Just as in any aspect of everyday life, there are many different tools that perform the same job. This
concept applies to performing vulnerability assessments as well. There are tools specific to operating
systems, applications, and even networks (based on the protocols used). Some tools are free; others are
not. Some tools are intuitive and easy to use, while others are cryptic and poorly documented but have
features that other tools do not.
Finding the right tools may be a daunting task and, in the end, experience counts. If possible, set up a test
lab and try out as many tools as you can, noting the strengths and weaknesses of each. Review the
README file or man page for the tools. Additionally, look to the Internet for more information, such as
articles, step-by-step guides, or even mailing lists specific to the tools.
The tools discussed below are just a small sampling of the available tools.
Nmap is a popular tool that can be used to determine the layout of a network. Nmap has been available
for many years and is probably the most often used tool when gathering information. An excellent manual
page is included that provides detailed descriptions of its options and usage. Administrators can use
Nmap on a network to find host systems and open ports on those systems.
Nmap is a competent first step in vulnerability assessment. You can map out all the hosts within your
network and even pass an option that allows Nmap to attempt to identify the operating system running on
a particular host. Nmap is a good foundation for establishing a policy of using secure services and
restricting unused services.
To install Nmap, run the yum install nmap command as the root user.
Nmap can be run from a shell prompt by typing the nmap command followed by the hostname or IP
address of the machine to scan:
nmap <hostname>
For example, to scan a machine with hostname foo.example.com, type the following at a shell prompt:
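nmap foo.example.com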
The results of a basic scan (which could take up to a few minutes, depending on where the host is located
and other network conditions) look similar to the following:
Nmap tests the most common network communication ports for listening or waiting services. This
knowledge can be helpful to an administrator who wants to close unnecessary or unused services.
For more information about using Nmap, see the official homepage at the following URL:
http://www.insecure.org/
1.3.3.2. Nessus
Nessus is a full-service security scanner. The plug-in architecture of Nessus allows users to customize it
for their systems and networks. As with any scanner, Nessus is only as good as the signature database it
relies upon. Fortunately, Nessus is frequently updated and features full reporting, host scanning, and real-
time vulnerability searches. Remember that there could be false positives and false negatives, even in a
tool as powerful and as frequently updated as Nessus.
Note
The Nessus client and server software requires a subscription to use. It has been included in this
document as a reference to users who may be interested in using this popular application.
For more information about Nessus, see the official website at the following URL:
http://www.nessus.org/
1.3.3.3. OpenVAS
OpenVAS (Open Vulnerability Assessment System) is a set of tools and services that can be used to
scan for vulnerabilities and for comprehensive vulnerability management. The OpenVAS framework
offers a number of web-based, desktop, and command line tools for controlling the various components of
the solution. The core functionality of OpenVAS is provided by a security scanner, which makes use of
over 33 thousand daily-updated Network Vulnerability Tests (NVT). Unlike Nessus (see Section 1.3.3.2,
“Nessus”), OpenVAS does not require any subscription.
For more information about OpenVAS, see the official website at the following URL:
http://www.openvas.org/
1.3.3.4. Nikto
Nikto is an excellent common gateway interface (CGI) script scanner. Nikto not only checks for CGI
vulnerabilities but does so in an evasive manner, so as to elude intrusion-detection systems. It comes with
thorough documentation which should be carefully reviewed prior to running the program. If you have web
servers serving CGI scripts, Nikto can be an excellent resource for checking the security of these
servers.
http://cirt.net/nikto2
Insecure Architectures
A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local
network vulnerable to the highly-insecure Internet is much like leaving a door ajar in a crime-ridden
neighborhood — nothing may happen for an arbitrary amount of time, but eventually someone exploits the
opportunity.
Broadcast Networks
System administrators often fail to realize the importance of networking hardware in their security
schemes. Simple hardware, such as hubs and routers, relies on the broadcast or non-switched principle;
that is, whenever a node transmits data across the network to a recipient node, the hub or router sends a
broadcast of the data packets until the recipient node receives and processes the data. This method is
the most vulnerable to address resolution protocol (ARP) or media access control (MAC) address
spoofing by both outside intruders and unauthorized users on local hosts.
Centralized Servers
Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for
many businesses is to consolidate all services to a single powerful machine. This can be convenient as it
is easier to manage and costs considerably less than multiple-server configurations. However, a
centralized server introduces a single point of failure on the network. If the central server is compromised,
it may render the network completely useless or, worse, prone to data manipulation or theft. In these
situations, a central server becomes an open door that allows access to the entire network.
A full installation of Red Hat Enterprise Linux 7 contains more than 1000 application and library packages.
However, most server administrators do not opt to install every single package in the distribution,
preferring instead to install a base installation of packages, including several server applications. See
Section 2.3, “Installing the Minimum Amount of Packages Required” for an explanation of the reasons to
limit the number of installed packages and for additional resources.
A common occurrence among system administrators is to install the operating system without paying
attention to what programs are actually being installed. This can be problematic because unneeded
services may be installed, configured with the default settings, and possibly turned on. This can cause
unwanted services, such as Telnet, DHCP, or DNS, to run on a server or workstation without the
administrator realizing it, which in turn can cause unwanted traffic to the server or even a potential
pathway into the system for crackers. See Section 4.3, “Securing Services” for information on closing ports
and disabling unused services.
Unpatched Services
Most server applications that are included in a default installation are solid, thoroughly tested pieces of
software. Having been in use in production environments for many years, their code has been thoroughly
refined and many of the bugs have been found and fixed.
However, there is no such thing as perfect software and there is always room for further refinement.
Moreover, newer software is often not as rigorously tested as one might expect, because of its recent
arrival to production environments or because it may not be as popular as other server software.
Developers and system administrators often find exploitable bugs in server applications and publish the
information on bug tracking and security-related websites such as the Bugtraq mailing list
(http://www.securityfocus.com) or the Computer Emergency Response Team (CERT) website
(http://www.cert.org). Although these mechanisms are an effective way of alerting the community to security
vulnerabilities, it is up to system administrators to patch their systems promptly. This is particularly true
because crackers have access to these same vulnerability tracking services and will use the information
to crack unpatched systems whenever they can. Good system administration requires vigilance, constant
bug tracking, and proper system maintenance to ensure a more secure computing environment.
See Chapter 3, Keeping Your System Up-to-Date for more information about keeping a system up-to-date.
Inattentive Administration
Administrators who fail to patch their systems are one of the greatest threats to server security. According
to the SysAdmin, Audit, Network, Security Institute (SANS), the primary cause of computer security
vulnerability is "assigning untrained people to maintain security and providing neither the training nor the
time to make it possible to learn and do the job." [1] This applies as much to inexperienced administrators
as it does to overconfident or unmotivated administrators.
Some administrators fail to patch their servers and workstations, while others fail to watch log messages
from the system kernel or network traffic. Another common error is when default passwords or keys to
services are left unchanged. For example, some databases have default administration passwords
because the database developers assume that the system administrator changes these passwords
immediately after installation. If a database administrator fails to change this password, even an
inexperienced cracker can use a widely-known default password to gain administrative privileges to the
database. These are only a few examples of how inattentive administration can lead to compromised
servers.
Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are
inherently insecure. For instance, there are many services developed under the assumption that they are
used over trusted networks; however, this assumption fails as soon as the service becomes available
over the Internet — which is itself inherently untrusted.
One category of insecure network services are those that require unencrypted usernames and passwords
for authentication. Telnet and FTP are two such services. If packet sniffing software is monitoring traffic
between the remote user and such a service, usernames and passwords can be easily intercepted.
Inherently, such services can also more easily fall prey to what the security industry terms the man-in-the-
middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on
the network to point to his machine instead of the intended server. Once someone opens a remote
session to the server, the attacker's machine acts as an invisible conduit, sitting quietly between the
remote service and the unsuspecting user capturing information. In this way a cracker can gather
administrative passwords and raw data without the server or the user realizing it.
Another category of insecure services include network file systems and information services such as NFS
or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for
remote users). NFS does not, by default, have any authentication or security mechanisms configured to
prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well,
has vital information that must be known by every computer on a network, including passwords and file
permissions, within a plain text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this
database can then access every user account on a network, including the administrator's account.
By default, Red Hat Enterprise Linux 7 is released with all such services turned off. However, since
administrators often find themselves forced to use these services, careful configuration is critical. See
Section 4.3, “Securing Services” for more information about setting up services in a safe manner.
Bad Passwords
Bad passwords are one of the easiest ways for an attacker to gain access to a system. For more on how
to avoid common pitfalls when creating a password, see Section 4.1.1, “Password Security”.
Although an administrator may have a fully secure and patched server, that does not mean remote users
are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public
network, an attacker can capture the plain text usernames and passwords as they pass over the network,
and then use the account information to access the remote user's workstation.
Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if
they do not keep their client applications updated. For instance, v.1 SSH clients are vulnerable to an X-
forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly
capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in
the v.2 SSH protocol, but it is up to the user to keep track of what applications have such vulnerabilities
and update them as necessary.
Section 4.1, “Desktop Security” discusses in more detail what steps administrators and home users
should take to limit the vulnerability of computer workstations.
Eavesdropping
    Description: Collecting data that passes between two active nodes on a network by eavesdropping on the connection between the two nodes.
    Notes: This type of attack works mostly with plain text transmission protocols such as Telnet, FTP, and HTTP transfers. A remote attacker must have access to a compromised system on a LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN.
Application Vulnerabilities
    Description: Attackers find faults in desktop and workstation applications (such as email clients) and execute arbitrary code, implant Trojan horses for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network.
    Notes: Workstations and desktops are more prone to exploitation as workers do not have the expertise or experience to prevent or detect a compromise; it is imperative to inform individuals of the risks they are taking when they install unauthorized software or open unsolicited email attachments. Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic update of workstation software via Red Hat Network or other system management services can alleviate the burdens of multi-seat security deployments.
Chapter 2. Security Tips for Installation
For example, if a machine is used in a trade show and contains no sensitive information, then it may not be
critical to prevent such attacks. However, if an employee's laptop with private, unencrypted SSH keys for
the corporate network is left unattended at that same trade show, it could lead to a major security breach
with ramifications for the entire company.
If the workstation is located in a place where only authorized or trusted people have access, however, then
securing the BIOS or the boot loader may not be necessary.
The two primary reasons for password protecting the BIOS of a computer are [2]:
1. Preventing Changes to BIOS Settings — If an intruder has access to the BIOS, they can set it to boot
from a CD-ROM or a flash drive. This makes it possible for them to enter rescue mode or single
user mode, which in turn allows them to start arbitrary processes on the system or copy sensitive
data.
2. Preventing System Booting — Some BIOSes allow password protection of the boot process. When
activated, an attacker is forced to enter a password before the BIOS launches the boot loader.
Because the methods for setting a BIOS password vary between computer manufacturers, consult the
computer's manual for specific instructions.
If you forget the BIOS password, it can either be reset with jumpers on the motherboard or by
disconnecting the CMOS battery. For this reason, it is good practice to lock the computer case if possible.
However, consult the manual for the computer or motherboard before attempting to disconnect the CMOS
battery.
Other architectures use different programs to perform low-level tasks roughly equivalent to those of the
BIOS on x86 systems. For instance, Intel® Itanium™ computers use the Extensible Firmware Interface
(EFI) shell.
For instructions on password protecting BIOS-like programs on other architectures, see the
manufacturer's instructions.
/boot
This partition is the first partition that is read by the system during boot up. The boot loader and
kernel images that are used to boot your system into Red Hat Enterprise Linux 7 are stored in
this partition. This partition should not be encrypted. If this partition is included in / and that
partition is encrypted or otherwise becomes unavailable, then your system will not be able to boot.
/home
When user data (/home) is stored in / instead of in a separate partition, the partition can fill up,
causing the operating system to become unstable. Also, when upgrading your system to the next
version of Red Hat Enterprise Linux 7, it is a lot easier when you can keep your data in the /home
partition as it will not be overwritten during installation. If the root partition (/) becomes corrupt,
your data could be lost forever. By using a separate partition there is slightly more protection
against data loss. You can also target this partition for frequent backups.
Both the /tmp and /var/tmp directories are used to store data that does not need to be stored
for a long period of time. However, if a lot of data floods one of these directories, it can consume
all of your storage space. If this happens and these directories are stored within /, then your
system could become unstable and crash. For this reason, moving these directories into their own
partitions is a good idea.
Note
During the installation process, an option to encrypt partitions is presented to you. The user must
supply a passphrase. This passphrase will be used as a key to unlock the bulk encryption key,
which is used to secure the partition's data. For more information on LUKS, see Section 4.9.1,
“Using LUKS Disk Encryption”.
A minimal installation can also be performed via a Kickstart file using the --nobase option. For more
information about Kickstart installations and the Minimal install environment, see the Red Hat
Enterprise Linux 7 Installation Guide.
2. Even though the firewall service, firewalld, is automatically enabled with the installation of Red
Hat Enterprise Linux, there are scenarios where it might be explicitly disabled, for example in the
kickstart configuration. In such a case, it is recommended to consider re-enabling the firewall.
3. To enhance security, disable services you do not need. For example, if there are no printers
installed on your computer, disable the cups service using the following command:
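systemctl disable cups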
[2] Since system BIOSes differ between manufacturers, some may not support password protection of either type, while others may
support one type but not the other.
Often, announcements about a given security exploit are accompanied with a patch (or source code) that
fixes the problem. This patch is then applied to the Red Hat Enterprise Linux package and tested and
released as an erratum update. However, if an announcement does not include a patch, Red Hat
developers first work with the maintainer of the software to fix the problem. Once the problem is fixed, the
package is tested and released as an erratum update.
If an erratum update is released for software used on your system, it is highly recommended that you
update the affected packages as soon as possible to minimize the amount of time the system is potentially
vulnerable.
Test security updates when they become available and schedule them for installation. Additional controls
need to be used to protect the system during the time between the release of the update and its
installation on the system. These controls depend on the exact vulnerability, but may include additional
firewall rules, the use of external firewalls, or changes in software settings.
Bugs in supported packages are fixed using the errata mechanism. An erratum consists of one or more
RPM packages accompanied by a brief explanation of the problem that the particular erratum deals with. All
errata are distributed to customers with active subscriptions through the Red Hat Subscription
Management service. Errata that address security issues are called Red Hat Security Advisories.
For more information on working with security errata, see Section 3.2.1, “Viewing Security Advisories on the
Customer Portal”. For detailed information about the Red Hat Subscription Management service,
including instructions on how to migrate from RHN Classic, see the documentation related to this service:
Red Hat Subscription Management.
The Yum package manager includes several security-related features that can be used to search, list,
display, and install security errata. These features also make it possible to use Yum to install nothing but
security updates.
To check for security-related updates available for your system, run the following command as root:
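yum check-update --security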
Note that the above command runs in a non-interactive mode, so it can be used in scripts for automated
checking whether there are any updates available. The command returns an exit value of 100 when there
are any security updates available and 0 when there are not. On encountering an error, it returns 1.
Use the updateinfo subcommand to display or act upon information provided by repositories about
available updates. The updateinfo subcommand itself accepts a number of commands, some of which
pertain to security-related uses. See Table 3.1, “Security-related commands usable with yum updateinfo”
for an overview of these commands.
Table 3.1. Security-related commands usable with yum updateinfo
advisory [advisories]
    Displays information about one or more advisories. Replace advisories with an advisory number or numbers.
cves
    Displays the subset of information that pertains to CVE (Common Vulnerabilities and Exposures).
security or sec
    Displays all security-related information.
severity severity_level or sev severity_level
    Displays information about security-relevant packages of the supplied severity_level.
See the Red Hat Enterprise Linux 7 System Administrator's Guide for detailed information on how to use
the Yum package manager.
All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU
Privacy Guard, or GnuPG, a free software package used for ensuring the authenticity of distributed files.
If the verification of a package signature fails, the package may be altered and therefore cannot be trusted.
The Yum package manager allows for an automatic verification of all packages it installs or upgrades. This
feature is enabled by default. To configure this option on your system, make sure the gpgcheck
configuration directive is set to 1 in the /etc/yum.conf configuration file.
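For example, with this check enabled, the [main] section of /etc/yum.conf contains the following setting:

[main]
gpgcheck=1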
Use the following command to manually verify package files on your filesystem:
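rpmkeys --checksig package_file.rpm

Here, package_file.rpm is a placeholder for the package file you want to check.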
See the Product Signing (GPG) Keys article on the Red Hat Customer Portal for additional information
about Red Hat package-signing practices.
To install verified packages (see Section 3.1.2.1, “Verifying Signed Packages” for information on how to
verify packages) from your filesystem, use the yum install command as the root user as follows:
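yum install package_file.rpm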
Use a shell glob to install several packages at once. For example, the following command installs all
.rpm packages in the current directory:
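yum install *.rpm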
Important
Before installing any security errata, be sure to read any special instructions contained in the
erratum report and execute them accordingly. See Section 3.1.3, “Applying Changes Introduced by
Installed Updates” for general instructions about applying changes made by errata updates.
Note
In general, rebooting the system is the surest way to ensure that the latest version of a software
package is used; however, this option is not always required, nor is it always available to the
system administrator.
Applications
User-space applications are any programs that can be initiated by the user. Typically, such
applications are used only when the user, a script, or an automated task utility launches them.
Once such a user-space application is updated, halt any instances of the application on the
system, and launch the program again to use the updated version.
Kernel
The kernel is the core software component for the Red Hat Enterprise Linux 7 operating system. It
manages access to memory, the processor, and peripherals, and it schedules all tasks.
Because of its central role, the kernel cannot be restarted without also rebooting the computer.
Therefore, an updated version of the kernel cannot be used until the system is rebooted.
KVM
When the qemu-kvm and libvirt packages are updated, it is necessary to stop all guest virtual
machines, reload relevant virtualization modules (or reboot the host system), and restart the
virtual machines.
Use the lsmod command to determine which modules from the following are loaded: kvm, kvm-intel,
or kvm-amd. Then use the modprobe -r command to remove and subsequently the
modprobe -a command to reload the affected modules. For example:
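modprobe -r kvm-intel
modprobe -a kvm-intel

Replace kvm-intel with whichever of the modules listed above is loaded on your system.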
Shared Libraries
Shared libraries are units of code, such as glibc, that are used by a number of applications and
services. Applications utilizing a shared library typically load the shared code when the application
is initialized, so any applications using an updated library must be halted and relaunched.
To determine which running applications link against a particular library, use the lsof command:
lsof library
For example, to determine which running applications link against the libwrap.so.0 library,
type:
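lsof /lib64/libwrap.so.0

On 32-bit systems, the library may reside in /lib instead of /lib64, as noted earlier in this guide.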
This command returns a list of all the running programs that use TCP wrappers for host-access
control. Therefore, any program listed must be halted and relaunched when the tcp_wrappers
package is updated.
systemd Services
systemd services are persistent server programs usually launched during the boot process.
Examples of systemd services include sshd or vsftpd.
Because these programs usually persist in memory as long as a machine is running, each
updated systemd service must be halted and relaunched after its package is upgraded. This can
be done as the root user using the systemctl command:
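systemctl restart service_name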
Replace service_name with the name of the service you wish to restart, such as sshd.
Other Software
Follow the instructions outlined by the resources linked below to correctly update the following
applications.
Red Hat Directory Server — See the Release Notes for the version of the Red Hat
Directory Server in question at the Red Hat Directory Server product documentation page.
Red Hat Enterprise Virtualization Manager — See the Red Hat Enterprise Linux 7
Installation Guide for the version of the Red Hat Enterprise Virtualization in question at the
Red Hat Enterprise Virtualization product documentation page.
To browse a list of all security updates for all active Red Hat products, go to Security → Security
Updates → Active Products using the navigation menu at the top of the page.
Click on the erratum code in the left part of the table to display more detailed information about the
individual advisories. The next page contains not only a description of the given erratum, including its
causes, consequences, and required fixes, but also a list of all packages that the particular erratum
updates along with instructions on how to apply the updates. The page also includes links to relevant
references, such as related CVEs.
Click on the CVE code in the left part of the table to display more detailed information about the individual
vulnerabilities. The next page contains not only a description of the given CVE but also a list of affected
Red Hat products along with links to relevant Red Hat errata.
Together, these ratings help you understand the impact of security issues, allowing you to schedule and
prioritize upgrade strategies for your systems. Note that the ratings reflect the potential risk of a given
vulnerability, which is based on a technical analysis of the bug, not the current threat level. This means
that the security impact rating does not change if an exploit is released for a particular flaw.
To see a detailed description of the individual levels of severity ratings on the Customer Portal, log into
your account at https://access.redhat.com/ and navigate to Security → Policies → Severity Ratings
using the navigation menu at the top of the page.
Installed Documentation
yum(8) — The manual page for the Yum package manager provides information about the way Yum
can be used to install, update, and remove packages on your systems.
rpmkeys(8) — The manual page for the rpmkeys utility describes the way this program can be used to
verify the authenticity of downloaded packages.
Online Documentation
Red Hat Enterprise Linux 7 System Administrator's Guide — The System Administrator's Guide for
Red Hat Enterprise Linux 7 documents the use of the Yum and rpm programs that are used to install,
update, and remove packages on Red Hat Enterprise Linux 7 systems.
Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide — The SELinux User's and
Administrator's Guide for Red Hat Enterprise Linux 7 documents the configuration of the SELinux
mandatory access control mechanism.
Red Hat Customer Portal — The main page of the Customer Portal contains links to the most important
resources as well as updates about new content available through the portal.
Security Contacts and Procedures — The place to find information about the Red Hat Security
Response Team and instructions on when to contact it.
Red Hat Security Blog — Articles about latest security-related issues from Red Hat security
professionals.
See Also
Chapter 2, Security Tips for Installation describes how to configure your system securely from the
beginning to make it easier to implement additional security settings later.
Section 4.9.2, “Creating GPG Keys” describes how to create a set of personal GPG keys to
authenticate your communications.
For security purposes, the installation program configures the system to use Secure Hash Algorithm 512
(SHA512) and shadow passwords. It is highly recommended that you do not alter these settings.
If shadow passwords are deselected during installation, all passwords are stored as a one-way hash in
the world-readable /etc/passwd file, which makes the system vulnerable to offline password cracking
attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd
file to his own machine and run any number of password cracking programs against it. If there is an
insecure password in the file, it is only a matter of time before the password cracker discovers it.
Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow,
which is readable only by the root user.
This forces a potential attacker to attempt password cracking remotely by logging into a network service
on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an
obvious trail as hundreds of failed login attempts are written to system files. Of course, if the cracker starts
an attack in the middle of the night on a system with weak passwords, the cracker may have gained
access before dawn and edited the log files to cover his tracks.
In addition to format and storage considerations is the issue of content. The single most important thing a
user can do to protect his account against a password cracking attack is to create a strong password.
When creating a secure password, the user must remember that long passwords are stronger than short
and complex ones. It is not a good idea to create a password of just eight characters, even if it contains
digits, special characters and uppercase letters. Password cracking tools, such as John The Ripper, are
optimized for breaking such passwords, which are also hard to remember by a person.
In information theory, entropy is the level of uncertainty associated with a random variable and is
presented in bits. The higher the entropy value, the more secure the password is. According to NIST SP
800-63-1, passwords that are not present in a dictionary comprised of 50000 commonly selected
passwords should have at least 10 bits of entropy. As such, a password that consists of four random
words contains around 40 bits of entropy. A long password consisting of multiple words for added security
is also called a passphrase, for example:
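randomword1 randomword2 randomword3 randomword4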
If the system enforces the use of uppercase letters, digits, or special characters, the passphrase that
follows the above recommendation can be modified in a simple way, for example by changing the first
character to uppercase and appending "1!". Note that such a modification does not increase the security
of the passphrase significantly.
Another way to create a password yourself is using a password generator. pwmake is a command-
line tool for generating random passwords that consist of all four groups of characters – uppercase,
lowercase, digits, and special characters. The utility allows you to specify the number of entropy bits that
are used to generate the password. The entropy is pulled from /dev/urandom. The minimum number of
bits you can specify is 56, which is enough for passwords on systems and services where brute force
attacks are rare. 64 bits is adequate for applications where the attacker does not have direct access to
the password hash file. For situations when the attacker might obtain direct access to the password
hash or the password is used as an encryption key, 80 to 128 bits should be used. If you specify an
invalid number of entropy bits, pwmake will use the default number of bits. To create a password of 128 bits, run
the following command:
pwmake 128
While there are different approaches to creating a secure password, always avoid the following bad
practices:
Using a single dictionary word, a word in a foreign language, an inverted word, or only numbers.
Using personal information in a password, such as birth dates, anniversaries, family member names, or
pet names.
While creating secure passwords is imperative, managing them properly is also important, especially for
system administrators within larger organizations. The following section details good practices for creating
and managing user passwords within an organization.
If an organization has a large number of users, the system administrators have two basic options available
to force the use of strong passwords. They can create passwords for the user, or they can let users
create their own passwords while verifying the passwords are of adequate strength.
Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting
task as the organization grows. It also increases the risk of users writing their passwords down, thus
exposing them.
For these reasons, most system administrators prefer to have the users create their own passwords, but
actively verify that these passwords are strong enough. In some cases, administrators may force users to
change their passwords periodically through password aging.
When users are asked to create or change passwords, they can use the passwd command-line utility,
which is PAM-aware (Pluggable Authentication Modules) and checks to see if the password is too short or
otherwise easy to crack. This checking is performed by the pam_pwquality.so PAM module.
Note
In Red Hat Enterprise Linux 7, the pam_pwquality PAM module replaced pam_cracklib, which
was used in Red Hat Enterprise Linux 6 as a default module for password quality checking. It uses
the same back end as pam_cracklib.
The pam_pwquality module is used to check a password's strength against a set of rules. Its
procedure consists of two steps: first it checks if the provided password is found in a dictionary. If not, it
continues with a number of additional checks. pam_pwquality is stacked alongside other PAM modules
in the password component of the /etc/pam.d/passwd file, and the custom set of rules is specified in
the /etc/security/pwquality.conf configuration file. For a complete list of these checks, see the
pwquality.conf(8) manual page.
To enable pam_pwquality, add the following line to the password stack in the
/etc/pam.d/passwd file:
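password    required    pam_pwquality.so retry=3

The retry option is optional; it limits how many times the user is prompted again before the password change is aborted.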
Options for the checks are specified one per line. For example, to require a password with a minimum
length of 8 characters, including all four classes of characters, add the following lines to the
/etc/security/pwquality.conf file:
minlen=8
minclass=4
To set a password strength check for consecutive or repetitive characters, add the following lines to
/etc/security/pwquality.conf:
maxsequence=3
maxrepeat=3
In this example, the password entered cannot contain more than 3 consecutive characters, such as
"abcd" or "1234 ". Additionally, the number of identical consecutive characters is limited to 3.
Note
As the root user is the one who enforces the rules for password creation, he can set any password
for himself or for a regular user, despite the warning messages.
Password aging is another technique used by system administrators to defend against bad passwords
within an organization. Password aging means that after a specified period (usually 90 days), the user is
prompted to create a new password. The theory behind this is that if a user is forced to change his
password periodically, a cracked password is only useful to an intruder for a limited amount of time. The
downside to password aging, however, is that users are more likely to write their passwords down.
There are two primary programs used to specify password aging under Red Hat Enterprise Linux 7: the
chage command or the graphical User Manager (system-config-users) application.
Important
In Red Hat Enterprise Linux 7, shadow passwords are enabled by default. For more information, see
the Red Hat Enterprise Linux 7 System Administrator's Guide.
The -M option of the chage command specifies the maximum number of days the password is valid. For
example, to set a user's password to expire in 90 days, use the following command:
chage -M 90 <username>
In the above command, replace <username> with the name of the user. To disable password expiration, it
is traditional to use a value of 99999 after the -M option (this equates to a little over 273 years).
For more information on the options available with the chage command, see the table below.
-d days
    Specifies the number of days since January 1, 1970 the password was changed.
-E date
    Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used.
-I days
    Specifies the number of inactive days after the password expiration before locking the account. If the value is 0, the account is not locked after the password expires.
-l
    Lists current account aging settings.
-m days
    Specifies the minimum number of days after which the user must change passwords. If the value is 0, the password does not expire.
-M days
    Specifies the maximum number of days for which the password is valid. When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account.
-W days
    Specifies the number of days before the password expiration date to warn the user.
You can also use the chage command in interactive mode to modify multiple password aging and account
details. Use the following command to enter interactive mode:
chage <username>
You can configure a password to expire the first time a user logs in. This forces users to change
passwords immediately.
1. Set up an initial password. There are two common approaches to this step: you can either assign a
default password, or you can use a null password.
To assign a default password, run the following command as root:
passwd username
To assign a null password instead, use the following command:
passwd -d username
Warning
Using a null password, while convenient, is a highly insecure practice, as any third party can
log in first and access the system using the insecure username. Avoid using null passwords
wherever possible. If it is not possible, always make sure that the user is ready to log in
before unlocking an account with a null password.
chage -d 0 username
This command sets the value for the date the password was last changed to the epoch (January 1,
1970). This value forces immediate password expiration no matter what password aging policy, if
any, is in place.
Upon the initial log in, the user is now prompted for a new password.
You can also use the graphical User Manager application to create password aging policies, as follows.
Note: you need Administrator privileges to perform this procedure.
1. Click the System menu on the Panel, point to Administration and then click Users and Groups to
display the User Manager. Alternatively, type the command system-config-users at a shell
prompt.
2. Click the Users tab, and select the required user in the list of users.
3. Click Properties on the toolbar to display the User Properties dialog box (or choose Properties
on the File menu).
4. Click the Password Info tab, and select the check box for Enable password expiration.
5. Enter the required value in the Days before change required field, and click OK.
With the pam _faillock module, failed login attempts are stored in a separate file for each user in the
/var/run/faillock directory.
Note
The order of lines in the failed attempt log files is important. Any change in this order can lock all
user accounts, including the root user account when the even_deny_root option is used.
1. To lock out any non-root user after three unsuccessful attempts and unlock that user after 10
minutes, add the following lines to the auth section of the /etc/pam.d/system-auth and
/etc/pam.d/password-auth files (typical lines are shown in the sketch after this procedure):
2. Add the following line to the account section of both files specified in the previous step:
3. To apply account locking for the root user as well, add the even_deny_root option to the
pam_faillock entries in the /etc/pam.d/system-auth and /etc/pam.d/password-auth
files:
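A typical pam_faillock configuration implementing the steps above looks like the following sketch; the module options follow standard pam_faillock usage, with deny=3 and unlock_time=600 corresponding to three attempts and 10 minutes:

auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=600
auth        [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600
account     required      pam_faillock.so

For step 3, append the even_deny_root option to the preauth and authfail lines shown above.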
When user john attempts to log in for the fourth time after failing to log in three times previously, his
account is locked upon the fourth attempt.
To exempt specific users from locking even after multiple failed logins, add the line shown below just
above the first call of pam_faillock in both /etc/pam.d/system-auth and /etc/pam.d/password-auth, and
replace user1, user2, and user3 with the actual user names.
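A commonly used form of that line is shown here as an illustration; the user names are placeholders:
auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3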
To view the number of failed attempts per user, run, as root, the following command:
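The faillock utility, which accompanies pam_faillock, prints the recorded failures:
~]# faillock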
When modifying authentication configuration using the authconfig utility, the system-auth and
password-auth files are overwritten with the settings from the authconfig utility. To use the
configuration files and authconfig simultaneously, you must take additional steps so that your manual
account-locking configuration is not lost.
For more information on various pam_faillock configuration options, see the pam_faillock(8) man
page.
Note
The main advantage of locking the screen instead of logging out is that a lock allows the user's
processes (such as file transfers) to continue running. Logging out would stop these processes.
Users may also need to lock a virtual console. This can be done using a utility called vlock. To install this
utility, execute the following command as root:
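Assuming the utility is packaged under the same name on your system, a yum invocation such as the following installs it:
~]# yum install vlock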
After installation, any console session can be locked using the vlock command without any additional
parameters. This locks the currently active virtual console session while still allowing access to the others.
To prevent access to all virtual consoles on the workstation, execute the following:
vlock -a
In this case, vlock locks the currently active console and the -a option prevents switching to other virtual
consoles.
Important
There are several known issues relevant to the version of vlock currently available for Red Hat
Enterprise Linux 7:
The program does not currently allow unlocking consoles using the root password. Additional
information can be found in BZ #895066.
Locking a console does not clear the screen and scrollback buffer, allowing anyone with
physical access to the workstation to view previously issued commands and any output
displayed in the console. See BZ #807369 for more information.
~]$ ls -l /bin/su
-rwsr-xr-x. 1 root root 34904 Mar 10 2011 /bin/su
Note
The s may be upper case or lower case. If it appears as upper case, it means that the underlying
permission bit has not been set.
For the system administrators of an organization, however, choices must be made as to how much
administrative access users within the organization should have to their machine. Through a PAM module
called pam_console.so, some activities normally reserved only for the root user, such as rebooting and
mounting removable media, are allowed for the first user that logs in at the physical console. However,
other important system administration tasks, such as altering network settings, configuring a new mouse,
or mounting network devices, are not possible without administrative privileges. As a result, system
administrators must decide how much access the users on their network should receive.
The following are four different ways that an administrator can further ensure that root logins are
disallowed:
To prevent users from logging in directly as root, the system administrator can set the root
account's shell to /sbin/nologin in the /etc/passwd file.
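As an illustration, the root shell could be changed with usermod, which edits /etc/passwd on your behalf (a sketch; make sure administrators retain another way to obtain root, such as su or sudo from a wheel account, before doing this):
~]# usermod -s /sbin/nologin root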
Programs that are prevented from accessing the root account by this change (they require a shell):
login, gdm, kdm, xdm, su, ssh, scp, sftp
Programs that are not affected (they do not require a shell):
sudo, FTP clients, Email clients
To further limit access to the root account, administrators can disable root logins at the console
by editing the /etc/securetty file. This file lists all devices the root user is allowed to log into. If
the file does not exist at all, the root user can log in through any communication device on the
system, whether via the console or a raw network interface. This is dangerous, because a user
can log in to their machine as root via Telnet, which transmits the password in plain text over the
network.
By default, Red Hat Enterprise Linux 7's /etc/securetty file only allows the root user to log in
at the console physically attached to the machine. To prevent the root user from logging in,
remove the contents of this file by typing the following command at a shell prompt as root:
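For example, redirecting nothing into the file truncates it:
~]# echo > /etc/securetty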
To enable securetty support in the KDM, GDM, and XDM login managers, add the following line:
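A typical pam_securetty line, shown as an illustration (verify it against the stock form carried in your distribution's /etc/pam.d/login file), is:
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
Add it to the following files: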
/etc/pam.d/gdm
/etc/pam.d/kdm
/etc/pam.d/xdm
Warning
A blank /etc/securetty file does not prevent the root user from logging in remotely
using the OpenSSH suite of tools because the console is not opened until after
authentication.
To prevent root logins via the SSH protocol, edit the SSH daemon's configuration file,
/etc/ssh/sshd_config, and change the line that reads:
#PermitRootLogin yes
to read as follows:
PermitRootLogin no
This change prevents root access through the following OpenSSH programs:
ssh
scp
sftp
The following is an example of how the pam_listfile module is used for the vsftpd FTP server in the
/etc/pam.d/vsftpd PAM configuration file (the \ character at the end of the first line is not
necessary if the directive is on a single line):
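A minimal sketch of such an entry, assuming the deny list is kept in /etc/vsftpd.ftpusers as the following paragraph describes:
auth   required   pam_listfile.so item=user \
 sense=deny file=/etc/vsftpd.ftpusers onerr=succeed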
This instructs PAM to consult the /etc/vsftpd.ftpusers file and deny access to the service
for any listed user. The administrator can change the name of this file, and can keep separate
lists for each service or use one central list to deny access to multiple services.
If the administrator wants to deny access to multiple services, a similar line can be added to the
PAM configuration files, such as /etc/pam.d/pop and /etc/pam.d/imap for mail clients, or
/etc/pam.d/ssh for SSH clients.
For more information about PAM, see The Linux-PAM System Administrator's Guide, located in the
/usr/share/doc/pam-<version>/html/ directory.
This method prevents root access to the following PAM-aware programs and services:
login
gdm
kdm
xdm
ssh
scp
sftp
FTP clients
Email clients
Any PAM aware services
On the other hand, giving root access to individual users can lead to the following issues:
Machine Misconfiguration — Users with root access can misconfigure their machines and require
assistance to resolve issues. Even worse, they might open up security holes without knowing it.
Running Insecure Services — Users with root access might run insecure servers on their machine,
such as FTP or Telnet, potentially putting usernames and passwords at risk. These services transmit
this information over the network in plain text.
Running Email Attachments As Root — Although rare, email viruses that affect Linux do exist. The only
time they are a threat, however, is when they are run by the root user.
Keeping the audit trail intact — Because the root account is often shared by multiple users, so that
multiple system administrators can maintain the system, it is impossible to figure out which of those
users was root at a given time. When using separate logins, the account a user logs in with, as well as
a unique number for session tracking purposes, is put into the task structure, which is inherited by
every process that the user starts. When using concurrent logins, the unique number can be used to
trace actions to specific logins. When an action generates an audit event, it is recorded with the login
account and the session associated with that unique number. Use the aulast command to view these
logins and sessions. The --proof option of the aulast command can be used to suggest a specific
ausearch query to isolate auditable events generated by a particular session. For more information
about the Audit system, see Chapter 5, System Auditing.
1. Make sure the screen package is installed. You can do so by running the following command as
root:
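A yum command such as the following installs it:
~]# yum install screen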
For more information on how to install packages in Red Hat Enterprise Linux 7, see the Red Hat
Enterprise Linux 7 System Administrator's Guide.
2. As root, add the following line at the beginning of the /etc/profile file to make sure the
processing of this file cannot be interrupted:
trap "" 1 2 3 15
3. Add the following lines at the end of the /etc/profile file to start a screen session each time a
user logs in to a virtual console or remotely:
SCREENEXEC="screen"
if [ -w $(tty) ]; then
trap "exec $SCREENEXEC" 1 2 3 15
echo -n 'Starting session in 10 seconds'
sleep 10
exec $SCREENEXEC
fi
Note that each time a new session starts, a message will be displayed and the user will have to wait
ten seconds. To adjust the time to wait before starting a session, change the value after the sleep
command.
4. Add the following lines to the /etc/screenrc configuration file to close the screen session after
a given period of inactivity:
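Illustrative directives for this purpose (assuming a 120-second timeout, as the next sentence describes) are:
idle 120 quit
autodetach off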
This will set the time limit to 120 seconds. To adjust this limit, change the value after the idle
directive.
Alternatively, you can configure the system to only lock the session by using the following lines
instead:
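For example, replacing the quit action with screen's built-in lockscreen command keeps the session but locks it:
idle 120 lockscreen
autodetach off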
The changes take effect the next time a user logs in to the system.
1. Preventing Access to Single User Mode — If attackers can boot the system into single user mode,
they are logged in automatically as root without being prompted for the root password.
Warning
Protecting access to single user mode with a password by editing the SINGLE parameter in
the /etc/sysconfig/init file is not recommended. An attacker can bypass the password
by specifying a custom initial command (using the init= parameter) on the kernel command
line in GRUB 2. It is recommended to password-protect the GRUB 2 boot loader, as
described in the Red Hat Enterprise Linux 7 System Administrator's Guide.
2. Preventing Access to the GRUB 2 Console — If the machine uses GRUB 2 as its boot loader, an
attacker can use the GRUB 2 editor interface to change its configuration or to gather information
using the cat command.
Red Hat Enterprise Linux 7 ships with the GRUB 2 boot loader on the Intel 64 and AMD64 platform. For a
detailed look at GRUB 2, see the Red Hat Enterprise Linux 7 System Administrator's Guide.
Pressing the I key at the beginning of the boot sequence allows you to start up your system interactively.
During an interactive startup, the system prompts you to start up each service one by one. However, this
may allow an attacker who gains physical access to your system to disable the security-related services
and gain access to the system.
To prevent users from starting up the system interactively, as root, disable the PROMPT parameter in the
/etc/sysconfig/init file:
PROMPT=no
Many services under Red Hat Enterprise Linux 7 are network servers. If a network service is running on a
machine, then a server application (called a daemon), is listening for connections on one or more network
ports. Each of these servers should be treated as a potential avenue of attack.
Denial of Service Attacks (DoS) — By flooding a service with requests, a denial of service attack can
render a system unusable as it tries to log and answer each request.
Distributed Denial of Service Attack (DDoS) — A type of DoS attack which uses multiple compromised
machines (often numbering in the thousands or more) to direct a coordinated attack on a service,
flooding it with requests and making it unusable.
Script Vulnerability Attacks — If a server is using scripts to execute server-side actions, as Web
servers commonly do, an attacker can target improperly written scripts. These script vulnerability
attacks can lead to a buffer overflow condition or allow the attacker to alter files on the system.
Buffer Overflow Attacks — Services that connect to privileged ports, ports under 1023, must run as an
administrative user. If the application has an exploitable buffer overflow, an attacker could gain access
to the system as the user running the daemon. Because exploitable buffer overflows exist, crackers
use automated tools to identify systems with vulnerabilities, and once they have gained access, they
use automated rootkits to maintain their access to the system.
Note
ExecShield also includes support for No eXecute (NX) technology on AMD64 platforms and eXecute
Disable (XD) technology on Itanium and Intel® 64 systems. These technologies work in conjunction
with ExecShield to prevent malicious code from running in the executable portion of virtual memory
with a granularity of 4 KB of executable code, lowering the risk of attack from buffer overflow
exploits.
Important
T o limit exposure to attacks over the network, all services that are unused should be turned off.
xinetd — A super server that controls connections to a range of subordinate servers, such as
gssftp and telnet.
When determining whether to leave these services running, it is best to use common sense and avoid
taking any risks. For example, if a printer is not available, do not leave cups running. The same is true for
portreserve. If you do not mount NFSv3 volumes or use NIS (the ypbind service), then rpcbind
should be disabled. Checking which network services are available to start at boot time is not sufficient. It
is recommended to also check which ports are open and listening. Refer to Section 4.4.2, “Verifying Which
Ports Are Listening” for more information.
Some network protocols are inherently more insecure than others. These include any services that:
Transmit Usernames and Passwords Over a Network Unencrypted — Many older protocols, such as
Telnet and FTP, do not encrypt the authentication session and should be avoided whenever possible.
Transmit Sensitive Data Over a Network Unencrypted — Many protocols transmit data over the network
unencrypted. These protocols include Telnet, FTP, HTTP, and SMTP. Many network file systems, such
as NFS and SMB, also transmit information over the network unencrypted. It is the user's responsibility
when using these protocols to limit what type of data is transmitted.
Examples of inherently insecure services include rlogin, rsh, telnet, and vsftpd.
All remote login and shell programs (rlogin, rsh, and telnet) should be avoided in favor of SSH. See
Section 4.3.10, “Securing SSH” for more information about sshd.
FTP is not as inherently dangerous to the security of the system as remote shells, but FTP servers must
be carefully configured and monitored to avoid problems. See Section 4.3.8, “Securing FTP” for more
information about securing FTP servers.
auth
nfs-server
yppasswdd
ypserv
ypxfrd
More information on securing network services is available in Section 4.4, “Securing Network Access”.
Note
Securing rpcbind only affects NFSv2 and NFSv3 implementations, since NFSv4 no longer
requires it. If you plan to implement an NFSv2 or NFSv3 server, then rpcbind is required, and the
following section applies.
It is important to use TCP Wrappers to limit which networks or hosts have access to the rpcbind service
since it has no built-in form of authentication.
Further, use only IP addresses when limiting access to the service. Avoid using hostnames, as they can
be forged by DNS poisoning and other methods.
T o further restrict access to the rpcbind service, it is a good idea to add firewalld rules to the server
and restrict access to specific networks.
Below are two example firewalld rich language commands. The first allows TCP connections to port
111 (used by the rpcbind service) from the 192.168.0.0/24 network. The second allows TCP
connections to the same port from the localhost. All other packets are dropped.
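A sketch of such rules, assuming the interface is in a zone that does not otherwise open port 111 (so that traffic not matched by these accept rules is dropped by the zone's default policy):
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" port port="111" protocol="tcp" accept'
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="127.0.0.1" port port="111" protocol="tcp" accept'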
Note
Add --permanent to the firewalld rich language commands to make the settings permanent. See
Section 4.5, “Using Firewalls” for more information about implementing firewalls.
/usr/sbin/rpc.yppasswdd — Also called the yppasswdd service, this daemon allows users to
change their NIS passwords.
/usr/sbin/rpc.ypxfrd — Also called the ypxfrd service, this daemon is responsible for NIS map
transfers over the network.
NIS is somewhat insecure by today's standards. It has no host authentication mechanisms and transmits
all of its information over the network unencrypted, including password hashes. As a result, extreme care
must be taken when setting up a network that uses NIS. This is further complicated by the fact that the
default configuration of NIS is inherently insecure.
It is recommended that anyone planning to implement a NIS server first secure the rpcbind service as
outlined in Section 4.3.4, “Securing rpcbind”, then address the following issues, such as network planning.
Because NIS transmits sensitive information unencrypted over the network, it is important the service be
run behind a firewall and on a segmented and secure network. Whenever NIS information is transmitted
over an insecure network, it risks being intercepted. Careful network design can help prevent severe
security breaches.
Any machine within a NIS domain can use commands to extract information from the server without
authentication, as long as the user knows the NIS server's DNS hostname and NIS domain name.
For instance, if someone either connects a laptop computer into the network or breaks into the network
from outside (and manages to spoof an internal IP address), the following command reveals the
/etc/passwd map:
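An illustrative command of this kind, with the NIS domain and host names as placeholders, is:
ypcat -d <NIS_domain> -h <DNS_hostname> passwd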
If this attacker is a root user, they can obtain the /etc/shadow file by typing the following command:
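Again with placeholder names, the equivalent query for the shadow map would be:
ypcat -d <NIS_domain> -h <DNS_hostname> shadow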
Note
If Kerberos is used, the /etc/shadow file is not stored within a NIS map.
To make access to NIS maps harder for an attacker, create a random string for the DNS hostname, such
as o7hfawtgmhwg.domain.com. Similarly, create a different randomized NIS domain name. This makes
it much more difficult for an attacker to access the NIS server.
If the /var/yp/securenets file is blank or does not exist (as is the case after a default installation), NIS
listens to all networks. One of the first things to do is to put netmask/network pairs in the file so that
ypserv only responds to requests from the appropriate network.
255.255.255.0 192.168.0.0
Warning
Never start a NIS server for the first time without creating the /var/yp/securenets file.
This technique does not provide protection from an IP spoofing attack, but it does at least place limits on
what networks the NIS server services.
All of the servers related to NIS can be assigned specific ports except for rpc.yppasswdd — the daemon
that allows users to change their login passwords. Assigning ports to the other two NIS server daemons,
rpc.ypxfrd and ypserv, allows for the creation of firewall rules to further protect the NIS server
daemons from intruders. To do this, add the following lines to the /etc/sysconfig/network file:
YPSERV_ARGS="-p 834"
YPXFRD_ARGS="-p 835"
The following firewalld rich language rules can then be used to enforce which network the server listens to
for these ports:
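A sketch of such rules, matching the description that follows (traffic to ports 834 and 835 that does not come from 192.168.0.0/24 is dropped; the first rule covers TCP, the second UDP):
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="tcp" drop'
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="udp" drop'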
This means that the server only allows connections to ports 834 and 835 if the requests come from the
192.168.0.0/24 network. The first rule is for TCP and the second for UDP.
Note
See Section 4.5, “Using Firewalls” for more information about implementing firewalls with iptables
commands.
One of the issues to consider when NIS is used for authentication is that whenever a user logs into a
machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains
access to a NIS domain and sniffs network traffic, they can collect usernames and password hashes. With
enough time, a password cracking program can guess weak passwords, and an attacker can gain access
to a valid account on the network.
Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network,
making the system far more secure. See the Linux Domain Identity, Authentication, and Policy Guide for
more information about Kerberos.
Important
NFS traffic can be sent using TCP in all versions; TCP should be used with NFSv3, rather than UDP,
and it is required when using NFSv4. All versions of NFS support Kerberos user and group
authentication, as part of the RPCSEC_GSS kernel module. Information on rpcbind is still included,
since Red Hat Enterprise Linux 7 supports NFSv3, which utilizes rpcbind.
NFSv2 and NFSv3 traditionally passed data insecurely. All versions of NFS now have the ability to
authenticate (and optionally encrypt) ordinary file system operations using Kerberos. Under NFSv4 all
operations can use Kerberos; under v2 or v3, file locking and mounting still do not use it. When using
NFSv4.0, delegations may be turned off if the clients are behind NAT or a firewall. For information on the
use of NFSv4.1 to allow delegations to operate through NAT and firewalls, see the Red Hat
Enterprise Linux 7 Storage Administration Guide.
The use of the mount command in the /etc/fstab file is explained in the Red Hat Enterprise Linux 7
Storage Administration Guide. From a security administration point of view it is worthwhile to note that the
NFS mount options can also be specified in /etc/nfsmount.conf, which can be used to set custom
default options.
Warning
Only export entire file systems. Exporting a subdirectory of a file system can be a security issue. It
is possible in some cases for a client to "break out" of the exported part of the file system and get
to unexported parts (see the section on subtree checking in the exports(5) man page).
Use the ro option to export the file system as read-only whenever possible to reduce the number of users
able to write to the mounted file system. Only use the rw option when specifically required. See the
exports(5) man page for more information. Allowing write access increases the risk from, for example,
symlink attacks. This includes temporary directories such as /tmp and /usr/tmp.
Where directories must be mounted with the rw option, avoid making them world-writable whenever
possible to reduce risk. Exporting home directories is also viewed as a risk, as some applications store
passwords in clear text or weakly encrypted. This risk is being reduced as application code is reviewed
and improved. Some users do not set passwords on their SSH keys, so home directories, too,
present a risk. Enforcing the use of passwords or using Kerberos would mitigate that risk.
Restrict exports only to clients that need access. Use the showmount -e command on an NFS server to
review what the server is exporting. Do not export anything that is not specifically required.
Do not use the no_root_squash option and review existing installations to make sure it is not used. See
Section 4.3.6.4, “Do Not Use the no_root_squash Option” for more information.
T he secure option is the server-side export option used to restrict exports to “reserved” ports. By default,
the server allows client communication only from “reserved” ports (ports numbered less than 1024),
because traditionally clients have only allowed “trusted” code (such as in-kernel NFS clients) to use those
ports. However, on many networks it is not difficult for anyone to become root on some client, so it is rarely
safe for the server to assume that communication from a reserved port is privileged. Therefore the
restriction to reserved ports is of limited value; it is better to rely on Kerberos, firewalls, and restriction of
exports to particular clients.
Most clients still do use reserved ports when possible. However, reserved ports are a limited resource, so
clients (especially those with a large number of NFS mounts) may choose to use higher-numbered ports
as well. Linux clients may do this using the “noresvport” mount option. If you wish to allow this on an
export, you may do so with the “insecure” export option.
It is good practice not to allow users to log in to a server. While reviewing the above settings on an NFS
server, conduct a review of who and what can access the server.
Use the nosuid option to disallow the use of a setuid program. The nosuid option disables the set-
user-identifier or set-group-identifier bits. This prevents remote users from gaining higher
privileges by running a setuid program. Use this option on the client and the server side.
The noexec option disables all executable files on the client. Use this to prevent users from inadvertently
executing files placed in the file system being shared. The nosuid and noexec options are standard
options for most, if not all, file systems.
Use the nodev option to prevent “device-files” from being processed as a hardware device by the client.
The resvport option is a client-side mount option and secure is the corresponding server-side export
option (see explanation above). It restricts communication to a "reserved port". The reserved or "well
known" ports are reserved for privileged users and processes such as the root user. Setting this option
causes the client to use a reserved source port to communicate with the server.
All versions of NFS now support mounting with Kerberos authentication. The mount option to enable this
is: sec=krb5.
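For example, a hypothetical export could be mounted with Kerberos authentication as follows (the server and mount point names are placeholders):
~]# mount -o sec=krb5 server.example.com:/export /mnt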
NFSv4 supports mounting with Kerberos using krb5i for integrity and krb5p for privacy protection.
These are used when mounting with sec=krb5, but need to be configured on the NFS server. See the
man page on exports (man 5 exports) for more information.
The NFS man page (man 5 nfs) has a “SECURITY CONSIDERATIONS” section which explains the
security enhancements in NFSv4 and contains all the NFS specific mount options.
The NFS server determines which file systems to export and which hosts to export these directories to by
consulting the /etc/exports file. Be careful not to add extraneous spaces when editing this file.
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host
bob.example.com with read/write permissions.
/tmp/nfs/ bob.example.com(rw)
The following line in the /etc/exports file, on the other hand, shares the same directory to the host
bob.example.com with read-only permissions and shares it to the world with read/write permissions
due to a single space character after the hostname.
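The problematic entry looks like this; note the space between the hostname and the options:
/tmp/nfs/ bob.example.com (rw)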
It is good practice to check any configured NFS shares by using the showmount command to verify what
is being shared:
showmount -e <hostname>
By default, NFS shares change the root user to the nfsnobody user, an unprivileged user account. This
changes the owner of all root-created files to nfsnobody, which prevents uploading of programs with the
setuid bit set.
If no_root_squash is used, remote root users are able to change any file on the shared file system and
leave applications infected by Trojans for other users to inadvertently execute.
NFSv4 is the default version of NFS for Red Hat Enterprise Linux 7 and it only requires port 2049 to be
open for TCP. If using NFSv3 then four additional ports are required as explained below.
The ports used for NFS are assigned dynamically by rpcbind, which can cause problems when creating
firewall rules. To simplify this process, use the /etc/sysconfig/nfs file to specify which ports are to be used:
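An illustrative set of assignments follows; the variable names match those shipped, commented out, in the stock /etc/sysconfig/nfs file, and the port numbers are examples that can be replaced by any unused ports:
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769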
Port numbers specified must not be used by any other service. Configure your firewall to allow the port
numbers specified, as well as TCP and UDP port 2049 (NFS).
Run the rpcinfo -p command on the NFS server to see which ports and RPC programs are being used.
Always verify that any scripts running on the system work as intended before putting them into production.
Also, ensure that only the root user has write permissions to any directory containing scripts or CGIs. To
do this, run the following commands as root:
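For example, where directory_name is a placeholder for the directory in question:
chown root directory_name
chmod 755 directory_name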
System administrators should be careful when using the following configuration options (configured in
/etc/httpd/conf/httpd.conf):
FollowSymLinks
This directive is enabled by default, so be sure to use caution when creating symbolic links to the
document root of the Web server. For instance, it is a bad idea to provide a symbolic link to /.
Indexes
This directive is enabled by default, but may not be desirable. To prevent visitors from browsing
files on the server, remove this directive.
UserDir
The UserDir directive is disabled by default because it can confirm the presence of a user
account on the system. To enable user directory browsing on the server, use the following
directives:
UserDir enabled
UserDir disabled root
These directives activate user directory browsing for all user directories other than /root/. To
add users to the list of disabled accounts, add a space-delimited list of users on the UserDir
disabled line.
ServerTokens
The ServerTokens directive controls the server response header field which is sent back to
clients. It includes various information which can be customized using the following parameters:
ServerTokens Full (default option) — provides all available information (OS type and
used modules), for example:
Apache/2.0.41 (Unix) PHP/4.2.2 MyMod/1.2
ServerTokens Prod or ServerTokens ProductOnly — provides the following information:
Apache
ServerTokens Major — provides the following information:
Apache/2
ServerTokens Minor — provides the following information:
Apache/2.0
ServerTokens Min or ServerTokens Minimal — provides the following information:
Apache/2.0.41
ServerTokens OS — provides the following information:
Apache/2.0.41 (Unix)
It is recommended to use the ServerTokens Prod option so that a possible attacker does not
gain any valuable information about your system.
Important
Do not remove the IncludesNoExec directive. By default, the Server-Side Includes (SSI) module
cannot execute commands. It is recommended that you do not change this setting unless absolutely
necessary, as it could, potentially, enable an attacker to execute commands on the system.
In certain scenarios, it is beneficial to remove certain httpd modules to limit the functionality of the HTTP
Server. To do so, simply comment out the entire line which loads the module you wish to remove in the
/etc/httpd/conf/httpd.conf file. For example, to remove the proxy module, comment out the
following line by prepending it with a hash sign:
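The loading directive, as it typically appears, would then read:
#LoadModule proxy_module modules/mod_proxy.so
(On Red Hat Enterprise Linux 7, module-loading lines of this kind may also be found in files under /etc/httpd/conf.modules.d/.)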
Note that the /etc/httpd/conf.d/ directory contains configuration files which are used to load
modules as well.
For information, see the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide.
Red Hat Content Accelerator (tux) — A kernel-space Web server with FTP capabilities.
Before submitting a username and password, all users are presented with a greeting banner. By default,
this banner includes version information useful to crackers trying to identify weaknesses in a system.
To change the greeting banner for vsftpd, add the following directive to the
/etc/vsftpd/vsftpd.conf file:
ftpd_banner=<insert_greeting_here>
Replace <insert_greeting_here> in the above directive with the text of the greeting message.
For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all
banners in a new directory called /etc/banners/. The banner file for FTP connections in this example is
/etc/banners/ftp.msg. Below is an example of what such a file may look like:
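An illustrative single-line banner (the host name is a placeholder):
######### Hello, all activity on ftp.example.com is monitored. #########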
Note
It is not necessary to begin each line of the file with 220 as specified in Section 4.4.1, “Securing
Services With TCP Wrappers and xinetd”.
To reference this greeting banner file for vsftpd, add the following directive to the
/etc/vsftpd/vsftpd.conf file:
banner_file=/etc/banners/ftp.msg
It is also possible to send additional banners to incoming connections using TCP Wrappers as described
in Section 4.4.1.1, “TCP Wrappers and Connection Banners”.
The easiest way to create this directory is to install the vsftpd package. This package establishes a
directory tree for anonymous users and configures the permissions on directories to read-only for
anonymous users.
Warning
Administrators who allow anonymous users to read and write in directories often find that their
servers become a repository of stolen software.
To allow anonymous users to upload files, it is recommended that a write-only directory be created within
/var/ftp/pub/. To do this, run the following command as root:
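An upload directory could be created, for example, as follows (the directory name is the one conventionally used):
~]# mkdir /var/ftp/pub/upload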
Next, change the permissions so that anonymous users cannot view the contents of the directory:
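A mode of 730, for example, gives the ftp group write-only access while keeping the contents unreadable to anonymous users:
~]# chmod 730 /var/ftp/pub/upload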
Additionally, under vsftpd, add the following line to the /etc/vsftpd/vsftpd.conf file:
anon_upload_enable=YES
Because FTP transmits unencrypted usernames and passwords over insecure networks for
authentication, it is a good idea to deny system users access to the server from their user accounts.
To disable all user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:
local_enable=NO
To disable FTP access for specific accounts or specific groups of accounts, such as the root user and
those with sudo privileges, the easiest way is to use a PAM list file as described in Section 4.2.1,
“Disallowing Root Access”. The PAM configuration file for vsftpd is /etc/pam.d/vsftpd.
Use TCP Wrappers to control access to either FTP daemon as outlined in Section 4.4.1, “Securing
Services With TCP Wrappers and xinetd”.
Postfix is a Mail Transfer Agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver
electronic messages between other MTAs and to email clients or delivery agents. Although many MTAs
are capable of encrypting traffic between one another, most do not, so sending email over any public
networks is considered an inherently insecure form of communication. Postfix replaces Sendmail as the
default MTA in Red Hat Enterprise Linux 7.
It is recommended that anyone planning to implement a Postfix server address the following issues.
Because of the nature of email, a determined attacker can flood the server with mail fairly easily and cause
a denial of service. The effectiveness of such attacks can be limited by setting limits on the directives in the
/etc/postfix/main.cf file. You can change the value of the directives which are already there or you
can add the directives you need with the value you want in the following format:
<directive> = <value>
The following is a list of directives that can be used for limiting a denial of service attack:
anvil_rate_time_unit — This time unit is used for rate limit calculations. The default value is 60
seconds.
smtpd_client_event_limit_exceptions — Clients that are excluded from the connection and
rate limit commands. By default, clients in trusted networks are excluded.
queue_minfree — The minimum amount of free space in bytes in the queue file system that is
needed to receive mail. This is currently used by the Postfix SMTP server to decide if it will accept any
mail at all. By default, the Postfix SMTP server rejects MAIL FROM commands when the amount of free
space is less than 1.5 times the message_size_limit. To specify a higher minimum free space limit,
specify a queue_minfree value that is at least 1.5 times the message_size_limit. By default the
queue_minfree value is 0.
Never put the mail spool directory, /var/spool/postfix/, on an NFS shared volume. Because NFSv2
and NFSv3 do not maintain control over user and group IDs, two or more users can have the same UID,
and receive and read each other's mail.
Note
With NFSv4 using Kerberos, this is not the case, since the RPCSEC_GSS kernel module does not
utilize UID-based authentication. However, it is still considered good practice not to put the mail
spool directory on NFS shared volumes.
T o help prevent local user exploits on the Postfix server, it is best for mail users to only access the Postfix
server using an email program. Shell accounts on the mail server should not be allowed and all user shells
in the /etc/passwd file should be set to /sbin/nologin (with the possible exception of the root user).
By default, Postfix is set up to only listen to the local loopback address. You can verify this by viewing the
file /etc/postfix/main.cf.
View the file /etc/postfix/main.cf to ensure that only the following inet_interfaces line appears:
inet_interfaces = localhost
This ensures that Postfix only accepts mail messages (such as cron job reports) from the local system
and not from the network. This is the default setting and protects Postfix from a network attack.
To remove the localhost restriction and allow Postfix to listen on all interfaces, use the
inet_interfaces = all setting.
Important
This section draws attention to the most common ways of securing an SSH setup. By no means
should this list of suggested measures be considered exhaustive or definitive. See
sshd_config(5) for a description of all configuration directives available for modifying the
behavior of the sshd daemon and to ssh(1) for an explanation of basic SSH concepts.
SSH supports the use of cryptographic keys for logging in to computers. This is much more secure than
using only a password. If you combine this method with other authentication methods, it can be considered
a multi-factor authentication. See Section 4.3.10.2, “Multiple Authentication Methods” for more information
about using multiple authentication methods.
In order to enable the use of cryptographic keys for authentication, the PubkeyAuthentication
configuration directive in the /etc/ssh/sshd_config file needs to be set to yes. Note that this is the
default setting. Set the PasswordAuthentication directive to no to disable the possibility of using
passwords for logging in.
SSH keys can be generated using the ssh-keygen command. If invoked without additional arguments, it
creates a 2048-bit RSA key set. The keys are stored, by default, in the ~/.ssh directory. You can utilize
the -b switch to modify the bit-strength of the key. Using 2048-bit keys is normally sufficient. The Red Hat
Enterprise Linux 7 System Administrator's Guide includes detailed information about generating key pairs.
You should see the two keys in your ~/.ssh directory. If you accepted the defaults when running the ssh-
keygen command, then the generated files are named id_rsa and id_rsa.pub and contain the private
and public key respectively. You should always protect the private key from exposure by making it
unreadable by anyone else but the file's owner. The public key, however, needs to be transferred to the
system you are going to log in to. You can use the ssh-copy-id command to transfer the key to the
server:
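An illustrative invocation, with the remote user and host as placeholders:
~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@server.example.com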
This command will also automatically append the public key to the ~/.ssh/authorized_keys file on the
server. The sshd daemon will check this file when you attempt to log in to the server.
Similarly to passwords and any other authentication mechanism, you should change your SSH keys
regularly. When you do, make sure you remove any unused keys from the authorized_keys file.
Using multiple authentication methods, or multi-factor authentication, increases the level of protection
against unauthorized access, and as such should be considered when hardening a system to prevent it
from being compromised. Users attempting to log in to a system that uses multi-factor authentication must
successfully complete all specified authentication methods in order to be granted access.
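The AuthenticationMethods directive in /etc/ssh/sshd_config lists the acceptable combinations; a sketch matching the description in the next paragraph would be:
AuthenticationMethods publickey,gssapi-with-mic publickey,keyboard-interactive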
An sshd daemon configured using the above AuthenticationMethods directive only grants access if
the user attempting to log in successfully completes either publickey authentication followed by
gssapi-with-mic or by keyboard-interactive authentication. Note that each of the requested
authentication methods needs to be explicitly enabled using a corresponding configuration directive (such
as PubkeyAuthentication) in the /etc/ssh/sshd_config file. See the AUTHENTICATION section
of ssh(1) for a general list of available authentication methods.
Protocol Version
Even though the implementation of the SSH protocol supplied with Red Hat Enterprise Linux 7 supports
both the SSH-1 and SSH-2 versions of the protocol, only the latter should be used whenever possible. The
SSH-2 version contains a number of improvements over the older SSH-1, and the majority of advanced
configuration options are only available when using SSH-2.
Users are encouraged to make use of SSH-2 in order to maximize the extent to which the SSH protocol
protects the authentication and communication for which it is used. The version or versions of the protocol
supported by the sshd daemon can be specified using the Protocol configuration directive in the
/etc/ssh/sshd_config file. The default setting is 2.
Key Types
While the ssh-keygen command generates a pair of SSH-2 RSA keys by default, using the -t option, it
can be instructed to generate DSA or ECDSA keys as well. The ECDSA (Elliptic Curve Digital Signature
Algorithm) offers better performance at an equivalent symmetric key strength. It also generates
shorter keys.
Non-Default Port
By default, the sshd daemon listens on network port 22. Changing the port reduces the exposure of
the system to attacks based on automated network scanning, thus increasing security through obscurity.
The port can be specified using the Port directive in the /etc/ssh/sshd_config configuration file.
Note also that the default SELinux policy must be changed to allow for the use of a non-default port. You
can do this by modifying the ssh_port_t SELinux type by typing the following command as root:
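For example, where port_number is the placeholder described in the next sentence:
~]# semanage port -a -t ssh_port_t -p tcp port_number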
In the above command, replace port_number with the new port number specified using the Port directive.
No Root Login
Provided that your particular use case does not require the possibility of logging in as the root user, you
should consider setting the PermitRootLogin configuration directive to no in the
/etc/ssh/sshd_config file. By disabling the possibility of logging in as the root user, the
administrator can audit which users run which privileged commands after they log in as regular users and
then gain root rights.
Displaying a suitable banner when users connect to a service is a good way to let potential attackers know
that the system administrator is being vigilant. You can also control what information about the system is
presented to users. To implement a TCP Wrappers banner for a service, use the banner option.
This example implements a banner for vsftpd. To begin, create a banner file. It can be anywhere on the
system, but it must have the same name as the daemon. For this example, the file is called
/etc/banners/vsftpd and contains the following lines:
220-Hello, %c
220-All activity on ftp.example.com is logged.
220-Inappropriate use will result in your access privileges being removed.
The %c token supplies a variety of client information, such as the username and hostname, or the
username and IP address to make the connection even more intimidating.
For this banner to be displayed to incoming connections, add the following line to the /etc/hosts.allow
file:
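A line of the following form accomplishes this (an illustration assuming the banner files live in /etc/banners/, as above):
vsftpd : ALL : banners /etc/banners/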
If a particular host or network has been detected attacking the server, TCP Wrappers can be used to warn
the administrator of subsequent attacks from that host or network using the spawn directive.
In this example, assume that a cracker from the 206.182.68.0/24 network has been detected attempting to
attack the server. Place the following line in the /etc/hosts.deny file to deny any connection attempts
from that network, and to log the attempts to a special file:
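An illustrative entry (the log file path is an example):
ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert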
The %d token supplies the name of the service that the attacker was trying to access.
To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file.
Note
Because the spawn directive executes any shell command, it is a good idea to create a special
script to notify the administrator or execute a chain of commands in the event that a particular client
attempts to connect to the server.
If certain types of connections are of more concern than others, the log level can be elevated for that
service using the severity option.
For this example, assume that anyone attempting to connect to port 23 (the Telnet port) on an FTP server
is a cracker. To denote this, place an emerg flag in the log files instead of the default flag, info, and
deny the connection.
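An /etc/hosts.deny entry sketching this idea could look as follows:
in.telnetd : ALL : severity emerg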
This uses the default authpriv logging facility, but elevates the priority from the default value of info to
emerg, which posts log messages directly to the console.
Issue the following command, as root, from the console to determine which ports are listening for
connections from the network:
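One possible invocation of the ss utility for this purpose (the options can be adjusted to taste; this one lists listening TCP and UDP sockets numerically, together with the owning processes):
~]# ss -tulnp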
Note that at time of writing the -l option does not list SCT P servers.
Review the output of the command against the services needed on the system, turn off what is not specifically
required or authorized, and repeat the check. Proceed then to make external checks using nmap from another
system connected via the network to the first system. This can be used to verify the rules in iptables. Make
a scan for every IP address shown in the ss output (except for localhost 127.0.0.0 or ::1 range) from an
external system. Use the -6 option for scanning an IPv6 address. See man nmap(1) for more
information.
The following is an example of the command to be issued from the console of another system to determine
which ports are listening for TCP connections from the network:
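For example, with the target IP address as a placeholder (-sT performs a TCP connect scan and -O attempts OS detection, which requires root privileges):
~]# nmap -sT -O 192.168.122.1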
See the man pages for ss, nmap, and services for more information.
The accept_source_route option causes network interfaces to accept packets with the Strict Source
Route (SSR) or Loose Source Routing (LSR) option set. The acceptance of source routed packets is
controlled by sysctl settings. Issue the following command as root to drop packets with the SSR or LSR
option set:
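For IPv4, the relevant kernel setting can be cleared as follows (an IPv6 counterpart, net.ipv6.conf.all.accept_source_route, exists as well):
~]# /sbin/sysctl -w net.ipv4.conf.all.accept_source_route=0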
Disabling the forwarding of packets should also be done in conjunction with the above when possible
(disabling forwarding may interfere with virtualization). Issue the commands listed below as root:
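Illustrative commands using the standard per-interface forwarding switches:
~]# /sbin/sysctl -w net.ipv4.conf.all.forwarding=0
~]# /sbin/sysctl -w net.ipv6.conf.all.forwarding=0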
These commands disable forwarding of IPv4 and IPv6 packets on all interfaces.
Accepting ICMP redirects has few legitimate uses. Disable the acceptance and sending of ICMP redirected
packets unless specifically required.
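For example, using the standard kernel settings:
~]# /sbin/sysctl -w net.ipv4.conf.all.accept_redirects=0
~]# /sbin/sysctl -w net.ipv6.conf.all.accept_redirects=0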
These commands disable acceptance of all ICMP redirected packets on all interfaces.
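For example:
~]# /sbin/sysctl -w net.ipv4.conf.all.secure_redirects=0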
This command disables acceptance of secure ICMP redirected packets on all interfaces.
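A sketch of the corresponding setting for outbound redirects (the send_redirects switch exists for IPv4 only, as the sentences below explain):
~]# /sbin/sysctl -w net.ipv4.conf.all.send_redirects=0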
This command disables sending of all IPv4 ICMP redirected packets on all interfaces.
There is only a directive to disable sending of IPv4 redirected packets. See RFC4294 for an explanation of
“IPv6 Node Requirements” which resulted in this difference between IPv4 and IPv6.
See the sysctl man page, sysctl(8), for more information. See RFC791 for an explanation of the Internet
options related to source based routing and its variants.
Warning
Ethernet networks provide additional ways to redirect traffic, such as ARP or MAC address
spoofing, unauthorized DHCP servers, and IPv6 router or neighbor advertisements. In addition,
unicast traffic is occasionally broadcast, causing information leaks. These weaknesses can only be
addressed by specific countermeasures implemented by the network operator. Host-based
countermeasures are not fully effective.
Note
Red Hat Enterprise Linux 7 defaults to using Strict Reverse Path filtering following the “Strict
Reverse Path” recommendation from RFC 3704, Ingress Filtering for Multihomed Networks. This
currently only applies to IPv4.
Warning
If forwarding is enabled, then Reverse Path Filtering should only be disabled if there are other
means for source address validation (such as iptables rules for example).
rp_filter
Reverse Path Filter is enabled by means of the rp_filter directive. The rp_filter option is
used to direct the kernel to select from one of three modes.
0 — No source validation.
1 — Strict mode as defined in RFC 3704. Each incoming packet is tested against the routing table and is dropped if the interface it arrived on is not the best reverse path.
2 — Loose mode as defined in RFC 3704. The source address of each incoming packet is tested against the routing table and the packet is dropped only if the source is not reachable via any interface.
The following are resources which explain more about Reverse Path Filtering.
Useful Websites
To use the graphical firewall-config tool, press the Super key to enter the Activities Overview, type
firewall and then press Enter. The firewall-config tool appears. You will be prompted for an
administrator password.
The firewall-config tool has a drop-down selection menu labeled Configuration. This enables
selecting between Runtime and Permanent mode. Notice that if you select Permanent, an additional
row of icons will appear in the left hand corner. These icons only appear in permanent configuration mode
because a service's parameters cannot be changed in runtime mode.
The firewall service provided by firewalld is dynamic rather than static because changes to the
configuration can be made at any time and are immediately implemented; there is no need to save or apply
the changes. No unintended disruption of existing network connections occurs as no part of the firewall
has to be reloaded.
A command line client, firewall-cmd, is provided. It can be used to make permanent and non-permanent
run-time changes as explained in man firewall-cmd(1). Permanent changes need to be made as
explained in the firewalld(1) man page. Note that the firewall-cmd command can be run by the
root user and also by an administrative user, in other words, a member of the wheel group. In the latter
case the command will be authorized via the polkit mechanism.
With the iptables service, every single change means flushing all the old rules and reading all the
new rules from /etc/sysconfig/iptables, while with firewalld there is no re-creating of all the
rules; only the differences are applied. Consequently, firewalld can change the settings during run
time without existing connections being lost.
The zone settings in /etc/firewalld/ are a range of preset settings which can be quickly applied to a
network interface. They are listed here with a brief explanation:
drop
Any incoming network packets are dropped; there is no reply. Only outgoing network connections
are possible.
block
Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4
and icmp6-adm-prohibited for IPv6. Only network connections initiated from within the system are
possible.
public
For use in public areas. You do not trust the other computers on the network to not harm your
computer. Only selected incoming connections are accepted.
external
For use on external networks with masquerading enabled especially for routers. You do not trust
the other computers on the network to not harm your computer. Only selected incoming
connections are accepted.
dmz
For computers in your demilitarized zone that are publicly-accessible with limited access to your
internal network. Only selected incoming connections are accepted.
work
For use in work areas. You mostly trust the other computers on networks to not harm your
computer. Only selected incoming connections are accepted.
home
For use in home areas. You mostly trust the other computers on networks to not harm your
computer. Only selected incoming connections are accepted.
internal
For use on internal networks. You mostly trust the other computers on the networks to not harm
your computer. Only selected incoming connections are accepted.
trusted
All network connections are accepted.
It is possible to designate one of these zones to be the default zone. When interface connections are
added to NetworkManager, they are assigned to the default zone. On installation, the default zone in
firewalld is set to be the public zone.
To view the list of services using the graphical firewall-config tool, press the Super key to enter the
Activities Overview, type firewall and then press Enter. The firewall-config tool appears. You will be
prompted for an administrator password. You can now view the list of services under the Services tab.
To list the default predefined services available using the command line, issue the following command as
root:
~]# ls /usr/lib/firewalld/services/
To list the system or user created services, issue the following command as root:
~]# ls /etc/firewalld/services/
Services can be added and removed using the graphical firewall-config tool and by editing the XML files
in /etc/firewalld/services/. If a service has not been added or changed by the user, then no
corresponding XML file will be found in /etc/firewalld/services/. The files in
/usr/lib/firewalld/services/ can be used as templates if you wish to add or change a service.
As root, issue a command in the following format:
~]# cp /usr/lib/firewalld/services/[service].xml /etc/firewalld/services/[service].xml
You may then edit the newly created file. firewalld will prefer files in /etc/firewalld/services/
but will fall back to /usr/lib/firewalld/services/ should a file be deleted, but only after a reload.
The direct interface mode is intended for services or applications to add specific firewall rules during run
time. The rules are not permanent and need to be applied every time after receiving the start, restart or
reload message from firewalld using D-BUS.
Then install the iptables-services package by entering the following command as root:
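For example:
~]# yum install iptables-services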
The iptables-services package contains the iptables service and the ip6tables service.
Then, to start the iptables and ip6tables services, run the following commands as root:
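For example:
~]# systemctl start iptables
~]# systemctl start ip6tables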
In addition, check if firewall-cmd can connect to the daemon by entering the following command:
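The state query is a convenient check:
~]$ firewall-cmd --state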
To install the graphical user interface tool firewall-config, run the following command as root:
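For example:
~]# yum install firewall-config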
To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type
firewall and then press Enter. The firewall-config tool appears. You will be prompted for an
administrator password.
To start the graphical firewall configuration tool using the command line, enter the following command as
root user:
~]# firewall-config
The Firewall Configuration window opens. Note, this command can be run as a normal user but you
will then be prompted for an administrator password from time to time.
Look for the word “Connected” in the lower left corner. This indicates that the firewall-config tool is
connected to the user space daemon, firewalld. Note that the ICMP Types, Direct
Configuration, and Lockdown Whitelist tabs are only visible after being selected from the View
drop-down menu.
To immediately change the current firewall settings, ensure the current view is set to Runtime.
Alternatively, to edit the settings to be applied at the next system start, or firewall reload, select
Permanent from the drop-down list.
Note
When making changes to the firewall settings in Runtime mode, your selection takes immediate
effect when you set or clear the check box associated with the service. You should keep this in
mind when working on a system that may be in use by other users.
When making changes to the firewall settings in Permanent mode, your selection will only take
effect when you reload the firewall or the system restarts. You can use the reload icon below the
File menu, or click the Options menu and select Reload Firewall.
You can select zones in the left hand side column. You will notice the zones have some services enabled;
you may need to resize the window or scroll to see the full list. You can customize the settings by
selecting and deselecting a service.
To add or reassign an interface of a connection to a zone, start firewall-config, select Options from the
menu bar, and select Change Zones of Connections from the drop-down menu; the Connections list
is displayed. Select the connection to be reassigned. The Select Zone for Connection window
appears. Select the new firewall zone from the drop-down menu and click OK.
To set the default zone that new interfaces will be assigned to, start firewall-config, select Options
from the menu bar, and select Change Default Zone from the drop-down menu. The Default Zone
window appears. Select the zone from the list that you want to be used as the default zone and click OK.
To enable or disable a predefined or custom service, start the firewall-config tool and select the network
zone whose services are to be configured. Select the Services tab and select the check box for each
type of service you want to trust. Clear the check box to block a service.
To edit a service, start the firewall-config tool and then select Permanent mode from the drop-down
selection menu labeled Configuration. Additional icons and menu buttons appear at the bottom of the
Services window. Select the service you wish to configure.
The Ports and Protocols tab enables adding, changing, and removing of ports and protocols for the selected service. The Modules tab is for configuring Netfilter helper modules. The Destination tab
enables limiting traffic to a particular destination address and Internet Protocol (IPv4 or IPv6).
To permit traffic through the firewall to a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Ports tab and then click the Add button on the right hand side. The Port and Protocol window opens.
Enter the port number or range of ports to permit. Select tcp or udp from the drop-down list.
To translate IPv4 addresses to a single external address, start the firewall-config tool and select the
network zone whose addresses are to be translated. Select the Masquerading tab and select the check
box to enable the translation of IPv4 addresses to a single address.
To forward inbound network traffic, or “packets”, for a specific port to an internal address or alternative
port, first enable IP address masquerading, then select the Port Forwarding tab.
Select the protocol of the incoming traffic and the port or range of ports on the upper section of the
window. The lower section is for setting details about the destination.
To forward traffic to a local port, that is to say to a port on the same system, select the Local
forwarding check box. Enter the local port or range of ports for the traffic to be sent to.
To forward traffic to another IPv4 address, select the Forward to another port check box. Enter the destination IP address and port or port range. The default is to send to the same port if the port field is
left empty. Click OK to apply the changes.
To enable or disable an ICMP filter, start the firewall-config tool and select the network zone whose
messages are to be filtered. Select the ICMP Filter tab and select the check box for each type of ICMP
message you want to filter. Clear the check box to disable a filter. This setting is per direction and the
default allows everything.
To edit an ICMP type, start the firewall-config tool and then select Permanent mode from the drop-
down selection menu labeled Configuration. Additional icons appear at the bottom of the Services
window.
4.5.14.2. Configuring the Firewall Using the Command Line Tool, firewall-cmd
The command line tool firewall-cmd is part of the firewalld application, which is installed by default.
You can verify that it is installed by checking the version or displaying the help output. Enter the following
command to check the version:
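For example:
~]$ firewall-cmd --version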
We list a selection of commands below; for a full list, see the man page, man firewall-cmd(1).
Note
In order to make a command permanent or persistent, add the --permanent option to all
commands apart from the --direct commands (which are by their nature temporary). Note that
this not only means the change will be permanent, but also that the change will only take effect after
a firewall reload, service restart, or system reboot. Settings made with firewall-cmd without the
--permanent option take effect immediately, but are only valid until the next firewall reload, system boot,
or firewalld service restart. Reloading the firewall does not in itself break connections, but be
aware that you are discarding temporary changes by doing so.
4.5.14.3. View the Firewall Settings Using the Command Line Interface (CLI)
To get a text display of the state of firewalld, enter the following command:
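For example:
~]$ firewall-cmd --state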
To view the list of active zones, with a list of the interfaces currently assigned to them, enter the following
command:
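For example:
~]$ firewall-cmd --get-active-zones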
To find out the zone that an interface, for example em1, is currently assigned to, enter the following
command:
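For example:
~]$ firewall-cmd --get-zone-of-interface=em1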
To find out all the interfaces assigned to a zone, for example the public zone, enter the following command
as root:
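For example:
~]# firewall-cmd --zone=public --list-interfaces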
This information is obtained from NetworkManager and only shows interfaces, not connections.
To find out all the settings of a zone, for example the public zone, enter the following command as root:
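For example:
~]# firewall-cmd --zone=public --list-all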
To view the list of services currently supported by the firewall, enter the following command as root:
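A command of the following form can be used:
~]# firewall-cmd --get-services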
This will list the names of the services in /usr/lib/firewalld/services/. Note that the configuration files themselves are named service-name.xml.
To view the list of services that will be active after the next firewall reload, enter the following command as root:
4.5.14.4. Change the Firewall Settings Using the Command Line Interface (CLI)
To start dropping all incoming and outgoing packets, enter the following command as root:
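For example:
~]# firewall-cmd --panic-on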
All incoming and outgoing packets will be dropped. Active connections will be terminated after a period of inactivity; the time taken depends on the individual session timeout values.
To start passing incoming and outgoing packets again, enter the following command as root:
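For example:
~]# firewall-cmd --panic-off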
After disabling panic mode, established connections might work again if panic mode was enabled for a
short period of time.
To find out if panic mode is enabled or disabled, enter the following command:
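For example:
~]$ firewall-cmd --query-panic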
The command prints yes with exit status 0 if panic mode is enabled, and no with exit status 1 otherwise.
4.5.14.4.2. Reload the Firewall Using the Command Line Interface (CLI)
To reload the firewall without interrupting user connections, that is to say, without losing state information,
enter the following command as root:
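For example:
~]# firewall-cmd --reload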
To reload the firewall and interrupt user connections, that is to say, to discard state information, enter the
following command as root:
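For example:
~]# firewall-cmd --complete-reload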
This command should normally only be used in case of severe firewall problems, for example, if there are state information problems and no connection can be established even though the firewall rules are correct.
4.5.14.4.3. Add an Interface to a Zone Using the Command Line Interface (CLI)
To add an interface to a zone, for example to add em1 to the public zone, enter the following command as
root:
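For example:
~]# firewall-cmd --zone=public --add-interface=em1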
To make this setting permanent, add the --permanent option and reload the firewall.
To add an interface to a zone by editing the ifcfg-em1 configuration file, for example to add em1 to the work zone, as root use an editor to add the following line to ifcfg-em1:
ZONE=work
Note that if you omit the ZONE option, or use ZONE=, or ZONE='', then the default zone will be used.
NetworkManager will automatically reconnect and the zone will be set accordingly.
4.5.14.4.5. Configure the Default Zone by Editing the firewalld Configuration File
# default zone
# The default zone used if an empty zone string is used.
# Default: public
DefaultZone=home
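After editing /etc/firewalld/firewalld.conf, reload the firewall for the change to take effect; a typical reload command is:
~]# firewall-cmd --reload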
This will reload the firewall without losing state information (TCP sessions will not be interrupted).
4.5.14.4.6. Set the Default Zone by Using the Command Line Interface (CLI)
To set the default zone, for example to public, enter the following command as root:
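For example:
~]# firewall-cmd --set-default-zone=public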
This change will take immediate effect and in this case it is not necessary to reload the firewall.
4.5.14.4.7. Open Ports in the Firewall Using the Command Line Interface (CLI)
List all open ports for a zone, for example dmz, by entering the following command as root:
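For example:
~]# firewall-cmd --zone=dmz --list-ports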
To add a port to a zone, for example to allow TCP traffic to port 8080 to the dmz zone, enter the following
command as root:
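For example:
~]# firewall-cmd --zone=dmz --add-port=8080/tcp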
To make this setting permanent, add the --permanent option and reload the firewall.
To add a range of ports to a zone, for example to allow the ports from 5060 to 5061 to the public zone,
enter the following command as root:
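For example (the udp protocol is shown here as an illustrative choice):
~]# firewall-cmd --zone=public --add-port=5060-5061/udp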
To make this setting permanent, add the --permanent option and reload the firewall.
4.5.14.4.8. Add a Service to a Zone Using the Command Line Interface (CLI)
To add a service to a zone, for example to allow SMTP to the work zone, enter the following command as
root:
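For example:
~]# firewall-cmd --zone=work --add-service=smtp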
To make this setting permanent, add the --permanent option and reload the firewall.
4.5.14.4.9. Remove a Service from a Zone Using the Command Line Interface (CLI)
To remove a service from a zone, for example to remove SMTP from the work zone, enter the following
command as root:
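For example:
~]# firewall-cmd --zone=work --remove-service=smtp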
Add the --permanent option to make the change persist after system boot. If using this option and you wish to make the change immediate, reload the firewall by entering the following command as root:
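For example:
~]# firewall-cmd --reload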
Note, this will not break established connections. If that is your intention, you could use the --complete-reload option, but this will break all established connections, not just for the service you have removed.
To view the default zone files, enter the following command as root:
~]# ls /usr/lib/firewalld/zones/
block.xml drop.xml home.xml public.xml work.xml
dmz.xml external.xml internal.xml trusted.xml
These files must not be edited. They are used by default if no equivalent file exists in the /etc/firewalld/zones/ directory.
To view the zone files that have been changed from the default, enter the following command as root:
~]# ls /etc/firewalld/zones/
external.xml public.xml public.xml.old
In the example shown above, the work zone file does not exist. To add the work zone file, enter the
following command as root:
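One way to do this is to copy the default zone file into the configuration directory, for example:
~]# cp /usr/lib/firewalld/zones/work.xml /etc/firewalld/zones/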
You can now edit the file in the /etc/firewalld/zones/ directory. If you delete the file, firewalld
will fall back to using the default file in /usr/lib/firewalld/zones/.
To add a service to a zone, for example to allow SMTP to the work zone, use an editor with root privileges to edit the /etc/firewalld/zones/work.xml file to include the following line:
<service name="smtp"/>
An editor running with root privileges is required to edit the XML zone files. To view the files for
previously configured zones, enter the following command as root:
~]# ls /etc/firewalld/zones/
external.xml public.xml work.xml
To remove a service from a zone, for example to remove SMTP from the work zone, use an editor with root privileges to edit the /etc/firewalld/zones/work.xml file to remove the following line:
<service name="smtp"/>
If no other changes have been made to the work.xml file, it can be removed and firewalld will use the default /usr/lib/firewalld/zones/work.xml configuration file after the next reload or system boot.
To check if IP masquerading is enabled, for example for the external zone, enter the following command as
root:
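For example:
~]# firewall-cmd --zone=external --query-masquerade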
The command prints yes with exit status 0 if masquerading is enabled, and no with exit status 1 otherwise. If the zone is omitted, the default zone will be used.
To enable IP masquerading for a zone, use the --add-masquerade option; to disable it again, use the --remove-masquerade option. To make either setting permanent, add the --permanent option and reload the firewall.
4.5.14.4.13. Configure Port Forwarding Using the Command Line Interface (CLI)
To forward inbound network packets from one port to an alternative port or address, first enable IP
address masquerading for a zone, for example external, by entering the following command as root:
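For example:
~]# firewall-cmd --zone=external --add-masquerade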
To forward packets to a local port, that is to say to a port on the same system, enter the following
command as root:
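For example:
~]# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753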
In this example, the packets intended for port 22 are now forwarded to port 3753. The original destination port is specified with the port option. This option can be a port, or port range, together with a protocol. The protocol, if specified, must be one of either tcp or udp. The new local port, the port or range of ports to which the traffic is forwarded, is specified with the toport option. To make this setting permanent, add the --permanent option and reload the firewall.
To forward packets to another IPv4 address, usually an internal address, without changing the
destination port, enter the following command as root:
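For example (the address 192.0.2.55 is an illustrative value):
~]# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toaddr=192.0.2.55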
In this example, the packets intended for port 22 are now forwarded to the same port at the address given with the toaddr option. The original destination port is specified with the port option. This option can be a port, or port range, together with a protocol. The protocol, if specified, must be one of either tcp or udp. The new destination port, the port or range of ports to which the traffic is forwarded, is specified with the toport option. To make this setting permanent, add the --permanent option and reload the firewall.
To forward packets to another port at another IPv4 address, usually an internal address, enter the
following command as root:
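For example (the address 192.0.2.55 is an illustrative value):
~]# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=2055:toaddr=192.0.2.55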
In this example, the packets intended for port 22 are now forwarded to port 2055 at the address given with the toaddr option. The original destination port is specified with the port option. This option can be a port, or port range, together with a protocol. The protocol, if specified, must be one of either tcp or udp. The new destination port, the port or range of ports to which the traffic is forwarded, is specified with the toport option. To make this setting permanent, add the --permanent option and reload the firewall.
The configuration settings for firewalld are stored in XML files in the /etc/firewalld/ directory. Do not edit the files in the /usr/lib/firewalld/ directory; they contain the default settings. You will need root user permissions to view and edit the XML files. The XML files are explained in three man pages:
firewalld.icmptype(5) man page — Describes XML configuration files for ICMP filtering.
firewalld.service(5) man page — Describes XML configuration files for firewalld service.
firewalld.zone(5) man page — Describes XML configuration files for firewalld zone
configuration.
The XML files can be created and edited directly or created indirectly using the graphical and command line tools. Organizations can distribute them in RPM files, which can make management and version control easier. Tools such as Puppet can distribute such configuration files.
It is possible to add and remove chains during runtime by using the --direct option with the firewall-
cmd tool. A few examples are presented here; see the firewall-cmd(1) man page for more
information.
It is dangerous to use the direct interface if you are not very familiar with iptables as you could
inadvertently cause a breach in the firewall.
The direct interface mode is intended for services or applications to add specific firewall rules during runtime. The rules are not permanent and need to be applied every time after receiving the start, restart or
reload message from firewalld using D-BUS.
To add a custom rule to the chain “IN_public_allow”, issue a command as root in the following format:
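For example (the port number 666 is an illustrative value):
~]# firewall-cmd --direct --add-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 666 -j ACCEPT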
To remove a custom rule from the chain “IN_public_allow”, issue a command as root in the following format:
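For example:
~]# firewall-cmd --direct --remove-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 666 -j ACCEPT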
To list the rules in the chain “IN_public_allow”, issue a command as root in the following format:
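For example:
~]# firewall-cmd --direct --get-rules ipv4 filter IN_public_allow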
4.5.15. Configuring Complex Firewall Rules with the "Rich Language" Syntax
With the “rich language” syntax, complex firewall rules can be created in a way that is easier to understand
than the direct interface method. In addition, the settings can be made permanent. The language uses keywords with values and is an abstract representation of iptables rules. Zones can be configured using this language; the current configuration method will still be supported.
All the commands in this section need to be run as root. The format of the command to add a rule is as
follows:
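A sketch of the command form, using the standard firewall-cmd options, is:
~]# firewall-cmd [--zone=zone] --add-rich-rule='rule' [--timeout=seconds]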
This will add a rich language rule rule for zone zone. This option can be specified multiple times. If the zone is omitted, the default zone will be used. If a timeout is supplied, the rule or rules will be active for the specified number of seconds and will be removed automatically afterwards.
To remove a rule:
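For example:
~]# firewall-cmd [--zone=zone] --remove-rich-rule='rule'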
This will remove a rich language rule rule for zone zone. This option can be specified multiple times. If the
zone is omitted, the default zone will be used.
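To check whether a rule is present, the corresponding query option can be used, for example:
~]# firewall-cmd [--zone=zone] --query-rich-rule='rule'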
This will return whether a rich language rule rule has been added for the zone zone. The command prints yes with exit status 0 if the rule is present, and no with exit status 1 otherwise. If the zone is omitted, the default zone will be used.
For information about the rich language representation used in the zone configuration files, see the
firewalld.zone(5) man page.
A rule is associated with a particular zone. A zone can have several rules. If some rules interact or
contradict, the first rule that matches the packet applies. If the rule family is provided, it can be either ipv4
or ipv6, and it limits the rule to IPv4 or IPv6 respectively. If the rule family is not provided, the rule will be added for both IPv4 and IPv6. If source or destination addresses are used in a rule, then the rule family needs to be provided. This is also the case for port forwarding.
source
By specifying the source address, the origin of a connection attempt can be limited to the source
address. A source address or address range is either an IP address or a network IP address
with a mask for IPv4 or IPv6. The network family (IPv4 or IPv6) will be automatically
discovered. For IPv4, the mask can be a network mask or a plain number. For IPv6 the mask is
a plain number. The use of host names is not supported. It is possible to invert the sense of the
source address command by adding invert="true" or invert="yes"; all but the supplied
address will match.
destination
By specifying the destination address, the target can be limited to the destination address. The
destination address uses the same syntax as the source address. The use of source and
destination addresses is optional, and the use of a destination address is not possible with all
elements. This depends on the use of destination addresses, for example in service entries.
element
The element can be exactly one of the following element types: service, port, protocol, masquerade,
icmp-block and forward-port.
service
The service name is one of the firewalld provided services. To get a list of the supported
services, issue the following command: firewall-cmd --get-services. If a service provides
a destination address, it will conflict with a destination address in the rule and will result in an
error. The services using destination addresses internally are mostly services using multicast.
The command takes the following form:
service name=service_name
port
The port can either be a single port number or a port range, for example, 5060-5062. The protocol can either be specified as tcp or udp. The command takes the following form:
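A sketch of the form, following the firewalld rich language syntax:
port port=port_value protocol=tcp|udp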
protocol
The protocol value can be either a protocol ID number or a protocol name. For allowed protocol entries, see /etc/protocols. The command takes the following form:
protocol value=protocol_name_or_ID
icmp-block
Use this command to block one or more ICMP types. The ICMP type is one of the ICMP types firewalld supports. To get a listing of supported ICMP types, issue the following command:
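For example:
~]$ firewall-cmd --get-icmptypes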
Specifying an action is not allowed here. icmp-block uses the action reject internally. The
command takes the following form:
icmp-block name=icmptype_name
masquerade
Turns on IP masquerading in the rule. A source address can be provided to limit masquerading to
this area, but not a destination address. Specifying an action is not allowed here.
forward-port
Forward packets from a local port with protocol specified as tcp or udp to either another port
locally, to another machine, or to another port on another machine. The port and to-port can either be a single port number or a port range. The destination address is a simple IP address. Specifying an action is not allowed here. The forward-port command uses the action accept internally. The command takes the following form:
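A sketch of the form, following the firewalld rich language syntax (to-port and to-addr are optional):
forward-port port=port_value protocol=tcp|udp to-port=port_value to-addr=address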
log
Log new connection attempts to the rule with kernel logging, for example in syslog. You can define
a prefix text that will be added to the log message as a prefix. Log level can be one of emerg, alert, crit, error, warning, notice, info or debug. The use of log is optional. It is
possible to limit logging as follows:
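A sketch of the limit form, following the firewalld rich language syntax:
log [prefix=prefix_text] [level=log_level] limit value=rate/duration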
The rate is a natural positive number [1, ..] and the duration is one of s, m, h or d. s means seconds, m minutes, h hours, and d days. The maximum limit value is 1/d, which means at most one log entry per day.
audit
Audit provides an alternative way for logging using audit records sent to the service auditd. The audit type can be one of ACCEPT, REJECT or DROP but it is not specified after the command audit as the audit type will be automatically gathered from the rule action. Audit does not have its own parameters, but limit can be added optionally. The use of audit is optional.
accept|reject|drop
An action can be one of accept, reject or drop. The rule can only contain an element or a
source. If the rule contains an element, then new connections matching the element will be
handled with the action. If the rule contains a source, then everything from the source address will
be handled with the action specified.
With accept all new connection attempts will be granted. With reject they will be rejected and
their source will get a reject message. The reject type can be set to use another value. With drop
all packets will be dropped immediately and no information is sent to the source.
Logging can be done with the Netfilter log target and also with the audit target. A new chain is added to
all zones with a name in the format “zone_log”, where zone is the zone name. This is processed before the deny chain in order to have proper ordering. The rules or parts of them are placed in separate chains,
according to the action of the rule, as follows:
zone_log
zone_deny
zone_allow
All logging rules will be placed in the “zone_log” chain, which will be parsed first. All reject and drop
rules will be placed in the “zone_deny” chain, which will be parsed after the log chain. All accept rules will
be placed in the “zone_allow” chain, which will be parsed after the deny chain. If a rule contains log and
also deny or allow actions, the parts are placed in the matching chains.
Enable new IPv4 and IPv6 connections for authentication header protocol AH:
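An illustrative command for this (the rule text is a sketch of the rich language syntax):
~]# firewall-cmd --add-rich-rule='rule protocol value="ah" accept'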
Allow new IPv4 and IPv6 connections for protocol FTP and log 1 per minute using audit:
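For example (a sketch of the rule syntax):
~]# firewall-cmd --add-rich-rule='rule service name="ftp" log limit value="1/m" audit accept'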
Allow new IPv4 connections from address 192.168.0.0/24 for protocol TFTP and log 1 per minute
using syslog:
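For example (the log prefix is an illustrative value):
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" service name="tftp" log prefix="tftp" level="info" limit value="1/m" accept'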
New IPv6 connections from 1:2:3:4:6:: for protocol RADIUS are all rejected and logged at a rate of 3
per minute. New IPv6 connections from other sources are accepted:
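For example (the log prefix is an illustrative value):
~]# firewall-cmd --add-rich-rule='rule family="ipv6" source address="1:2:3:4:6::" service name="radius" log prefix="radius" level="info" limit value="3/m" reject'
~]# firewall-cmd --add-rich-rule='rule family="ipv6" service name="radius" accept'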
Forward IPv6 packets received from 1:2:3:4:6:: on port 4011 with protocol TCP to 1::2:3:4:7 on
port 4012.
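For example (a sketch of the rule syntax):
~]# firewall-cmd --add-rich-rule='rule family="ipv6" source address="1:2:3:4:6::" forward-port port="4011" protocol="tcp" to-port="4012" to-addr="1::2:3:4:7"'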
To enable the lockdown feature, use an editor running as root to add the following line to the /etc/firewalld/firewalld.conf file:
Lockdown=yes
Try to enable the imaps service in the default zone using the following command as an administrative user, that is to say, a user in group wheel (usually the first user on the system). You will be prompted for the user password:
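For example (with lockdown enabled and the command not on the whitelist, the request is expected to be rejected):
~]$ firewall-cmd --add-service=imaps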
Try to enable the imaps service again in the default zone by entering the following command as an
administrative user. You will be prompted for the user password:
To query whether lockdown is enabled, use the firewall-cmd --query-lockdown command as root. It prints yes with exit status 0 if lockdown is enabled, and no with exit status 1 otherwise.
The lockdown whitelist can contain commands, security contexts, users and user IDs. If a command entry on the whitelist ends with an asterisk “*”, then all command lines starting with that command will match. If the “*” is not there, then the absolute command including arguments must match.
The context is the security (SELinux) context of a running application or service. To get the context of a running application, use the following command:
~]$ ps -e --context
That command returns all running applications. Pipe the output through the grep tool to get the application
of interest. For example:
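For example (example_program is a hypothetical application name):
~]$ ps -e --context | grep example_program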
To list all command lines that are on the whitelist, enter the following command as root:
To add a command command to the whitelist, enter the following command as root:
To remove a command command from the whitelist, enter the following command as root:
To query whether the command command is on the whitelist, enter the following command as root:
The command prints yes with exit status 0 if true, and no with exit status 1 otherwise.
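For reference, the corresponding firewall-cmd options take the following form (the command string shown is an illustrative example):
~]# firewall-cmd --list-lockdown-whitelist-commands
~]# firewall-cmd --add-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/example_command'
~]# firewall-cmd --remove-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/example_command'
~]# firewall-cmd --query-lockdown-whitelist-command='/usr/bin/python -Es /usr/bin/example_command'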
To list all security contexts that are on the whitelist, enter the following command as root:
To add a context context to the whitelist, enter the following command as root:
To remove a context context from the whitelist, enter the following command as root:
To query whether the context context is on the whitelist, enter the following command as root:
The command prints yes with exit status 0 if true, and no with exit status 1 otherwise.
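For reference, the corresponding firewall-cmd options take the following form:
~]# firewall-cmd --list-lockdown-whitelist-contexts
~]# firewall-cmd --add-lockdown-whitelist-context=context
~]# firewall-cmd --remove-lockdown-whitelist-context=context
~]# firewall-cmd --query-lockdown-whitelist-context=context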
To list all user IDs that are on the whitelist, enter the following command as root:
To add a user ID uid to the whitelist, enter the following command as root:
To remove a user ID uid from the whitelist, enter the following command as root:
To query whether the user ID uid is on the whitelist, enter the following command:
The command prints yes with exit status 0 if true, and no with exit status 1 otherwise.
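For reference, the corresponding firewall-cmd options take the following form (815 is an illustrative user ID):
~]# firewall-cmd --list-lockdown-whitelist-uids
~]# firewall-cmd --add-lockdown-whitelist-uid=815
~]# firewall-cmd --remove-lockdown-whitelist-uid=815
~]# firewall-cmd --query-lockdown-whitelist-uid=815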
To list all user names that are on the whitelist, enter the following command as root:
To add a user name user to the whitelist, enter the following command as root:
To remove a user name user from the whitelist, enter the following command as root:
To query whether the user name user is on the whitelist, enter the following command:
The command prints yes with exit status 0 if true, and no with exit status 1 otherwise.
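For reference, the corresponding firewall-cmd options take the following form (user is an illustrative user name):
~]# firewall-cmd --list-lockdown-whitelist-users
~]# firewall-cmd --add-lockdown-whitelist-user=user
~]# firewall-cmd --remove-lockdown-whitelist-user=user
~]# firewall-cmd --query-lockdown-whitelist-user=user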
The default whitelist configuration file contains the NetworkManager context and the default context of libvirt. The user ID 0 is also in the list.
Here follows an example whitelist configuration file enabling all commands for the firewall-cmd utility,
for a user called user whose user ID is 815:
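A sketch of what such a whitelist file, typically /etc/firewalld/lockdown-whitelist.xml, might look like (the exact entries on a given system may differ):
<?xml version="1.0" encoding="utf-8"?>
<whitelist>
  <command name="/usr/bin/python -Es /usr/bin/firewall-cmd*"/>
  <user id="815"/>
  <user name="user"/>
</whitelist>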
In this example we have shown both user id and user name, but only one is required. Python is the
interpreter and therefore prepended to the command line. You can also use a very specific command, for
example:
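An entry of the following form could be used (illustrative):
<command name="/usr/bin/python -Es /usr/bin/firewall-cmd --lockdown-on"/>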
Note
In Red Hat Enterprise Linux 7, all utilities are now placed in /usr/bin/ and the /bin/ directory is
sym-linked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when run as root might resolve to /bin/firewall-cmd, /usr/bin/firewall-cmd can now be used. All new scripts should use the new location but be aware that if scripts that run as root have been written to use the /bin/firewall-cmd path then that command path must be whitelisted in addition to the /usr/bin/firewall-cmd path traditionally used only for non-root users.
The “*” at the end of the name attribute of a command means that all commands that start with this
string will match. If the “*” is not there then the absolute command including arguments must match.
firewall-cmd(1) man page — Describes command options for the firewalld command line
client.
firewalld.icmptype(5) man page — Describes XML configuration files for ICMP filtering.
firewalld.service(5) man page — Describes XML configuration files for firewalld service.
firewalld.zone(5) man page — Describes XML configuration files for firewalld zone
configuration.
firewalld.direct(5) man page — Describes the firewalld direct interface configuration file.
firewalld.richlanguage(5) man page — Describes the firewalld rich language rule syntax.
firewalld.zones(5) man page — General description of what zones are and how to configure
them.
For connecting over the Internet, a growing number of websites now offer the ability to connect securely
using HTTPS. However, before connecting to an HTTPS webserver, a DNS lookup must be performed, unless you enter the IP address directly. These DNS lookups are done insecurely and are subject to man-
in-the-middle attacks due to lack of authentication. In other words, a DNS client cannot have confidence
that the replies that appear to come from a given DNS nameserver are authentic and have not been
tampered with. More importantly, a recursive nameserver cannot be sure that the records it obtains from
other nameservers are genuine. The DNS protocol did not provide a mechanism for the client to ensure it
was not subject to a man-in-the-middle attack. DNSSEC was introduced to address the lack of
authentication and integrity checks when resolving domain names using DNS. It does not address the
problem of confidentiality.
Publishing DNSSEC information involves digitally signing DNS resource records as well as distributing
public keys in such a way as to enable DNS resolvers to build a hierarchical chain of trust. Digital
signatures for all DNS resource records are generated and added to the zone as digital signature
resource records (RRSIG). The public key of a zone is added as a DNSKEY resource record. To build the
hierarchical chain, hashes of the DNSKEY are published in the parent zone as Delegation of Signing (DS)
resource records. To facilitate proof of non-existence, the NextSECure (NSEC) and NSEC3 resource
records are used. In a DNSSEC signed zone, each resource record set (RRset) has a corresponding
RRSIG resource record. Note that records used for delegation to a child zone (NS and glue records) are
not signed; these records appear in the child zone and are signed there.
Processing DNSSEC information is done by resolvers that are configured with the root zone public key.
Using this key, resolvers can verify the signatures used in the root zone. For example, the root zone has
signed the DS record for .com. The root zone also serves NS and glue records for the .com name servers. The resolver follows this delegation and queries for the DNSKEY record of .com using these delegated name servers. The hash of the DNSKEY record obtained should match the DS record in the root zone. If so, the resolver will trust the obtained DNSKEY for .com. In the .com zone, the RRSIG records are created by the .com DNSKEY. This process is repeated similarly for delegations within .com, such as redhat.com. Using this method, a validating DNS resolver only needs to be configured with one root key
while it collects many DNSKEYs from around the world during its normal operation. If a cryptographic check
fails, the resolver will return SERVFAIL to the application.
DNSSEC has been designed in such a way that it will be completely invisible to applications not supporting
DNSSEC. If a non-DNSSEC application queries a DNSSEC capable resolver, it will receive the answer
without any of these new resource record types such as RRSIG. However, the DNSSEC capable resolver
will still perform all cryptographic checks, and will still return a SERVFAIL error to the application if it detects
malicious DNS answers. DNSSEC protects the integrity of the data between DNS servers (authoritative and recursive); it does not provide security between the application and the resolver. Therefore, it is important that the applications are given a secure transport to their resolver. The easiest way to accomplish that is
to run a DNSSEC capable resolver on localhost and use 127.0.0.1 in /etc/resolv.conf.
Alternatively a VPN connection to a remote DNS server could be used.
When using Wi-Fi Hotspots or VPNs, there is a reliance on “DNS lies”. Captive portals tend to hijack DNS
in order to redirect users to a page where they are required to authenticate (or pay) for the Wi-Fi service.
Users connecting to a VPN often need to use an “internal only” DNS server in order to locate resources
that do not exist outside the corporate network. This requires additional handling by software. For
example, dnssec-trigger can be used to detect if a Hotspot is hijacking the DNS queries and unbound
can act as a proxy nameserver to handle the DNSSEC queries.
To deploy a DNSSEC capable recursive resolver, either BIND or unbound can be used. Both enable
DNSSEC by default and are configured with the DNSSEC root key. To enable DNSSEC on a server, either will work; however, the use of unbound is preferred on mobile devices, such as notebooks, as it allows the local user to dynamically reconfigure the DNSSEC overrides required for Hotspots when using dnssec-trigger, and for VPNs when using Libreswan. The unbound daemon further supports the deployment of DNSSEC exceptions listed in the /etc/unbound/*.d/ directories, which can be useful to both servers
and mobile devices.
NetworkManager “triggers” dnssec-trigger when a new DNS server is obtained via DHCP.
Dnssec-trigger then performs a number of tests against the server and decides whether or not it
properly supports DNSSEC.
If it does, then dnssec-trigger reconfigures unbound to use that DNS server as a forwarder for all
queries.
If the tests fail, dnssec-trigger will ignore the new DNS server and try a few available fall-back
methods.
If it determines that an unrestricted port 53 (UDP and TCP) is available, it will tell unbound to become a
full recursive DNS server without using any forwarder.
If this is not possible, for example because port 53 is blocked by a firewall for everything except
reaching the network's DNS server itself, it will try to use DNS to port 80, or TLS encapsulated DNS to
port 443. Servers running DNS on port 80 and 443 can be configured in /etc/dnssec-
trigger/dnssec-trigger.conf. Commented out examples should be available in the default
configuration file.
If these fall-back methods also fail, dnssec-trigger offers to either operate insecurely, which would
bypass DNSSEC completely, or run in “cache only” mode where it will not attempt new DNS queries but
will answer for everything it already has in the cache.
Wi-Fi Hotspots increasingly redirect users to a sign-on page before granting access to the Internet. During
the probing sequence outlined above, if a redirection is detected, the user is prompted to ask if a login is
required to gain Internet access. The dnssec-trigger daemon continues to probe for DNSSEC
resolvers every ten seconds. See Section 4.6.8, “Using Dnssec-trigger” for information on using the
dnssec-trigger graphical utility.
The Internet Corporation for Assigned Names and Numbers (ICANN) sometimes adds previously unregistered Top-Level Domains (such as .yourcompany) to the public register. Therefore, Red Hat
strongly recommends that you do not use a domain name that is not delegated to you, even on a private
network, as this can result in a domain name that resolves differently depending on network configuration.
As a result, network resources can become unavailable. Using domain names that are not delegated to
you also makes DNSSEC more difficult to deploy and maintain, as domain name collisions require manual
configuration to enable DNSSEC validation. See the ICANN FAQ on domain name collision for more
information on this issue.
In order to validate DNS using DNSSEC locally on a machine, it is necessary to install the DNS resolver
unbound (or bind). It is only necessary to install dnssec-trigger on mobile devices. For servers, unbound should be sufficient, although a forwarding configuration for the local domain might be required
depending on where the server is located (LAN or Internet). dnssec-trigger will currently only help with
the global public DNS zone. NetworkManager, dhclient, and VPN applications can often gather the
domain list (and nameserver list as well) automatically, but not dnssec-trigger nor unbound.
To determine whether the unbound daemon is running, enter the following command:
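For example:
~]$ systemctl status unbound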
The systemctl status command will report unbound as Active: inactive (dead) if the
unbound service is not running.
To start the unbound daemon for the current session, run the following command as the root user:
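For example:
~]# systemctl start unbound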
Run the systemctl enable command to ensure that unbound starts up every time the system boots:
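For example:
~]# systemctl enable unbound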
The unbound daemon allows configuration of local data or overrides using the following directories:
The /etc/unbound/conf.d directory is used to add configurations for a specific domain name. This is used to redirect queries for a domain name to a specific DNS server. This is often used for sub-
domains that only exist within a corporate WAN.
The /etc/unbound/keys.d directory is used to add trust anchors for a specific domain name. This
is required when an internal-only name is DNSSEC signed, but there is no publicly existing DS record
to build a path of trust. Another use case is when an internal version of a domain is signed using a
different DNSKEY than the publicly available name outside the corporate WAN.
The /etc/unbound/local.d directory is used to add specific DNS data as a local override. This can be used to build blacklists or create manual overrides. This data will be returned to clients by
unbound, but it will not be marked as DNSSEC signed.
NetworkManager, as well as some VPN software, may change the configuration dynamically. These configuration directories contain commented out example entries. For further information, see the
unbound.conf(5) man page.
The systemctl status command will report dnssec-triggerd as Active: inactive (dead) if the dnssec-triggerd daemon is not running. To start it for the current session, run the following
command as the root user:
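For example:
~]# systemctl start dnssec-triggerd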
Run the systemctl enable command to ensure that dnssec-triggerd starts up every time the
system boots:
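For example:
~]# systemctl enable dnssec-triggerd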
the message tray at the bottom of the screen. Press the round blue notification icon in the bottom right of
the screen to reveal it. Right click the anchor icon to display a pop-up menu.
In normal operations unbound is used locally as the name server, and resolv.conf points to
127.0.0.1. When you click OK on the Hotspot Sign-On panel, this is changed. The DNS servers are queried from NetworkManager and put in resolv.conf. Now you can authenticate on the Hotspot's sign-on page. The anchor icon shows a big red exclamation mark to warn you that DNS queries are being
made insecurely. When authenticated, dnssec-trigger should automatically detect this and switch back to
secure mode, although in some cases it cannot and the user has to do this manually by selecting
Reprobe.
Dnssec-trigger does not normally require any user interaction. Once started, it works in the background
and if a problem is encountered it notifies the user by means of a pop-up text box. It also informs unbound
about changes to the resolv.conf file.
To send a query requesting DNSSEC data using dig, the option +dnssec is added to the command, for
example:
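For example:
~]$ dig +dnssec whitehouse.gov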
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;whitehouse.gov. IN A
;; ANSWER SECTION:
whitehouse.gov. 20 IN A 72.246.36.110
whitehouse.gov. 20 IN RRSIG A 7 2 20 20130825124016 20130822114016 8399
whitehouse.gov. BB8VHWEkIaKpaLprt3hq1GkjDROvkmjYTBxiGhuki/BJn3PoIGyrftxR
HH0377I0Lsybj/uZv5hL4UwWd/lw6Gn8GPikqhztAkgMxddMQ2IARP6p
wbMOKbSUuV6NGUT1WWwpbi+LelFMqQcAq3Se66iyH0Jem7HtgPEUE1Zc 3oI=
In addition to the A record, an RRSIG record is returned, which contains the DNSSEC signature, as well as the inception time and expiration time of the signature. The unbound server indicated that the data was
DNSSEC authenticated by returning the ad bit in the flags: section at the top.
If DNSSEC validation fails, the dig command would return a SERVFAIL error:
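For example (querying a deliberately broken record):
~]$ dig badsign-a.test.dnssec-tools.org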
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;badsign-a.test.dnssec-tools.org. IN A
To request more information about the failure, DNSSEC checking can be disabled by specifying the +cd
option to the dig command:
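For example:
~]$ dig +cd badsign-a.test.dnssec-tools.org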
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;badsign-a.test.dnssec-tools.org. IN A
;; ANSWER SECTION:
badsign-a.test.dnssec-tools.org. 49 IN A 75.119.216.33
badsign-a.test.dnssec-tools.org. 49 IN RRSIG A 5 4 86400 20130919183720
20130820173720 19442 test.dnssec-tools.org.
E572dLKMvYB4cgTRyAHIKKEvdOP7tockQb7hXFNZKVbfXbZJOIDREJrr
zCgAfJ2hykfY0yJHAlnuQvM0s6xOnNBSvc2xLIybJdfTaN6kSR0YFdYZ
n2NpPctn2kUBn5UR1BJRin3Gqy20LZlZx2KD7cZBtieMsU/IunyhCSc0 kYw=
Often, DNSSEC mistakes manifest themselves as bad inception or expiration times, although in this example, the people at www.dnssec-tools.org have mangled this RRSIG signature on purpose, which we would not be able to detect by looking at this output manually. The error will show in the output of systemctl status unbound, and the unbound daemon logs these errors to syslog as follows:
To set up a fixed web page with known content that can be used by dnssec-trigger to detect a Hotspot,
proceed as follows:
1. Set up a web server on some machine that is publicly reachable on the Internet. See the Red Hat
Enterprise Linux 7 System Administrator's Guide for more information about web servers.
2. Once you have the server running, publish a static page with known content on it. The page does not need to be a valid HTML page. For example, you could use a plain-text file named hotspot.txt that contains only the string OK. Assuming your server is located at example.com and you published your hotspot.txt file in the web server document_root/static/ sub-directory, then the address to your static web page would be example.com/static/hotspot.txt. See the DocumentRoot directive in the Red Hat Enterprise Linux 7 System Administrator's Guide.
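3. Add the URL of the page, together with the string it is expected to contain, to the /etc/dnssec-trigger/dnssec-trigger.conf file; a sketch of such an entry (the directive name follows the dnssec-trigger.conf format and the values are examples):
url: "http://example.com/static/hotspot.txt OK"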
This command adds a URL that is probed via HTTP (port 80). The first part is the URL that will be resolved and the page that will be downloaded. The second part of the command is the text string
that the downloaded webpage is expected to contain.
For more information on the configuration options, see the dnssec-trigger.conf(8) man page.
The default behavior for validating forward zones can be altered, so that all forward zones will not be DNSSEC validated by default. To do this, change the validate_connection_provided_zones
variable in the dnssec-trigger configuration file /etc/dnssec.conf. As root user, open and edit the
line as follows:
validate_connection_provided_zones=no
The change is not done for any existing forward zones, but only for future forward zones. Therefore, if you want to disable DNSSEC for the current provided domain, you need to reconnect.
Adding forward zones for Wi-Fi provided zones can be enabled. To do this, change the
add_wifi_provided_zones variable in the dnssec-trigger configuration file /etc/dnssec.conf. As
root user, open and edit the line as follows:
add_wifi_provided_zones=yes
The change is not done for any existing forward zones, but only for future forward zones. Therefore, if you
want to enable DNSSEC for the current Wi-Fi provided domain, you need to reconnect (restart) the Wi-Fi
connection.
Warning
Turning on the addition of Wi-Fi provided domains as forward zones into unbound may have
security implications such as:
1. A Wi-Fi access point can intentionally provide you a domain via DHCP for which it does not
have authority and route all your DNS queries to its DNS servers.
2. If you have the DNSSEC validation of forward zones turned off, the Wi-Fi provided DNS
servers can spoof the IP address for domain names from the provided domain without you
knowing it.
unbound(8) man page — Describes the command options for unbound, the DNS validating resolver.
resolv.conf(5) man page — Contains information that is read by the resolver routines.
http://www.dnssec.net/
http://www.dnssec-deployment.org/
http://www.internetsociety.org/deploy360/dnssec/community/
T he Internet Society's “Deploy 360” initiative to stimulate and coordinate DNSSEC deployment is
a good resource for finding communities and DNSSEC activities worldwide.
http://www.unbound.net/
This document contains general information about the unbound DNS service.
http://www.nlnetlabs.nl/projects/dnssec-trigger/
Libreswan is an open source, user space IPsec implementation available in Red Hat Enterprise Linux 7.
It uses the Internet key exchange (IKE) protocol. IKE versions 1 and 2 are implemented as a user-level daemon. Manual key establishment is also possible via ip xfrm commands; however, this is not
recommended. Libreswan interfaces with the Linux kernel using netlink to transfer the encryption keys.
Packet encryption and decryption happen in the Linux kernel.
Libreswan uses the network security services (NSS) cryptographic library, which is required for Federal
Information Processing Standard (FIPS) security compliance.
After a new installation of Libreswan, the NSS database should be initialized as part of the install process.
However, should you need to start a new database, first remove the old database as follows:
~]# rm /etc/ipsec.d/*db
Then, to initialize a new NSS database, issue the following command as root:
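For example, using the NSS certutil tool (the database directory prefix shown is an assumption and may differ on your system):
~]# certutil -N -d sql:/etc/ipsec.d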
If you do not wish to use a password for NSS, just press Enter twice when prompted for the password. If
you do enter a password, then you will have to re-enter it every time Libreswan is started, such as every
time the system is booted.
To check if the ipsec daemon provided by Libreswan is running, issue the following command:
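One way to do this is:
~]$ systemctl status ipsec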
To start the ipsec daemon provided by Libreswan, issue the following command as root:
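For example:
~]# systemctl start ipsec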
To ensure that Libreswan will start when the system starts, issue the following command as root:
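For example:
~]# systemctl enable ipsec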
Configure any intermediate as well as host-based firewalls to permit the ipsec service. See Section 4.5,
“Using Firewalls” for information on firewalls and allowing specific services to pass through. Libreswan
requires the firewall to allow the following packets:
UDP port 500 for the Internet Key Exchange (IKE) protocol
UDP port 4500 for IKE NAT-Traversal
Protocol 50 for Encapsulated Security Payload (ESP) IPsec packets
We present three examples of using Libreswan to set up an IPsec VPN. The first example is for connecting two hosts together so that they may communicate securely. The second example is connecting two sites together to form one network. The third example is supporting roaming users, known as road
warriors in this context.
Pre-Shared Keys (PSK) is the simplest authentication method. PSKs should consist of random characters and have a length of at least 20 characters. Due to the dangers of non-random and short PSKs, this method is not available when the system is running in FIPS mode.
Raw RSA keys are commonly used for static host-to-host or subnet-to-subnet IPsec configurations.
The hosts are manually configured with each other's public RSA key. This method does not scale well when dozens or more hosts all need to set up IPsec tunnels to each other.
X.509 certificates are commonly used for large scale deployments where there are many hosts that
need to connect to a common IPsec gateway. A central certificate authority (CA) is used to sign RSA
certificates for hosts or users. This central CA is responsible for relaying trust, including the
revocations of individual hosts or users.
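For a host-to-host tunnel using raw RSA keys, an RSA key pair is first generated on each host with a command of the following form (the output file name is an example):
~]# ipsec newhostkey --configdir /etc/ipsec.d --output /etc/ipsec.d/www.example.com.secrets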
This generates an RSA key pair for the host. The process of generating RSA keys can take many minutes,
especially on virtual machines with low entropy.
To view the public key, issue the following command as root, on the host referred to as “left”:
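For example:
~]# ipsec showhostkey --left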
You will need this key to add to the configuration file as explained below.
To view the public key, issue the following command as root on the host referred to as “right”:
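For example:
~]# ipsec showhostkey --right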
The secret part is stored in /etc/ipsec.d/*.db files, also called the “NSS database”.
To make a configuration file for this host-to-host tunnel, the lines leftrsasigkey= and rightrsasigkey= from above are added to a custom configuration file placed in the /etc/ipsec.d/ directory. To enable Libreswan to read the custom configuration files, use an editor running as root to
edit the main configuration file, /etc/ipsec.conf, and enable the following line by removing the #
comment character so that it looks as follows:
include /etc/ipsec.d/*.conf
Using an editor running as root, create a file with a suitable name in the following format:
/etc/ipsec.d/my_host-to-host.conf
conn mytunnel
[email protected]
left=192.1.2.23
leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
[email protected]
right=192.1.2.45
rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
authby=rsasig
# load and initiate automatically
auto=start
You can use the identical configuration file on both left and right hosts. They will auto-detect if they are “left” or “right”. If one of the hosts is a mobile host, which implies the IP address is not known in advance, then on the mobile host use %defaultroute as its IP address. This will pick up the dynamic IP address
automatically. On the static host that accepts connections from incoming mobile hosts, specify the mobile
host using %any for its IP address.
Ensure the leftrsasigkey value is obtained from the “left” host and the rightrsasigkey value is
obtained from the “right” host.
To bring up the tunnel, issue the following command as root, on the left or the right side:
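For example:
~]# ipsec auto --up mytunnel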
The IKE negotiation takes place on UDP port 500. IPsec packets show up as Encapsulated
Security Payload (ESP) packets. When the VPN connection needs to pass through a NAT router, the
ESP packets are encapsulated in UDP packets on port 4500.
To verify that packets are being sent via the VPN tunnel, issue a command as root in the following format:
~]# tcpdump -n -i interface esp or udp port 500 or udp port 4500
00:32:32.632165 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1a), length 132
00:32:32.632592 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1a), length 132
00:32:32.632592 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 7,
length 64
00:32:33.632221 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1b), length 132
00:32:33.632731 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1b), length 132
00:32:33.632731 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 8,
length 64
00:32:34.632183 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1c), length 132
00:32:34.632607 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1c), length 132
00:32:34.632607 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 9,
length 64
00:32:35.632233 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1d), length 132
00:32:35.632685 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1d), length 132
00:32:35.632685 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 10,
length 64
Where interface is the interface known to carry the traffic. To end the capture with tcpdump, press
Ctrl+C.
Note
The tcpdump command interacts a little unexpectedly with IPsec. It only sees the outgoing
encrypted packet, not the outgoing plaintext packet. It does see the encrypted incoming packet, as
well as the decrypted incoming packet. If possible, run tcpdump on a router between the two
machines and not on one of the endpoints itself.
To configure Libreswan to create a site-to-site IPsec VPN, first configure a host-to-host IPsec VPN as described in Section 4.7.3, “Host-To-Host VPN Using Libreswan” and then copy or move the file to a file with a suitable name such as /etc/ipsec.d/my_site-to-site.conf. Using an editor running as root, edit the custom configuration file /etc/ipsec.d/my_site-to-site.conf as follows:
conn mysubnet
also=mytunnel
leftsubnet=192.0.1.0/24
rightsubnet=192.0.2.0/24
conn mysubnet6
also=mytunnel
connaddrfamily=ipv6
leftsubnet=2001:db8:0:1::/64
rightsubnet=2001:db8:0:2::/64
conn mytunnel
auto=start
[email protected]
left=192.1.2.23
leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
[email protected]
right=192.1.2.45
rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
authby=rsasig
To bring the tunnels up, restart Libreswan or manually load and initiate all the connections using the
following commands as root:
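For example (using the connection names from the configuration above):
~]# ipsec auto --add mysubnet
~]# ipsec auto --add mysubnet6
~]# ipsec auto --add mytunnel
~]# ipsec auto --up mysubnet
~]# ipsec auto --up mysubnet6
~]# ipsec auto --up mytunnel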
Verifying that packets are being sent via the VPN tunnel is the same procedure as explained in
Section 4.7.3.1, “Verify Host-To-Host VPN Using Libreswan”.
conn mysubnet
[email protected]
leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ==
left=192.1.2.23
leftsourceip=192.0.1.254
leftsubnet=192.0.1.0/24
[email protected]
rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ==
right=192.1.2.45
rightsourceip=192.0.2.254
rightsubnet=192.0.2.0/24
auto=start
authby=rsasig
conn branch1
left=1.2.3.4
leftid=@headoffice
leftsubnet=0.0.0.0/0
leftrsasigkey=0sA[...]
#
right=5.6.7.8
rightid=@branch1
rightsubnet=10.0.1.0/24
rightrsasigkey=0sAXXXX[...]
#
auto=start
authby=rsasigkey
conn branch2
left=1.2.3.4
leftid=@headoffice
leftsubnet=0.0.0.0/0
leftrsasigkey=0sA[...]
#
right=10.11.12.13
rightid=@branch2
rightsubnet=10.0.2.0/24
rightrsasigkey=0sAYYYY[...]
#
auto=start
authby=rsasigkey
At the “branch1” office we use the same connection. Additionally we use a pass-through connection to
exclude our local LAN traffic from being sent through the tunnel:
conn branch1
left=1.2.3.4
leftid=@headoffice
leftsubnet=0.0.0.0/0
leftrsasigkey=0sA[...]
#
right=10.11.12.13
rightid=@branch2
rightsubnet=10.0.1.0/24
rightrsasigkey=0sAYYYY[...]
#
auto=start
authby=rsasigkey
conn passthrough
left=1.2.3.4
right=0.0.0.0
leftsubnet=10.0.1.0/24
rightsubnet=10.0.1.0/24
authby=never
type=passthrough
auto=route
On the server:
conn roadwarriors
left=1.2.3.4
# if access to the LAN is given, enable this
#leftsubnet=10.10.0.0/16
leftcert=gw.example.com
leftid=%fromcert
right=%any
# trust our own Certificate Agency
rightca=%same
# allow clients to be behind a NAT router
rightsubnet=vhost:%priv,%no
authby=rsasigkey
# load connection, don't initiate
auto=add
# kill vanished roadwarriors
dpddelay=30
dpdtimeout=120
dpdaction=clear
On the mobile client, the Road Warrior's device, we need to use a slight variation of the above
configuration:
conn roadwarriors
# pick up our dynamic IP
left=%defaultroute
leftcert=myname.example.com
leftid=%fromcert
# right can also be a DNS hostname
right=1.2.3.4
# if access to the remote LAN is required, enable this
#rightsubnet=10.10.0.0/16
# trust our own Certificate Agency
rightca=%same
authby=rsasigkey
# Initiate connection
auto=start
4.7.8. Road Warrior Application Using Libreswan and XAUTH with X.509
Libreswan offers a method to natively assign IP address and DNS information to roaming VPN clients as
the connection is established by using the XAUTH IPsec extension. XAUTH can be deployed using PSK
or X.509 certificates. Deploying using X.509 is more secure. Client certificates can be revoked by a
certificate revocation list or by Online Certificate Status Protocol (OCSP). With X.509 certificates, individual
clients cannot impersonate the server. With a PSK, also called Group Password, this is theoretically
possible.
XAUTH requires the VPN client to additionally identify itself with a user name and password. For One-time Passwords (OTP), such as Google Authenticator or RSA SecureID tokens, the one-time token is
appended to the user password.
xauthby=pam
This uses the configuration in /etc/pam.d/pluto to authenticate the user. PAM can be configured to use various backends by itself. It can use the system account user-password
scheme, an LDAP directory, a RADIUS server or a custom password authentication module.
xauthby=file
user1:$apr1$MIwQ3DHb$1I69LzTnZhnCT2DPQmAOK.:remoteusers
NOTE: when using the htpasswd command, the connection name has to be manually added after
the user:password part on each line.
xauthby=alwaysok
The server will always pretend the XAUTH user and password combination was correct. The
client still has to specify a user name and a password, although the server ignores these. T his
should only be used when users are already identified by X.509 certificates, or when testing the
VPN without needing an XAUT H backend.
conn xauth-rsa
auto=add
authby=rsasig
pfs=no
rekey=no
left=ServerIP
leftcert=vpn.example.com
#leftid=%fromcert
leftid=vpn.example.com
leftsendcert=always
leftsubnet=0.0.0.0/0
rightaddresspool=10.234.123.2-10.234.123.254
right=%any
rightrsasigkey=%cert
modecfgdns1=1.2.3.4
modecfgdns2=8.8.8.8
modecfgdomain=example.com
modecfgbanner="Authorized Access is allowed"
leftxauthserver=yes
rightxauthclient=yes
leftmodecfgserver=yes
rightmodecfgclient=yes
modecfgpull=yes
xauthby=pam
dpddelay=30
dpdtimeout=120
dpdaction=clear
ike_frag=yes
# for walled-garden on xauth failure
# xauthfail=soft
#leftupdown=/custom/_updown
When xauthfail is set to soft, instead of hard, authentication failures are ignored and the VPN is set up as if the user authenticated properly. A custom updown script can be used to check for the environment variable XAUTH_FAILED. Such users can then be redirected, for example using iptables DNAT, to a “walled garden” where they can contact the administrator or renew a paid subscription to the service.
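A minimal sketch of such a script follows; the walled-garden address and the default _updown path are assumptions for illustration, not values taken from this guide:
#!/bin/sh
# Hypothetical custom updown script referenced by leftupdown=.
# With xauthfail=soft, a failed XAUTH login is signalled through the
# XAUTH_FAILED environment variable instead of tearing down the tunnel.
if [ "${XAUTH_FAILED:-0}" = "1" ]; then
    # Send the client's web traffic to a walled-garden server (10.0.0.10).
    iptables -t nat -A PREROUTING -s "$PLUTO_PEER_CLIENT" \
        -p tcp --dport 80 -j DNAT --to-destination 10.0.0.10
fi
# Fall through to the stock updown script for normal processing.
exec /usr/libexec/ipsec/_updown "$@"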
VPN clients use the modecfgdomain value and the DNS entries to redirect queries for the specified domain to these specified nameservers. This allows roaming users to access internal-only resources using the internal DNS names.
If leftsubnet is not 0.0.0.0/0, split tunneling configuration requests are sent automatically to the
client. For example, when using leftsubnet=10.0.0.0/8, the VPN client would only send traffic for
10.0.0.0/8 through the VPN.
ipsec_auto(8) man page — Describes the use of the auto command line client for manipulating
automatically-keyed LibreSwan IPsec connections.
ipsec_rsasigkey(8) man page — Describes the tool used to generate RSA signature keys.
http://www.mozilla.org/projects/security/pki/nss/
The openssl command line utility has a number of pseudo-commands to provide information on the commands that the version of openssl installed on the system supports. The pseudo-commands list-standard-commands, list-message-digest-commands, and list-cipher-commands output a list of all standard commands, message digest commands, or cipher commands, respectively, that are available in the present openssl utility.
rsa_keygen_bits:numbits — The number of bits in the generated key. If not specified, 1024 is used.
rsa_keygen_pubexp:value — The RSA public exponent value. This can be a large decimal value, or a hexadecimal value if preceded by 0x. The default value is 65537.
For example, to create a 2048 bit RSA private key using 3 as the public exponent, issue the following command:
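A sketch of the expected command, assuming the key is to be written to privkey.pem:
~]$ openssl genpkey -algorithm RSA -out privkey.pem -pkeyopt rsa_keygen_bits:2048 -pkeyopt rsa_keygen_pubexp:3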
To encrypt the private key, as it is output, using 128-bit AES and the passphrase “hello”, issue the following command:
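For example (a sketch; the output file name is a placeholder):
~]$ openssl genpkey -algorithm RSA -out privkey.pem -aes-128-cbc -pass pass:hello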
To have a certificate signed by a certificate authority (CA), it is necessary to generate a certificate request and then send it to a CA for signing. This is referred to as a certificate signing request. See Section 4.8.2.1, “Creating a Certificate Signing Request” for more information. The alternative is to create a self-signed
certificate. See Section 4.8.2.2, “Creating a Self-signed Certificate” for more information.
To create a certificate signing request for submission to a CA, issue a command in the following format:
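A sketch, assuming an existing private key stored in privkey.pem:
~]$ openssl req -new -key privkey.pem -out cert.csr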
This will create a certificate signing request called cert.csr encoded in the default privacy-enhanced electronic mail (PEM) format. The name PEM is derived from “Privacy Enhancement for Internet Electronic Mail” described in RFC 1424. To generate a certificate file in the alternative DER format, use the -outform DER command option.
After issuing the above command, you will be prompted for information about yourself and the organization in order to create a distinguished name (DN) for the certificate. You will need the following information:
City or Town
The req(1) man page describes the PKCS #10 certificate request and generating utility. Default settings used in the certificate-creation process are contained within the /etc/pki/tls/openssl.cnf file. See man openssl.cnf(5) for more information.
To generate a self-signed certificate, valid for 366 days, issue a command in the following format:
~]$ openssl req -new -x509 -key privkey.pem -out selfcert.pem -days 366
Alternatively, change to the /etc/pki/tls/certs/ directory and issue the make command as follows:
~]$ cd /etc/pki/tls/certs/
~]$ make
untrusted certificate. The verify utility uses the same SSL and S/MIME functions to verify a certificate as are used by OpenSSL in normal operation. If an error is found, it is reported and then an attempt is made to continue testing in order to report any other errors.
To verify multiple individual X.509 certificates in PEM format, issue a command in the following format:
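For example (the file names are placeholders):
~]$ openssl verify cert1.pem cert2.pem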
To verify a certificate chain, the leaf certificate must be in cert.pem and the intermediate certificates which you do not trust must be directly concatenated in untrusted.pem. The trusted root CA certificate must be either among the default CAs listed in /etc/pki/tls/certs/ca-bundle.crt or in a cacert.pem file. Then, to verify the chain, issue a command in the following format:
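A sketch of the expected command:
~]$ openssl verify -untrusted untrusted.pem -CAfile cacert.pem cert.pem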
The default format for keys and certificates is PEM. If required, use the -keyform DER option to specify the DER key format.
~]$ openssl pkeyutl -in plaintext -out cyphertext -inkey privkey.pem -engine id
Where id is the ID of the cryptographic engine. To check the availability of an engine, issue the following command:
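A sketch; this lists the supported engines and tests whether each is available:
~]$ openssl engine -t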
~]$ openssl pkeyutl -sign -in plaintext -out sigtext -inkey privkey.pem
To verify a signed data file and to extract the data, issue a command as follows:
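A sketch, assuming an RSA key and a signed file named sig:
~]$ openssl pkeyutl -verifyrecover -in sig -inkey key.pem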
To verify the signature, for example using a DSA key, issue a command as follows:
~]$ openssl pkeyutl -verify -in file -sigfile sig -inkey key.pem
Where algorithm is one of md5|md4|md2|sha1|sha|mdc2|ripemd160|dss1. At the time of writing, the SHA1 algorithm is preferred. If you need to sign or verify using DSA, then the dss1 option must be used together with a file containing random data specified by the -rand option.
To produce a message digest in the default Hex format using the sha1 algorithm, issue the following command:
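For example (the input and output file names are placeholders):
~]$ openssl dgst -sha1 -out digest file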
To digitally sign the digest, using a private key privkey.pem, issue the following command:
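A sketch (openssl dgst computes and signs the digest of the input file in one step):
~]$ openssl dgst -sha1 -sign privkey.pem -out signature file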
To compute the hash of a password from standard input, using the MD5-based BSD algorithm 1, issue a command as follows:
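A sketch; with no password argument, openssl prompts for the password on standard input:
~]$ openssl passwd -1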
To compute the hash of a password stored in a file, and using a salt xx, issue a command as follows:
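For example (the file name is a placeholder):
~]$ openssl passwd -salt xx -in password-file.txt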
The password is sent to standard output and there is no -out option to specify an output file. The -table option will generate a table of password hashes with their corresponding clear-text passwords.
Multiple files for seeding the random data process can be specified using the colon, :, as a list separator.
To test the computational speed of a system for a given algorithm, issue a command in the following format:
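For example:
~]$ openssl speed sha1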
where algorithm is one of the supported algorithms you intend to use. To list the available algorithms, type openssl speed and then press tab.
Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
Updates to the Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL)
Profile
4.9. Encryption
Overview of LUKS
What LUKS does
LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of
mobile devices such as removable storage media or laptop disk drives.
The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage.
LUKS devices contain multiple key slots, allowing users to add backup keys/passphrases.
LUKS is not well-suited for applications requiring many (more than eight) users to have distinct
access keys to the same device.
Red Hat Enterprise Linux 7 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that will be asked for every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table, you can choose which partitions you want to encrypt. This is set in the partition table settings.
The default cipher used for LUKS (see cryptsetup --help) is aes-cbc-essiv:sha256 (ESSIV - Encrypted Salt-Sector Initialization Vector). Note that the installation program, Anaconda, uses XTS mode (aes-xts-plain64) by default. The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Ciphers that are available are:
Serpent
Warning
Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you back up your data to an external source before beginning this procedure!
telinit 1
umount /home
3. If the command in the previous step fails, use fuser to find processes hogging /home and kill them:
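A sketch of such a command; the -k option kills the processes that are holding /home open:
~]# fuser -mvk /home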
This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data.
mkfs.ext3 /dev/mapper/home
df -h | grep home
13. Edit the /etc/fstab file, removing the old entry for /home and adding the following line:
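A sketch of the expected entry, assuming the ext3 file system created above (the mount options are placeholders):
/dev/mapper/home  /home  ext3  defaults  1 2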
/sbin/restorecon -v -R /home
shutdown -r now
16. The entry in /etc/crypttab makes your computer ask for your LUKS passphrase on boot.
You now have an encrypted partition for all of your data to safely rest in while the computer is off.
After being prompted for any one of the existing passphrases for authentication, you will be prompted to enter the new passphrase.
You will be prompted for the passphrase you wish to remove and then for any one of the remaining
passphrases for authentication.
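A sketch of the corresponding cryptsetup commands for adding and removing a passphrase, assuming the encrypted partition is /dev/sdaX (the device name is a placeholder):
~]# cryptsetup luksAddKey /dev/sdaX
~]# cryptsetup luksRemoveKey /dev/sdaX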
You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions.
To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process, the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices.
Note
Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically.
Note
You can use kickstart to set a separate passphrase for each new encrypted block device.
For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux 7 visit one of
the following links:
LUKS/cryptsetup FAQ
HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove
not know. GPG allows anyone reading a GPG-signed email to verify its authenticity. In other words, GPG
allows someone to be reasonably certain that communications signed by you actually are from you. GPG is
useful because it helps prevent third parties from altering code or intercepting conversations and altering
the message.
1. Install the Seahorse utility, which makes GPG key management easier:
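For example, using Yum:
~]# yum install seahorse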
2. To create a key, from the Applications → Accessories menu select Passwords and Encryption Keys, which starts the application Seahorse.
3. From the File menu select New and then PGP Key. Then click Continue.
4. Type your full name, email address, and an optional comment describing who you are (for example: John C. Smith, [email protected], Software Engineer). Click Create. A dialog is displayed asking for a passphrase for the key. Choose a strong passphrase that is also easy to remember. Click OK and the key is created.
Warning
If you forget your passphrase, you will not be able to decrypt the data.
To find your GPG key ID, look in the Key ID column next to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD. You should make a backup of your private key and store it somewhere secure.
1. Start the KGpg program from the main menu by selecting Applications → Utilities → Encryption Tool. If you have never used KGpg before, the program walks you through the process of creating your own GPG keypair.
2. A dialog box appears prompting you to create a new key pair. Enter your name, email address, and
an optional comment. You can also choose an expiration time for your key, as well as the key
strength (number of bits) and algorithms.
3. Enter your passphrase in the next dialog box. At this point, your key appears in the main KGpg
window.
Warning
If you forget your passphrase, you will not be able to decrypt the data.
To find your GPG key ID, look in the Key ID column next to the newly created key. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD. You should make a backup of your private key and store it somewhere secure.
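Keys can also be created on the command line with the gpg2 utility (a sketch, assuming GnuPG 2 is installed):
1. Use the following command to start key generation:
~]$ gpg2 --gen-key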
This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and/or decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list.
2. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer:
In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign
communications, but also to encrypt files.
Again, the default, 2048, is sufficient for almost all users, and represents an extremely strong level
of security.
4. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the
default, which is none. If, for example, the email address on the key becomes invalid, an expiration
date will remind others to stop using that public key.
Entering a value of 1y, for example, makes the key valid for one year. (You may change this
expiration date after the key is generated, if you change your mind.)
5. Before the gpg2 application asks for signature information, the following prompt appears:
6. Enter your name and email address for your GPG key. Remember this process is about
authenticating you as a real individual. For this reason, include your real name. If you choose a
bogus email address, it will be more difficult for others to find your public key. This makes
authenticating your communications difficult. If you are using this GPG key for self-introduction on a
mailing list, for example, enter the email address you use on that list.
Use the comment field to include aliases or other information. (Some people use different keys for
different purposes and identify each key with a comment, such as "Office" or "Open Source
Projects.")
7. At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other
options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program
asks you to enter your passphrase twice to ensure you made no typing errors.
8. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse,
type random keys, or perform other tasks on the system during this step to speed up the process.
Once this step is finished, your keys are complete and ready to use:
9. The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address:
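For example (substitute your own address):
~]$ gpg2 --fingerprint [email protected]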
Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG
key ID is 1B2AFA1C. In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in
0x6789ABCD.
Warning
If you forget your passphrase, the key cannot be used and any data encrypted using that key will
be lost.
2. HowStuffWorks - Encryption
To install the basic openCryptoki packages on your system, including a software implementation of a token for testing purposes, run the following command as root:
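For example, using Yum:
~]# yum install opencryptoki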
Depending on the type of hardware tokens you intend to use, you may need to install additional packages that provide support for your specific use case. For example, to obtain support for Trusted Platform Module (TPM) devices, you need to install the opencryptoki-tpmtok package.
See the Red Hat Enterprise Linux 7 System Administrator's Guide for general information on how to install
packages using the Yum package manager.
To enable the openCryptoki service, you need to run the pkcsslotd daemon. Start the daemon for the current session by executing the following command as root:
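A sketch of the expected command on a systemd-based system:
~]# systemctl start pkcsslotd.service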
To ensure that the service is automatically started at boot time, run the following command:
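For example:
~]# systemctl enable pkcsslotd.service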
See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information on how to use systemd targets to manage services.
The /etc/opencryptoki/opencryptoki.conf file defines the individual slots using key-value pairs. Each slot definition can contain a description, a specification of the token library to be used, and an ID of the slot's manufacturer. Optionally, the version of the slot's hardware and firmware may be defined. See the opencryptoki.conf(5) manual page for a description of the file's format and for a more detailed description of the individual keys and the values that can be assigned to them.
To modify the behavior of the pkcsslotd daemon at run time, use the pkcsconf utility. This tool allows you to show and configure the state of the daemon, as well as to list and modify the currently configured slots and tokens. For example, to display information about tokens, issue the following command (note that all non-root users that need to communicate with the pkcsslotd daemon must be a part of the pkcs11 system group):
~]$ pkcsconf -t
See the pkcsconf(1) manual page for a list of arguments available with the pkcsconf tool.
Warning
Keep in mind that only fully trusted users should be assigned membership in the pkcs11 group, as
all members of this group have the right to block other users of the openCryptoki service from
accessing configured PKCS#11 tokens. All members of this group can also execute arbitrary code
with the privileges of any other users of openCryptoki.
Chapter 5. System Auditing
The following list summarizes some of the information that Audit is capable of recording in its log files:
Association of an event with the identity of the user who triggered the event.
All modifications to Audit configuration and attempts to access Audit log files.
Include or exclude events based on user identity, subject and object labels, and other attributes.
The use of the Audit system is also a requirement for a number of security-related certifications. Audit is
designed to meet or exceed the requirements of the following certifications or compliance guides:
Evaluated by National Information Assurance Partnership (NIAP) and Best Security Industries (BSI).
Use Cases
Watching file access
Audit can track whether a file or a directory has been accessed, modified, executed, or the file's
attributes have been changed. This is useful, for example, to detect access to important files and
have an Audit trail available in case one of these files is corrupted.
Audit can be configured to generate a log entry every time a particular system call is used. This can be used, for example, to track changes to the system time by monitoring the settimeofday, clock_adjtime, and other time-related system calls.
Because Audit can track whether a file has been executed, a number of rules can be defined to
record every execution of a particular command. For example, a rule can be defined for every
executable in the /bin directory. The resulting log entries can then be searched by user ID to
generate an audit trail of executed commands per user.
The pam_faillock authentication module is capable of recording failed login attempts. Audit
can be set up to record failed login attempts as well, and provides additional information about the
user who attempted to log in.
Audit provides the ausearch utility, which can be used to filter the log entries and provide a
complete audit trail based on a number of conditions.
The aureport utility can be used to generate, among other things, daily reports of recorded events. A system administrator can then analyze these reports and investigate suspicious activity further.
The iptables and ebtables utilities can be configured to trigger Audit events, allowing system
administrators to monitor network access.
Note
System performance may be affected depending on the amount of information that is collected by
Audit.
The user-space Audit daemon collects the information from the kernel and creates log file entries in a log file. Other Audit user-space utilities interact with the Audit daemon, the kernel Audit component, or the Audit log files:
audisp — the Audit dispatcher daemon interacts with the Audit daemon and sends events to other applications for further processing. The purpose of this daemon is to provide a plug-in mechanism so that real-time analytical programs can interact with Audit events.
auditctl — the Audit control utility interacts with the kernel Audit component to control a number of settings and parameters of the event generation process.
The remaining Audit utilities take the contents of the Audit log files as input and generate output based on the user's requirements. For example, the aureport utility generates a report of all recorded events.
The default auditd configuration should be suitable for most environments. However, if your environment has to meet the criteria set by the Controlled Access Protection Profile (CAPP), which is a part of the Common Criteria certification, the Audit daemon must be configured with the following settings:
The directory that holds the Audit log files (usually /var/log/audit/) should reside on a separate partition. This prevents other processes from consuming space in this directory, and provides accurate detection of the remaining space for the Audit daemon.
The max_log_file parameter, which specifies the maximum size of a single Audit log file, must be set to make full use of the available space on the partition that holds the Audit log files.
The max_log_file_action parameter, which decides what action is taken once the limit set in max_log_file is reached, should be set to keep_logs to prevent Audit log files from being overwritten.
The space_left parameter, which specifies the amount of free space left on the disk for which an action that is set in the space_left_action parameter is triggered, must be set to a number that gives the administrator enough time to respond and free up disk space. The space_left value depends on the rate at which the Audit log files are generated.
The admin_space_left parameter, which specifies the absolute minimum amount of free space for which an action that is set in the admin_space_left_action parameter is triggered, must be set to a value that leaves enough space to log actions performed by the administrator.
The admin_space_left_action parameter must be set to single to put the system into single-user mode and allow the administrator to free up some disk space.
The disk_full_action parameter, which specifies an action that is triggered when no free space is available on the partition that holds the Audit log files, must be set to halt or single. This ensures that the system is either shut down or operating in single-user mode when Audit can no longer log events.
The flush configuration parameter must be set to sync or data. These settings ensure that all Audit event data is fully synchronized with the log files on the disk.
The remaining configuration options should be set according to your local security policy.
Optionally, you can configure auditd to start at boot time using the following command as the root user:
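A sketch of the expected command on a systemd-based system:
~]# systemctl enable auditd.service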
A number of other actions can be performed on auditd using the service auditd action command, where action can be one of the following:
resume — resumes logging of Audit events after it has been previously suspended, for example, when there is not enough free space on the disk partition that holds the Audit log files.
Control rules — allow the Audit system's behavior and some of its configuration to be modified.
File system rules — also known as file watches, allow the auditing of access to a particular file or a
directory.
System call rules — allow logging of system calls that any specified program makes.
Audit rules can be specified on the command line with the auditctl utility (note that these rules are not
persistent across reboots), or written in the /etc/audit/audit.rules file. The following two sections
summarize both approaches to defining Audit rules.
Note
All commands which interact with the Audit service and the Audit log files require root privileges.
Ensure you execute these commands as the root user.
The auditctl command allows you to control the basic functionality of the Audit system and to define rules that decide which Audit events are logged.
The following are some of the control rules that allow you to modify the behavior of the Audit system:
-b
sets the maximum amount of existing Audit buffers in the kernel, for example:
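For example, to set the maximum number of buffers to 8192 (the value matches the backlog_limit shown in the status output below):
~]# auditctl -b 8192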
-f
sets the action that is performed when a critical error is detected, for example:
~]# auditctl -f 2
-e
enables and disables the Audit system or locks its configuration, for example:
~]# auditctl -e 2
-r
sets the rate of generated messages per second, for example:
~]# auditctl -r 0
-s
reports the status of the Audit system, for example:
~]# auditctl -s
AUDIT_STATUS: enabled=1 flag=2 pid=0 rate_limit=0 backlog_limit=8192
lost=259 backlog=0
-l
lists all currently loaded Audit rules, for example:
~]# auditctl -l
LIST_RULES: exit,always watch=/etc/localtime perm=wa key=time-change
LIST_RULES: exit,always watch=/etc/group perm=wa key=identity
LIST_RULES: exit,always watch=/etc/passwd perm=wa key=identity
LIST_RULES: exit,always watch=/etc/gshadow perm=wa key=identity
⋮
-D
deletes all currently loaded Audit rules, for example:
~]# auditctl -D
No rules
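File system rules (file watches) follow this general form — a sketch of the standard auditctl syntax, in which path_to_file is the file or directory to watch and permissions is any combination of r, w, x, and a (attribute change):
~]# auditctl -w path_to_file -p permissions -k key_name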
where:
key_name is an optional string that helps you identify which rule or a set of rules generated a particular
log entry.
To define a rule that logs all write access to, and every attribute change of, the /etc/passwd file, execute the following command:
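For example (this matches the rule shown in the example rules file later in this section):
~]# auditctl -w /etc/passwd -p wa -k passwd_changes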
To define a rule that logs all write access to, and every attribute change of, all the files in the /etc/selinux/ directory, execute the following command:
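For example:
~]# auditctl -w /etc/selinux/ -p wa -k selinux_changes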
To define a rule that logs the execution of the /sbin/insmod command, which inserts a module into the Linux kernel, execute the following command:
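For example:
~]# auditctl -w /sbin/insmod -p x -k module_insertion
System call rules use the following general form (a sketch of the standard auditctl syntax; the individual parts are explained below):
~]# auditctl -a action,filter -S system_call -F field=value -k key_name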
where:
action and filter specify when a certain event is logged. action can be either always or never. filter
specifies which kernel rule-matching filter is applied to the event. The rule-matching filter can be one of
the following: task, exit, user, and exclude. For more information about these filters, refer to the
beginning of Section 5.1, “Audit System Architecture”.
system_call specifies the system call by its name. A list of all system calls can be found in the /usr/include/asm/unistd_64.h file. Several system calls can be grouped into one rule, each specified after the -S option.
field=value specifies additional options that further modify the rule to match events based on a specified architecture, group ID, process ID, and others. For a full listing of all available field types and their values, refer to the auditctl(8) man page.
key_name is an optional string that helps you identify which rule or a set of rules generated a particular
log entry.
To define a rule that creates a log entry every time the adjtimex or settimeofday system calls are used by a program, and the system uses the 64-bit architecture, execute the following command:
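A sketch of the expected command:
~]# auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change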
To define a rule that creates a log entry every time a file is deleted or renamed by a system user whose ID is 500 or larger (the -F auid!=4294967295 option is used to exclude users whose login UID is not set), execute the following command:
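A sketch of the expected command:
~]# auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete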
It is also possible to define a file system rule using the system call rule syntax. The following command creates a rule for system calls that is analogous to the -w /etc/shadow -p wa file system rule:
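A sketch:
~]# auditctl -a always,exit -F path=/etc/shadow -F perm=wa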
To define Audit rules that are persistent across reboots, you must include them in the /etc/audit/audit.rules file. This file uses the same auditctl command line syntax to specify the rules. Any empty lines and any text following a hash sign (#) are ignored.
The auditctl command can also be used to read rules from a specified file with the -R option, for example:
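A sketch; the path to the sample rule files shipped with the audit package depends on the installed package version and is a placeholder here:
~]# auditctl -R /usr/share/doc/audit-version/rules/30-stig.rules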
A file can contain only the following control rules that modify the behavior of the Audit system: -b, -D, -e, -f, and -r. For more information on these options, refer to Section 5.5.1, “Defining Control Rules”.
File system and system call rules are defined using the auditctl syntax. The examples in Section 5.5.1, “Defining Audit Rules with the auditctl Utility” can be represented with the following rules file:
-w /etc/passwd -p wa -k passwd_changes
-w /etc/selinux/ -p wa -k selinux_changes
-w /sbin/insmod -p x -k module_insertion
nispom.rules — Audit rule configuration that meets the requirements specified in Chapter 8 of the National Industrial Security Program Operating Manual.
To use these configuration files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file:
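A sketch of the expected commands; the path to the shipped rule files depends on the installed audit package version and is a placeholder:
~]# cp /etc/audit/audit.rules /etc/audit/audit.rules_backup
~]# cp /usr/share/doc/audit-version/rules/nispom.rules /etc/audit/audit.rules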
The following Audit rule logs every attempt to read or modify the /etc/ssh/sshd_config file:
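A sketch of such a rule (the key name matches the key field shown in the example analysis below):
-w /etc/ssh/sshd_config -p warx -k sshd_config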
If the auditd daemon is running, executing the following command creates a new event in the Audit log file:
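For example (the analysis below refers to an event triggered by the cat command):
~]$ cat /etc/ssh/sshd_config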
The above event consists of three records (each starting with the type= keyword), which share the same time stamp and serial number. Each record consists of several name=value pairs separated by a white space or a comma. A detailed analysis of the above event follows:
First Record
type=SYSCALL
The type field contains the type of the record. In this example, the SYSCALL value specifies that this record was triggered by a system call to the kernel.
For a list of all possible type values and their explanations, refer to Section B.2, “Audit Record Types”.
The msg field records:
a time stamp and a unique ID of the record in the form audit(time_stamp:ID). Multiple
records can share the same time stamp and ID if they were generated as part of the same
Audit event.
various event-specific name=value pairs provided by the kernel or user space applications.
arch=c000003e
The arch field contains information about the CPU architecture of the system. The value, c000003e, is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The c000003e value is interpreted as x86_64.
syscall=2
The syscall field records the type of the system call that was sent to the kernel. The value, 2, can be matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2 is the open system call. Note that the ausyscall utility allows you to convert system call numbers to their human-readable equivalents. Use the ausyscall --dump command to display a listing of all system calls along with their numbers. For more information, refer to the ausyscall(8) man page.
success=no
The success field records whether the system call recorded in that particular event succeeded or failed. In this case, the call did not succeed.
exit=-13
The exit field contains a value that specifies the exit code returned by the system call. This value varies for different system calls. You can interpret the value to its human-readable equivalent with the following command: ausearch --interpret --exit -13 (assuming your Audit log contains an event that failed with exit code -13).
The a0 to a3 fields record the first four arguments, encoded in hexadecimal notation, of the system call in this event. These arguments depend on the system call that is used; they can be interpreted by the ausearch utility.
items=1
ppid=2686
The ppid field records the Parent Process ID (PPID). In this case, 2686 was the PPID of the bash process.
pid=3538
The pid field records the Process ID (PID). In this case, 3538 was the PID of the cat process.
auid=500
The auid field records the Audit user ID, that is the loginuid. This ID is assigned to a user upon login and is inherited by every process even when the user's identity changes (for example, by switching user accounts with the su - john command).
uid=500
The uid field records the user ID of the user who started the analyzed process. The user ID can be interpreted into user names with the following command: ausearch -i --uid UID. In this case, 500 is the user ID of user shadowman.
gid=500
The gid field records the group ID of the user who started the analyzed process.
euid=500
The euid field records the effective user ID of the user who started the analyzed process.
suid=500
The suid field records the set user ID of the user who started the analyzed process.
fsuid=500
The fsuid field records the file system user ID of the user who started the analyzed process.
egid=500
The egid field records the effective group ID of the user who started the analyzed process.
sgid=500
The sgid field records the set group ID of the user who started the analyzed process.
fsgid=500
The fsgid field records the file system group ID of the user who started the analyzed process.
tty=pts0
The tty field records the terminal from which the analyzed process was invoked.
ses=1
The ses field records the session ID of the session from which the analyzed process was invoked.
comm="cat"
The comm field records the command-line name of the command that was used to invoke the analyzed process. In this case, the cat command was used to trigger this Audit event.
exe="/bin/cat"
The exe field records the path to the executable that was used to invoke the analyzed process.
subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
The subj field records the SELinux context with which the analyzed process was labeled at the time of execution.
key="sshd_config"
The key field records the administrator-defined string associated with the rule that generated this event in the Audit log.
Second Record
type=CWD
In the second record, the type field value is CWD — current working directory. This type is used to record the working directory from which the process that invoked the system call specified in the first record was executed.
The purpose of this record is to record the current process's location in case a relative path winds up being captured in the associated PATH record. This way the absolute path can be reconstructed.
The msg field holds the same time stamp and ID value as the value in the first record.
The cwd field contains the path to the directory in which the system call was invoked.
Third Record
type=PATH
In the third record, the type field value is PATH. An Audit event contains a PATH-type record for every path that is passed to the system call as an argument. In this Audit event, only one path (/etc/ssh/sshd_config) was used as an argument.
The msg field holds the same time stamp and ID value as the value in the first and second record.
item=0
The item field indicates which item, of the total number of items referenced in the SYSCALL type record, the current record is. This number is zero-based; a value of 0 means it is the first item.
name="/etc/ssh/sshd_config"
The name field records the full path of the file or directory that was passed to the system call as an argument. In this case, it was the /etc/ssh/sshd_config file.
inode=409248
The inode field contains the inode number associated with the file or directory recorded in this event. The following command displays the file or directory that is associated with the 409248 inode number:
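A sketch of such a command (searching from the file system root can take a while):
~]# find / -inum 409248 -print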
dev=fd:00
The dev field specifies the minor and major ID of the device that contains the file or directory recorded in this event. In this case, the value represents the /dev/fd/0 device.
mode=0100600
The mode field records the file or directory permissions, encoded in numerical notation. In this case, 0100600 can be interpreted as -rw-------, meaning that only the root user has read and write permissions to the /etc/ssh/sshd_config file.
ouid=0
The ouid field records the object owner's user ID.
ogid=0
The ogid field records the object owner's group ID.
rdev=00:00
The rdev field contains a recorded device identifier for special files only. In this case, it is not used as the recorded file is a regular file.
obj=system_u:object_r:etc_t:s0
The obj field records the SELinux context with which the recorded file or directory was labeled at the time of execution.
The Audit event analyzed above contains only a subset of all possible fields that an event can contain. For a list of all event fields and their explanation, refer to Section B.1, “Audit Event Fields”. For a list of all event types and their explanation, refer to Section B.2, “Audit Record Types”.
The following Audit event records a successful start of the auditd daemon. The ver field shows the version of the Audit daemon that was started.
The following Audit event records a failed attempt by a user with a UID of 500 to log in as the root user.
To search the /var/log/audit/audit.log file for failed login attempts, use the following command:
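A sketch of the expected command:
~]# ausearch --message USER_LOGIN --success no --interpret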
To search for all account, group, and role changes, use the following command:
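A sketch of the expected command (the list of message types is an illustration):
~]# ausearch -m ADD_USER -m DEL_USER -m ADD_GROUP -m USER_CHAUTHTOK -m DEL_GROUP -m CHGRP_ID -m ROLE_ASSIGN -m ROLE_REMOVE -i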
To search for all logged actions performed by a certain user, using the user's login ID (auid), use the following command:
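For example, for a user whose login ID is 500:
~]# ausearch -ua 500 -i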
To search for all failed system calls from yesterday up until now, use the following command:
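A sketch of the expected command:
~]# ausearch --start yesterday --end now -m SYSCALL -sv no -i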
For a full listing of all ausearch options, refer to the ausearch(8) man page.
To generate a report for logged events in the past three days excluding the current example day, use the following command:
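A sketch; the dates are placeholders:
~]# aureport --start 04/08/2013 00:00:00 --end 04/11/2013 00:00:00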
To generate a report of all executable file events, use the following command:
~]# aureport -x
To generate a summary of the executable file event report above, use the following command:
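For example:
~]# aureport -x --summary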
To generate a summary report of failed events for all users, use the following command:
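A sketch of the expected command:
~]# aureport -u --failed --summary -i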
To generate a summary report of all failed login attempts per each system user, use the following command:
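A sketch of the expected command:
~]# aureport --login --summary -i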
To generate a report from an ausearch query that searches all file access events for user 500, use the following command:
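A sketch of the expected command:
~]# ausearch --start today --loginuid 500 --raw | aureport -f --summary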
To generate a report of all Audit files that are queried and the time range of events they include, use the following command:
~]# aureport -t
For a full listing of all aureport options, refer to the aureport(8) man page.
Online Sources
Article Investigating kernel Return Codes with the Linux Audit System in the Hack In the Box magazine: http://magazine.hackinthebox.org/issues/HITB-Ezine-Issue-005.pdf.
Installed Documentation
Manual Pages
audispd.conf(5)
auditd.conf(5)
ausearch-expression(5)
audit.rules(7)
audispd(8)
auditctl(8)
auditd(8)
aulast(8)
aulastlog(8)
aureport(8)
ausearch(8)
ausyscall(8)
autrace(8)
auvirt(8)
Chapter 6. Compliance and Vulnerability Scanning
The compliance policy can vary substantially across organizations and even across different systems within the same organization. Differences among these policies are based on the purpose of these systems and their importance for the organization. Custom software settings and deployment characteristics also raise a need for custom policy checklists.
Red Hat Enterprise Linux provides tools that allow for fully automated compliance audits. These tools are based on the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies.
If you need to perform automated compliance audits on multiple systems remotely, you can utilize the OpenSCAP solution for Red Hat Satellite. For more information, see Section 6.5, “Using OpenSCAP with Red Hat Satellite” and Section 6.7, “Additional Resources”.
Note
Note that Red Hat does not provide any default compliance policy along with the Red Hat Enterprise Linux 7 distribution. The reasons for that are explained in Section 6.2, “Defining Compliance Policy”.
Red Hat Enterprise Linux auditing capabilities are based on the Security Content Automation Protocol
(SCAP) standard. SCAP is a synthesis of interoperable specifications that standardize the format and
nomenclature by which software flaw and security configuration information is communicated, both to
machines and humans. SCAP is a multi-purpose framework of specifications that supports automated
configuration, vulnerability and patch checking, technical control compliance activities, and security
measurement.
In other words, SCAP is a vendor-neutral way of expressing security policy, and as such it is widely used
in modern enterprises. SCAP specifications create an ecosystem where the format of security content is
well known and standardized while the implementation of the scanner or policy editor is not mandated.
Such a status enables organizations to build their security policy (SCAP content) once, no matter how many security vendors they employ.
The latest version of SCAP includes several underlying standards. These components are organized into groups according to their function within SCAP as follows:
SCAP Components
Languages — This group consists of SCAP languages that define standard vocabularies and conventions for expressing compliance policy.
Open Vulnerability and Assessment Language (OVAL) — A language developed to perform logical
assertion about the state of the scanned system.
Open Checklist Interactive Language (OCIL) — A language designed to provide a standard way to
query users and interpret user responses to the given questions.
Asset Identification (AI) — A language developed to provide a data model, methods, and guidance
for identifying security assets.
Asset Reporting Format (ARF) — A language designed to express the transport format of
information about collected security assets and the relationship between assets and security
reports.
Enumerations — This group includes SCAP standards that define naming format and an official list or dictionary of items from certain security-related areas of interest.
Common Platform Enumeration (CPE) — A structured naming scheme used to identify information
technology (IT ) systems, platforms, and software packages.
Metrics — This group comprises frameworks to identify and evaluate security risks.
Integrity — An SCAP specification to maintain integrity of SCAP content and scan results.
Trust Model for Security Automation Data (TMSAD) — A set of recommendations explaining usage of existing specifications to represent signatures, hashes, key information, and identity information in the context of an XML file within a security automation domain.
Each of the SCAP components has its own XML-based document format and its XML name space. A compliance policy expressed in SCAP can take the form of a single OVAL definition XML file, a data stream file, a single zip archive, or a set of separate XML files containing an XCCDF file that represents a policy checklist.
The common way to represent a compliance policy is a set of XML files where one of the files is an XCCDF checklist. This XCCDF file usually points to the assessment resources, multiple OVAL, OCIL and the Script Check Engine (SCE) files. Furthermore, the file set can contain a CPE dictionary file and an OVAL file defining objects for this dictionary.
Being an XML-based language, the XCCDF defines and uses a vast selection of XML elements and attributes. The following list briefly introduces the main XCCDF elements; for more details about XCCDF, consult the NIST Interagency Report 7275 Revision 4.
<xccdf:Benchmark> — This is a root element that encloses the whole XCCDF document. It may also contain checklist metadata, such as a title, description, list of authors, date of the latest modification, and status of the checklist acceptance.
<xccdf:Rule> — This is a key element that represents a checklist requirement and holds its description. It may contain child elements that define actions verifying or enforcing compliance with the given rule or modify the rule itself.
<xccdf:Value> — This key element is used for expressing properties of other XCCDF elements within the benchmark.
<xccdf:Group> — This element is used to organize an XCCDF document to structures with the same context or requirement domains by gathering the <xccdf:Rule>, <xccdf:Value>, and <xccdf:Group> elements.
<xccdf:Profile> — This element serves for a named tailoring of the XCCDF benchmark. It allows the benchmark to hold several different tailorings. <xccdf:Profile> utilizes several selector elements, such as <xccdf:select> or <xccdf:refine-rule>, to determine which elements are going to be modified and processed while it is in effect.
<xccdf:Tailoring> — This element allows defining the benchmark profiles outside the benchmark, which is sometimes desirable for manual tailoring of the compliance policy.
<xccdf:TestResult> — This element serves for keeping the scan results for the given benchmark on the target system. Each <xccdf:TestResult> should refer to the profile that was used to define the compliance policy for the particular scan and it should also contain important information about the target system that is relevant for the scan.
<xccdf:rule-result> — This is a child element of <xccdf:TestResult> that is used to hold the result of applying a specific rule from the benchmark to the target system.
<xccdf:fix> — This is a child element of <xccdf:Rule> that serves for remediation of the target system that is not compliant with the given rule. It can contain a command or script that is run on the target system in order to bring the system into compliance with the rule.
<xccdf:check> — This is a child element of <xccdf:Rule> that refers to an external source which defines how to evaluate the given rule.
<xccdf:select> — This is a selector element that is used for including or excluding the chosen rules or groups of rules from the policy.
<xccdf:set-value> — This is a selector element that is used for overwriting the current value of the specified <xccdf:Value> element without modifying any of its other properties.
<xccdf:refine-value> — This is a selector element that is used for specifying constraints of the particular <xccdf:Value> element during policy tailoring.
<xccdf:refine-rule> — This selector element allows overwriting properties of the selected rules.
2. Analysis of the target system for the presence of a particular machine state.
3. Reporting the results of the comparison between the specified machine state and the observed
machine state.
Unlike other tools or custom scripts, the OVAL language describes a desired state of resources in a declarative manner. The OVAL language code is never executed directly, but by means of an OVAL interpreter tool called a scanner. The declarative nature of OVAL ensures that the state of the assessed system is not accidentally modified, which is important because security scanners are often run with the highest possible privileges.
The OVAL specification is open for public comments and contribution, and various IT companies collaborate with the MITRE Corporation, a federally funded not-for-profit organization. The OVAL specification is continuously evolving and different editions are distinguished by a version number. The current version, 5.10.1, was released in January 2012.
Like all other SCAP components, OVAL is based on XML. The OVAL standard defines several document formats. Each of them includes a different kind of information and serves a different purpose.
The OVAL Definitions format is the most common OVAL file format that is used directly for system scans. The OVAL Definitions document describes the desired state of the target system.
The OVAL Variables format defines variables used to amend the OVAL Definitions document. The OVAL Variables document is typically used in conjunction with the OVAL Definitions document to tailor the security content for the target system at runtime.
The OVAL System Characteristics format holds information about the assessed system. The OVAL System Characteristics document is typically used to compare the actual state of the system against the expected state defined by an OVAL Definitions document.
The OVAL Results format is the most comprehensive OVAL format that is used to report results of the system evaluation. The OVAL Results document typically contains a copy of the evaluated OVAL definitions, bound OVAL variables, OVAL system characteristics, and results of tests that are computed based on comparison of the system characteristics and the definitions.
The OVAL Directives format is used to tailor the verbosity of an OVAL Results document by either including or excluding certain details.
The OVAL Common Model format contains definitions of constructs and enumerations used in several other OVAL schemes. It is used to reuse OVAL definitions in order to avoid duplications across multiple documents.
The OVAL Definitions document consists of a set of configuration requirements where each requirement is defined in the following five basic sections: definitions, tests, objects, states, and variables. The elements within the definitions section describe which of the tests shall be fulfilled to satisfy the given definition. The test elements link objects and states together. During the system evaluation, a test is considered passed when a resource of the assessed system that is denoted by the given object element corresponds with the given state element. The variables section defines external variables which may be used to adjust elements from the states section. Besides these sections, the OVAL Definitions document typically also contains the generator and signature sections. The generator section holds information about the document origin and various additional information related to its content.
Each element from the OVAL document basic sections is unambiguously identified by an identifier in the
following form:
oval:namespace:type:ID
where namespace is a name space defining the identifier, type is either def for definitions elements, tst for
tests elements, obj for objects element, ste for states elements, and var for variables elements, and ID is
an integer value of the identifier.
-->
<lin-def:behaviors nolinkto='true'
nomd5='true'
nosize='true'
nouser='true'
nogroup='true'
nomtime='true'
nomode='true'
nordev='true'
noconfigfiles='true'
noghostfiles='true' />
<lin-def:name operation="pattern match"/>
<lin-def:epoch operation="pattern match"/>
<lin-def:version operation="pattern match"/>
<lin-def:release operation="pattern match"/>
<lin-def:arch operation="pattern match"/>
<lin-def:filepath>/etc/redhat-release</lin-def:filepath>
</lin-def:rpmverifyfile_object>
</objects>
<states>
<lin-def:rpminfo_state id="oval:org.open-scap.cpe.rhel:ste:7"
version="1">
<lin-def:name operation="pattern match">^redhat-release</lin-def:name>
<lin-def:version operation="pattern match">^7[^\d]</lin-def:version>
</lin-def:rpminfo_state>
</states>
</oval_definitions>
The data stream uses an XML format that consists of a header formed by a table of contents and a list of <ds:component> elements. Each of these elements encompasses an SCAP component such as XCCDF, OVAL, or CPE. The data stream file may contain multiple components of the same type, thus covering all security policies needed by your organization.
<ds:data-stream-collection xmlns:ds="http://scap.nist.gov/schema/scap/source/1.2"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:cat="urn:oasis:names:tc:entity:xmlns:xml:catalog"
id="scap_org.open-scap_collection_from_xccdf_ssg-rhel7-xccdf-1.2.xml"
schematron-version="1.0">
<ds:data-stream id="scap_org.open-scap_datastream_from_xccdf_ssg-rhel7-xccdf-
1.2.xml"
scap-version="1.2" use-case="OTHER">
<ds:dictionaries>
<ds:component-ref id="scap_org.open-scap_cref_output--ssg-rhel7-cpe-
dictionary.xml"
xlink:href="#scap_org.open-scap_comp_output--ssg-rhel7-cpe-
dictionary.xml">
<cat:catalog>
<cat:uri name="ssg-rhel7-cpe-oval.xml"
uri="#scap_org.open-scap_cref_output--ssg-rhel7-cpe-oval.xml"/>
</cat:catalog>
</ds:component-ref>
</ds:dictionaries>
<ds:checklists>
<ds:component-ref id="scap_org.open-scap_cref_ssg-rhel7-xccdf-1.2.xml"
xlink:href="#scap_org.open-scap_comp_ssg-rhel7-xccdf-1.2.xml">
<cat:catalog>
<cat:uri name="ssg-rhel7-oval.xml"
uri="#scap_org.open-scap_cref_ssg-rhel7-oval.xml"/>
</cat:catalog>
</ds:component-ref>
</ds:checklists>
<ds:checks>
<ds:component-ref id="scap_org.open-scap_cref_ssg-rhel7-oval.xml"
xlink:href="#scap_org.open-scap_comp_ssg-rhel7-oval.xml"/>
<ds:component-ref id="scap_org.open-scap_cref_output--ssg-rhel7-cpe-oval.xml"
xlink:href="#scap_org.open-scap_comp_output--ssg-rhel7-cpe-oval.xml"/>
<ds:component-ref id="scap_org.open-scap_cref_output--ssg-rhel7-oval.xml"
xlink:href="#scap_org.open-scap_comp_output--ssg-rhel7-oval.xml"/>
</ds:checks>
</ds:data-stream>
<ds:component id="scap_org.open-scap_comp_ssg-rhel7-oval.xml"
timestamp="2014-03-14T16:21:59">
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5"
xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5"
xmlns:ind="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent"
xmlns:unix="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
xmlns:linux="http://oval.mitre.org/XMLSchema/oval-definitions-5#linux"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://oval.mitre.org/XMLSchema/oval-common-5
oval-common-schema.xsd
http://oval.mitre.org/XMLSchema/oval-definitions-5
oval-definitions-schema.xsd
http://oval.mitre.org/XMLSchema/oval-definitions-5#independent
independent-definitions-schema.xsd
http://oval.mitre.org/XMLSchema/oval-definitions-5#unix
unix-definitions-schema.xsd
http://oval.mitre.org/XMLSchema/oval-definitions-5#linux
linux-definitions-schema.xsd">
The following sections explain how to install, start, and use SCAP Workbench to perform system scans, remediation, and scan customization, and they provide relevant examples for these tasks.
To install SCAP Workbench on your system, run the following command as root:
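~]# yum install scap-workbench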
This command installs all packages required by SCAP Workbench to function properly, including the scap-workbench package that provides the utility itself. Note that required dependencies, such as the qt and openssh packages, will be automatically updated to the newest available version if the packages are already installed on your system.
Before you can start using SCAP Workbench effectively, you also need to install or import some security
content on your system. You can download the SCAP content from the respective web site, or if specified
as an RPM file or package, you can install it from the specified location, or known repository, using the
Yum package manager.
For example, you can install the SCAP Security Guide (SSG) package, scap-security-guide, which contains the currently most evolved and elaborate set of security policies for Linux systems. See the SSG project page to learn the exact steps for deploying the package on your system.
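If the scap-security-guide package is available in your configured repositories, the installation can be as simple as running the following command as root:
~]# yum install scap-security-guide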
After you install the scap-security-guide on your system, unless specified otherwise, the SSG security content is available under the /usr/share/xml/scap/ssg/rhel7/ directory, and you can proceed with other security compliance operations.
To find out other possible sources of existing SCAP content that might suit your needs, see Section 6.7, “Additional Resources”.
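To start the utility from the command line, you can, for example, run the following command (assuming the executable name matches the package name):
~]$ scap-workbench &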
As soon as you start the utility, the SCAP Workbench window appears. The SCAP Workbench window consists of several interactive components, with which you should become familiar before you start scanning your system:
Input file
This field contains the full path to the chosen security policy. You can search for applicable SCAP content on your system by clicking the Browse button.
Checklist
This combo box displays the name of the checklist that is to be applied by the selected security policy. You can choose a specific checklist by clicking this combo box if more than one checklist is available.
Tailoring
This combo box informs you about the customization used for the given security policy. You can select custom rules that will be applied for the system evaluation by clicking this combo box. The default value is (no tailoring), which means that there will be no changes to the used security policy. If you made any changes to the selected security profile, you can save those changes as an XML file by clicking the Save Tailoring button.
Profile
This combo box contains the name of the selected security profile. You can select the security profile from a given XCCDF or data stream file by clicking this combo box. To create a new profile that inherits properties of the selected security profile, click the Customize button.
Target
The two radio buttons enable you to select whether the system to be evaluated is a local or remote machine.
Selected Rules
This field displays a list of security rules that are subject to the security policy. Hovering over a particular security rule provides detailed information about that rule.
Save content
This menu allows you to save SCAP files that have been selected in the Input file and Tailoring fields either to the selected directory or as an RPM package.
Status bar
This is a graphical bar that indicates the status of an operation that is being performed.
Online remediation
This check box enables the remediation feature during the system evaluation. If you check this box, SCAP Workbench will attempt to correct system settings that would fail to match the state defined by the policy.
Scan
This button allows you to start the evaluation of the specified system.
1. Select a security policy by clicking the Browse button and searching for the respective XCCDF or data stream file.
Warning
Selecting a security policy results in the loss of any previous tailoring changes that were not saved. To re-apply the lost options, you have to choose the available profile and tailoring content again. Note that your previous customizations may not be applicable with the new security policy.
2. If the selected SCAP file is a data stream file that provides more than one checklist, you can select the particular checklist by clicking the Checklist combo box.
Warning
Changing the checklist may result in the selection of a different profile, and any previous customizations may not be applicable to the new checklist.
3. If you have pre-arranged a file with customized security content specific to your use case, you can load this file by clicking the Tailoring combo box. You can also create a custom tailoring file by altering an available security profile. For more information, see Section 6.3.4, “Customizing Security Profiles”.
a. Select the (no tailoring) option if you do not want to use any customization for the current system evaluation. This is the default option if no previous customization was selected.
b. Select the (open tailoring file...) option to search for the particular tailoring file to be used for the current system evaluation.
c. If you have previously used a tailoring file, SCAP Workbench remembers this file and adds it to the list. This simplifies repetitive application of the same scan.
4. To further modify the selected profile, click the Customize button. For more information about profile customization, see Section 6.3.4, “Customizing Security Profiles”.
5. Select one of the two Target radio buttons to scan either a local or a remote machine.
a. If you have selected a remote system, specify it by entering the user name, host name, and port information.
6. You can allow automatic correction of the system configuration by selecting the Online remediation check box. With this option enabled, SCAP Workbench will attempt to change the system configuration in accordance with security rules applied by the policy, should the related checks fail during the system scan.
Warning
If not used carefully, running the system evaluation with the remediation option enabled could
render the system non-functional.
Clicking the Customize button opens the Tailoring window, which allows you to modify the currently selected XCCDF profile without actually changing the respective XCCDF file.
The Tailoring window contains a complete set of XCCDF elements relevant to the selected security profile with detailed information about each element and its functionality. You can enable or disable these elements by selecting or de-selecting the respective check boxes in the main field of this window. The Tailoring window also supports undo and redo functionality; you can undo or redo your selections by clicking the respective arrow icon in the top left corner of the window.
After you have finished your profile customizations, confirm the changes by clicking the Finish Tailoring button. Your changes are now stored in memory and do not persist if SCAP Workbench is closed or if certain changes, such as selecting new SCAP content or choosing another tailoring option, are made. If you want your changes to persist, click the Save Tailoring button in the SCAP Workbench window. This action allows you to save your changes to the security profile as an XCCDF tailoring file in the chosen directory. Note that this tailoring file can also be selected later and used with other profiles.
By selecting the Save into a directory option, SCAP Workbench saves both the XCCDF or data stream file and the tailoring file to the specified location. This can be useful as a backup solution.
By selecting the Save as RPM option, you can instruct SCAP Workbench to create an RPM package containing the XCCDF or data stream file and the tailoring file. This is useful for distributing the desired security content to systems that cannot be scanned remotely, or just for delivering the content for further processing.
You can display and further process the scan results by clicking the Report button, which opens the Evaluation Report window. This window contains the Save combo box and two buttons, Open in Browser and Close.
You can store the scan results in the form of an XCCDF, ARF, or HTML file by clicking the Save combo box. Choose the HTML Report option to generate the scan report in human-readable form. The XCCDF and ARF (data stream) formats are suitable for further automatic processing. You can choose any of the three options repeatedly.
If you prefer to view the scan results immediately without saving them, you can click the Open in Browser button, which opens the scan results in the form of a temporary HTML file in your default web browser.
The following sections explain how to install oscap and how to perform the most common operations, and they provide relevant examples for these tasks. To learn more about specific sub-commands, use the --help option with an oscap command:
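oscap [options] module module_operation [module_operation_options_and_arguments] --help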
where module represents a type of SCAP content that is being processed, and module_operation is a
sub-command for the specific operation on the SCAP content.
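For example, to display help for the sds-split operation of the ds module (the operation whose options are listed below), you could run:
~]$ oscap ds sds-split --help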
SDS - Source data stream that will be split into multiple files.
TARGET_DIRECTORY - Directory of the resulting files.
Options:
--datastream-id <id> - ID of the datastream in the collection to use.
--xccdf-id <id> - ID of XCCDF in the datastream that should be
evaluated.
To learn about all oscap features and the complete list of its options, see the oscap(8) manual page.
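To install oscap and its dependencies, you can run the following command as root (assuming the openscap-utils package, mentioned below, is available in your repositories):
~]# yum install openscap-utils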
This command allows you to install all packages required by oscap to function properly, including the openscap package that provides the utility itself. If you want to write your own security content, you should also install the openscap-engine-sce package that provides the Script Check Engine (SCE). SCE is an extension to the SCAP protocol that allows content authors to write their security content using a scripting language, such as Bash, Python, or Ruby. The package can be installed in the same way as the openscap-utils package.
Optionally, after installing oscap, you can check the capabilities of your version of oscap: what specifications it supports, where certain oscap files are stored, what kinds of SCAP objects you can use, and other useful information. To display this information, type the following command:
~]$ oscap -V
OpenSCAP command line tool (oscap) 1.0.4
Copyright 2009--2014 Red Hat Inc., Durham, North Carolina.
file probe_file
interface probe_interface
password probe_password
process probe_process
runlevel probe_runlevel
shadow probe_shadow
uname probe_uname
xinetd probe_xinetd
sysctl probe_sysctl
process58 probe_process58
fileextendedattribute probe_fileextendedattribute
routingtable probe_routingtable
Before you can start using the oscap utility effectively, you also have to install or import some security
content on your system. You can download the SCAP content from the respective web site, or if specified
as an RPM file or package, you can install it from the specified location, or known repository, using the
Yum package manager.
For example, you can install the SCAP Security Guide (SSG) package, scap-security-guide, which contains the latest set of security policies for Linux systems. See the SSG project page to learn the exact steps for deploying the package on your system.
After you install the scap-security-guide on your system, unless specified otherwise, the SSG security content is available under the /usr/share/xml/scap/ssg/rhel7/ directory, and you can proceed with other security compliance operations.
To find out other possible sources of existing SCAP content that might suit your needs, see Section 6.7, “Additional Resources”.
After installing the SCAP content on your system, oscap can process the content when you specify the file path to the content. The oscap utility supports SCAP version 1.2 and is backward compatible with SCAP versions 1.1 and 1.0, so it can process earlier versions of SCAP content without any special requirements.
Run the following command to examine the internal structure of a SCAP document and display useful information such as the document type, specification version, the status of the document, the date the document was published, and the date the document was copied to a file system:
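oscap info file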
where file is the full path to the security content file being examined. The following example better illustrates the usage of the oscap info command:
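Assuming the SSG data stream file is named ssg-rhel7-ds.xml and is stored in the directory mentioned above, the invocation and part of its output could look as follows:
~]$ oscap info /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml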
Stream: scap_org.open-scap_datastream_from_xccdf_ssg-rhel7-xccdf-1.2.xml
Generated: (null)
Version: 1.2
Checklists:
Ref-Id: scap_org.open-scap_cref_ssg-rhel7-xccdf-1.2.xml
Profiles:
xccdf_org.ssgproject.content_profile_test
xccdf_org.ssgproject.content_profile_rht-ccp
xccdf_org.ssgproject.content_profile_common
xccdf_org.ssgproject.content_profile_stig-rhel7-server-
upstream
Referenced check files:
ssg-rhel7-oval.xml
system: http://oval.mitre.org/XMLSchema/oval-
definitions-5
Checks:
Ref-Id: scap_org.open-scap_cref_ssg-rhel7-oval.xml
Ref-Id: scap_org.open-scap_cref_output--ssg-rhel7-cpe-oval.xml
Ref-Id: scap_org.open-scap_cref_output--ssg-rhel7-oval.xml
Dictionaries:
Ref-Id: scap_org.open-scap_cref_output--ssg-rhel7-cpe-dictionary.xml
The oscap utility can scan systems against the SCAP content represented by both an XCCDF (The eXtensible Configuration Checklist Description Format) benchmark and OVAL (Open Vulnerability and Assessment Language) definitions. The security policy can take the form of a single OVAL or XCCDF file, or of multiple separate XML files where each file represents a different component (XCCDF, OVAL, CPE, CVE, and others). The result of a scan can be printed to both standard output and an XML file. The result file can then be further processed by oscap in order to generate a report in a human-readable format. The following examples illustrate the most common usage of the command.
Example 6.6. Scanning the System Using the SSG OVAL definitions
To scan your system against the SSG OVAL definition file while evaluating all definitions, run the following command:
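For example, assuming the SSG OVAL definition file from the directory mentioned earlier:
~]$ oscap oval eval --results scan-oval-results.xml /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-oval.xml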
The results of the scan will be stored as the scan-oval-results.xml file in the current directory.
Example 6.7. Scanning the System Using the SSG OVAL definitions
To evaluate a particular OVAL definition from the security policy represented by the SSG data stream file, run the following command:
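For example, assuming the SSG data stream file location used earlier and a hypothetical definition ID:
~]$ oscap oval eval --id oval:ssg:def:100 --results scan-oval-results.xml /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml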
The results of the scan will be stored as the scan-oval-results.xml file in the current directory.
Example 6.8. Scanning the System Using the SSG XCCDF benchmark
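To evaluate your system against the XCCDF benchmark for a selected profile, you can run, for example, the following command (the profile ID is taken from the oscap info output above, and the data stream file name is assumed):
~]$ oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_rht-ccp --results scan-xccdf-results.xml /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml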
The results of the scan will be stored as the scan-xccdf-results.xml file in the current directory.
Note
The --profile command-line argument selects the security profile from the given XCCDF or data stream file. The list of available profiles can be obtained by running the oscap info command. If the --profile command-line argument is omitted, the default XCCDF profile is used, as required by the SCAP standard. Note that the default XCCDF profile may or may not be an appropriate security policy.
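The results of a scan can also be transformed into a human-readable document by using the generate sub-command of the oscap utility. A general sketch of the syntax, based on the description below, is:
oscap module generate sub-module [options] file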
where module is either xccdf or oval, sub-module is a type of the generated document, and file
represents an XCCDF or OVAL file.
To transform a result of an SSG OVAL scan into an HTML file, run the following command:
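For example, assuming the result file name used in the previous examples:
~]$ oscap oval generate report --output ssg-scan-oval-report.html scan-oval-results.xml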
The result report will be stored as the ssg-scan-oval-report.html file in the current directory. This example assumes that you run the command from the same location where the scan-oval-results.xml file is stored. Otherwise, you need to specify the fully qualified path of the file that contains the scan results.
To transform a result of an SSG XCCDF scan into an HTML file, run the following command:
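For example, assuming the XCCDF result file from the earlier scan:
~]$ oscap xccdf generate report --output ssg-scan-xccdf-report.html scan-xccdf-results.xml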
The result report will be stored as the ssg-scan-xccdf-report.html file in the current directory. Alternatively, you can generate this report at the time of the scan using the --report command-line argument:
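For example, reusing the profile and the data stream file from Example 6.8:
~]$ oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_rht-ccp --results scan-xccdf-results.xml --report ssg-scan-xccdf-report.html /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml

You can also use oscap to validate SCAP content against its XML schemas before distributing it. A general sketch of the validation syntax is:
oscap module validate file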
where file is the full path to the file being validated. The only exception is the data stream module (ds), which uses the ds-validate operation instead of validate. Note that all SCAP components within the given data stream are validated automatically and none of the components is specified separately, as can be seen in the following example:
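For example, assuming the data stream file used earlier:
~]$ oscap ds sds-validate /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml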
With certain SCAP content, such as the OVAL specification, you can also perform a Schematron validation. The Schematron validation is slower than the standard validation but provides deeper analysis, and is thus able to detect more errors. The following SSG example shows typical usage of the command:
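For example, assuming your oscap version supports the --schematron option for the oval module:
~]$ oscap oval validate --schematron /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-oval.xml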
This solution supports two methods of performing security compliance scans and of viewing and further processing the scan results. You can either use the OpenSCAP Satellite Web Interface or run commands and scripts from the Satellite API. For more information about this solution to security compliance, its requirements, and its capabilities, see the Red Hat Satellite 5.6 User Guide.
Users of Red Hat Satellite 5 may find the XCCDF part of the patch definitions useful. To download these definitions, run the following command:
To audit security vulnerabilities for the software installed on the system, run the following command:
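For example, assuming you have downloaded the Red Hat Security Advisory OVAL definitions (for instance, a file such as com.redhat.rhsa-all.xml from the Red Hat OVAL content repository listed in the Additional Resources section):
~]$ oscap oval eval --results rhsa-results-oval.xml --report rhsa-report.html com.redhat.rhsa-all.xml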
The oscap utility maps Red Hat Security Advisories to CVE identifiers that are linked to the National Vulnerability Database and reports which security advisories are not applied.
Note
Note that these OVAL definitions are designed to only cover software and updates released by
Red Hat. You need to provide additional definitions in order to detect the patch status of third-party
software.
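To display an outline of the SSG document, including its configuration profiles, you can again run the oscap info command against the SSG data stream file (the file name is assumed here):
~]$ oscap info /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml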
The output of this command is an outline of the SSG document, and it contains the available configuration profiles. To audit your system settings, choose a suitable profile and run the appropriate evaluation command. For example, the following command is used to assess the given system against a draft SCAP profile for Red Hat Certified Cloud Providers:
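A possible invocation (the result file name is arbitrary, and the data stream file name is assumed) is:
~]$ oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_rht-ccp --results rht-ccp-results.xml /usr/share/xml/scap/ssg/rhel7/ssg-rhel7-ds.xml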
Installed Documentation
oscap(8) — The manual page for the oscap command-line utility provides a complete list of available options and an explanation of their usage.
scap-workbench(8) — The manual page for the SCAP Workbench application provides basic information about the application as well as some links to potential sources of SCAP content.
Guide to the Secure Configuration of Red Hat Enterprise Linux 7 — An HTML document located in the /usr/share/doc/scap-security-guide-0.1.5/ directory that provides a detailed guide for security settings of your system in the form of an XCCDF checklist.
Online Documentation
The OpenSCAP project page — The home page of the OpenSCAP project provides detailed information about the oscap utility and other components and projects related to SCAP.
The SCAP Workbench project page — The home page of the SCAP Workbench project provides detailed information about the scap-workbench application.
The SCAP Security Guide (SSG) project page — The home page of the SSG project, which provides the latest security content for Red Hat Enterprise Linux.
National Institute of Standards and Technology (NIST) SCAP page — This page contains a vast collection of SCAP-related materials, including SCAP publications, specifications, and the SCAP Validation Program.
National Vulnerability Database (NVD) — This page hosts the largest repository of SCAP content and other SCAP-standards-based vulnerability management data.
Red Hat OVAL content repository — This is a repository containing OVAL definitions for Red Hat Enterprise Linux systems.
MITRE CVE — This is a database of publicly known security vulnerabilities provided by the MITRE corporation.
MITRE OVAL — This page represents an OVAL-related project provided by the MITRE corporation. Amongst other OVAL-related information, these pages contain the latest version of the OVAL language and a large repository of OVAL content, counting over 22 thousand OVAL definitions.
Red Hat Satellite 5.6 User Guide — This book describes, amongst other topics, how to maintain system security on multiple systems by using OpenSCAP.
Level 1 — Security Level 1 provides the lowest level of security. Basic security requirements are
specified for a cryptographic module (for example, at least one Approved algorithm or Approved
security function shall be used). No specific physical security mechanisms are required in a Security
Level 1 cryptographic module beyond the basic requirement for production-grade components. An
example of a Security Level 1 cryptographic module is a personal computer (PC) encryption board.
Level 2 — Security Level 2 enhances the physical security mechanisms of a Security Level 1 cryptographic module by adding the requirement for tamper evidence, which includes the use of tamper-evident coatings or seals, or pick-resistant locks on removable covers or doors of the module. Tamper-evident coatings or seals are placed on a cryptographic module so that the coating or seal must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module. Tamper-evident seals or pick-resistant locks are placed on covers or doors to protect against unauthorized physical access.
Level 3 — In addition to the tamper-evident physical security mechanisms required at Security Level 2, Security Level 3 attempts to prevent the intruder from gaining access to CSPs held within the cryptographic module. Physical security mechanisms required at Security Level 3 are intended to have a high probability of detecting and responding to attempts at physical access, use, or modification of the cryptographic module. The physical security mechanisms may include the use of strong enclosures and tamper detection/response circuitry that zeroes all plaintext CSPs when the removable covers/doors of the cryptographic module are opened.
Level 4 — Security Level 4 provides the highest level of security defined in this standard. At this
security level, the physical security mechanisms provide a complete envelope of protection around the
cryptographic module with the intent of detecting and responding to all unauthorized attempts at
physical access. Penetration of the cryptographic module enclosure from any direction has a very high
probability of being detected, resulting in the immediate zeroization of all plaintext CSPs. Security Level
4 cryptographic modules are useful for operation in physically unprotected environments.
1. For proper operation of the in-module integrity verification, prelink has to be disabled. This can be done by setting PRELINKING=no in the /etc/sysconfig/prelink configuration file. Existing prelinking, if any, should be undone on all system files using the prelink -u -a command.
~]# dracut -f
Warning
This operation overwrites the existing initramfs file.
4. Add the following option to the kernel command line of the current kernel. Note that Red Hat Enterprise Linux 7 uses the GRUB 2 boot loader, so the option is typically appended to the GRUB_CMDLINE_LINUX line in the /etc/default/grub file, after which the boot configuration needs to be regenerated with the grub2-mkconfig command:
fips=1
Note
If the /boot directory resides on a separate partition, the boot=<partition> kernel parameter needs to be added as well. The partition that holds the /boot directory can be identified, for example, with the df command:
~]$ df /boot
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 495844 53780 416464 12% /boot
To ensure that the boot= configuration option will work even if device naming changes between boots, identify the universally unique identifier (UUID) of the partition by running the following command:
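For example, for the /dev/sda1 partition shown in the df output above:
~]$ blkid /dev/sda1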
For the example above, the following string needs to be appended to the kernel command line:
boot=UUID=05c000f1-f899-467b-a4d9-d5ca4424c797
Should you require strict FIPS compliance, the fips=1 kernel option needs to be added to the kernel command line during system installation so that key generation is done with FIPS-approved algorithms and continuous monitoring tests in place. Users should also ensure that the system has plenty of entropy during the installation process by moving the mouse around, or if no mouse is available, ensuring that many keystrokes are typed. The recommended number of keystrokes is 256 or more; fewer than 256 keystrokes might generate a non-unique key.
Encryption Standards
AES was announced by the National Institute of Standards and Technology (NIST) as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001 after a 5-year standardization process. Fifteen competing designs were presented and evaluated before Rijndael was selected as the most suitable. It became effective as a standard on May 26, 2002. It is available in many different encryption packages. AES is the first publicly accessible and open cipher approved by the NSA for top secret information (see Security of AES, below). [4]
The Rijndael cipher was developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, and submitted by them to the AES selection process. Rijndael is a portmanteau of the names of the two inventors. [5]
DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit key size being too small; in January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES). [7]
In some documentation, a distinction is made between DES as a standard and DES the algorithm, which is referred to as the DEA (the Data Encryption Algorithm). [8]
Public key cryptography is a fundamental and widely used technology around the world, and is the approach which underlies such Internet standards as Transport Layer Security (TLS) (the successor to SSL), PGP, and GPG. [10]
The distinguishing technique used in public key cryptography is the use of asymmetric key algorithms, where the key used to encrypt a message is not the same as the key used to decrypt it. Each user has a pair of cryptographic keys — a public key and a private key. The private key is kept secret, whilst the public key may be widely distributed. Messages are encrypted with the recipient's public key and can only be decrypted with the corresponding private key. The keys are related mathematically, but the private key cannot be feasibly (that is, in actual or projected practice) derived from the public key. It was the discovery of such algorithms which revolutionized the practice of cryptography beginning in the mid-1970s. [11]
In contrast, symmetric-key algorithms, variations of which have been used for thousands of years, use a single secret key shared by sender and receiver (which must also be kept private, thus accounting for the ambiguity of the common terminology) for both encryption and decryption. To use a symmetric encryption scheme, the sender and receiver must securely share a key in advance. [12]
Because symmetric key algorithms are nearly always much less computationally intensive, it is common to exchange a key using a key-exchange algorithm and then transmit data using that key and a symmetric key algorithm. PGP and the SSL/TLS family of schemes do this, for instance, and are called hybrid cryptosystems in consequence. [13]
A.2.1. Diffie-Hellman
Diffie–Hellman key exchange (D–H) is a cryptographic protocol that allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher. [14]
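As an illustration (with deliberately small, insecure numbers), suppose the two parties agree on the public values p = 23 and g = 5. One party chooses the secret value a = 6 and sends g^a mod p = 5^6 mod 23 = 8; the other chooses b = 15 and sends 5^15 mod 23 = 19. Each side then raises the received value to its own secret exponent: 19^6 mod 23 = 2 and 8^15 mod 23 = 2, so both parties arrive at the same shared secret, 2, without ever transmitting it over the channel.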
The scheme was first published by Whitfield Diffie and Martin Hellman in 1976, although it later emerged that it had been separately invented a few years earlier within GCHQ, the British signals intelligence agency, by Malcolm J. Williamson but was kept classified. In 2002, Hellman suggested the algorithm be called Diffie–Hellman–Merkle key exchange in recognition of Ralph Merkle's contribution to the invention of public-key cryptography (Hellman, 2002). [15]
U.S. Patent 4,200,770, now expired, describes the algorithm and credits Hellman, Diffie, and Merkle as
inventors. [17]
A.2.2. RSA
In cryptography, RSA (which stands for Rivest, Shamir, and Adleman, who first publicly described it) is an algorithm for public-key cryptography. It is the first algorithm known to be suitable for signing as well as encryption, and it was one of the first great advances in public key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.
A.2.3. DSA
DSA (Digital Signature Algorithm) is a United States federal government standard for digital signatures. DSA is for signatures only and is not an encryption algorithm. [18]
A.2.4. SSL/TLS
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security for communications over networks such as the Internet. TLS and SSL encrypt the segments of network connections at the Transport Layer end-to-end.
Several versions of the protocols are in widespread use in applications like web browsing, electronic mail, Internet faxing, instant messaging, and voice-over-IP (VoIP). [19]
[3] " Ad vanc ed Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Ad vanc ed _Enc ryp tio n_Stand ard
[4] " Ad vanc ed Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Ad vanc ed _Enc ryp tio n_Stand ard
[5] " Ad vanc ed Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Ad vanc ed _Enc ryp tio n_Stand ard
[6 ] " Data Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Data_Enc ryp tio n_Stand ard
[7] " Data Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Data_Enc ryp tio n_Stand ard
[8 ] " Data Enc ryp tio n Stand ard ." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Data_Enc ryp tio n_Stand ard
[9 ] " Pub lic -key Enc ryp tio n." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Pub lic -key_c ryp to g rap hy
[10 ] " Pub lic -key Enc ryp tio n." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Pub lic -key_c ryp to g rap hy
[11] " Pub lic -key Enc ryp tio n." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Pub lic -key_c ryp to g rap hy
[12] " Pub lic -key Enc ryp tio n." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Pub lic -key_c ryp to g rap hy
[13] " Pub lic -key Enc ryp tio n." Wikipedia. 14 No vemb er 20 0 9 http ://en.wikip ed ia.o rg /wiki/Pub lic -key_c ryp to g rap hy
[18 ] " DSA." Wikipedia. 24 Feb ruary 20 10 http ://en.wikip ed ia.o rg /wiki/Dig ital_Sig nature_Alg o rithm
[19 ] " TLS/SSL." Wikipedia. 24 Feb ruary 20 10 http ://en.wikip ed ia.o rg /wiki/Trans p o rt_Layer_Sec urity
[20 ] " Cramer-Sho up c ryp to s ys tem." Wikipedia. 24 Feb ruary 20 10 http ://en.wikip ed ia.o rg /wiki/Cramer–Sho up _c ryp to s ys tem
[21] " ElG amal enc ryp tio n" Wikipedia. 24 Feb ruary 20 10 http ://en.wikip ed ia.o rg /wiki/ElG amal_enc ryp tio n
Audit System Reference
0 — user
1 — task
4 — exit
5 — exclude
In fields generated by the kernel, this field holds a thread ID. The thread ID is equal to the process ID for single-threaded processes. Note that the value of this thread ID is different from the values of the pthread_t IDs used in user space. For more information, refer to the gettid(2) man page.
ANOM_AMTU_FAIL[a] Triggered when a failure of the Abstract Machine Test Utility (AMTU) is detected.
ANOM_CRYPTO_FAIL[a] Triggered when a failure in the cryptographic system is detected.
ANOM_LOGIN_FAILURES[a] Triggered when the limit of failed login attempts is reached.
ANOM_LOGIN_LOCATION[a] Triggered when a login attempt is made from a forbidden location.
ANOM_LOGIN_SESSIONS[a] Triggered when a login attempt reaches the maximum amount of concurrent sessions.
ANOM_LOGIN_TIME[a] Triggered when a login attempt is made at a time when it is prevented by, for example, pam_time.
ANOM_MAX_DAC[a] Triggered when the maximum amount of Discretionary Access Control (DAC) failures is reached.
ANOM_MAX_MAC[a] Triggered when the maximum amount of Mandatory Access Control (MAC) failures is reached.
ANOM_MK_EXEC[a] Triggered when a file is made executable.
INTEGRITY_HASH[b] Triggered to record a hash type integrity verification event run by the kernel.
INTEGRITY_METADATA[b] Triggered to record a metadata integrity verification event run by the kernel.
INTEGRITY_PCR[b] Triggered to record Platform Configuration Register (PCR) invalidation messages.
INTEGRITY_RULE[b] Triggered to record a policy rule.
RESP_ACCT_REMOTE[c] Triggered when a user account is locked from a remote session.
RESP_ACCT_UNLOCK_TIMED[c] Triggered when a user account is unlocked after a configured period of time.
Revision History
Revision 1-14.1    Tue Jun 03 2014    Tomáš Čapek
Version for 7.0 GA release.