LFS211 Labs - V2020 04 27 PDF
Version 2020-04-27
The training materials provided or developed by The Linux Foundation in connection with the training services are protected
by copyright and other intellectual property rights.
Open source code incorporated herein may have other copyright holders and is used pursuant to the applicable open source
license.
The training materials are provided for individual use by participants in the form in which they are provided. They may not be
copied, modified, distributed to non-participants or used to provide training to others without the prior written consent of The
Linux Foundation.
No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without express prior
written consent.
Published by:
No representations or warranties are made with respect to the contents or use of this material, and any express or implied
warranties of merchantability or fitness for any particular purpose are specifically disclaimed.
Although third-party application software packages may be referenced herein, this is for demonstration purposes only and
shall not constitute an endorsement of any of these software applications.
Linux is a registered trademark of Linus Torvalds. Other trademarks within this course material are the property of their
respective owners.
If there are any questions about proper and fair use of the material herein, please contact:
[email protected]
Contents
1 Introduction
1.1 Labs
3 Network Configuration
3.1 Labs
5 Remote Access
5.1 Labs
7 HTTP Servers
7.1 Labs
9 Email Servers
9.1 Labs
10 File Sharing
10.1 Labs
11 Advanced Networking
11.1 Labs
12 HTTP Caching
12.1 Labs
14 Introduction to Network Security
14.1 Labs
15 Firewalls
15.1 Labs
17 High Availability
17.1 Labs
18 Database
18.1 Labs
Introduction
1.1 Labs
Thus, the sensible procedure is to configure things such that single commands may be run with superuser privilege, by using
the sudo mechanism. With sudo the user only needs to know their own password and never needs to know the root password.
If you are using a distribution such as Ubuntu, you may not need to do this lab to get sudo configured properly for the course.
However, you should still make sure you understand the procedure.
To check if your system is already configured to let the user account you are using run sudo, just do a simple command like:
$ sudo ls
You should be prompted for your user password and then the command should execute. If instead, you get an error message
you need to execute the following procedure.
Launch a root shell by typing su and then giving the root password, not your user password.
On all recent Linux distributions you should navigate to the /etc/sudoers.d subdirectory and create a file, usually with the
name of the user to whom root wishes to grant sudo access. However, this convention is not actually necessary as sudo will
scan all files in this directory as needed. The file can simply contain:
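For example, a minimal entry granting full sudo rights (assuming the lab account is named student; substitute your own user name):
student ALL=(ALL) ALL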
An older practice (which certainly still works) is to add such a line at the end of the file /etc/sudoers. It is best to do so using
the visudo program, which is careful about making sure you use the right syntax in your edit.
You probably also need to set proper permissions on the file by typing:
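For example, assuming the file created above is /etc/sudoers.d/student:
# chmod 440 /etc/sudoers.d/student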
(Note some Linux distributions may require 400 instead of 440 for the permissions.)
After you have done these steps, exit the root shell by typing exit and then try to do sudo ls again.
There are many other ways an administrator can configure sudo, including specifying only certain permissions for certain
users, limiting searched paths etc. The /etc/sudoers file is very well self-documented.
However, there is one more setting we highly recommend you do, even if your system already has sudo configured. Most
distributions establish a different path for finding executables for normal users as compared to root users. In particular the
directories /sbin and /usr/sbin are not searched, since sudo inherits the PATH of the user, not that of the root user.
Thus, in this course we would have to be constantly reminding you of the full path to many system administration utilities;
any enhancement to security is probably not worth the extra typing and figuring out which directories these programs are in.
Consequently, we suggest you add the following line to the .bashrc file in your home directory:
PATH=$PATH:/usr/sbin:/sbin
If you log out and then log in again (you don’t have to reboot) this will be fully effective.
2.1 Labs
The server package name is usually vsftpd, and either the ftp or tnftp client package will work with the solutions.
Solution 2.1
On openSUSE
On Ubuntu
On CentOS
Solution 2.2
1. Start the daemon manually
On CentOS
On openSUSE
On Ubuntu
2. Verify it is running
# killall vsftpd
Exercise 2.3: (Optional) Starting a system service with the SYSV init script, if it exists on your system
Please Note
This section is marked optional as some distros do not include the SYSV init subsystem by default and the scripts
to control the services may not be included in some service packages. It is important to know about the SYSV init
subsystem as some older applications may not include systemd service files.
Start the vsftpd daemon using the SYSV init script, and verify that it is running.
Solution 2.3
1. Start the daemon using the SYSV init script.
# /etc/init.d/vsftpd start
2. Verify it is running
$ ftp localhost
# /etc/init.d/vsftpd stop
Solution 2.4
1. Start the daemon
2. Verify it is running
or
# systemctl status vsftpd.service
$ ftp localhost
Solution 2.5
• Enable the vsftpd daemon using the distribution appropriate command.
# systemctl enable vsftpd.service
Very Important
Some distributions have the stress package; others have the newer stress-ng. Either package will work as stress-ng
supports the same command line options as stress.
On CentOS
On openSUSE
On Ubuntu
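The distribution-specific install commands were not captured here; a hedged sketch, following the package note above (install stress-ng or stress, whichever your distribution provides; CentOS may need the EPEL repository):
# yum|zypper|apt install stress-ng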
The stress package does not include a systemd unit configuration, so one must be created. The package installed on the test
system has the binary for stress as /usr/bin/stress. Create a systemd vendor unit file as /usr/lib/systemd/system/foo.service.
You will require root level access to create a file in this directory.
/usr/lib/systemd/system/foo.service
[Unit]
Description=Example service unit to run stress
[Service]
ExecStart=/usr/bin/stress --cpu 4 --io 4 --vm 2 --vm-bytes 256M
[Install]
WantedBy=multi-user.target
A copy of this file (foo1.service) can be found in the tarball in the LFS211/SOLUTIONS/s 02/ directory.
Once the unit file is created systemd will be able to start and stop the service. Use the top command to verify that stress is
working. The following commands may be useful:
# systemctl daemon-reload
# systemctl start foo
# systemctl status foo -l
# systemd-delta
# systemctl stop foo
The example program stress, which is now a service, does not display much feedback about what it is doing. The systemctl
status of the service can be checked; the output would look something like this:
Examining the output of the top command will show two processes of stress with more than the specified 256M of memory,
four processes of stress with nearly 100% CPU usage, and four processes with neither high CPU nor high memory usage.
These would be the memory hogs, CPU hogs, and io hogs, respectively. See the example below:
As we are interested specifically in the stress service and its child processes, we provide a script for monitoring the service
processes running.
track-STRESS.sh looks for a stress process with a PPID of 1, then all of the related child processes. Once the processes
are located some data is extracted with the ps command.
A copy of this script (track-STRESS.sh) can be found in the tarball in the LFS211/SOLUTIONS/s 02/ directory.
track-STRESS.sh
#!/bin/bash
#
# This little script is used with the LFS211 lab exercise
# on systemd startup files and their effect on a background
# service, the "foo" service.
#
# The script looks for "stress" launched with a PPID of 1
# and its child processes, then uses the "ps" command
# to collect some information.
#
# Note: the ps collection below is a reconstructed sketch; the
# version shipped in the SOLUTIONS tarball may differ in detail.
while true
do
    # locate the parent stress process (PPID of 1)
    PARENT=$(ps -eo pid,ppid,comm | awk '$2 == 1 && $3 == "stress" {print $1}')
    # report the parent and all of its children
    [ -n "$PARENT" ] && ps -o pid,ppid,%cpu,%mem,rss,comm --pid "$PARENT" --ppid "$PARENT"
    sleep 5
done
exit
Since /usr/lib/systemd/system/foo.service is the default configuration supplied by the packager of the service and may
be altered by the vendor at any time, create a custom unit file in /etc/systemd/system/foo.service for the stress service.
This file is not usually overwritten by the vendor so local customizations can go here. Change the parameters slightly for the
foo service using this directory. It is common practice to copy the vendor unit file into the /etc/systemd/system/ directory
and make appropriate customizations.
/etc/systemd/system/foo.service
[Unit]
Description=Example service unit to run stress
[Service]
ExecStart=/usr/bin/stress --cpu 2 --io 2 --vm 4 --vm-bytes 256M
[Install]
WantedBy=multi-user.target
A copy of this file (foo2.service) can be found in the tarball in the LFS211/SOLUTIONS/s 02/ directory.
Start or restart the service and examine the differences in the output of the following commands.
# track-STRESS.sh
# systemctl status foo -l
# systemd-delta
The changes to the configuration file can be seen with the track-STRESS.sh script; notice that the number of memory hogs is
now 4 and the number of CPU hogs is reduced to 2.
Which configuration (or unit) file is active is not clear from the script. Use systemctl status foo to see which unit file is
being used. This will show which configuration files are being used, but not the differences between the files.
To see the details of the unit file changes the systemd-delta command can be used. The output has the changes in diff
format, making it easy to see what has changed. See the example below:
Often it is desirable to add or change features under program or script control; drop-in files are convenient for this. One
item of caution: if you are changing a previously defined directive (like ExecStart), it must be cleared first and then added back in.
Create a drop-in directory and file for our stress service and verify the changes are active. Our example file for foo.service
using a drop-in directory (00-foo.conf) can be found in the SOLUTIONS tarball in the LFS211/SOLUTIONS/s 02/ directory
and contains:
/etc/systemd/system/foo.service.d/00-foo.conf
[Service]
ExecStart=
ExecStart=/usr/bin/stress --cpu 1 --vm 1 --io 1 --vm-bytes 128M
Start or restart the service and examine the differences in the output of the following commands.
# track-STRESS.sh
# systemctl status foo -l
# systemd-delta
The information in the drop-in file overrides the unit file. In this example the number of “hogs” has been greatly reduced.
systemctl status shows the drop-in file is active. If there were several drop-in files, it would show the order in which they
were applied to the service.
Figure 2.8: systemctl status showing unit file override and dropin file
Like the other commands, systemd-delta shows the files used by the service. In addition to the files used, the details of the
changes in the unit file are displayed. Notice the changes to the service made with the drop-in file are not displayed, only the
file name.
With systemd, additional features and capabilities can be easily added. As an example, cgroups controls can be added to
our service. Here is an example of adding a systemd slice to the example service and adding a resource limit to that slice.
The slice is then attached to the service via the drop-in file. First set up a <service>.slice unit file:
/etc/systemd/system/foo.slice
[Unit]
Description=stress slice
[Slice]
CPUQuota=30%
A copy of this file (foo.slice) can be found in the tarball in the LFS211/SOLUTIONS/s 02/ directory.
Then connect our service to the slice. Add the following to the bottom of the unit file in
/etc/systemd/system/foo.service.d/00-foo.conf:
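The line to add is not reproduced in this extraction; presumably it is the Slice= directive pointing at the slice unit defined above, e.g.:
Slice=foo.slice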
The cgroup information in the slice has been applied to the service. Notice the amount of CPU resource consumed. The total
is 30% of one processor, but it may be spread across multiple CPUs.
The systemctl status command shows the change in CGroup with the slice attribute active.
Bonus step: In our example there are no unique values in the /etc/systemd/system/foo.service file, so it is redundant
and we can get rid of the extra file.
# mv /etc/systemd/system/foo.service /root/
# systemctl daemon-reload
# systemctl restart foo
# systemctl status foo
Consult the man pages systemd.resource-control(5), systemd.service(5), systemd-delta(1) and other systemd man
pages for additional information.
Network Configuration
3.1 Labs
Solution 3.1
• Record the IP address, netmask or prefix and the network device.
# ip address show
Later releases of some distributions may have an optional systemd feature, systemd-resolved.service,
active. Check to see if systemd-resolved is active and, if so, record the configuration in /etc/systemd/resolved:
# systemctl status systemd-resolved
Very Important
Make a backup copy in /var/tmp of any file before editing.
Solution 3.2
• On an Ubuntu system, edit /etc/network/interfaces and add or modify the configuration like below, using your
discovered values:
/etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto enp0s3
iface enp0s3 inet static
address 10.0.2.15
netmask 255.255.255.0
gateway 10.0.2.2
A copy of this file (interfaces) can be found in the tarball in the LFS211/SOLUTIONS/s 03/ directory.
• On a CentOS system, edit /etc/sysconfig/network-scripts/ifcfg-<adaptername> file and ensure it has the
following contents:
/etc/sysconfig/network-scripts/ifcfg-<adapter name>
DEVICE=<adapter-name>
TYPE=ethernet
BOOTPROTO=none
IPADDR=10.0.2.15
PREFIX=24
GATEWAY=10.0.2.2
DNS1=8.8.8.8
NAME="LFSstatic"
ONBOOT=yes
On openSUSE
On a SUSE system you have to turn off NetworkManager first:
1. As the root user, run the command
# yast lan
2. You will see a warning telling you that network manager is managing the network settings: click OK.
4. Click OK
/etc/sysconfig/network/ifcfg-eth0
NAME="LFSstatic"
DEVICE=eth0
BOOTPROTO="static"
IPADDR=10.0.2.15/24
STARTMODE="auto"
USERCONTROL="no"
• Once you’ve made the configuration changes, restart the networking services using the distribution’s method.
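The restart commands were not captured here; hedged examples per distribution (adjust to whichever network service your system actually runs):
# systemctl restart networking        (Ubuntu with ifupdown)
# systemctl restart NetworkManager    (CentOS)
# systemctl restart network           (openSUSE with wicked)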
Solution 3.3
Your machine:
Another machine:
$ ping 10.200.45.110
$ ping 10.200.45.100
Solution 3.4
• On Ubuntu systems, edit the changes out of
/etc/network/interfaces.
• On CentOS systems, remove
/etc/sysconfig/network-scripts/ifcfg-<interfacename> .
• On OpenSUSE systems, edit the changes out of
/etc/sysconfig/network/config, /etc/sysconfig/network/ifcfg-<interface> and /etc/sysconfig/network/ifroute-<interface> .
Network Troubleshooting and Monitoring
4.1 Labs
1. This lab exercise is designed to be completed on a single system. If desirable, you may use a second system to act as
client to enhance the lab experience. Confirm either iproute or iproute2 is installed on your system.
2. Use the netem option of the tc command to introduce a network problem on the server (random packet drops).
# tc qdisc add dev lo root netem loss random 40
Please Note
If your version of tc does not support the loss random 40 option, substitute corrupt 30%. The new command
would be:
# tc qdisc add dev lo root netem corrupt 30%
Solution 4.1
1. Use the ping command to verify packets are dropping:
$ ping localhost
You should see all the requests being sent, but a smaller number of responses.
3. Clean up the tc command to avoid issues in future labs:
# tc qdisc del dev lo root
Please Note
You may have to start or enable the SMTP service, and install the telnet client for this lab.
Then select configuration of local only and accept the defaults as presented.
Solution 4.2
or
# ss -lnt
Exercise 4.3: OPTIONAL: Block traffic to a service with TCP Wrappers and prove it is blocked
Very Important
The support for tcp-wrappers in vsftpd has been removed in many distributions causing this lab to fail.
Use TCP Wrappers to block access to FTP daemon and prove it is blocked
Solution 4.3
1. Start a service:
# /etc/init.d/vsftpd start
Add the following line to /etc/hosts.deny:
vsftpd: ALL
On openSUSE
On OpenSUSE the version of vsftpd is not compiled with TCP Wrappers support. You may alternatively use the
following iptables command to block the FTP traffic.
# iptables -A INPUT -m tcp -p tcp --dport ftp -j REJECT
You should get a connection refused message. Note: The loopback may still work, use a different adapter.
5. Remove the line from /etc/hosts.deny to clean up the exercise. Or if you created an iptables rule to block traffic,
flush the rules with:
# iptables -F
Remote Access
5.1 Labs
Please Note
This lab assumes that root is allowed to log in via a password through ssh. A configuration parameter controls this
behavior; its default may differ between distributions and may also change from release to release.
Check if the parameter PermitRootLogin is set to yes in /etc/ssh/sshd_config; if it is not, set the parameter to
yes, and restart the sshd server.
Solution 5.1
1. Make an SSH key and add it to an SSH agent:
$ ssh-keygen -t rsa -f $HOME/.ssh/id-rsa
$ eval $(ssh-agent)
$ ssh-add $HOME/.ssh/id-rsa
$ ssh-copy-id student@localhost
$ ssh student@localhost
$ id
$ exit
Solution 5.2
$HOME/.ssh/config
host garply
hostname localhost
user root
host *
ForwardX11 yes
2. Verify or update the permissions on $HOME/.ssh/config to allow read and write for the file owner only.
$ chmod 600 $HOME/.ssh/config
$ ssh garply
$ hostname
$ id
$ exit
Solution 5.3
1. Edit /etc/ssh/sshd_config and make sure this line is present:
PermitRootLogin without-password
$ ssh garply
Copy /home/student/.ssh/authorized_keys to the directory /root/.ssh/, and make sure it is owned by the root
user and group:
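One way to do this (a sketch of the copy and ownership change described above):
# mkdir -p /root/.ssh
# cp /home/student/.ssh/authorized_keys /root/.ssh/
# chown root:root /root/.ssh/authorized_keys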
Log in to the host garply again, to prove your ssh-key login works:
$ ssh garply
$ id
$ hostname
$ exit
Solution 5.4
Please Note
If you are using OpenSUSE without IPv6 support you need to add/modify the following line to the file
/etc/ssh/sshd_config:
AddressFamily inet
pssh (or parallel-ssh) will send commands to many machines; a text file controls which machines are used. pssh
works best with StrictHostKeyChecking=no or with fingerprints previously added to ~/.ssh/known_hosts. The pssh commands
are most secure with the ssh key copied into the target's authorized_keys file.
Please Note
Some distros use the name pssh; others use parallel-ssh to avoid conflicts with other software. Use the appropriate
package management command to verify the installation and the names being used.
Solution 5.5
1. Install or verify pssh is installed:
On openSUSE
2. Setup ssh keys and fingerprints. If not already done, create a key pair on the local machine:
$ ssh-keygen
3. Test the password-less connection, if you are prompted for a password fix it now:
$ ssh localhost
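The pssh invocation itself is not shown in this extraction; a hedged example, assuming a hypothetical hosts.txt file listing one machine per line (the binary may be named parallel-ssh on some distributions):
$ pssh -i -h hosts.txt -O StrictHostKeyChecking=no uptime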
Solution 5.6
On openSUSE
Please Note
By default, tigervncserver listens only on the localhost, unless an additional option is passed to vncserver
when it is launched. This behavior may be overridden via a configuration file if required or desired. See the man
pages for vncserver and vnc.conf for details.
$ vncserver --localhost no
The vncserver can also be reached with a “forced via ssh” connection, which requires an ssh connection to the server and
then accesses the vncserver via the localhost.
Hints:
• Ensure the $HOME/.vnc/xstartup file is executable.
• $HOME/.vnc/xstartup may contain references to applications that are not installed; install them.
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
xsetroot -solid grey
xterm -geometry 80x24+0+0 &
xeyes -geometry 100x100-0+0 &
exec twm &
(A copy of this file (xstartup) can be found in the tarball in the LFS211/SOLUTIONS/s 05/ directory.)
Solution 5.7
1. Connect to a VNC server over SSH.
$ vncviewer -via student@hostname localhost:1
$ vncserver -kill :1
Solution 5.8
1. Create the systemd configuration file /etc/systemd/system/[email protected] with the following content:
/etc/systemd/system/[email protected]
[Unit]
Description=Remote desktop service (VNC) on port :%I
After=syslog.target network.target
[Service]
Type=forking
User=student
# The Exec lines were not captured in this extraction; an assumed, typical pair is:
ExecStart=/usr/bin/vncserver :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
A copy of this file ([email protected]) can be found in the tarball in the LFS211/SOLUTIONS/s 05/ directory.
2. Re-load the systemd configuration files:
# systemctl daemon-reload
Domain Name Service
6.1 Labs
Please Note
Before starting this lab, make sure your system time is correct.
4. Test
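The test itself is not reproduced here; a hedged example of querying the new caching name server once it is running:
$ dig @127.0.0.1 www.linuxfoundation.org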
Solution 6.1
The locations of the configuration files and the content layouts differ between distributions. All of the configurations
follow the bind guidelines, but there are few fixed locations. The most common configuration files are named.conf
and the zone files. The included chart shows the locations for CentOS-8 and Ubuntu-2004.
Edit /etc/bind/named.conf.options and add or edit the file to contain these lines inside the options block:
/etc/bind/named.conf.options
listen-on port 53 { any; };
allow-query { any; };
recursion yes;
Edit /etc/named.conf and add or edit the file to contain these lines inside the options block:
/etc/named.conf
listen-on port 53 { any; };
allow-query { any; };
On openSUSE
Install the named server:
# zypper install bind
Edit /etc/named.conf.options and add or edit the file to contain these lines inside the options block:
/etc/named.conf
listen-on port 53 { any; };
allow-query { any; };
# touch /etc/named.conf.include
Exercise 6.2: Create an authoritative forward zone for the example.com domain
with the following settings
• 30 second TTL
• www.example.com has the address 192.168.111.45 and the IPv6 address fe80::22c9:d0ff:1ecd:c0ef
Solution 6.2
/etc/bind/named.conf.local
zone "example.com." IN {
type master;
file "/etc/bind/example.com.zone";
};
$TTL 30
@ IN SOA localhost. admin.example.com. (
2012092901 ; serial YYYYMMDDRR format
3H ; refresh
1H ; retry
2H ; expire
1M) ; neg ttl
IN NS localhost.;
www.example.com. IN A 192.168.111.45
www.example.com. IN AAAA fe80::22c9:d0ff:1ecd:c0ef
foo.example.com. IN A 192.168.121.11
bar.example.com. IN CNAME www.example.com.
A copy of this file (example.com.zone) can be found in the tarball in the LFS211/SOLUTIONS/s 06/ directory.
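After reloading named, a quick hedged check that the new zone answers (the expected address comes from the records above):
$ dig www.example.com @127.0.0.1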
Solution 6.3
/etc/named.conf
zone "45.20.10.in-addr.arpa." IN {
type master;
file "45.20.10.in-addr.arpa.zone";
};
/etc/bind/named.conf.local
zone "45.20.10.in-addr.arpa." IN {
type master;
file "/etc/bind/45.20.10.in-addr.arpa.zone";
};
$TTL 30
@ IN SOA localhost. admin.example.com. (
2012092901 ; serial YYYYMMDDRR format
3H ; refresh
1H ; retry
2H ; expire
1M) ; neg ttl
@ IN NS localhost.;
;generate 1-254
$GENERATE 1-254 $ IN PTR host$.example.com.
A copy of this file (45.20.10.in-addr.arpa.zone) can be found in the tarball in the LFS211/SOLUTIONS/s 06/ direc-
tory.
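A quick hedged check of the reverse zone using one of the generated PTR records:
$ dig -x 10.20.45.42 @127.0.0.1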
Solution 6.4
1. The configuration is going to be IP Address sensitive. Please record your IP addresses.
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 192.168.122.46/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
Please Note
For this solution these addresses will be used, please adjust for your installation.
2. Prepare the zone file for use as the external zone. Copy the existing file:
on CentOS, /var/named/example.com.zone to /var/named/example.com.zone-x, or
on Ubuntu, /etc/bind/example.com.zone to /etc/bind/example.com.zone-x.
3. The external zone file, example.com.zone-x is a trimmed version of the original file and the addresses have been
changed.
example.com.zone-x
$TTL 30
@ IN SOA localhost. admin.example.com. (
2020040901 ; serial YYYYMMDDRR format
3H ; refresh
1H ; retry
2H ; expire
1M) ; neg ttl
IN NS localhost.;
www.example.com. IN A 10.0.0.192
foo.example.com. IN A 10.0.0.193
bar.example.com. IN CNAME www.example.com.
A copy of this file (example.com.zone-x) can be found in the tarball in the LFS211/SOLUTIONS/s 06/ directory.
Please Note
ALL zone stanzas must have views or no stanzas can have views. Multiple zone stanzas can be in a single view
stanza; this example uses a unique view stanza for each zone definition.
Very Important
The following files can be dropped in and will replace the original files that have been used thus far. The names
of the files may require adjustment and the original should be saved in case a rewind is required. The files are
available in the LFS211/SOLUTIONS/s 06/ directory.
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
include "/etc/crypto-policies/back-ends/bind.config";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
view "internal" {
match-clients {127.0.0.0/8; 10.0.3.0/16; };
zone "example.com." IN {
type master;
file "example.com.zone";
};
};
view "external" {
match-clients { any; };
recursion no;
zone "example.com." IN {
type master;
file "example.com.zone-x";
};
};
view "hint" {
zone "." IN {
type hint;
file "named.ca";
};
};
view "rfc-zones"{
include "/etc/named.rfc1912.zones";
};
include "/etc/named.root.key";
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
view "stuff" {
include "/etc/bind/named.conf.default-zones";
};
options {
directory "/var/cache/bind";
// forwarders {
// 0.0.0.0;
// };
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
// listen-on-v6 { any; };
};
view "internal" {
match-clients {127.0.0.0/8; 10.0.3.0/24; };
zone "example.com." IN {
type master;
file "/etc/bind/example.com.zone";
};
};
view "external" {
match-clients { any; };
recursion no;
zone "example.com." IN {
type master;
file "/etc/bind/example.com.zone-x";
};
};
view "rfc-zones"{
include "/etc/bind/zones.rfc1918";
};
5. Restart bind:
# systemctl restart named
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: ebb6acc89fcfc32725ad9a675e8f70e29d79ccae794cfed4 (good)
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 30 IN A 192.168.111.45
;; AUTHORITY SECTION:
example.com. 30 IN NS localhost.
7. The table below shows the patterns tested and the results. Extra spaces are included in the table for your entries.
Command Answer
dig www.example.com @127.0.0.1 192.168.111.45
dig www.example.com @192.168.122.46 10.0.0.192
dig foo.example.com @127.0.0.1 192.168.121.11
dig foo.example.com @192.168.122.46 10.0.0.193
dig bar.example.com @127.0.0.1 192.168.111.45
dig bar.example.com @192.168.122.46 10.0.0.192
dig theworld.example.com @127.0.0.1 192.74.137.5
dig theworld.example.com @192.168.122.46 fail
dig host42.example.com @127.0.0.1 10.20.45.42
dig host42.example.com @192.168.122.46 fail
HTTP Servers
7.1 Labs
Exercise 7.1: Install Apache and create a simple index.html file to serve
Include text to indicate this is the default server.
Solution 7.1
1. Make sure Apache is installed:
On openSUSE
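The install commands were not captured here; a hedged sketch for the three distributions:
# yum install httpd          (CentOS)
# zypper install apache2     (openSUSE)
# apt install apache2        (Ubuntu)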
index.html
<html>
<head>
<title>This is my file</title>
</head>
<body>
<h1>This is my default file</h1>
</body>
</html>
A copy of this file (index.html) can be found in the tarball in the LFS211/SOLUTIONS/s 07/ directory.
The default DocumentRoot directories are:
On openSUSE
/srv/www/htdocs/
or
$ w3m -dump http://<YOUR_IP_ADDRESS>/index.html
Exercise 7.2: Create a new virtual network interface and serve a different document root from the new interface
Please Note
The original html document should also be accessible from the original IP address.
2. Serve a file indicating this is an IP based virtual host. The file should be /ipvhost/index.html and only available
on the newly defined IP address.
Solution 7.2
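The command that creates the alias is not reproduced in this extraction; a hedged sketch using the address from the vhost stanza below:
# ip address add 192.168.153.X/24 dev <INTERFACE>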
Where X is a number no one else in the same LAN is using. Add this new address to /etc/hosts with the host name
of ipvhost.example.com for ease of use later.
2. Create a new directory /ipvhost/:
# mkdir /ipvhost/
# vi /ipvhost/index.html
ipvhost/index.html
<html>
<head>
<title>This is the IP vhost</title>
</head>
<body>
<h1>This is my IP vhost</h1>
</body>
</html>
5. Create a new IP based virtual host definition. Add this stanza to the suggested file as listed below:
ipvhost.conf
<VirtualHost 192.168.153.X:80>
DocumentRoot /ipvhost/
ServerName ipvhost.example.com
<Directory /ipvhost/>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
</VirtualHost>
On openSUSE
/etc/apache2/vhosts.d/ipvhost.conf
6. Restart apache:
# systemctl restart httpd
Please Note
On Ubuntu and OpenSUSE the service name is apache2.
• Create a new host name by adding the original IP address of the server to /etc/hosts with the name
namevhost.example.com.
• Ensure the original web server host still serves traffic as the default vhost.
• Serve this html file on only the newly defined name vhost:
index.html
<html>
<head>
<title>This is the namevhost</title>
</head>
<body>
<h1>This is namevhost</h1>
</body>
</html>
Solution 7.3
1. Create a new name based virtual host definition with the name namevhost.conf in the default configuration directory.
Create a new config file with the following contents, replacing the string DOCUMENTROOT with the proper Document-
Root for your system:
namevhost.conf
<Virtualhost *:80>
DocumentRoot /var/www/html
ServerName _default_
</Virtualhost>
<VirtualHost *:80>
DocumentRoot /namevhost/
ServerName namevhost.example.com
<Directory /namevhost/>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
</VirtualHost>
A copy of this file (namevhost.conf) can be found in the tarball in the LFS211/SOLUTIONS/s 07/ directory.
2. Create the new document root folder, and create the index.html file:
# mkdir /namevhost/
# vi /namevhost/index.html
4. Restart apache:
• Require the user bob to enter the password heyman! to access this directory.
Solution 7.4
1. Create the new secure folder:
On openSUSE
/srv/www/htdocs/secure/
2. Create the following stanza in the location listed to password protect the directory:
secure-dir.conf
<Location /secure/>
AuthType Basic
AuthName "Restricted Area"
AuthUserFile secure.users
Require valid-user
</Location>
A copy of this file (secure-dir.conf) can be found in the tarball in the LFS211/SOLUTIONS/s 07/ directory.
On openSUSE
/etc/apache2/vhosts.d/secure-dir.conf
3. Create a password file and an entry for the user bob in the appropriate directory:
Please Note
You may have to install apache2-utils if htpasswd does not exist.
On openSUSE
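The htpasswd command itself was not captured here; a hedged sketch (the relative AuthUserFile secure.users resolves against the ServerRoot, e.g. /etc/httpd on CentOS, so adjust the path for your distribution):
# htpasswd -c /etc/httpd/secure.users bob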
4. Restart apache:
or
# systemctl restart apache2
5. Verify that the directory is password protected and that bob is allowed to log in.
• Country Name: US
Solution 7.5
1. Back up the original private key, if one exists:
# mv /etc/pki/tls/private/localhost.key /etc/pki/tls/private/localhost.key.orig
# mv /etc/ssl/private/ssl-cert-snakeoil.key \
/etc/ssl/private/ssl-cert-snakeoil.key.orig
On openSUSE
There is no key by default so nothing needs to be backed up.
On openSUSE
On openSUSE
Edit /etc/apache2/sites-enabled/default-ssl.conf and modify the paths for the key and crt files so they
look like this:
SSLCertificateFile /etc/ssl/certs/server.crt
SSLCertificateKeyFile /etc/ssl/private/server.key
Note: You may have to comment out the directives SSLSessionCache and SSLSessionCacheTimeout from
/etc/apache2/mods-enabled/ssl.conf.
On openSUSE
Enable SSL vhost:
# cp /etc/apache2/vhosts.d/vhost-ssl.template /etc/apache2/vhosts.d/vhost-ssl.conf
Enable the SSL server module, edit /etc/sysconfig/apache2 and add the string SSL to the variable
APACHE_SERVER_FLAGS so it looks like this:
APACHE_SERVER_FLAGS="SSL"
5. Restart Apache and test your new certificate. You may have to add ipvhost.example.com to your /etc/hosts file.
or
Solution 7.6
1. Create a new private key
On openSUSE
On openSUSE
You’ll be asked for a challenge password. Make sure you remember it.
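The openssl commands were not captured in this extraction; a generic hedged sketch of creating a key and CSR (the file names server.key and server.csr are only examples):
# openssl genrsa -out server.key 2048
# openssl req -new -key server.key -out server.csr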
3. You must then send off this CSR to be signed by a Certificate Authority.
Advanced HTTP Servers
8.1 Labs
Exercise 8.1: Create a new cgi script-enabled directory /new-cgi/ served at the URI /scripts/.
Create the script /new-cgi/foo.cgi with the following contents (you may have to create the directory /new-cgi/):
/new-cgi/foo.cgi
#!/bin/bash
echo -e "\n"
echo -e "Content-type: text/plain\n\n"
echo -e "File is $1\n"
A copy of this file (foo.cgi) can be found in the tarball in the LFS211/SOLUTIONS/s 08/ directory.
Solution 8.1
2. Create a configuration include file in the location suggested below which enables cgi-scripts for the /scripts/ URI.
newscripts.conf
ScriptAlias /scripts/ /new-cgi/
<Directory /new-cgi/>
Require all granted
</Directory>
A copy of this file (newscripts.conf) can be found in the tarball in the LFS211/SOLUTIONS/s 08/ directory.
On openSUSE
/etc/apache2/conf.d/newscripts.conf
or
# a2enmod cgi
Exercise 8.2: Create a rewrite rule for “pretty” CGI script URIs
Please Note
If you have done the virtualhost lab from the previous chapter you need to add these two lines to namevhost.conf in
the _default_ namevhost section:
RewriteEngine on
RewriteOptions inherit
Solution 8.2
1. Create a configuration include file in the suggested location below which sets up the proper rewrite rules:
rewrite.conf
RewriteEngine on
RewriteRule ^/foo/(.*) /scripts/foo.cgi?$1 [L,PT]
On openSUSE
/etc/apache2/conf.d/rewrite.conf
Please Note
On Ubuntu and Debian systems the rewrite commands seem to work best in 000-default.conf. If using
Ubuntu or Debian systems, put the following inside the virtualhost stanza of 000-default.conf:
RewriteEngine on
RewriteOptions inherit
RewriteRule ^/foo/(.*) /scripts/foo.cgi?$1 [L,PT]
The documentation for the rewrite flags can be found at: https://httpd.apache.org/docs/2.4/rewrite/flags.html
2. If required, enable the rewrite module:
On openSUSE
Edit /etc/sysconfig/apache2 and edit the line with the
APACHE_MODULES, and add the value rewrite:
Solution 8.3
1. Create a configuration include file in the suggested location which enables mod status:
status.conf
<Location /server-status/>
SetHandler server-status
</Location>
On openSUSE
/etc/apache2/conf.d/status.conf
On openSUSE
/srv/www/htdocs/server-status
On openSUSE
Edit /etc/sysconfig/apache2 and edit the line with the APACHE_MODULES, and add the value status:
http://localhost/server-status/
Solution 8.4
1. Create the following html file for the /magic/ URI in the file locations listed below:
index.html
<html>
<head>
<title>This file is a magic include file</title>
</head>
<body>
<h1>This file is a magic include file</h1>
<h2>Foo include below</h2>
<!--#include virtual="/includes/foo.html" -->
<h2>Bar include below</h2>
<!--#include virtual="/includes/bar.html" -->
</body>
</html>
On openSUSE
/srv/www/htdocs/magic/index.html
2. Create the two files to be included in the main page using the content and locations suggested below.
foo.html
this is the foo include
On openSUSE
/srv/www/htdocs/includes/foo.html
bar.html
this is the bar include
On openSUSE
/srv/www/htdocs/includes/bar.html
3. Create a configuration include file in the suggested location listed below which enables includes.
magic.conf
<Location /magic/>
Options +Includes
XBitHack on
</Location>
On openSUSE
/etc/apache2/conf.d/magic.conf
# ln -s /etc/apache2/mods-available/include.load /etc/apache2/mods-enabled/
Email Servers
9.1 Labs
Exercise 9.1: Enable the Postfix SMTP server for external access
• Ensure all hosts in your network are allowed to send email to your server.
Solution 9.1
1. Using your appropriate installer, ensure Postfix is installed:
# yum|zypper|apt install postfix
Please Note
If asked what method for configuration type choose Internet Site.
If asked which mail name, set it to your current host name.
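The configuration commands for external access were not captured here; the kind of change involved might look like the following hedged sketch (values depend on your site):
# postconf -e "inet_interfaces = all"
# postconf -e "mynetworks_style = subnet"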
4. Restart Postfix:
Please Note
Be aware the firewall may interfere with this test.
Please Note
The commands (like helo, mail, rcpt, etc.) may need to be capitalized on some distributions.
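The opening of the test session is not shown in this extraction; it generally resembles the relay test later in this chapter (a hedged sketch):
$ telnet localhost 25
helo localhost
mail from:student
rcpt to:student
data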
This is neato
.
quit
6. Verify the mail was received using the mutt command or the mail command.
Please Note
You may have to install the mail command. It is part of either the mailx (CentOS or OpenSUSE) or mailutils
(Ubuntu and Debian) packages.
• Prepare for this lab by sending a couple of emails to the student user:
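A hedged example using the mail command mentioned above:
$ echo "test message one" | mail -s "test 1" student
$ echo "test message two" | mail -s "test 2" student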
Solution 9.2
1. Ensure dovecot and mutt are installed:
On openSUSE
4. Restart dovecot:
# systemctl restart dovecot
$ mutt -f imap://student@<IP_ADDRESS>/
Please Note
You may have to connect twice with mutt to verify the imap server is working.
The server may already be set up for SSL/StartTLS with a dummy SSL certificate.
There may be a permission challenge creating mail directories. Look in the mail log to confirm the create directory
error. Add the group mail to the student account temporarily.
• On OpenSUSE:
(a) Add or edit /etc/dovecot/conf.d/10-ssl.conf and make sure these lines exist:
ssl = required
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_key = </etc/ssl/private/dovecot.pem
(b) Generate a self-signed certificate for IMAP.
# cd /usr/share/doc/packages/dovecot/
# ./mkcert.sh
To avoid issues with an incorrectly set up DNS server, or enforced ssl, use this setting for your lab as well:
Very Important
Don’t enable these settings in production. Use them only for this lab.
Please Note
We will re-enforce SSL authentication in the next exercise.
Solution 9.4
1. Enable the SASL authentication service in Dovecot.
• Edit /etc/dovecot/conf.d/10-master.conf and after the section service auth add or un-comment the fol-
lowing lines:
unix_listener /var/spool/postfix/private/auth {
mode = 0666
}
2. Restart Dovecot:
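Steps 3 and 4 were not captured in this extraction; presumably they point Postfix at the Dovecot SASL socket created above, roughly:
# postconf -e "smtpd_sasl_type = dovecot"
# postconf -e "smtpd_sasl_path = private/auth"
# postconf -e "smtpd_sasl_auth_enable = yes"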
5. Restart Postfix:
Please Note
Be advised that any system listed in permit_mynetworks will be allowed to relay.
There are Postfix options other than smtpd_recipient_restrictions to control relaying of mail that may
be more suitable for live installations. This example shows the restrictions being stacked and applied to the
smtpd_recipient_restrictions control element.
Feel free to experiment with others such as: smtpd_relay_restrictions.
The current settings of permit_mynetworks in conjunction with mynetworks_style will allow the local system to
relay without authentication.
If you wish to test authentication on a single machine, eliminate the permit_mynetworks entry from
smtpd_recipient_restrictions to force all systems attempting to relay to authenticate.
Please Note
The commands (like helo, mail, rcpt, etc.) may need to be capitalized on some distributions.
$ telnet <SERVER> 25
helo localhost
mail from:student
rcpt to:root@<OTHER MACHINE>
quit
This should fail with relay access denied. Test again with authentication:
Create the base64 encoded user and password.
$ echo -en "\0student\0student" | base64
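The remainder of the authenticated session is not reproduced; the encoded string is supplied with the AUTH command during the SMTP dialogue, roughly:
AUTH PLAIN <base64 string from the command above>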
Cool no?
.
quit
Exercise 9.5: Enable StartTLS for Postfix, and force Plain-Text logins to use
StartTLS
Use the following information to create a certificate.
• Country Name: US
Solution 9.5
1. Create a new PEM certificate:
• For CentOS:
# cd /etc/pki/tls/certs
# make postfix.pem
• For Ubuntu:
# /usr/bin/openssl req -utf8 -newkey rsa:2048 -keyout /tmp/postfix.key -nodes \
-x509 -days 365 -out /tmp/postfix.crt -set_serial 0
# cat /tmp/postfix.key > /etc/postfix/postfix.pem
# echo "" >> /etc/postfix/postfix.pem
# cat /tmp/postfix.crt >> /etc/postfix/postfix.pem
# rm -f /tmp/postfix.crt /tmp/postfix.key
• Change the Postfix configuration to enable and enforce TLS:
Note: CentOS and Ubuntu have different key locations; only Ubuntu is shown.
# postconf -e "smtpd_tls_auth_only = yes"
# postconf -e "smtpd_tls_security_level = may"
# postconf -e "smtpd_tls_cert_file = /etc/postfix/postfix.pem"
# postconf -e "smtpd_tls_key_file = /etc/postfix/postfix.pem"
• Restart Postfix:
# systemctl restart postfix
• Test SMTP StartTLS:
Please Note
You may have to do this twice to get the key data.
After the starttls command, use the Control+D key combination.
Cool no?
And secure!
.
quit
Please Note
There is no option for AUTH until after you start the TLS session.
Relay access is still denied until after the AUTH step.
File Sharing
10.1 Labs
Exercise 10.1: Use SCP to copy a folder from one location to another. Create a
directory full of testing files to use for this lab:
$ mkdir /tmp/transfer-lab/
$ mkdir /tmp/receive/
$ for i in /tmp/transfer-lab/{a,b,c}-{1,2,3}.{txt,log,bin}
do
echo $i > $i
done
Use scp to copy just the .log files from /tmp/transfer-lab/, into /tmp/receive/ through the localhost interface.
Solution 10.1
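The scp command itself was not captured here; a hedged one-liner matching the exercise statement:
$ scp /tmp/transfer-lab/*.log localhost:/tmp/receive/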
Exercise 10.2: Use rsync over ssh to add the *.bin files only to the previously
created folder
Solution 10.2
Please Note
Notice the . at the end of the destination of the rsync command.
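The rsync command itself is not reproduced above; a hedged sketch consistent with that note:
$ rsync -av -e ssh /tmp/transfer-lab/*.bin localhost:/tmp/receive/.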
Solution 10.3
1. Create the upload directory with the proper permissions.
On openSUSE
2. Edit vsftpd.conf and enable anonymous uploads. Add the following options:
/etc/vsftpd/vsftpd.conf or /etc/vsftpd.conf
anon_upload_enable=yes
anonymous_enable=yes
Please Note
On Ubuntu or OpenSUSE you must also change the option write_enable to match:
write_enable=YES
Exercise 10.4: Share a folder over the rsync protocol. Create a directory full of
testing files to use for this lab
# mkdir /srv/rsync/
# for i in /srv/rsync/{a,b,c}-{1,2,3}.{txt,log,bin}
do
echo $i > $i
done
Serve the directory /srv/rsync/ directly via rsync, use the rsync module name of default.
Solution 10.4
1. Create or edit /etc/rsyncd.conf and add these contents:
/etc/rsyncd.conf
[default]
path = /srv/rsync
comment = default rsync files
Please Note
On OpenSUSE you must also comment out or remove the line:
hosts allow = trusted.hosts
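After starting the rsync daemon (for example with systemctl start rsyncd, where that unit exists), a hedged check that the module is being served:
$ rsync rsync://localhost/default/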
Advanced Networking
11.1 Labs
Please Note
If you have a problem with this exercise, you may need to reboot and choose a different kernel.
Create a new tagged VLAN interface with the id 7 and these settings:
Solution 11.1
1. Enable VLANs:
Enable VLANs
Edit /etc/sysconfig/network and add the following content:
/etc/sysconfig/network
VLAN=yes
VLAN_NAME_TYPE="DEV_PLUS_VID"
/etc/sysconfig/network/ifcfg-INTERFACE.7
DEVICE=<INTERFACE>.7
BOOTPROTO=static
TYPE=vlan
ONBOOT=yes
IPADDR=192.168.X.100
NETMASK=255.255.255.0
PHYSDEV="<INTERFACE>"
A copy of this file (ifcfg-INTERFACE.7-redhat) can be found in the tarball in the LFS211/SOLUTIONS/s 11/
directory.
/etc/sysconfig/network/ifcfg-INTERFACE.7
NAME='<INTERFACE> vlan'
STARTMODE='auto'
VLAN_ID='7'
IPADDR='192.168.X.100/24'
ETHERDEVICE=<INTERFACE>
A copy of this file (ifcfg-INTERFACE.7-opensuse) can be found in the tarball in the LFS211/SOLUTIONS/s 11/
directory.
/etc/network/interfaces
auto <INTERFACE>.7
iface <INTERFACE>.7 inet static
address 192.168.X.100
netmask 255.255.255.0
vlan-raw-device <INTERFACE>
A copy of this file (interfaces) can be found in the tarball in the LFS211/SOLUTIONS/s 11/ directory.
The package vlan may need to be installed and the kernel-module 8021q loaded.
# ifup <INTERFACE>.7
4. As an optional step, the ip command can create temporary vlan interfaces for testing. In this case the interface is called
eth0, the vlan id is 8. Check the /proc/net/vlan directory for configuration and usage information:
[root@centos ˜]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default \
qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default \
qlen 1000 link/ether 52:54:00:ab:3b:11 brd ff:ff:ff:ff:ff:ff
[root@centos ˜]# ip link add link eth0 name eth0.v8 type vlan id 8
Create an IP alias on each system. The addresses should be on separate subnets, and require a specific route:
Solution 11.2
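The alias-creation step was not captured in this extraction; a hedged sketch using the ip command (substitute your own X value and interface):
# ip address add 192.168.X.100/24 dev <INTERFACE>
# ip address add 172.16.X.100/24 dev <INTERFACE>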
1. Create a static route configuration:
/etc/sysconfig/network-scripts/route-INTERFACE
On your system:
192.168.Y.0/24 via <ADDR-Y> dev <INTERFACE>
172.16.Y.0/24 via <ADDR-Y> dev <INTERFACE>
/etc/sysconfig/network/ifroute-INTERFACE
On your system:
192.168.Y.0/24 <ADDR-Y> - <INTERFACE>
172.16.Y.0/24 <ADDR-Y> - <INTERFACE>
/etc/network/interfaces
On your system:
up route add -net 192.168.Y.0/24 gw <ADDR-Y> dev <INTERFACE>
up route add -net 172.16.Y.0/24 gw <ADDR-Y> dev <INTERFACE>
• On your system:
$ ping 192.168.Y.100
$ ping 172.16.Y.100
Exercise 11.3: Configure and enable a stratum 3 NTP server. Connect your
server to the NTP pool as a client
Solution 11.3
1. Ensure the NTP daemon is installed using your favorite installer:
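A hedged sketch of the install step (on newer CentOS releases the classic ntp package may be unavailable and chrony used instead):
# yum|zypper|apt install ntp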
2. Configure the NTP server to query the NTP pool and allow for anonymous traffic.
Edit /etc/ntp.conf and change the server X.X.X.X lines to match these:
/etc/ntp.conf
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org
Please Note
You may have to wait a few minutes for the timeservers to sync prior to getting any output.
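The query command itself was not captured here; it is typically the following (or chronyc sources on chrony-based systems):
$ ntpq -p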
HTTP Caching
12.1 Labs
• Even though your RFC 1918 local network may already be in the default squid.conf file, explicitly set your current
network as an ACL.
Solution 12.1
1. Ensure squid is installed with your appropriate installer:
# yum | zypper | apt install squid
2. Create an ACL for your network, edit /etc/squid/squid.conf and add the following just after the line which reads:
squid.conf
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
3. Explicitly allow HTTP access for the newly created ACL, by adding this line below the ACL added above:
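Neither line was captured in this extraction; a hedged sketch, assuming your network is 192.168.0.0/24 and using the ACL name referenced later in this lab:
acl examplenetwork src 192.168.0.0/24
http_access allow examplenetwork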
NOTE: You should see a Squid error page when you attempt to access the non-existent URI.
1. Create an ACL defining the URI to block, edit /etc/squid/squid.conf and create a new ACL above the lines you
previously added.
Block access to the newly created ACL blockedsite, edit /etc/squid/squid.conf and add the following line just
above the line you added earlier, http_access allow examplenetwork.
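A hedged sketch of the two lines (the blocked domain shown, example.org, is only an illustration):
acl blockedsite dstdomain .example.org
http_access deny blockedsite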
# squid -k reconfigure
Network File Systems
13.1 Labs
Exercise 13.1: Create the environment for the nfs and cifs lab
Create some directories to share and some to use as mount points.
# mkdir -p /home/{export,share}/{nfs,cifs}
# touch /home/export/nfs/{foo,bar,baz}.{txt,log,bin}
Add a group for collaboration and add student to the new group.
On openSUSE
• Allow a single host on your local network to have read/write access. Initially use the loopback for ease of testing.
Solution 13.2
1. Edit /etc/exports and add the following content:
/etc/exports
/home/export/nfs 127.0.0.1/32(rw) <NETWORK ADDRESS>/24(ro)
On openSUSE
If an additional system is available on the NETWORK ADDRESS network, mount the share on the other system. The touch
command should only work on the host allowed read/write access, which is 127.0.0.1 in our test case.
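The mount command itself was not captured here; a hedged sketch using the directories created earlier:
# mount 127.0.0.1:/home/export/nfs /home/share/nfs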
Solution 13.3
1. Ensure that the Samba server is installed:
On openSUSE
smb.conf
[global]
workgroup = LNXFND
server string = Myserver
log file = /var/log/samba/log.%m
max log size = 50
cups options = raw
[mainexports]
path = /home/export/cifs
read only = yes
guest ok = yes
comment = Main exports share
A copy of this file (smb.conf) can be found in the tarball in the LFS211/SOLUTIONS/s 13/ directory.
On openSUSE
# systemctl restart
4. Check if the share is available. The password is not required; when prompted press Enter to continue as anonymous.
$ smbclient -L 127.0.0.1
$ smbclient //SERVER/mainexports
smbclient will prompt for a password; you can press Enter to have no password and smbclient will continue as the
anonymous user. If there is a user id and password added by smbpasswd, you may use those credentials. See the man
page for smbclient for additional information.
• Create and share the directory /home/export/private/ as the share name private and populate it with files:
# mkdir /home/export/private/
# touch /home/export/private/PRIVATE_FILES_ONLY
# chown -R student: /home/export/private
Solution 13.4
[private]
path = /home/export/private
comment = student's private share
read only = No
public = No
valid users = student
# smbpasswd -a student
3. From a remote system, verify the access for the student account:
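A hedged example of that check, using the share defined above:
$ smbclient //SERVER/private -U student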
Solution 13.5
Exercise 13.6: Convert the NFS and CIFS to be automatically mounted and
unmounted
Using systemd.automount, make the previous NFS and CIFS mounts automatically mount when accessed, and dismount
when idle for 10 seconds. Note: The idle timer will not run if a process has done a cd into the directory.
Solution 13.6
/etc/fstab
127.0.0.1:/home/export/nfs /home/share/nfs nfs x-systemd.automount,x-systemd.idle-timeout=10,noauto,_netdev 0 0
//localhost/mainexports /home/share/cifs cifs creds=/root/smbfile,x-systemd.automount,x-systemd.idle-timeout=10,noauto,_netdev 0 0
3. Test the auto-mounter by displaying a file in the shared directory or by changing into the directory. Don't forget that the
auto-timer does not run if the directory is busy.
$ df -h
$ cd /home/share/nfs
$ df -h
$ cd
Wait a bit and re-execute the df command to see that the auto-mounter dismounted the share.
1. Disable or remove the mount entry just created from the /etc/fstab.
/etc/systemd/system/home-share-nfs.mount
# store in /etc/systemd/system/home-share-nfs.mount
# unit name must match the mount point (the where matches the unit file name)
[Unit]
Description=basic mount options file for auto-mounting home-share-nfs
[Mount]
What=localhost:/home/export/nfs
Where=/home/share/nfs
Type=nfs
[Install]
WantedBy=multi-user.target
A copy of this file (home-share-nfs.mount) can be found in the tarball in the LFS211/SOLUTIONS/s 13/ directory.
The second file is:
/etc/systemd/system/home-share-nfs.automount
# store in /etc/systemd/system/home-share-nfs.automount
# unit name must match the mount point (the where matches the unit file name)
[Unit]
Description=options for our home-share-nfs mount
[Automount]
Where=/home/share/nfs
TimeoutIdleSec=10
[Install]
WantedBy=multi-user.target
A copy of this file (home-share-nfs.automount) can be found in the tarball in the LFS211/SOLUTIONS/s 13/ directory.
There are restrictions on the file names; consult the man page for automount for additional details.
3. Tell systemd about the new configuration files:
# systemctl daemon-reload
6. Test the automount. As before, after about 10 seconds the auto-mounted device should unmount.
$ df -h
$ cd /home/share/nfs
$ df -h
$ cd
Introduction to Network Security
14.1 Labs
Firewalls
15.1 Labs
Very Important
This exercise expects to be run on a “class VM” system where we can freely disable and change the firewall rules. If
you are running on a machine that requires “ssh login”, BE VERY CAREFUL: you may lock yourself out.
Solution 15.1
3. Open three terminal sessions: one root shell and two user (student) shells.
# iptables -vnL
# iptables -F
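The step that starts the listener is not reproduced in this extraction; a hedged sketch, run in one of the student windows (the netcat binary may be named nc, ncat or netcat depending on the distribution):
$ nc -l 4200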
6. In the other student window, use telnet to connect to localhost port 4200:
The telnet session should connect and appear hung, as it is waiting for input: type something. The typed characters
should appear on the netcat terminal window. After verifying the netcat connection, stop netcat with CTRL-C.
7. Log all new connections with iptables. From the root shell terminal:
# iptables -A INPUT -p tcp -m tcp -m state --state NEW -j LOG --log-level info --log-prefix "LF NEW "
8. Monitor new connections via the system log. On the root terminal session:
# journalctl -f
Stop and restart the netcat and telnet commands while watching the journalctl output; you should see the connection
packet header with the text LF NEW visible.
9. Add another rule to the firewall to log established connections:
# iptables -A INPUT -p tcp -m tcp -m state --state ESTABLISHED -j LOG \
--log-level info --log-prefix "LF ESTABLISHED"
10. Test and observe the output in the root journalctl log. The log should show the initial connection and the packet headers
of all the transactions between telnet and netcat.
11. Add an iptables rule to reject new connections on port 4200. If there is an established connection, it should continue to
work.
# iptables -A INPUT -p tcp -m tcp --dport 4200 -m state --state ESTABLISHED -j REJECT
If there was an established session, it should continue to function. Any new connections should fail.
12. If we examine the log output, we see the ESTABLISHED headers being logged, but we cannot connect. This is due to the
order of the rules. To change the existing iptables records, first save the current configuration:
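The save command itself is not shown above; presumably something like:
# iptables-save > /tmp/ip.save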
With your favorite editor, modify /tmp/ip.save and move the line that ends in REJECT to be before the ESTABLISHED
line:
/tmp/ip.save Edited
root@ubuntu:˜# cat /tmp/ip.save
# Generated by iptables-save v1.6.1 on Sat Jul 27 09:59:20 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp -m state --state NEW -j LOG --log-prefix "New Connection" --log-level 6
-A INPUT -p tcp -m tcp --dport 4200 -m state --state ESTABLISHED -j REJECT --reject-with icmp-port-unreachable
-A INPUT -p tcp -m tcp -m state --state ESTABLISHED -j LOG --log-prefix "LFT ESTABLISHED " --log-level 6
COMMIT
# Completed on Sat Jul 27 09:59:20 2019
# iptables -F
# iptables-restore /tmp/ip.save
Re-test as desired; there should not be any more ESTABLISHED messages for destination port 4200.
15. Flush out the iptables records to prepare for the next exercise:
# iptables -F
Solution 15.2
On openSUSE
On OpenSUSE, the easiest way to save persistent firewall rules is to use the YaST tool.
On Ubuntu
# iptables-save >/etc/iptables/rules.v4
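The CentOS step was not captured in this extraction; a hedged equivalent, assuming the iptables-services package is installed and enabled:
# iptables-save > /etc/sysconfig/iptables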
Exercise 15.3: Using nftables, enable a firewall which blocks all unwanted traffic
This is a repeat of the last exercise using nftables.
Solution 15.3
3. Add a table to nft, remember there are no pre-configured tables in nft by default. The table will be for both IPv4 and
IPv6.
# nft add table inet filter_it
4. Add a chain to the table and declare which hook will be used in the TCP stack:
# nft add chain inet filter_it input_stuff {type filter hook input priority 0 \; }
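Steps 5 through 10, which add the individual rules, were not captured in this extraction; hedged examples of the commands involved, matching the listing shown in the next step:
# nft add rule inet filter_it input_stuff tcp dport ssh accept
# nft add rule inet filter_it input_stuff ct state established,related accept
# nft add rule inet filter_it input_stuff counter
# nft add rule inet filter_it input_stuff iifname lo accept
# nft add rule inet filter_it input_stuff drop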
11. Examining the listing of our firewall shows the rules in a poorly organized fashion:
# nft list table inet filter_it
table inet filter_it {
chain input_stuff {
type filter hook input priority 0; policy accept;
tcp dport ssh accept
ct state established,related accept
counter packets 2 bytes 140
iifname "lo" accept
drop
}
}
12. We can use the output of nft list table inet filter_it as input to nft. First, save the configuration. If required, create
the directory:
# mkdir /etc/nftables
# nft list table inet filter_it > /etc/nftables/00-filter_it.nft
/etc/nftables/00-filter it.nft
table inet filter_it {
chain input_stuff {
type filter hook input priority 0; policy accept;
ct state established,related accept
iifname "lo" accept
tcp dport ssh accept
counter packets 0 bytes 0
drop
}
}
15. Add the nftables configuration to the startup sequence and reboot to test:
# mkdir /etc/nftables
# systemctl disable firewalld
openSUSE needs extra help; all files referenced here are in the SOLUTIONS directory.
On openSUSE
# mkdir /etc/nftables
# cp SOLUTIONS/nftables.service /etc/systemd/system/nftables.service
# echo 'include "/etc/nftables/00-filter_it.nft"' >> /etc/nftables.conf
# systemctl daemon-reload
# systemctl disable firewalld
# systemctl enable nftables
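Before rebooting, the saved ruleset can optionally be checked by loading it by hand (a sketch; note that flushing removes all currently loaded rules, so only do this on the lab machine):
# nft flush ruleset
# nft -f /etc/nftables/00-filter_it.nft
# nft list ruleset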
LXC Virtualization Overview
16.1 Labs
LXC requires a Linux kernel of version 3.10 or greater. If your kernel version is older than 3.10, please skip this exercise.
It is common practice to run Linux Containers as a non-root user, but in this exercise we will execute the LXC commands as
root. The configuration occurs on the Virtual Machine unless otherwise noted.
Solution 16.1
1. Make sure LXC is installed.
• On CentOS8:
# dnf install lxc lxc-templates lxcfs lxc-doc lxc-libs
• On Ubuntu:
# apt-get install lxc
2. Stop the firewall if it is running on the Virtual Machine. Best practice is to have a firewall in place between the Virtual
Machine and the LXC container; however, we will disable the firewall to stay focused on the LXC container for now.
# systemctl disable --now firewalld
/etc/sysconfig/lxc-net
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
LXC_DOMAIN=""
4. Start the LXC network service and enable it to start automatically at boot.
# systemctl enable --now lxc-net
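The interactive listing that follows is produced by the LXC download template; a command along these lines starts it (a sketch, using the container name bucket which appears in later steps):
# lxc-create -t download -n bucket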
---
DIST RELEASE ARCH VARIANT BUILD
---
..... snip ...
centos 8-Stream amd64 default 20200401_07:08
centos 8-Stream arm64 default 20200401_07:08
centos 8-Stream ppc64el default 20200401_07:08
centos 8 amd64 default 20200401_07:39
centos 8 arm64 default 20200401_07:39
centos 8 ppc64el default 20200401_07:39
debian bullseye amd64 default 20200401_05:24
debian bullseye arm64 default 20200401_05:24
debian bullseye armel default 20200401_05:24
debian bullseye armhf default 20200401_05:54
Distribution:
centos
Release:
8
Architecture:
amd64
---
You just created a Centos 8 x86_64 (20200401_07:39) container.
It is also possible to skip the interactive prompts by supplying the distribution, release, and architecture as options on the command line, as shown below.
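For example (a sketch of the non-interactive form; the download template accepts -d, -r and -a for distribution, release and architecture after the -- separator):
# lxc-create -t download -n bucket -- -d centos -r 8 -a amd64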
Please Note
CentOS 7 requires the additional lxc-start option "-d" to run the container in the background. The defaults have been
updated in later releases.
# lxc-start -n bucket
10. Explore the LXC container, verify the network address and functionality.
11. In the VM window (not the container), create and start two more LXC containers called bucket1 and bucket2.
12. One could attach to each container to add additional software, such as a web server, or use the lxc-attach command to run
commands inside the containers.
Add the nginx web server to all three containers:
# lxc-attach -n bucket -- /bin/dnf install -y nginx
# lxc-attach -n bucket1 -- /bin/dnf install -y nginx
# lxc-attach -n bucket2 -- /bin/dnf install -y nginx
14. Verify that each container's web server is responding. Use lxc-ls -f to obtain a list of the IP addresses.
15. It may be helpful to use a text-based browser; optionally, install or compile lynx.
16. Create a new web page, different on each server, with content such as Hello World from bucket1, and test.
# lxc-attach bucket2
# echo "Hello World from bucket2" > /usr/share/nginx/html/index.html
# exit
# lynx -dump http://10.0.3.200
17. Using name-based virtual hosts requires that the web server name be resolvable. Add the hostname bucket to /etc/hosts
on the Virtual Machine and in the container bucket. Be mindful of duplicate names with loopback addresses.
/etc/nginx/conf.d/00-load.conf
upstream bucket {
server 10.0.3.174; #ip for bucket1
server 10.0.3.200; #ip for bucket2
}
server {
listen 80;
server_name bucket;
location / {
proxy_pass http://bucket;
}
}
20. Test that the round-robin balancer is working. On the VM, open the default web page on the bucket container; if everything
is fine, a message should be returned from either bucket1 or bucket2. The output should look similar to:
[root@localhost ~]# /usr/local/bin/lynx -dump http://bucket
Hello World from bucket1
Please Note
Note that older versions of LXC have an lxc-clone command and newer versions have the lxc-copy command. Use the
man pages to see the differences.
Add another exact copy of the web server containers (not the load balancer) and have all of the containers and web servers
automatically start at boot time.
Solution 16.2
# lxc-ls -f
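A minimal sketch of how the copy might be made, assuming bucket1 as the source and a current LXC with lxc-copy (older releases would use lxc-clone, as in the note above); the source container must be stopped before copying:
# lxc-stop -n bucket1
# lxc-copy -n bucket1 -N bucket3
# lxc-start -n bucket1
To have a container start automatically at boot, a line such as lxc.start.auto = 1 can be added to its configuration file, for example /var/lib/lxc/bucket3/config.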
4. From the VM, start the container bucket3 and record its IP address.
# lxc-start -n bucket3
# lxc-ls -f
5. Edit the nginx configuration file, /etc/nginx/conf.d/00-load.conf, on the container bucket and add the new node,
bucket3, to the load-balance list.
6. Re-load the nginx web server on bucket.
# systemctl reload nginx
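If this is run from the VM rather than from a shell inside the container, the same reload can be issued through lxc-attach (a sketch; the lab shows the command as run on bucket itself):
# lxc-attach -n bucket -- systemctl reload nginx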
High Availability
17.1 Labs
Database
18.1 Labs
1. Make sure MariaDB is installed. Use whichever command is right for your distribution:
$ sudo yum install mariadb-server
$ sudo zypper install mariadb-server
$ sudo apt-get install mariadb-server
5. MariaDB includes a script (mysql_secure_installation) for improving the security of the newly installed server. The
script prompts for confirmation of each action it is about to take. Some of the security items are:
There is currently no root account password in MariaDB, so we will set it to password when prompted. Take the defaults
for all prompts except Remove test database; we will be using the test database.
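The script is run as follows (the command name follows from the script mentioned above):
$ sudo mysql_secure_installation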
Here are the results of running the script:
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorization.
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
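The server can then be checked with mysqladmin (a sketch; the exact command producing the output below is assumed to be a version query):
$ mysqladmin -u root -p version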
Enter password:
mysqladmin Ver 9.0 Distrib 5.5.56-MariaDB, for Linux on x86_64
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
$ mysql -u root -p
8. Add the user student with the password password to the database. The user student should be able to log in from
anywhere:
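A statement along these lines creates the user (a sketch; the exact SQL is not shown in this excerpt):
MariaDB [(none)]> CREATE USER 'student'@'%' IDENTIFIED BY 'password';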
10. Grant the new user "ALL" privileges to the database test:
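For example (a sketch):
MariaDB [(none)]> GRANT ALL PRIVILEGES ON test.* TO 'student'@'%';
MariaDB [(none)]> FLUSH PRIVILEGES;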
11. Log out of the database as the user root and log back in as the user student:
MariaDB [(none)]> exit
$ mysql -u student -p test
Notice the prompt now has the database name test instead of none.
Adding tables
MariaDB [test]>
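The table used by the inserts in the next step can be created with something like the following (a sketch; the column names and types are assumed from the inserted values):
MariaDB [test]> CREATE TABLE Courses (id INT, name VARCHAR(64), description VARCHAR(128));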
Inserting Data
MariaDB [test]> INSERT INTO Courses VALUES (1,'Rocket Design','Acme Rocket Basics');
Query OK, 1 row affected (0.00 sec)
MariaDB [test]> INSERT INTO Courses VALUES (2,'Solid Fuel','Acme Chemical Propulsion');
Query OK, 1 row affected (0.00 sec)
MariaDB [test]> INSERT INTO Courses VALUES (3,'Gantry Basics','Hold rocket up');
Query OK, 1 row affected (0.00 sec)
MariaDB [test]>
Querying
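A simple query such as this (an illustrative sketch) returns the rows just inserted:
MariaDB [test]> SELECT * FROM Courses;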
System log
19.1 Labs
Overview
In this exercise the rsyslog daemon will be configured to receive messages from a different system. To facilitate this,
network namespaces will be used to emulate a separate machine. Log messages will be generated in the namespace
and sent to the logging server, which will write them to a file other than the default log file.
(b) The ip netns exec command will allow program execution using the newly created network namespace. Verify the
available network devices within the namespace:
# ip netns exec netns1 ip link list
(c) The lo interface is configured but is not up. Set up the lo interface:
# ip netns exec netns1 ip link set dev lo up
(e) The lo interface can be tested with a ping command, if it is executed within the namespace:
# ip netns exec netns1 ping -c3 127.0.0.1
(f) Create a pair of virtual adapters to be used to communicate between the namespace network stack and the
default network stack.
Add the device pair:
# ip link add veth0 type veth peer name veth1
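The steps omitted here move one end of the pair into the namespace and assign addresses (a sketch, assuming 10.1.1.2 on the host side and 10.1.1.1 inside the namespace, the addresses used later in this exercise):
# ip link set veth1 netns netns1
# ip addr add 10.1.1.2/24 dev veth0
# ip link set dev veth0 up
# ip netns exec netns1 ip addr add 10.1.1.1/24 dev veth1
# ip netns exec netns1 ip link set dev veth1 up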
(k) Open a new terminal window and use tcpdump to monitor the traffic to and from the namespace adapter:
# tcpdump -i veth0 -n -B512
(a) The input module for TCP or UDP reception has to be enabled in /etc/rsyslog.conf. Please be aware there are
two formats for rsyslog.conf, as listed below:
A copy of this file (rsyslog.conf-new) can be found in the tarball in the LFS211/SOLUTIONS/s_19/ directory.
A copy of this file (rsyslog.conf-old) can be found in the tarball in the LFS211/SOLUTIONS/s_19/ directory.
Remove the comment marker # from both the UDP and TCP modules and run statements.
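For reference, the uncommented lines typically look like the following (exact contents may differ slightly between distributions).
New format:
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")
Legacy format:
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514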
(b) Restart the rsyslog daemon:
# systemctl restart rsyslog
(c) Open a terminal window and monitor the messages being sent to your system log.
Please Note
Ubuntu uses /var/log/syslog and CentOS uses /var/log/messages.
# tail -f /var/log/syslog
There should be three terminal windows open:
• tcpdump
• A tail of system messages
• A root shell for issuing commands
(d) Test reception of syslog messages.
On the command window issue a logger message from the network namespace to the default system.
There should be TCP or UDP traffic from 10.1.1.1 to 10.1.1.2 of type syslog on the tcpdump screen. The text
should be visible on the system log monitoring screen:
# ip netns exec netns1 logger -n 10.1.1.2 from netns boo
The tcpdump window should show something like:
11:22:59.984357 IP 10.1.1.2.17500 > 10.1.1.255.17500: UDP, length 139
11:23:00.854595 IP 10.1.1.1.35329 > 10.1.1.2.514: SYSLOG user.notice, length: 127
and the system log monitor should have a message like:
Oct 8 11:25:56 yoga student from netns boo
3. The messages from the remote host will get logged in the same file as the server's own messages, mixing them all
together.
Use a property filter to isolate incoming log messages and direct them to a specified file; the trailing "& ~" line in the filter below discards the message after it has been written, so it does not also appear in the default log file.
/etc/rsyslog.d/00-property-filter.conf
:FROMHOST-IP, isequal, "10.1.1.1" -/var/log/syslog-other
& ~
(b) Create the new log file and set the permissions the same as those of the existing log file.
# touch /var/log/syslog-other
# chown syslog:adm /var/log/syslog-other
(c) Restart the rsyslog daemon to pick up the changes:
# systemctl restart rsyslog
(d) Test that the property filter correctly logs the messages from the remote host on the server.
(e) Change the terminal monitoring the system log to now monitor the new log file:
# tail -f /var/log/syslog-other
(f) Send a system log message from the network namespace to the logging server and verify the message is written
to the new log file:
# ip netns exec netns1 logger -n 10.1.1.2 Test to new remote log server.
Package Management
20.1 Labs
While your Linux distribution may have stress-ng within its packaging system, here we are going to obtain the source using
git, the distributed source control system originally developed for use with the Linux kernel, but now used by literally millions
of projects. We will then compile and install.
If you do not have git installed, do so with your packaging system; modern distributions generally come with it by default.
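The source can be obtained with a clone along these lines (a sketch; the clone URL is an assumption, since the upstream location has changed over time, and the project is currently hosted on GitHub):
$ git clone https://github.com/ColinIanKing/stress-ng.git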
2. Compile it with:
$ cd stress-ng
$ make
You may find the compile fails due to missing headers from an uninstalled development package. For example, on a
RHEL 7 system one might get:
.........
cc -O2 -Wall -Wall -Wextra -DVERSION="0.05.00" -O2 -c -o stress-key.o stress-key.c
stress-key.c:36:22: fatal error: keyutils.h: No such file or directory
#include <keyutils.h>
^
compilation terminated.
make: *** [stress-key.o] Error 1
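In this example the missing header comes from the keyutils development package on RHEL/CentOS 7, so installing it (package name assumed from the header name) lets the build proceed:
$ sudo yum install keyutils-libs-devel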
On other distributions or versions the package names may differ, so happy hunting! This kind of snag is one advantage of
using the pre-packaged software supplied by distributions.
3. Test with:
$ ./stress-ng -c 3 -t 10s -m 4
which ties up the system with 3 CPU hogs and 4 memory stressors (roughly 1 GB of memory at the default of 256 MB per stressor) for 10 seconds.
4. Install with:
$ sudo make install
5. Change directories, test again, and see if the documentation was installed properly:
$ cd /tmp
$ stress-ng -c 3 -t 10s -m 4
$ man stress-ng
Please Note
While some program sources have a make uninstall target, not all do. Thus, uninstalling (removing, cleaning up)
can be a pain, which is one reason packaging systems have been designed and implemented.
We will use a simple hello program package, the source of which is contained in the SOLUTIONS tarball you should have
already obtained from https://training.linuxfoundation.org/cm/LFS211.
Before beginning, you may want to make sure you have the necessary utilities installed with:
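On a Debian-family system, something like the following covers the tools used below (a sketch; the exact package list in the original lab is not shown here):
$ sudo apt-get install build-essential devscripts debhelper dh-make fakeroot
The source tarball contains a layout like this: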
myappdebian-1.0/
myappdebian-1.0/Makefile
myappdebian-1.0/myhello.c
myappdebian-1.0/README
where we have:
README
Some very informative information should go in here :)
and
myhello.c
#include <stdio.h>
#include <stdlib.h>
char hello_string[] = "hello world";
int main ()
{
	printf ("\n%s\n\n", hello_string);
	exit (EXIT_SUCCESS);
}
and
Makefile
BIN = $(DESTDIR)/usr/bin
CFLAGS := -O $(CFLAGS)
TARGET= myhello
SRCFILES= myhello.c
all: $(TARGET)
$(TARGET): $(SRCFILES)
	$(CC) $(CFLAGS) -o $(TARGET) $(SRCFILES)
install: $(TARGET)
	install -d $(BIN)
	install $(TARGET) $(BIN)
clean:
	rm -f $(TARGET)
Please Note
You can certainly construct a simpler Makefile. However, you must have the line:
BIN = $(DESTDIR)/usr/bin
and point the installation to the BIN directory, or the executable will not be installed as part of the package.
The main steps in the following are encapsulated in lab_makedeb.sh, which is also in the LFS211/SOLUTIONS/s_20/ directory
in the SOLUTIONS tarball.
1. Make a working directory and put a copy of the source tarball in there:
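For example (the directory name is arbitrary; a sketch that also expands the tarball, which is the omitted next step):
$ mkdir /tmp/debbuild
$ cp myappdebian-1.0.tar.gz /tmp/debbuild
$ cd /tmp/debbuild
$ tar xvf myappdebian-1.0.tar.gz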
3. Go into the expanded directory and build the package, using dh_make, one of several possible package builder programs:
$ cd myappdebian-1.0
$ dh_make -f ../*myappdebian-1.0.tar.gz
$ dpkg-buildpackage -uc -us
$ ./myhello
hello world
5. Take a good look at all the files in the debian directory and try to imagine building them all by hand!
$ cd ..
$ sudo dpkg --install *.deb
$ myhello
hello world
The script can be used to build the rpm file or as a guide to create the rpm manually.
After installing the newly created rpm file, test the application by running the myhello command.