Vol 2 - Linux Server Administration
[Level - Intermediate] Linux Server Administration
Student Material (Volume 2 - Notes and Workbook)
LINUX
http://microlink.edu.et/
Table of Contents
1. Process management ......................................................................................... 1
1.1. terminology .............................................................................................. 1
1.2. basic process management ....................................................................... 2
1.3. signalling processes ................................................................................. 6
1.4. practice : basic process management ....................................................... 9
1.5. solution : basic process management ..................................................... 10
1.6. priority and nice values ......................................................................... 12
1.7. practice : process priorities .................................................................... 15
1.8. solution : process priorities .................................................................... 16
1.9. background processes ............................................................................ 18
1.10. practice : background processes ........................................................... 20
1.11. solution : background processes ........................................................... 21
2. Disk management ............................................................................................ 23
2.1. hard disk devices ................................................................................... 24
2.2. practice: hard disk devices ..................................................................... 31
2.3. solution: hard disk devices .................................................................... 32
2.4. partitions ................................................................................................. 34
2.5. practice: partitions .................................................................................. 39
2.6. solution: partitions ................................................................................. 40
2.7. file systems ............................................................................................ 41
2.8. practice: file systems .............................................................................. 45
2.9. solution: file systems ............................................................................. 46
2.10. mounting .............................................................................................. 47
2.11. practice: mounting file systems ........................................................... 52
2.12. solution: mounting file systems ........................................................... 53
2.13. uuid and filesystems ............................................................................ 55
2.14. practice: uuid and filesystems .............................................................. 57
2.15. solution: uuid and filesystems .............................................................. 58
2.16. RAID .................................................................................................... 59
2.17. practice: RAID ..................................................................................... 65
3. Logical volume management ......................................................................... 66
3.1. introduction to lvm ................................................................................ 67
3.2. lvm terminology ..................................................................................... 68
3.3. example: using lvm ................................................................................ 69
3.4. example: extend a logical volume ......................................................... 71
3.5. example: resize a physical Volume ....................................................... 73
3.6. example: mirror a logical volume .......................................................... 75
3.7. example: snapshot a logical volume ...................................................... 76
3.8. verifying existing physical volumes ...................................................... 77
3.9. verifying existing volume groups .......................................................... 79
3.10. verifying existing logical volumes ....................................................... 81
3.11. manage physical volumes .................................................................... 82
3.12. manage volume groups ........................................................................ 84
3.13. manage logical volumes ....................................................................... 86
3.14. practice : lvm ....................................................................................... 89
4. Booting the system .......................................................................................... 90
List of Tables
2.1. ide device naming .......................................................................................... 25
2.2. scsi device naming ......................................................................................... 26
2.3. primary, extended and logical partitions ........................................................ 34
2.4. Partition naming ............................................................................................. 34
3.1. disk partitioning example ............................................................................... 67
3.2. LVM Example ............................................................................................... 67
Chapter 1. Process management
Table of Contents
1.1. terminology ...................................................................................................... 1
1.2. basic process management ............................................................................... 2
1.3. signalling processes ......................................................................................... 6
1.4. practice : basic process management ............................................................... 9
1.5. solution : basic process management ............................................................. 10
1.6. priority and nice values ................................................................................. 12
1.7. practice : process priorities ............................................................................ 15
1.8. solution : process priorities ............................................................................ 16
1.9. background processes .................................................................................... 18
1.10. practice : background processes ................................................................... 20
1.11. solution : background processes .................................................................. 21
1.1. terminology
1.1.1. process
A process is compiled source code that is currently running on the system.
1.1.2. PID
All processes have a process id or PID.
1.1.3. PPID
Every process has a parent process (with a PPID). The child process is often started
by the parent process.
1.1.4. init
The init process always has process ID 1. The init process is started by the kernel
itself so technically it does not have a parent process. init serves as a foster parent
for orphaned processes.
1.1.5. kill
When a process stops running, the process dies. When you want a process to die,
you kill it.
1.1.6. daemon
Processes that start at system startup and keep running forever are called daemon
processes or daemons. These daemons never die.
1.1.7. zombie
When a process is killed, but it still shows up on the system, then the process is
referred to as zombie. You cannot kill zombies, because they are already dead.
1.2.2. pidof
You can find all process id's by name using the pidof command.
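A minimal illustration (the process name and the pid numbers are just an assumed example):
[paul@RHEL4b ~]$ pidof mingetty
2745 2744 2743 2742 2741 2740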
When starting a new bash shell you can use echo $$ and echo $PPID to verify that the
pid of the shell from before is now the ppid of the new shell. The child process from
above has become the parent process. Typing exit will end the current process and
bring us back to our original values for $$ and $PPID.
1.2.5. exec
With the exec command, you can execute a process without forking a new process.
In the following screenshot a Korn shell (ksh) is started and is being replaced with a
bash shell using the exec command. The pid of the bash shell is the same as the pid
of the Korn shell. Exiting the child bash shell will get me back to the parent bash,
not to the Korn shell (which does not exist anymore).
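A sketch of such a session (the pid numbers are illustrative):
[paul@RHEL4b ~]$ echo $$
4224
[paul@RHEL4b ~]$ ksh
$ echo $$
4505
$ exec bash
[paul@RHEL4b ~]$ echo $$
4505
[paul@RHEL4b ~]$ exit
exit
[paul@RHEL4b ~]$ echo $$
4224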
1.2.6. ps
One of the most common tools on Linux to look at processes is ps. The following
screenshot shows the parent child relationship between three bash processes.
[paul@RHEL4b ~]$ echo $$ $PPID
4224 4223
[paul@RHEL4b ~]$ bash
[paul@RHEL4b ~]$ echo $$ $PPID
4866 4224
[paul@RHEL4b ~]$ bash
[paul@RHEL4b ~]$ echo $$ $PPID
4884 4866
[paul@RHEL4b ~]$ ps fx
PID TTY STAT TIME COMMAND
4223 ? S 0:01 sshd: paul@pts/0
4224 pts/0 Ss 0:00 \_ -bash
4866 pts/0 S 0:00 \_ bash
4884 pts/0 S 0:00 \_ bash
4902 pts/0 R+ 0:00 \_ ps fx
[paul@RHEL4b ~]$ exit
exit
[paul@RHEL4b ~]$ ps fx
PID TTY STAT TIME COMMAND
4223 ? S 0:01 sshd: paul@pts/0
4224 pts/0 Ss 0:00 \_ -bash
4866 pts/0 S 0:00 \_ bash
4903 pts/0 R+ 0:00 \_ ps fx
[paul@RHEL4b ~]$ exit
exit
[paul@RHEL4b ~]$ ps fx
PID TTY STAT TIME COMMAND
4223 ? S 0:01 sshd: paul@pts/0
4224 pts/0 Ss 0:00 \_ -bash
4904 pts/0 R+ 0:00 \_ ps fx
[paul@RHEL4b ~]$
On Linux, ps fax is often used. On Solaris ps -ef (which also works on Linux) is
common. Here is a partial output from ps fax.
...
1.2.7. pgrep
Similar to ps -C, you can also use pgrep to search for a process by its command
name.
You can also list the command name next to the process id with pgrep -l.
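A short sketch (pids illustrative):
[paul@RHEL4b ~]$ pgrep sleep
9661
[paul@RHEL4b ~]$ pgrep -l sleep
9661 sleep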
1.2.8. top
Another popular tool on Linux is top. The top tool can order processes according to
cpu usage or other properties. You can also kill processes from within top. Press h
inside top for help.
In case of trouble, top is often the first tool to fire up, since it also provides you with
memory and swap space information.
1.3.1. kill
The kill command will kill (or stop) a process. The screenshot shows how to use a
standard kill to stop the process with pid 1942.
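A minimal sketch of such a standard kill (the pid is the one named in the text):
[paul@RHEL4b ~]$ kill 1942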
root@deb503:~# kill -1 1
root@deb503:~#
The kill -1 1 above sends signal 1 (SIGHUP) to the init process; by convention many
daemons reread their configuration file when they receive a SIGHUP. It is up to the
developer of the process to decide whether the process can do this while it keeps
running, or whether it needs to stop and start. It is up to the user to read the
documentation of the program.
1.3.6. killall
The killall command will also default to sending a signal 15 to the processes.
This command and its SysV counterpart killall5 can be used when shutting down
the system. This screenshot shows how Red Hat Enterprise Linux 5.3 uses killall5
when halting the system.
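A hedged sketch (assuming a couple of sleep processes are running):
[paul@RHEL5 ~]$ sleep 600 &
[1] 3552
[paul@RHEL5 ~]$ sleep 600 &
[2] 3553
[paul@RHEL5 ~]$ killall sleep
[1]-  Terminated              sleep 600
[2]+  Terminated              sleep 600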
1.3.7. pkill
You can use the pkill command to kill a process by its command name.
1.3.8. top
Inside top the k key allows you to select a signal and pid to kill. Below is a partial
screenshot of the line just below the summary in top after pressing k.
A suspended process does not use any cpu cycles, but it stays in memory and can be
re-animated with a SIGCONT signal (kill -18 on Linux).
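A hedged example of suspending and resuming a process by pid (the pid is illustrative):
[paul@RHEL5 ~]$ sleep 1000 &
[1] 4589
[paul@RHEL5 ~]$ kill -SIGSTOP 4589
[paul@RHEL5 ~]$ kill -SIGCONT 4589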
4. Using your terminal name from above, use ps to find all processes associated with
your terminal.
9. Display only those two sleep processes in top. Then quit top.
root@rhel53 ~# ps -C init
PID TTY TIME CMD
1 ? 00:00:04 init
root@rhel53 ~# who am i
paul pts/0 2010-04-12 17:44 (192.168.1.38)
4. Using your terminal name from above, use ps to find all processes associated with
your terminal.
or also
In this example the PPID is from the su - command, but when working inside gnome,
gnome-terminal can for example be the parent process.
9. Display only those two sleep processes in top. Then quit top.
top -p pidx,pidy (replace pidx pidy with the actual numbers)
1.6.1. introduction
All processes have a priority and a nice value. Higher priority processes will get
more cpu time than lower priority processes. You can influence this with the nice
and renice commands.
The screenshot shows the creation of four distinct pipes (in a new directory).
The cat is copied with a distinct name to the current directory. (This enables us to
easily recognize the processes within top. You could do the same exercise without
copying the cat command, but using different users. Or you could just look at the pid
of each process.)
The commands sketched below create two proj33 processes that use cat to bounce
the x character between pipe33a and pipe33b. And ditto for the z character and
proj42.
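A hedged reconstruction of those commands (the pipe and process names come from the surrounding text; the rest is an assumption):
[paul@rhel53 ~]$ mkdir pipes ; cd pipes
[paul@rhel53 pipes]$ mkfifo pipe33a pipe33b pipe42a pipe42b
[paul@rhel53 pipes]$ cp /bin/cat proj33
[paul@rhel53 pipes]$ cp /bin/cat proj42
[paul@rhel53 pipes]$ echo -n x | ./proj33 - pipe33a > pipe33b &
[paul@rhel53 pipes]$ ./proj33 <pipe33b >pipe33a &
[paul@rhel53 pipes]$ echo -n z | ./proj42 - pipe42a > pipe42b &
[paul@rhel53 pipes]$ ./proj42 <pipe42b >pipe42a &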
1.6.4. top
Just running top without options or arguments will display all processes and an
overview of information. The top of the top screen might look something like this.
Notice the cpu idle time (0.0%id) is zero. This is because our cat processes are
consuming the whole cpu. Results can vary on systems with four or more cpu cores.
1.6.5. top -p
The top -p 1670,1671,1673,1674 screenshot below shows four processes, all of them
using approximately 25 percent of the cpu.
All four processes have an equal priority (PR), and are battling for cpu time. On
some systems the Linux kernel might attribute slightly varying priority values, but
the result will still be four processes fighting for cpu time.
1.6.6. renice
Since the processes are already running, we need to use the renice command to
change their nice value (NI).
The screenshot shows how to use renice on both the proj33 processes.
Normal users can attribute a nice value from zero to 19 to processes they own. Only
the root user can use negative nice values. Be very careful with negative nice values,
since they can make it impossible to use the keyboard or to ssh into a system.
1.6.8. nice
The nice command works identically to renice, but it is used when starting a command.
The screenshot shows how to start a script with a nice value of five.
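A hedged sketch (the script name is hypothetical):
[paul@rhel53 ~]$ nice -n 5 ./backup.sh &
[1] 4456
[paul@rhel53 ~]$ ps -o pid,ni,comm -p 4456
  PID  NI COMMAND
 4456   5 backup.sh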
3. Use top and ps to display information (pid, ppid, priority, nice value, ...) about
these two cat processes.
4. Bounce another character between two other pipes, but this time start the
commands nice. Verify that all cat processes are battling for the cpu. (Feel free to
fire up two more cats with the remaining pipes).
5. Use ps to verify that the two new cat processes have a nice value. Use the -o and
-C options of ps for this.
6. Use renice to increase the nice value from 10 to 15. Notice the difference with
the usual commands.
3. Use top and ps to display information (pid, ppid, priority, nice value, ...) about
these two cat processes.
top (probably the top two lines)
4. Bounce another character between two other pipes, but this time start the
commands nice. Verify that all cat processes are battling for the cpu. (Feel free to
fire up two more cats with the remaining pipes).
echo -n y | nice cat - p3 > p4 &
nice cat <p4 >p3 &
5. Use ps to verify that the two new cat processes have a nice value. Use the -o and
-C options of ps for this.
[paul@rhel53 pipes]$ ps -C cat -o pid,ppid,pri,ni,comm
PID PPID PRI NI COMMAND
4013 3947 14 0 cat
4016 3947 21 0 cat
4025 3947 13 10 cat
4026 3947 13 10 cat
6. Use renice to increase the nice value from 10 to 15. Notice the difference with
the usual commands.
[paul@rhel53 pipes]$ renice +15 4025
4025: old priority 10, new priority 15
[paul@rhel53 pipes]$ renice +15 4026
1.9.1. jobs
Stuff that runs in the background of your current shell can be displayed with the jobs
command. By default you will not have any jobs running in the background.
root@rhel53 ~# jobs
root@rhel53 ~#
1.9.2. control-Z
Some processes can be suspended with the Ctrl-Z key combination. This makes the
terminal send a SIGTSTP signal to the foreground process, effectively freezing the
operation of the process.
When doing this in vi(m), then vi(m) goes to the background. The background vi(m)
can be seen with the jobs command.
1.9.4. jobs -p
An interesting option is jobs -p to see the process id of background processes.
[paul@RHEL4b ~]$ sleep 500 &
[1] 4902
[paul@RHEL4b ~]$ sleep 400 &
[2] 4903
[paul@RHEL4b ~]$ jobs -p
4902
4903
[paul@RHEL4b ~]$ ps `jobs -p`
PID TTY STAT TIME COMMAND
4902 pts/0 S 0:00 sleep 500
4903 pts/0 S 0:00 sleep 400
[paul@RHEL4b ~]$
1.9.5. fg
Running the fg command will bring a background job to the foreground. The number
of the background job to bring forward is the parameter of fg.
1.9.6. bg
Jobs that are suspended in the background can be resumed in the background with bg.
The bg command will send a SIGCONT signal.
Below an example of the sleep command (suspended with Ctrl-Z) being reactivated
in background with bg.
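A sketch of that example (job and pid numbers illustrative):
[paul@RHEL4b ~]$ sleep 3000
^Z
[1]+  Stopped                 sleep 3000
[paul@RHEL4b ~]$ bg %1
[1]+ sleep 3000 &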
10. (if time permits, a general review question...) Explain in detail where the numbers
come from in the next screenshot. When are the variables replaced by their value ?
By which shell ?
[paul@RHEL4b ~]$ echo $$ $PPID
4224 4223
[paul@RHEL4b ~]$ bash -c "echo $$ $PPID"
4224 4223
[paul@RHEL4b ~]$ bash -c 'echo $$ $PPID'
5059 4224
[paul@RHEL4b ~]$ bash -c `echo $$ $PPID`
4223: 4224: command not found
10. (if time permits, a general review question...) Explain in detail where the numbers
come from in the next screenshot. When are the variables replaced by their value ?
By which shell ?
The current bash shell will replace the $$ and $PPID while scanning the line, and
before executing the echo command.
The variables are now double quoted, but the current bash shell will replace $$ and
$PPID while scanning the line, and before executing the bash -c command.
The variables are now single quoted. The current bash shell will not replace the $$
and the $PPID. The bash -c command will be executed before the variables are replaced
with their value. This latter bash is the one replacing the $$ and $PPID with their
value.
With backticks the shell will still replace both variables before the embedded echo is
executed. The result of this echo is the two process id's. These are given as commands
to bash -c. But two numbers are not commands!
Chapter 2. Disk management
Table of Contents
2.1. hard disk devices ........................................................................................... 24
2.2. practice: hard disk devices ............................................................................. 31
2.3. solution: hard disk devices ............................................................................ 32
2.4. partitions ......................................................................................................... 34
2.5. practice: partitions .......................................................................................... 39
2.6. solution: partitions ......................................................................................... 40
2.7. file systems .................................................................................................... 41
2.8. practice: file systems ..................................................................................... 45
2.9. solution: file systems ..................................................................................... 46
2.10. mounting ...................................................................................................... 47
2.11. practice: mounting file systems ................................................................... 52
2.12. solution: mounting file systems ................................................................... 53
2.13. uuid and filesystems .................................................................................... 55
2.14. practice: uuid and filesystems ...................................................................... 57
2.15. solution: uuid and filesystems ..................................................................... 58
2.16. RAID ............................................................................................................ 59
2.17. practice: RAID ............................................................................................. 65
2.1.1. terminology
Data is written in concentric circles called tracks. Track zero is (usually) on the
inside. The time it takes to position the head over a certain track is called the seek
time. Often the platters are stacked on top of each other, hence the set of tracks
accessible at a certain position of the comb forms a cylinder. Tracks are divided into
512 byte sectors, with more unused space (gap) between the sectors on the outside
of the platter.
When you break down the advertised access time of a hard drive, you will notice
that most of that time is taken by movement of the heads (about 65%) and rotational
latency (about 30%).
block device
Random access hard disk devices have an abstraction layer called block device to
enable formatting in fixed-size (usually 512 bytes) blocks. Blocks can be accessed
independently of other blocks. A block device has the letter b to denote the
file type in the output of ls -l.
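For example (device names and the major and minor numbers will differ per system):
paul@debian5:~$ ls -l /dev/sda /dev/sda1
brw-rw---- 1 root disk 8, 0 2011-02-22 10:12 /dev/sda
brw-rw---- 1 root disk 8, 1 2011-02-22 10:12 /dev/sda1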
Note also that the ISO 9660 standard for cdrom uses a 2048 byte block size.
Old hard disks (and floppy disks) use cylinder-head-sector addressing to access a
sector on the disk. Most current disks use LBA (Logical Block Addressing).
ide or scsi
Actually, the title should be ata or scsi, since ide is an ata compatible device. Most
desktops use ata devices, most servers use scsi.
ata
An ata controller allows two devices per bus, one master and one slave. Unless
your controller and devices support cable select, you have to set this manually with
jumpers.
With the introduction of sata (serial ata), the original ata was renamed to parallel
ata. Optical drives often use atapi, which is an ATA interface using the SCSI
communication protocol.
scsi
A scsi controller allows more than two devices. When using SCSI (small computer
system interface), each device gets a unique scsi id. The scsi controller also needs
a scsi id; do not use this id for a scsi-attached device.
Older 8-bit SCSI is now called narrow, whereas 16-bit is wide. When the bus
speed was doubled to 10MHz, this was known as fast SCSI. Doubling to 20MHz
made it ultra SCSI. Take a look at http://en.wikipedia.org/wiki/SCSI for more SCSI
standards.
It is possible to have only /dev/hda and /dev/hdd. The first one is a single ata hard
disk, the second one is the cdrom (by default configured as slave).
Below is a sample of how scsi devices on a Linux system can be named. Adding a scsi disk or
raid controller with a lower scsi address will change the naming scheme (shifting the
higher scsi addresses one letter further in the alphabet).
/sbin/fdisk
You can start by using /sbin/fdisk to find out what kind of disks are seen by the
kernel. Below is the result on Debian, with two ata-ide disks present.
And here an example of sata disks on a laptop with Ubuntu. Remember that sata
disks are presented to you with the scsi /dev/sdx notation.
Here is an overview of disks on a RHEL4u3 server with two real 72GB scsi disks.
This server is attached to a NAS with four NAS disks of half a terabyte. On the NAS
disks, four software RAID devices (/dev/mdx) are configured.
You can also use fdisk to obtain information about one specific hard disk device.
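A hedged sketch of that command and the first lines it typically prints (numbers illustrative):
root@RHELv4u2:~# fdisk -l /dev/sdb
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders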
Later we will use fdisk to do dangerous stuff like creating and deleting partitions.
/bin/dmesg
Kernel boot messages can be seen after boot with dmesg. Since hard disk devices
are detected by the kernel during boot, you can also use dmesg to find information
about disk devices.
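For example, filtering dmesg for disk devices (the output will differ per system):
root@debian5:~# dmesg | grep -i 'sd[a-z]'
sd 0:0:0:0: [sda] 16777216 512-byte hardware sectors (8590 MB)
sd 0:0:0:0: [sda] Write Protect is off
 sda: sda1 sda2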
Here's another example of dmesg (same computer as above, but with extra 200gb
disk now).
/sbin/lsscsi
The /sbin/lsscsi command will give you a nicely readable output of all scsi (and scsi
emulated) devices. This first screenshot shows lsscsi on a SPARC system.
root@shaka:~# lsscsi
[0:0:0:0] disk Adaptec RAID5 V1.0 /dev/sda
[1:0:0:0] disk SEAGATE ST336605FSUN36G 0438 /dev/sdb
root@shaka:~#
Here is the same command, but run on a laptop with scsi emulated dvd writer and
scsi emulated usb.
paul@laika:~$ lsscsi
[0:0:0:0] disk ATA HTS721010G9SA00 MCZO /dev/sda
[1:0:0:0] disk ATA HTS721010G9SA00 MCZO /dev/sdb
[3:0:0:0] cd/dvd _NEC DVD_RW ND-7551A 1-02 /dev/scd0
[4:0:0:0] disk GENERIC USB Storage-CFC 019A /dev/sdc
[4:0:0:1] disk GENERIC USB Storage-SDC 019A /dev/sdd
[4:0:0:2] disk GENERIC USB Storage-SMC 019A /dev/sde
[4:0:0:3] disk GENERIC USB Storage-MSC 019A /dev/sdf
/proc/scsi/scsi
Another way to locate scsi devices is via the /proc/scsi/scsi file.
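A hedged sketch of what that file typically looks like (vendor and model strings will differ):
root@rhel53 ~# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: VBOX HARDDISK    Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 05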
Another simple tool is scsiinfo which is a part of scsitools (also not installed by
default).
root@deb503:~# scsiinfo -l
/dev/sda /dev/sdb /dev/sdc
Although technically the /sbin/badblocks tool is meant to look for bad blocks, you
can use it to completely erase all data from a disk. Since this is really writing to every
sector of the disk, it can take a long time!
The hdparm tool displays (and can tune) parameters of ata/ide disks. Typical output
of hdparm /dev/sdb and hdparm /dev/hdd looks like this.
/dev/sdb:
IO_support = 0 (default 16-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 12161/255/63, sectors = 195371568, start = 0
/dev/hdd:
multcount = 0 (off)
IO_support = 0 (default)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 24321/255/63, sectors = 390721968, start = 0
2. Use fdisk to find the total size of all hard disk devices on your system.
3. Stop a virtual machine, add three virtual 1 gigabyte scsi hard disk devices and one
virtual 400 megabyte ide hard disk device. If possible, also add another virtual 400
megabyte ide disk.
4. Use dmesg to verify that all the new disks are properly detected at boot-up.
6. Use fdisk (with grep and /dev/null) to display the total size of the new disks.
8. Look at /proc/scsi/scsi.
2. Use fdisk to find the total size of all hard disk devices on your system.
fdisk -l
3. Stop a virtual machine, add three virtual 1 gigabyte scsi hard disk devices and one
virtual 400 megabyte ide hard disk device. If possible, also add another virtual 400
megabyte ide disk.
This exercise happens in the settings of vmware or VirtualBox.
4. Use dmesg to verify that all the new disks are properly detected at boot-up.
See 1.
ATA: ls -l /dev/hd*
6. Use fdisk (with grep and /dev/null) to display the total size of the new disks.
#Verify the device (/dev/sdc??) you want to erase before typing this.
#
root@rhel53 ~# badblocks -ws /dev/sdc
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
8. Look at /proc/scsi/scsi.
root@rhel53 ~# lsscsi
[0:0:2:0] disk VBOX HARDDISK 1.0 /dev/sda
[0:0:3:0] disk VBOX HARDDISK 1.0 /dev/sdb
[0:0:6:0] disk VBOX HARDDISK 1.0 /dev/sdc
2.4. partitions
A partition's geometry and size is usually defined by a starting and ending cylinder
(sometimes by sector). Partitions can be of type primary (maximum four), extended
(maximum one) or logical (contained within the extended partition). Each partition
has a type field that contains a code. This determines the computer's operating system
or the partition's file system.
fdisk -l
In the fdisk -l example below you can see that two partitions exist on /dev/sdb. The
first partition spans 31 cylinders and contains a Linux swap partition. The second
partition is much bigger.
/proc/partitions
The /proc/partitions file contains a table with major and minor number of partitioned
devices, their number of blocks and the device name in /dev. Verify with
/proc/devices to link the major number to the proper device.
3 0 524288 hda
3 64 734003 hdb
8 0 8388608 sda
8 1 104391 sda1
8 2 8281507 sda2
8 16 1048576 sdb
8 32 1048576 sdc
8 48 1048576 sdd
253 0 7176192 dm-0
253 1 1048576 dm-1
The major number corresponds to the device type (or driver) and can be found in
/proc/devices. In this case 3 corresponds to ide and 8 to sd. The major number
determines the device driver to be used with this device.
The minor number is a unique identification of an instance of this device type. The
devices.txt file in the kernel tree contains a full list of major and minor numbers.
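For example, to see which drivers own major numbers 3 and 8 on this system (output illustrative):
root@rhel53 ~# grep -w -e 3 -e 8 /proc/devices
  3 ide0
  8 sd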
other tools
You might be interested in alternatives to fdisk like parted, cfdisk, sfdisk and
gparted. This course mainly uses fdisk to partition hard disks.
First, we check with fdisk -l whether Linux can see the new disk. Yes it does, the
new disk is seen as /dev/sdb, but it does not have any partitions yet.
root@RHELv4u2:~# fdisk -l
Then we create a partition with fdisk on /dev/sdb. First we start the fdisk tool with
/dev/sdb as argument. Be very very careful not to partition the wrong disk!!
Inside the fdisk tool, we can issue the p command to see the current disks partition
table.
We can now issue p again to verify our changes, but they are not yet written to disk.
This means we can still cancel this operation! But it looks good, so we use w to write
the changes to disk, and then quit the fdisk tool.
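A condensed, hedged sketch of such an fdisk session (cylinder numbers are illustrative):
root@RHELv4u2:~# fdisk /dev/sdb
Command (m for help): p
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Command (m for help): p
Command (m for help): w
The partition table has been altered!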
Let's verify again with fdisk -l to make sure reality fits our dreams. Indeed, the
screenshot below now shows a partition on /dev/sdb.
root@RHELv4u2:~# fdisk -l
This example copies the master boot record from the first SCSI hard disk.
dd if=/dev/sda of=/SCSIdisk.mbr bs=512 count=1
The same tool can also be used to wipe out all information about partitions on a disk.
This example writes zeroes over the master boot record.
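A hedged sketch of that command; be absolutely certain of the target device before running it, since it destroys the partition table:
dd if=/dev/zero of=/dev/sda bs=512 count=1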
partprobe
Don't forget that after restoring a master boot record with dd, you need to force
the kernel to reread the partition table with partprobe. After running partprobe, the
partitions can be used again.
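For example (the device name is an assumption):
root@RHELv4u2:~# partprobe /dev/sda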
logical drives
The partition table does not contain information about logical drives. So the dd
backup of the mbr only works for primary and extended partitions. To backup the
partition table including the logical drives, you can use sfdisk.
This example shows how to backup all partition and logical drive information to a file.
sfdisk -d /dev/sda > parttable.sda.sfdisk
The following example copies the mbr and all logical drive info from /dev/sda to
/dev/sdb.
sfdisk -d /dev/sda | sfdisk /dev/sdb
5. Create a 400MB primary partition and two 300MB logical drives on a big disk.
7. Compare the output again of fdisk and df. Do both commands display the new
partitions ?
8. Create a backup with dd of the mbr that contains your 200MB primary partition.
9. Take a backup of the partition table containing your 400MB primary and 300MB
logical drives. Make sure the logical drives are in the backup.
10. (optional) Remove all your partitions with fdisk. Then restore your backups.
5. Create a 400MB primary partition and two 300MB logical drives on a big disk.
Choose one of the disks you added (this example uses /dev/sdb)
fdisk /dev/sdb
inside fdisk : n p 1 +400m enter --- n e 2 enter enter --- n l +300m (twice)
7. Compare the output again of fdisk and df. Do both commands display the new
partitions ?
The newly created partitions are visible with fdisk.
8. Create a backup with dd of the mbr that contains your 200MB primary partition.
dd if=/dev/sdc of=bootsector.sdc.dd count=1 bs=512
9. Take a backup of the partition table containing your 400MB primary and 300MB
logical drives. Make sure the logical drives are in the backup.
sfdisk -d /dev/sdb > parttable.sdb.sfdisk
The properties (length, character set, ...) of filenames are determined by the file
system you choose. Directories are usually implemented as files; you will have to
learn how this is implemented! Access control in file systems is tracked by user
ownership (and group owner- and membership) in combination with one or more
access control lists.
The manual page about filesystems(5) is usually accessed by typing man fs. You can
also look at /proc/filesystems for currently loaded file system drivers.
For a long time the most common Linux file system was ext2 (the second extended)
file system. A disadvantage is that file system checks on ext2 can take a long time.
You will see that ext2 is being replaced by ext3 on most Linux machines. They are
essentially the same, except for the journaling which is only present in ext3.
Journaling means that changes are first written to a journal on the disk. The journal
is flushed regularly, writing the changes in the file system. Journaling keeps the file
system in a consistent state, so you don't need a file system check after an unclean
shutdown or power failure.
You can create these file systems with the /sbin/mkfs or /sbin/mke2fs commands.
Use mke2fs -j to create an ext3 file system. You can convert an ext2 to ext3
with tune2fs -j. You can mount an ext3 file system as ext2, but then you lose the
journaling. Do not forget to run mkinitrd if you are booting from this device.
ext4
Since 2009 the newest incarnation of the ext file system, ext4, is available in the
Linux kernel. ext4 supports larger files (up to 16 terabyte) and larger file systems than
ext3 (and many more features).
vfat
The vfat file system exists in a couple of forms : fat12 for floppy disks, fat16 on ms-
dos, and fat32 for larger disks. The Linux vfat implementation supports all of these,
but vfat lacks a lot of features like security and links. fat disks can be read by every
operating system, and are used a lot for digital cameras, usb sticks and to exchange
data between different operating systems on a home user's computer.
iso 9660
iso 9660 is the standard format for cdroms. Chances are you will encounter this
file system also on your hard disk in the form of images of cdroms (often with
the .iso extension). The iso 9660 standard limits filenames to the 8.3 format. The Unix
world didn't like this, and thus added the rock ridge extensions, which allow for
filenames up to 255 characters and Unix-style file modes, ownership and symbolic
links. Another extension to iso 9660 is joliet, which allows filenames of up to 64
unicode characters. The el torito standard extends iso 9660 to make booting from
CD-ROMs possible.
udf
Most optical media today (including cd's and dvd's) use udf, the Universal Disk
Format.
swap
All things considered, swap is not a file system. But to use a partition as swap
space, it must be initialised with mkswap and activated with swapon.
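A minimal sketch (the partition name is an assumption):
mkswap /dev/sdb3      # initialise the partition as swap space
swapon /dev/sdb3      # activate it
swapon -s             # list active swap areas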
others...
You might encounter reiserfs on older Linux systems. Maybe you will see Sun's zfs,
or one of the dozen other file systems available.
It is time for you to read the manual pages of mkfs and mke2fs. In the example below,
you see the creation of an ext2 file system on /dev/sdb1. In real life, you might want
to use options like -m0 and -j.
A percentage of every ext2/ext3 file system is reserved for the root user; this example
changes that value to ten percent. You can use tune2fs while the file system is active,
even if it is the root file system (as in this example).
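A hedged sketch of the command (the device is an assumption):
root@RHELv4u2:~# tune2fs -m 10 /dev/mapper/VolGroup00-LogVol00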
The last column in /etc/fstab is used to determine whether a file system should be
checked at boot-up.
A file system check on a mounted file system will be refused. But after unmounting,
fsck and e2fsck can be used to check an ext2 file system.
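A minimal sketch (the device name is an assumption):
umount /dev/sdb1
fsck /dev/sdb1        # or: e2fsck -p /dev/sdb1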
3. Create an ext3 filesystem on the 400MB partition and one of the 300MB logical
drives.
4. Set the reserved space for root on the logical drive to 0 percent.
cat /proc/filesystems
3. Create an ext3 filesystem on the 400MB partition and one of the 300MB logical
drives.
mke2fs -j /dev/sdb1 (replace sdb1 with the correct partition)
4. Set the reserved space for root on the logical drive to 0 percent.
tune2fs -m 0 /dev/sdb5
2.10. mounting
Once you've put a file system on a partition, you can mount it. Mounting a file system
makes it available for use, usually as a directory. We say mounting a file system
instead of mounting a partition because we will see later that we can also mount file
systems that do not exist on partitions.
/bin/mkdir
This example shows how to create a new mount point with mkdir.
/bin/mount
When the mount point is created, and a file system is present on the partition, then
mount can mount the file system on the mount point directory.
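A hedged sketch of both steps (device and directory are assumptions, chosen to match the df example further down):
root@RHELv4u2:~# mkdir /home/project55
root@RHELv4u2:~# mount -t ext2 /dev/sdb1 /home/project55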
/etc/filesystems
Actually the explicit -t ext2 option to set the file system is not always necessary. The
mount command is able to automatically detect a lot of file systems.
When mounting a file system without explicitly specifying the file system type, then
mount will first probe /etc/filesystems. Mount will skip lines with the nodev
directive.
iso9660
vfat
hfs
paul@RHELv4u4:~$
/proc/filesystems
When /etc/filesystems does not exist, or ends with a single * on the last line, then
mount will read /proc/filesystems.
/bin/mount
The simplest and most common way to view all mounts is by issuing the mount
command without any arguments.
/proc/mounts
The kernel provides the info in /proc/mounts in file form, but /proc/mounts does not
exist as a file on any hard disk. Looking at /proc/mounts is looking at information
that comes directly from the kernel.
/etc/mtab
The /etc/mtab file is not updated by the kernel, but is maintained by the mount
command. Do not edit /etc/mtab manually.
/bin/df
A more user friendly way to look at mounted file systems is df. The df (diskfree)
command has the added benefit of showing you the free space on each mounted disk.
Like a lot of Linux commands, df supports the -h switch to make the output more
human readable.
root@RHELv4u2:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
11707972 6366996 4746240 58% /
/dev/sda1 101086 9300 86567 10% /boot
none 127988 0 127988 0% /dev/shm
/dev/sdb1 108865 1550 101694 2% /home/project55
root@RHELv4u2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
12G 6.1G 4.6G 58% /
/dev/sda1 99M 9.1M 85M 10% /boot
none 125M 0 125M 0% /dev/shm
/dev/sdb1 107M 1.6M 100M 2% /home/project55
In the df -h example above you can see the size, used and available space, use
percentage and mount point of each mounted partition.
/bin/du
The du command can summarize disk usage for files and directories. Preventing du
from descending into subdirectories with the -s option will give you a total for that directory.
This option is often used together with -h, so du -sh on a mount point gives the total
amount used in that partition.
/etc/fstab
Mounting file systems automatically at boot time is configured in the file system table located in the /etc/fstab file. Below is a sample
/etc/fstab file.
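A hedged sketch of what such a file can look like (device names and mount points are illustrative):
/dev/VolGroup00/LogVol00  /                 ext3   defaults   1 1
LABEL=/boot               /boot             ext3   defaults   1 2
none                      /dev/shm          tmpfs  defaults   0 0
none                      /proc             proc   defaults   0 0
/dev/VolGroup00/LogVol01  swap              swap   defaults   0 0
/dev/sdb1                 /home/project55   ext2   defaults   0 0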
By adding a line like the /home/project55 line above, we can automate the mounting of a file system.
mount /mountpoint
Adding an entry to /etc/fstab has the added advantage that you can simplify the
mount command. The command in the screenshot below forces mount to look for
the partition info in /etc/fstab.
# mount /home/project55
ro
The ro option will mount a file system as read only, preventing anyone from writing.
root@rhel53 ~# mount -t ext2 -o ro /dev/hdb1 /home/project42
root@rhel53 ~# touch /home/project42/testwrite
touch: cannot touch `/home/project42/testwrite': Read-only file system
noexec
The noexec option will prevent the execution of binaries and scripts on the mounted
file system.
root@rhel53 ~# mount -t ext2 -o noexec /dev/hdb1 /home/project42
root@rhel53 ~# cp /bin/cat /home/project42
root@rhel53 ~# /home/project42/cat /etc/hosts
-bash: /home/project42/cat: Permission denied
root@rhel53 ~# echo echo hello > /home/project42/helloscript
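Presumably the demonstration continues along these lines (a hedged sketch):
root@rhel53 ~# chmod +x /home/project42/helloscript
root@rhel53 ~# /home/project42/helloscript
-bash: /home/project42/helloscript: Permission denied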
nosuid
The nosuid option makes the kernel ignore the setuid bit on binaries on the mounted
file system. Note that you can still set the setuid bit on files.
root@rhel53 ~# mount -o nosuid /dev/hdb1 /home/project42
root@rhel53 ~# cp /bin/sleep /home/project42/
root@rhel53 ~# chmod 4555 /home/project42/sleep
root@rhel53 ~# ls -l /home/project42/sleep
-r-sr-xr-x 1 root root 19564 Jun 24 17:57 /home/project42/sleep
root@rhel53 ~# su - paul
[paul@rhel53 ~]$ /home/project42/sleep 500 &
[1] 2876
[paul@rhel53 ~]$ ps -f 2876
UID PID PPID C STIME TTY STAT TIME CMD
paul 2876 2853 0 17:58 pts/0 S 0:00 /home/project42/sleep 500
[paul@rhel53 ~]$
noacl
To prevent cluttering permissions with acl's, use the noacl option.
root@rhel53 ~# mount -o noacl /dev/hdb1 /home/project42
2. Mount the big 400MB primary partition on /mnt, then copy some files to it
(everything in /etc). Then umount, and mount the file system as read only on
/srv/nfs/salesnumbers. Where are the files you copied ?
3. Verify your work with fdisk, df and mount. Also look in /etc/mtab and
/proc/mounts.
5. What happens when you mount a file system on a directory that contains some
files ?
6. What happens when you mount two file systems on the same mount point ?
7. (optional) Describe the difference between these file searching commands: find,
locate, updatedb, whereis, apropos and which.
2. Mount the big 400MB primary partition on /mnt, then copy some files to it
(everything in /etc). Then umount, and mount the file system as read only on
/srv/nfs/salesnumbers. Where are the files you copied ?
mount /dev/sdb1 /mnt
cp -r /etc /mnt
ls -l /mnt
umount /mnt
ls -l /mnt
mkdir -p /srv/nfs/salesnumbers
mount /dev/sdb1 /srv/nfs/salesnumbers
3. Verify your work with fdisk, df and mount. Also look in /etc/mtab and
/proc/mounts.
fdisk -l
df -h
mount
All three of the above commands should show your mounted partitions.
5. What happens when you mount a file system on a directory that contains some
files ?
The files are hidden until umount.
6. What happens when you mount two file systems on the same mount point ?
Only the last mounted fs is visible.
7. (optional) Describe the difference between these file searching commands: find,
locate, updatedb, whereis, apropos and which.
man is your friend
/sbin/vol_id
Below we use the vol_id utility to display the uuid of an ext3 file system.
/lib/udev/vol_id
Red Hat Enterprise Linux 5 puts vol_id in /lib/udev/vol_id, which is not in the
$PATH. The syntax is also a bit different from Debian/Ubuntu.
/sbin/tune2fs
We can also use tune2fs to find the uuid of a file system.
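For example (the device name is an assumption and the uuid shown is illustrative):
root@rhel53 ~# tune2fs -l /dev/sda1 | grep UUID
Filesystem UUID:          11cfc8bc-07c0-4c3f-9f64-78422ef1dd5c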
Then we check that it is properly added to /etc/fstab; the uuid replaces the variable
device name /dev/sdc1.
Now we can mount the volume using the mount point defined in /etc/fstab.
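A hedged sketch, reusing the fstab line and mount point that appear in the practice solution later in this chapter:
root@rhel53 ~# tail -1 /etc/fstab
UUID=60926898-2c78-49b4-a71d-c1d6310c87cc /home/pro42 ext3 defaults 0 0
root@rhel53 ~# mount /home/pro42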
The real test now is to remove /dev/sdb from the system, reboot the machine and
see what happens. After the reboot, the disk previously known as /dev/sdc is now
/dev/sdb.
And thanks to the uuid in /etc/fstab, the mountpoint is mounted on the same disk
as before.
The screenshot above contains only four lines. The line starting with root= is the
continuation of the kernel line.
2. Use this uuid in /etc/fstab and test that it works with a simple mount.
3. (optional) Test it also by removing a disk (so the device name is changed). You
can edit settings in vmware/Virtualbox to remove a hard disk.
4. Display the root= directive in /boot/grub/menu.lst. (We see later in the course
how to maintain this file.)
2. Use this uuid in /etc/fstab and test that it works with a simple mount.
tail -1 /etc/fstab
UUID=60926898-2c78-49b4-a71d-c1d6310c87cc /home/pro42 ext3 defaults 0 0
3. (optional) Test it also by removing a disk (so the device name is changed). You
can edit settings in vmware/Virtualbox to remove a hard disk.
4. Display the root= directive in /boot/grub/menu.lst. (We see later in the course
how to maintain this file.)
paul@deb503:~$ grep ^[^#] /boot/grub/menu.lst | grep root=
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/hda1 ro selinux=1 quiet
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/hda1 ro selinux=1 single
2.16. RAID
RAID 0
RAID 0 uses two or more disks, and is often called striping (or stripe set, or striped
volume). Data is divided in chunks, those chunks are evenly spread across every disk
in the array. The main advantage of RAID 0 is that you can create larger drives.
RAID 0 is the only RAID without redundancy.
JBOD
JBOD uses two or more disks, and is often called concatenating (spanning, spanned
set, or spanned volume). Data is written to the first disk, until it is full. Then data is
written to the second disk... The main advantage of JBOD (Just a Bunch of Disks) is
that you can create larger drives. JBOD offers no redundancy.
RAID 1
RAID 1 uses exactly two disks, and is often called mirroring (or mirror set, or
mirrored volume). All data written to the array is written on each disk. The main
advantage of RAID 1 is redundancy. The main disadvantage is that you lose at least
half of your available disk space (in other words, you at least double the cost).
RAID 2, 3 and 4 ?
RAID 2 uses bit level striping, RAID 3 byte level, and RAID 4 is the same as RAID 5,
but with a dedicated parity disk. This is actually slower than RAID 5, because every
write would have to write parity to this one (bottleneck) disk. It is unlikely that you
will ever see these RAID levels in production.
RAID 5
RAID 5 uses three or more disks, each divided into chunks. Every time chunks are
written to the array, one of the disks will receive a parity chunk. Unlike RAID 4,
the parity chunk will alternate between all disks. The main advantage of this is that
RAID 5 will allow for full data recovery in case of one hard disk failure.
RAID 6
RAID 6 is very similar to RAID 5, but uses two parity chunks. RAID 6 protects
against two hard disk failures.
RAID 0+1
RAID 0+1 is a mirror(1) of stripes(0). This means you first create two RAID 0 stripe
sets, and then you set them up as a mirror set. For example, when you have six 100GB
disks, then the stripe sets are each 300GB. Combined in a mirror, this makes 300GB
total. RAID 0+1 will survive one disk failure. It will only survive the second disk
failure if this disk is in the same stripe set as the previous failed disk.
RAID 1+0
RAID 1+0 is a stripe(0) of mirrors(1). For example, when you have six 100GB disks,
then you first create three mirrors of 100GB each. You then stripe them together into
a 300GB drive. In this example, as long as not all disks in the same mirror fail, it can
survive up to three hard disk failures.
RAID 50
RAID 5+0 is a stripe(0) of RAID 5 arrays. Suppose you have nine disks of 100GB,
then you can create three RAID 5 arrays of 200GB each. You can then combine them
into one large stripe set.
many others
There are many other nested RAID combinations, like RAID 30, 51, 60, 100, 150, ...
First, you have to attach some disks to your computer. In this scenario, three brand
new disks of one gigabyte each are added. Check with fdisk -l that they are connected.
root@RHELv4u2:~# fdisk -l
So far so good! The next step is to create a partition of type fd on every disk. The fd
type sets the partition to Linux raid autodetect, as this screenshot shows.
Now all three disks are ready for RAID, so we have to tell the system what to do
with these disks.
root@RHELv4u2:~# fdisk -l
The next step used to be creating the RAID table in /etc/raidtab. Nowadays, you can
just issue the command mdadm with the correct parameters. The command below
is split on two lines to fit this print, but you should type it on one line, without the
backslash (\).
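A hedged reconstruction of such an mdadm command, split over two lines with a backslash (device names match the scenario of three 1GB scsi disks; the chunk size is an assumption):
root@RHELv4u2:~# mdadm --create /dev/md0 --chunk=64 --level=5 \
--raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1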
root@RHELv4u2:~# fdisk -l
<cut>
We will use this software RAID 5 array in the next topic, LVM.
2.16.4. /proc/mdstat
The status of the raid devices can be seen in /proc/mdstat. This example shows a
RAID 5 in the process of rebuilding.
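A hedged sketch of what that can look like while the array rebuilds (all numbers illustrative):
root@RHELv4u2:~# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2097024 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=============>.......]  recovery = 66.8% (701136/1048512) finish=0.1min
unused devices: <none>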
2. Create a software RAID 5 on the three disks. (It is not necessary to put a filesystem
on it)
4. (optional) Stop and remove the RAID, unless you want to use it in the next chapter
LVM.
Chapter 3. Logical volume management
Table of Contents
3.1. introduction to lvm ........................................................................................ 67
3.2. lvm terminology ............................................................................................. 68
3.3. example: using lvm ........................................................................................ 69
3.4. example: extend a logical volume ................................................................. 71
3.5. example: resize a physical Volume ............................................................... 73
3.6. example: mirror a logical volume .................................................................. 75
3.7. example: snapshot a logical volume .............................................................. 76
3.8. verifying existing physical volumes .............................................................. 77
3.9. verifying existing volume groups .................................................................. 79
3.10. verifying existing logical volumes ............................................................... 81
3.11. manage physical volumes ............................................................................ 82
3.12. manage volume groups ................................................................................ 84
3.13. manage logical volumes ............................................................................... 86
3.14. practice : lvm ............................................................................................... 89
In the example above, consider the options when you want to enlarge the space
available for /project42. What can you do ? The solution will always force you to
unmount the filesystem, take a backup of the data, remove and recreate partitions,
and then restore the data and remount the file system.
Physical storage grouping is a fancy name for grouping multiple physical devices
(hard disks) into a logical mass storage device. To enlarge this physical group, hard
disks or even single partitions can be added at a later time. The size of lvm volumes
on this physical group is independent of the individual size of the components. The
total size of the group is the limit.
One of the nicest features of lvm is the logical volume resizing. You can increase the
size of an lvm volume, sometimes even without any downtime. Additionally, you
can migrate data away from a failing hard disk device.
First thing to do, is create physical volumes that can join the volume group with
pvcreate. This command makes a disk or partition available for use in Volume
Groups. The screenshot shows how to present the SCSI Disk device to LVM.
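A minimal sketch (the device name is an assumption):
root@RHEL4:~# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created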
Note for home users: lvm will work fine when using the complete disk, but another
operating system on the same computer will not recognize lvm and will mark the disk
as being empty! You can avoid this by creating a partition that spans the whole disk,
then run pvcreate on the partition instead of the disk.
Then vgcreate creates a volume group using one device. Note that more devices
could be added to the volume group.
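A hedged sketch of that step, together with the lvcreate that must have followed it (the volume group name vg matches the /dev/vg/lvol0 path mentioned below; the size is illustrative):
root@RHEL4:~# vgcreate vg /dev/sdb
  Volume group "vg" successfully created
root@RHEL4:~# lvcreate --size 500m vg
  Logical volume "lvol0" created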
The logical volume /dev/vg/lvol0 can now be formatted with ext2, and mounted for
normal use.
A logical volume is very similar to a partition, it can be formatted with a file system,
and can be mounted so users can access it.
The fdisk command shows us newly added scsi-disks that will serve our lvm volume.
This volume will then be extended. First, take a look at these disks.
You already know how to partition a disk; below, the first disk is partitioned (in one
big primary partition) and the second disk is left untouched.
You also know how to prepare disks for lvm with pvcreate, and how to create a
volume group with vgcreate. This example adds both the partitioned disk and the
untouched disk to the volume group named vg2.
You can use pvdisplay to verify that both the disk and the partition belong to the
volume group.
And you are familiar both with the lvcreate command to create a small logical volume
and the mke2fs command to put ext2 on it.
As you see, we end up with a mounted logical volume that according to df is almost
200 megabyte in size.
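A hedged sketch of the extension step (the volume group vg2 comes from the text; the logical volume name and sizes are assumptions):
root@RHELv4u2:~# lvextend -L +100M /dev/vg2/lvol0
  Extending logical volume lvol0 to 300.00 MB
  Logical volume lvol0 successfully resized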
But as you can see, there is a small problem: it appears that df is not able to display
the extended volume in its full size. This is because the filesystem is only set for the
size of the volume before the extension was added.
With lvdisplay however we can see that the volume is indeed extended.
To finish the extension, you need resize2fs to span the filesystem over the full size
of the logical volume.
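A minimal sketch (same assumed names as above):
root@RHELv4u2:~# resize2fs /dev/vg2/lvol0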
Now we can use pvcreate to create the Physical Volume, followed by pvs to verify
the creation.
The next step is to use fdisk to enlarge the partition (actually deleting it and then
recreating /dev/sde1 with more cylinders).
When we now use fdisk and pvs to verify the size of the partition and the Physical
Volume, then there is a size difference. LVM is still using the old size.
Executing pvresize on the Physical Volume will make lvm aware of the size change
of the partition. The correct size can be displayed with pvs.
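A minimal sketch (device name from the text):
root@RHEL5 ~# pvresize /dev/sde1
  Physical volume "/dev/sde1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized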
Then we create the Volume Group and verify again with pvs. Notice how the three
physical volumes now belong to vg33, and how the size is rounded down (in steps
of the extent size, here 4MB).
The last step is to create the Logical Volume with lvcreate. Notice the -m 1 switch to
create one mirror. Notice also the change in free space in all three Physical Volumes!
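A hedged sketch of that command (the volume group vg33 is named in the text; the size and the logical volume name are assumptions):
root@RHEL5 ~# lvcreate --size 300m -m 1 -n lvmir vg33
  Logical volume "lvmir" created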
You can see the copy status of the mirror with lvs. It currently shows a 100 percent
copy.
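A hedged sketch of how such a snapshot can be created (the volume group name vg42 and the snapshot size are assumptions; the logical volume names come from the text):
root@RHEL5 ~# lvcreate -L 100m -s -n snapLV vg42/bigLV
  Logical volume "snapLV" created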
You can see with lvs that the snapshot snapLV is indeed a snapshot of bigLV.
Moments after taking the snapshot, there are few changes to bigLV (0.02 percent).
But after using bigLV for a while, more changes are done. This means the snapshot
volume has to keep more original data (10.22 percent).
You can now use regular backup tools (dump, tar, cpio, ...) to take a backup of the
snapshot Logical Volume. This backup will contain all data as it existed on bigLV
at the time the snapshot was taken. When the backup is done, you can remove the
snapshot.
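A hedged sketch of such a backup run (the mount point and archive path are hypothetical):
root@RHEL5 ~# mount -o ro /dev/vg42/snapLV /mnt/snap
root@RHEL5 ~# tar czf /srv/backup/bigLV.tar.gz -C /mnt/snap .
root@RHEL5 ~# umount /mnt/snap
root@RHEL5 ~# lvremove vg42/snapLV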
3.8.1. lvmdiskscan
To get a list of block devices that can be used with LVM, use lvmdiskscan. The
example below uses grep to limit the result to SCSI devices.
3.8.2. pvs
The easiest way to verify whether devices are known to lvm is with the pvs command.
The screenshot below shows that only /dev/sda2 is currently known for use with
LVM. It shows that /dev/sda2 is part of Volgroup00 and is almost 16GB in size. It
also shows /dev/sdc and /dev/sdd as part of vg33. The device /dev/sdb is known to
lvm, but not linked to any Volume Group.
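A hedged reconstruction of that pvs output (sizes illustrative, devices as described in the text):
[root@RHEL5 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup00 lvm2 a-   15.88G     0
  /dev/sdb              lvm2 --    1.00G  1.00G
  /dev/sdc   vg33       lvm2 a-    1.00G  1.00G
  /dev/sdd   vg33       lvm2 a-    1.00G  1.00G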
3.8.3. pvscan
The pvscan command will scan all disks for existing Physical Volumes. The
information is similar to pvs, plus you get a line with total sizes.
3.8.4. pvdisplay
Use pvdisplay to get more information about physical volumes. You can also use
pvdisplay without an argument to display information about all physical (lvm)
volumes.
3.9.1. vgs
Similar to pvs is the use of vgs to display a quick overview of all volume groups.
There is only one volume group in the screenshot below, it is named VolGroup00
and is almost 16GB in size.
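A hedged sketch of that output:
[root@RHEL5 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 15.88G    0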
3.9.2. vgscan
The vgscan command will scan all disks for existing Volume Groups. It will also
update the /etc/lvm/.cache file. This file contains a list of all current lvm devices.
LVM will run the vgscan automatically at boot-up, so if you add hot swap devices,
then you will need to run vgscan to update /etc/lvm/.cache with the new devices.
3.9.3. vgdisplay
The vgdisplay command will give you more detailed information about a volume
group (or about all volume groups if you omit the argument).
3.10.1. lvs
Use lvs for a quick look at all existing logical volumes. Below you can see two logical
volumes named LogVol00 and LogVol01.
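A hedged sketch (sizes illustrative):
[root@RHEL5 ~]# lvs
  LV       VG         Attr   LSize
  LogVol00 VolGroup00 -wi-ao 14.88G
  LogVol01 VolGroup00 -wi-ao  1.00G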
3.10.2. lvscan
The lvscan command will scan all disks for existing Logical Volumes.
3.10.3. lvdisplay
More detailed information about logical volumes is available through the
lvdisplay(1) command.
3.11.1. pvcreate
Use the pvcreate command to add devices to lvm. This example shows how to add
a disk (or hardware RAID device) to lvm.
You can also add multiple disks or partitions as target to pvcreate. This example adds
three disks to lvm.
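Sketched as commands (the device names /dev/sdb and /dev/sde, /dev/sdf, /dev/sdg are only illustrative), those two examples would look like this:
[root@RHEL5 ~]# pvcreate /dev/sdb
[root@RHEL5 ~]# pvcreate /dev/sde /dev/sdf /dev/sdg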
3.11.2. pvremove
Use the pvremove command to remove physical volumes from lvm. The devices
may not be in use.
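For example, to remove the /dev/sdb Physical Volume again (assuming it is not in use):
[root@RHEL5 ~]# pvremove /dev/sdb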
3.11.3. pvresize
When you used fdisk to resize a partition on a disk, then you must use pvresize to
make lvm recognize the new size of the physical volume that represents this partition.
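For example, after growing the first partition on /dev/sde (the partition name is only illustrative):
[root@RHEL5 ~]# pvresize /dev/sde1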
3.11.4. pvchange
With pvchange you can prevent the allocation of a Physical Volume in a new Volume
Group or Logical Volume. This can be useful if you plan to remove a Physical
Volume.
To revert your previous decision, this example shows you how to re-enable the
Physical Volume to allow allocation.
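As a sketch, with /dev/sdd as an illustrative device, disallowing and then re-allowing allocation looks like this:
[root@RHEL5 ~]# pvchange -xn /dev/sdd
[root@RHEL5 ~]# pvchange -xy /dev/sdd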
3.11.5. pvmove
With pvmove you can move the data of Logical Volumes from one Physical Volume to
another within the same Volume Group. This must be done before removing a Physical
Volume.
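For example, to move all data away from an illustrative /dev/sdd (lvm spreads it over the other Physical Volumes in the Volume Group):
[root@RHEL5 ~]# pvmove /dev/sdd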
3.12.1. vgcreate
Use the vgcreate command to create a volume group. You can immediately name all
the physical volumes that span the volume group.
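For example, to create a Volume Group named vg42 on two Physical Volumes (device names are only illustrative):
[root@RHEL5 ~]# vgcreate vg42 /dev/sde /dev/sdf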
3.12.2. vgextend
Use the vgextend command to extend an existing volume group with a physical
volume.
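For example, to add an illustrative /dev/sdg Physical Volume to vg42:
[root@RHEL5 ~]# vgextend vg42 /dev/sdg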
3.12.3. vgremove
Use the vgremove command to remove volume groups from lvm. The volume groups
may not be in use.
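For example (assuming no Logical Volumes are left in it):
[root@RHEL5 ~]# vgremove vg42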
3.12.4. vgreduce
Use the vgreduce command to remove a Physical Volume from the Volume Group.
The following example adds Physical Volume /dev/sdg to the vg1 Volume Group
using vgextend. And then removes it again using vgreduce.
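In commands, that example would be:
[root@RHEL5 ~]# vgextend vg1 /dev/sdg
[root@RHEL5 ~]# vgreduce vg1 /dev/sdg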
3.12.5. vgchange
Use the vgchange command to change parameters of a Volume Group.
This example shows how to prevent Physical Volumes from being added or removed
to the Volume Group vg1.
You can also use vgchange to change most other properties of a Volume Group. This
example changes the maximum number of Logical Volumes and maximum number
of Physical Volumes that vg1 can serve.
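As a sketch, the first example uses the -x (resizeable) switch and the second the -l and -p limits (the values 16 and 8 are only illustrative):
[root@RHEL5 ~]# vgchange -xn vg1
[root@RHEL5 ~]# vgchange -l16 -p8 vg1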
3.12.6. vgmerge
Merging two Volume Groups into one is done with vgmerge. The following example
merges vg2 into vg1, keeping all the properties of vg1.
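In a command (vg2 must be inactive before it can be merged):
[root@RHEL5 ~]# vgmerge vg1 vg2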
3.13.1. lvcreate
Use the lvcreate command to create Logical Volumes in a Volume Group. This
example creates an 8GB Logical Volume in Volume Group vg42.
As you can see, lvm automatically names the Logical Volume lvol0. The next
example creates a 200MB Logical Volume named MyLV in Volume Group vg42.
The next example does the same thing, but with different syntax.
This example creates a Logical Volume that occupies 10 percent of the Volume
Group.
This example creates a Logical Volume that occupies 30 percent of the remaining
free space in the Volume Group.
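Sketched as commands (the Logical Volume names other than MyLV are only illustrative), those examples correspond to something like:
[root@RHEL5 ~]# lvcreate --size 8G vg42
[root@RHEL5 ~]# lvcreate --size 200M -n MyLV vg42
[root@RHEL5 ~]# lvcreate -L 200M -n MyLV2 vg42
[root@RHEL5 ~]# lvcreate -l 10%VG -n MyLV3 vg42
[root@RHEL5 ~]# lvcreate -l 30%FREE -n MyLV4 vg42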
3.13.2. lvremove
Use the lvremove command to remove Logical Volumes from a Volume Group.
Removing a Logical Volume requires the name of the Volume Group.
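For example, to remove MyLV from vg42:
[root@RHEL5 ~]# lvremove vg42/MyLV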
Removing multiple Logical Volumes will request confirmation for each individual
volume.
3.13.3. lvextend
Extending the volume is easy with lvextend. This example extends a 200MB Logical
Volume by 100MB.
The next example creates a 100MB Logical Volume, and then extends it to 500MB.
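Sketched as commands (Logical Volume names are only illustrative):
[root@RHEL5 ~]# lvextend -L +100M /dev/vg42/MyLV
[root@RHEL5 ~]# lvcreate -L 100M -n extLV vg42
[root@RHEL5 ~]# lvextend -L 500M /dev/vg42/extLV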
3.13.4. lvrename
Renaming a Logical Volume is done with lvrename. This example renames extLV
to bigLV in the vg42 Volume Group.
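In a command:
[root@RHEL5 ~]# lvrename vg42 extLV bigLV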
2. Create two logical volumes (a small one and a bigger one) in this volumegroup.
Format them with ext3, mount them and copy some files to them.
3. Verify usage with fdisk, mount, pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay and
df. Does fdisk give you any information about lvm?
4. Enlarge the small logical volume by 50 percent, and verify your work!
5. Take a look at other commands that start with vg* , pv* or lv*.
9. Create a snapshot of a Logical Volume, take a backup of the snapshot. Then delete
some files on the Logical Volume, then restore your backup.
10. Move your volume group to another disk (keep the Logical Volumes mounted).
11. If time permits, split a Volume Group with vgsplit, then merge it again with
vgmerge.
Chapter 4. Booting the system
Table of Contents
4.1. boot terminology ............................................................................................ 90
4.2. grub ................................................................................................................ 93
4.3. lilo .................................................................................................................. 98
4.4. practice : bootloader ....................................................................................... 99
4.5. solution : bootloader .................................................................................... 100
4.1.1. post
A computer starts booting the moment you turn on the power (no kidding). This first
process is called post or power on self test. If all goes well then this leads to the bios.
If all goes not so well, then you might hear nothing, or hear beeping, or see an error
message on the screen, or maybe see smoke coming out of the computer (burning
hardware smells bad!).
4.1.2. bios
All Intel x86 computers will have a basic input/output system or bios to detect,
identify and initialize hardware. The bios then goes looking for a boot device. This
can be a floppy, hard disk, cdrom, network card or usb drive.
During the bios you can see a message on the screen telling you which key (often
Del or F2) to press to enter the bios setup.
4.1.3. openboot
Sun sparc systems start with openboot to test the hardware and to boot the operating
system. Bill Calkins explains openboot in his Solaris System Administration books.
The details of openboot are not the focus of this course.
The mbr is 512 bytes long and can be copied with dd.
4.1.7. bootloader
The mbr is executed by the bios and contains either (a small) bootloader or code
to load a bootloader.
Looking at the mbr with od can reveal information about the bootloader.
4.1.8. kernel
The goal of all this is to load an operating system, or rather the kernel of an operating
system. A typical bootloader like grub will copy a kernel from hard disk to memory,
and will then hand control of the computer to the kernel (execute the kernel).
Once the Linux kernel is loaded, the bootloader turns control over to it. From that
moment on, the kernel is in control of the system. After discussing bootloaders, we
continue with the init system that starts all the daemons.
4.2. grub
One of the big advantages of grub over lilo is the capability to change the
configuration during boot (by pressing e to edit the boot command line).
4.2.2. /boot/grub/menu.lst
grub's configuration file is called menu.lst and is located in /boot/grub. The
screenshot below shows the location and size of menu.lst on Debian.
root@barry:~# ls -l /boot/grub/menu.lst
-rw-r--r-- 1 root root 5155 2009-03-31 18:20 /boot/grub/menu.lst
4.2.3. /boot/grub/grub.conf
Some distributions like Red Hat Enterprise Linux 5 use grub.conf and provide
menu.lst as a symbolic link to it. It is the same file, only the name differs. Notice
also in this screenshot that this file is a lot smaller on Red Hat.
default
The default command sets a default entry to start. The first entry has number 0.
default 0
fallback
In case the default does not boot, use the fallback entry instead.
fallback 1
timeout
The timeout will wait a number of seconds before booting the default entry.
timeout 5
hiddenmenu
The hiddenmenu will hide the grub menu unless the user presses Esc before the
timeout expires.
hiddenmenu
title
With title we can start a new entry or stanza.
password
You can add a password to prevent interactive selection of a boot environment while
grub is running.
grub> md5crypt
Password: ********
Encrypted: $1$Ec.id/$T2C2ahI/EG3WRRsmmu/HN/
boot
Technically the boot command is only mandatory when running the grub command
line. This command does not have any parameters and can only be set as the last
command of a stanza.
boot
kernel
The kernel command points to the location of the kernel. To boot Linux this means
pointing it to a compressed kernel image (zImage or bzImage).
This screenshot shows a typical kernel command used to load a Debian kernel.
initrd
Many Linux installations will need an initial ramdisk at boot time. This can be set
in grub with the initrd command.
initrd /boot/initrd.img-2.6.17-2-686
initrd /initrd-2.6.18-128.el5.img
root
The root command accepts the root device as a parameter.
The root command will point to the hard disk and partition to use, with hd0 as the
first hard disk device and hd1 as the second hard disk device. The same numbering
is used for partitions, so hd0,0 is the first partition on the first disk and hd0,1 is the
second partition on that disk.
root (hd0,0)
savedefault
The savedefault command can be used together with default saved as a menu
command. This combination will set the currently booted stanza as the next default
stanza to boot.
default saved
timeout 10
title Linux
root (hd0,0)
kernel /boot/vmlinuz
savedefault
title DOS
root (hd0,1)
makeactive
chainloader +1
savedefault
4.2.6. chainloading
With grub booting, there are two choices: loading an operating system or
chainloading another bootloader. The chainloading feature of grub loads the
bootsector of a partition (that contains an operating system).
Some older operating systems require a primary partition that is set as active. Only
one partition can be set active so grub can do this on the fly just before chainloading.
This screenshot shows how to set the first primary partition active with grub.
root (hd0,0)
makeactive
One such kernel parameter, useful when you have lost the root password, is single.
This will boot the kernel in single user mode (although some distributions will still
require you to type the root password).
# grub-install /dev/hda
4.3. lilo
4.3.2. lilo.conf
Here is an example of a typical lilo.conf file. The delay switch receives a number in
tenths of a second. So the delay below is three seconds, not thirty!
boot = /dev/hda
delay = 30
image = /boot/vmlinuz
root = /dev/hda1
label = Red Hat 5.2
image = /boot/vmlinuz
root = /dev/hda2
label = S.U.S.E. 8.0
other = /dev/hda4
table = /dev/hda
label = MS-DOS 6.22
The configuration file shows three example stanzas. The first one boots Red Hat from
the first partition on the first disk (hda1). The second stanza boots Suse 8.0 from the
next partition. The last one loads MS-DOS.
2. Add a stanza in grub for the 3.0 files. Make sure the title is different.
cd /boot
cp vmlinuz-2.6.18-8.el5 vmlinuz-3.0
cp initrd-2.6.18-8.el5.img initrd-3.0.img
cp System.map-2.6.18-8.el5 System.map-3.0
2. Add a stanza in grub for the 3.0 files. Make sure the title is different.
Chapter 5. init
Table of Contents
5.1. about sysv init .............................................................................................. 101
5.2. system init(ialization) ................................................................................... 101
5.3. daemon or demon ? ...................................................................................... 106
5.4. starting and stopping daemons ..................................................................... 106
5.5. chkconfig ...................................................................................................... 106
5.6. update-rc.d .................................................................................................... 108
5.7. bum ............................................................................................................... 109
5.8. runlevels ....................................................................................................... 110
5.9. practice: init ................................................................................................. 113
5.10. solution : init .............................................................................................. 114
Init starts daemons by using scripts, where each script starts one daemon, and where
each script waits for the previous script to finish. This serial process of starting
daemons is slow, and although slow booting is not a problem on servers where
uptime is measured in years, the recent uptake of Linux on the desktop results in user
complaints.
To improve Linux startup speed, Canonical has developed upstart, which was
first used in Ubuntu. Solaris also used init up to Solaris 9; for Solaris 10, Sun
developed the Service Management Facility. Both systems start daemons in parallel
and can replace the SysV init scripts. There is also an ongoing effort to create initng
(init next generation).
5.2.1. process id 1
The kernel receives system control from the bootloader. After a while the kernel starts
the init daemon. The init daemon (/sbin/init) is the first daemon that is started and
receives process id 1 (PID 1). Init never dies.
5.2.3. initdefault
The value found in initdefault indicates the default runlevel. Some Linux
distributions have a brief description of runlevels in /etc/inittab, like here on Red Hat
Enterprise Linux 4.
/etc/rc.d/rc.sysinit
The next line in /etc/inittab in Red Hat and derivatives is the following.
si::sysinit:/etc/rc.d/rc.sysinit
This means that independent of the selected runlevel, init will run the /etc/rc.d/
rc.sysinit script. This script initializes hardware, sets some basic environment,
populates /etc/mtab while mounting file systems, starts swap and more.
[paul@rhel ~]$ egrep -e"^# Ini" -e"^# Sta" -e"^# Che" /etc/rc.d/rc.sysinit
# Check SELinux status
# Initialize hardware
# Start the graphical boot, if necessary; /usr may not be mounted yet...
# Initialiaze ACPI bits
# Check filesystems
# Start the graphical boot, if necessary and not done yet.
# Check to see if SELinux requires a relabel
# Initialize pseudo-random number generator
# Start up swapping.
# Initialize the serial ports.
That egrep command could also have been written with grep like this :
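For example (using grep with multiple -e options):
[paul@rhel ~]$ grep -e "^# Ini" -e "^# Sta" -e "^# Che" /etc/rc.d/rc.sysinit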
/etc/init.d/rcS
Debian has the following line after initdefault.
si::sysinit:/etc/init.d/rcS
The /etc/init.d/rcS script will always run on Debian (independent of the selected
runlevel). The script is actually running all scripts in the /etc/rcS.d/ directory in
alphabetical order.
exec /etc/init.d/rc S
5.2.5. rc scripts
Init will continue to read /etc/inittab and encounters this section on Debian Linux
(the second block below shows the Red Hat equivalent).
l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
In both cases, this means that init will start the rc script with the runlevel as the
only parameter. Actually /etc/inittab has fields separated by colons. The second field
determines the runlevel in which this line should be executed. So in both cases, only
one line of the seven will be executed, depending on the runlevel set by initdefault.
5.2.6. rc directories
When you take a look at any of the /etc/rcX.d/ directories, you will see a lot of
(links to) scripts whose names start with either uppercase K or uppercase S.
The /etc/rcX.d/ directories only contain links to scripts in /etc/init.d/. Links allow
for the script to have a different name. When entering a runlevel, all scripts that start
with uppercase K or uppercase S will be run in alphabetical order. Those that start
with K will be run first, with stop as the only parameter. The remaining scripts,
starting with S, will then be run with start as the only parameter.
All this is done by the /etc/rc.d/rc script on Red Hat and by the /etc/init.d/rc script
on Debian.
5.2.7. mingetty
mingetty in /etc/inittab
Almost at the end of /etc/inittab there is a section to start and respawn several
mingetty daemons.
The /bin/login program will verify whether that user exists in /etc/passwd and prompt for (and
verify) a password. If the password is correct, /bin/login passes control to the shell
listed in /etc/passwd.
respawning mingetty
The mingetty daemons are started by init and watched until they die (user exits the
shell and is logged out). When this happens, the init daemon will respawn a new
mingetty. So even if you kill a mingetty daemon, it will be restarted automatically.
This example shows that init respawns mingetty daemons. Look at the PID's of the
last two mingetty processes.
When we kill the last two mingettys, then init will notice this and start them again
(with a different PID).
disabling a mingetty
You can disable a mingetty for a certain tty by removing the runlevel from the
second field in its line in /etc/inittab. Don't forget to tell init about the change of its
configuration file with kill -1 1.
The example below shows how to disable mingetty on tty3 to tty6 in runlevels 4 and 5.
6:23:respawn:/sbin/mingetty tty6
Unix daemons are not to be confused with demons. Evi Nemeth, co-author of the
UNIX System Administration Handbook has the following to say about daemons:
Many people equate the word "daemon" with the word "demon", implying some
kind of satanic connection between UNIX and the underworld. This is an egregious
misunderstanding. "Daemon" is actually a much older form of "demon"; daemons
have no particular bias towards good or evil, but rather serve to help define a person's
character or personality. The ancient Greeks' concept of a "personal daemon" was
similar to the modern concept of a "guardian angel" ....
You can achieve the same result on RHEL/Fedora with the service command.
5.5. chkconfig
The purpose of chkconfig is to relieve system administrators of manually managing
all the links and scripts in /etc/init.d and /etc/rcX.d/.
When you compare the screenshot above with the one below, you can see that off
corresponds to a K link to the script, whereas on corresponds to an S link.
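As a sketch, using the crond service as an illustrative example, such a comparison could be made like this:
[root@RHEL5 ~]# chkconfig --list crond
[root@RHEL5 ~]# chkconfig crond off
[root@RHEL5 ~]# chkconfig --list crond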
5.6. update-rc.d
When there are existing links in /etc/rcX.d/, update-rc.d does not do anything.
This prevents post-installation scripts that use update-rc.d from overwriting changes
made by a system administrator.
As you can see in the next screenshot, nothing changed for the cron daemon.
5.7. bum
This screenshot shows bum in advanced mode.
5.8. runlevels
The runlevel command is typical for Linux and will output the previous and the
current runlevel. If there was no previous runlevel, it will mark it with the letter N.
The who -r command dates back to Seventies Unix and still works on Linux.
This screenshot shows how to switch from runlevel 2 to runlevel 3 without reboot.
root@barry:~# runlevel
N 2
root@barry:~# init 3
root@barry:~# runlevel
2 3
5.8.3. /sbin/shutdown
The shutdown command is used to properly shut down a system.
Common switches used with shutdown are -a, -t, -h and -r.
This screenshot shows how to use shutdown with five seconds between TERM and
KILL signals.
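A sketch of such a command (halting the system; the prompt is only illustrative):
[root@RHEL5 ~]# shutdown -t5 -h now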
The now is the time argument. This can be +m for the number of minutes to wait
before shutting down (with now as an alias for +0). The command will also accept
hh:mm instead of +m.
When in runlevel 0 or 6, halt, reboot and poweroff will tell the kernel to halt, reboot
or power off the system.
When not in runlevel 0 or 6, typing reboot as root actually calls the shutdown
command with the -r switch and typing poweroff will switch off the power when
halting the system.
5.8.5. /var/log/wtmp
halt, reboot and poweroff all write to /var/log/wtmp. To look at /var/log/wtmp, we
need to use the last command.
5.8.6. Ctrl-Alt-Del
When rc is finished starting all those scripts, init will continue to read /etc/inittab.
The next line is about what to do when the user hits Ctrl-Alt-Delete on the keyboard.
This is very similar to the default Red Hat Enterprise Linux 5.2 action.
2. Use the Red Hat Enterprise Linux virtual machine. Go to runlevel 5, display the
current and previous runlevel, then go back to runlevel 3.
3. Is the sysinit script on your computers setting or changing the PATH environment
variable ?
5. Write a script that acts like a daemon script in /etc/init.d/. It should have a case
statement to act on start/stop/restart and status. Test the script!
6. Use chkconfig to setup your script to start in runlevels 3,4 and 5, and to stop in
any other runlevel.
Killing the mingettys will result in init respawning them. You can edit /etc/inittab
so it looks like the screenshot below. Don't forget to also run kill -1 1.
2. Use the Red Hat Enterprise Linux virtual machine. Go to runlevel 5, display the
current and previous runlevel, then go back to runlevel 3.
3. Is the sysinit script on your computers setting or changing the PATH environment
variable ?
5. Write a script that acts like a daemon script in /etc/init.d/. It should have a case
statement to act on start/stop/restart and status. Test the script!
#!/bin/bash
#
# chkconfig: 345 99 01
# description: pold demo script
#
# /etc/init.d/pold
case "$1" in
start)
echo -n "Starting pold..."
sleep 1;
touch /var/lock/subsys/pold
echo "done."
echo pold started >> /var/log/messages
;;
stop)
echo -n "Stopping pold..."
sleep 1;
rm -rf /var/lock/subsys/pold
echo "done."
echo pold stopped >> /var/log/messages
;;
*)
echo "Usage: /etc/init.d/pold {start|stop}"
exit 1
;;
esac
exit 0
The touch /var/lock/subsys/pold is mandatory and must be the same filename as the
script name, if you want the stop sequence (the K01pold link) to be run.
6. Use chkconfig to setup your script to start in runlevels 3,4 and 5, and to stop in
any other runlevel.
chkconfig --add pold
The command above will only work when the # chkconfig: and # description: lines
in the pold script are there.
Chapter 6. Linux Kernel
Table of Contents
6.1. about the Linux kernel ................................................................................. 116
6.2. Linux kernel source ..................................................................................... 118
6.3. kernel boot files ........................................................................................... 122
6.4. Linux kernel modules .................................................................................. 123
6.5. compiling a kernel ....................................................................................... 127
6.6. compiling one module ................................................................................. 130
Major Linux kernel versions used to come in even and odd numbers. Versions 2.0,
2.2, 2.4 and 2.6 are considered stable kernel versions. Whereas 2.1, 2.3 and 2.5 were
unstable (read development) versions. Since the release of 2.6.0 in January 2004, all
development has been done in the 2.6 tree. There is currently no v2.7.x and according
to Linus the even/stable vs odd/development scheme is abandoned forever.
6.1.2. uname -r
To see your current Linux kernel version, issue the uname -r command as shown
below.
This first example shows Linux major version 2.6 and minor version 24. The rest
(-22-generic) is specific to the distribution (Ubuntu in this case).
paul@laika:~$ uname -r
2.6.24-22-generic
The same command on Red Hat Enterprise Linux shows an older kernel (2.6.18) with
-92.1.17.el5 being specific to the distribution.
6.1.3. /proc/cmdline
The parameters that were passed to the kernel at boot time are in /proc/cmdline.
Some distributions prevent the use of this feature (at kernel compile time).
6.1.5. init=/bin/bash
Normally the kernel invokes init as the first daemon process. Adding init=/bin/bash
to the kernel parameters will instead invoke bash (again with root logged on without
providing a password).
6.1.6. /var/log/messages
The kernel reports during boot to syslog which writes a lot of kernel actions in /var/
log/messages. Looking at this file reveals when the kernel was started, including all
the devices that were detected at boot time.
This example shows how to use /var/log/messages to see kernel information about
/dev/sda.
6.1.7. dmesg
The dmesg command prints out all the kernel bootup messages (from the last boot).
Thus to find information about /dev/sda, using dmesg will yield only kernel messages
from the last boot.
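For example:
paul@laika:~$ dmesg | grep sda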
6.2.1. ftp.kernel.org
The home of the Linux kernel source is ftp.kernel.org. It contains all official releases
of the Linux kernel source code from 1991. It provides free downloads over http, ftp
and rsync of all these releases, as well as changelogs and patches. More information
can be obtained on the website www.kernel.org.
All the Linux kernel versions are located in the pub/linux/kernel/ directory.
ftp> ls pub/linux/kernel/v*
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
drwxrwsr-x 2 536 536 4096 Mar 20 2003 v1.0
drwxrwsr-x 2 536 536 20480 Mar 20 2003 v1.1
drwxrwsr-x 2 536 536 8192 Mar 20 2003 v1.2
drwxrwsr-x 2 536 536 40960 Mar 20 2003 v1.3
drwxrwsr-x 3 536 536 16384 Feb 08 2004 v2.0
drwxrwsr-x 2 536 536 53248 Mar 20 2003 v2.1
drwxrwsr-x 3 536 536 12288 Mar 24 2004 v2.2
drwxrwsr-x 2 536 536 24576 Mar 20 2003 v2.3
drwxrwsr-x 5 536 536 28672 Dec 02 08:14 v2.4
drwxrwsr-x 4 536 536 32768 Jul 14 2003 v2.5
drwxrwsr-x 7 536 536 110592 Dec 05 22:36 v2.6
226 Directory send OK.
ftp>
6.2.2. /usr/src
On your local computer, the kernel source is located in /usr/src. Note though that
the structure inside /usr/src might be different depending on the distribution that you
are using.
First let's take a look at /usr/src on Debian. There appear to be two versions of the
complete Linux source code there. Looking for a specific file (e1000_main.c) with
find reveals its exact location.
paul@barry:~$ ls -l /usr/src/
drwxr-xr-x 20 root root 4096 2006-04-04 22:12 linux-source-2.6.15
drwxr-xr-x 19 root root 4096 2006-07-15 17:32 linux-source-2.6.16
paul@barry:~$ find /usr/src -name e1000_main.c
/usr/src/linux-source-2.6.15/drivers/net/e1000/e1000_main.c
/usr/src/linux-source-2.6.16/drivers/net/e1000/e1000_main.c
This is very similar to /usr/src on Ubuntu, except there is only one kernel here (and
it is newer).
paul@laika:~$ ls -l /usr/src/
We will have to dig a little deeper to find the kernel source on Red Hat!
Debian
Installing the kernel source on Debian is really simple with aptitude install linux-
source. You can do a search for all linux-source packages first, like in this screenshot.
And then use aptitude install to download and install the Debian Linux kernel source
code.
When aptitude is finished, you will see a new file named /usr/src/linux-source-
<version>.tar.bz2
root@barry:/usr/src# ls -lh
drwxr-xr-x 20 root root 4.0K 2006-04-04 22:12 linux-source-2.6.15
drwxr-xr-x 19 root root 4.0K 2006-07-15 17:32 linux-source-2.6.16
-rw-r--r-- 1 root root 45M 2008-12-02 10:56 linux-source-2.6.24.tar.bz2
Ubuntu
Ubuntu is based on Debian and also uses aptitude, so the task is very similar.
root@laika:~# ll /usr/src
total 45M
-rw-r--r-- 1 root root 45M 2008-11-24 23:30 linux-source-2.6.24.tar.bz2
To download the kernel source on RHEL, use this long wget command (on one line,
without the trailing \).
wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/\
SRPMS/kernel-`uname -r`.src.rpm
When the wget download is finished, you end up with a 60M .rpm file.
[root@RHEL52 src]# ll
total 60M
-rw-r--r-- 1 root root 60M Dec 5 20:54 kernel-2.6.18-92.1.17.el5.src.rpm
drwxr-xr-x 5 root root 4.0K Dec 5 19:23 kernels
drwxr-xr-x 7 root root 4.0K Oct 11 13:22 redhat
We will need to perform some more steps before this can be used as kernel source
code.
[root@RHEL52 src]# ll
total 60M
-rw-r--r-- 1 root root 60M Dec 5 20:54 kernel-2.6.18-92.1.17.el5.src.rpm
drwxr-xr-x 5 root root 4.0K Dec 5 19:23 kernels
drwxr-xr-x 7 root root 4.0K Oct 11 13:22 redhat
[root@RHEL52 src]# rpm -i kernel-2.6.18-92.1.17.el5.src.rpm
The rpmbuild command put the RHEL Linux kernel source code in /usr/src/redhat/
BUILD/kernel-<version>/.
6.3.1. vmlinuz
The vmlinuz file in /boot is the compressed kernel.
6.3.2. initrd
The kernel uses an initrd (an initial RAM disk) at boot time. The initrd is loaded by
the bootloader and mounted as a temporary root file system before the real root file
system is available; it can contain additional drivers and modules. It is a compressed
cpio archive, so you can look at the contents in this way.
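A sketch, assuming a gzip-compressed cpio initrd as on RHEL5 (the file name depends on your kernel version):
[root@RHEL52 ~]# zcat /boot/initrd-2.6.18-92.1.17.el5.img | cpio -it | head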
6.3.3. System.map
The System.map contains the symbol table and changes with every kernel compile.
The symbol table is also present in /proc/kallsyms (pre 2.6 kernels name this file /
proc/ksyms).
6.3.4. .config
The last file copied to the /boot directory is the kernel configuration used for
compilation. This file is not necessary in the /boot directory, but it is common practice
to put a copy there. It allows you to recompile a kernel, starting from the same
configuration as an existing working one.
6.4.2. /lib/modules
The modules are stored in the /lib/modules/<kernel-version> directory. There is a
separate directory for each kernel that was compiled for your system.
paul@laika:~$ ll /lib/modules/
total 12K
drwxr-xr-x 7 root root 4.0K 2008-11-10 14:32 2.6.24-16-generic
drwxr-xr-x 8 root root 4.0K 2008-12-06 15:39 2.6.24-21-generic
drwxr-xr-x 8 root root 4.0K 2008-12-05 12:58 2.6.24-22-generic
6.4.3. <module>.ko
The files containing kernel modules usually end in .ko. This screenshot shows the
location of the isdn module files.
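For example (the kernel version will differ on your system):
root@barry:~# find /lib/modules/$(uname -r) -name 'isdn*.ko'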
6.4.4. lsmod
To see a list of currently loaded modules, use lsmod. You see the name of each loaded
module, the size, the use count, and the names of other modules using this one.
6.4.5. /proc/modules
/proc/modules lists all modules loaded by the kernel. The output would be too long
to display here, so let's grep for the vm modules.
We see that vmmon and vmnet are both loaded. You can display the same information
with lsmod. Actually lsmod only reads and reformats the output of /proc/modules.
6.4.7. insmod
Kernel modules can be manually loaded with the insmod command. This is a very
simple (and obsolete) way of loading modules. The screenshot shows insmod loading
the fat module (for fat file system support).
root@barry:/lib/modules/2.6.17-2-686# pwd
/lib/modules/2.6.17-2-686
root@barry:/lib/modules/2.6.17-2-686# lsmod | grep fat
root@barry:/lib/modules/2.6.17-2-686# insmod kernel/fs/fat/fat.ko
root@barry:/lib/modules/2.6.17-2-686# lsmod | grep fat
fat 46588 0
insmod does not detect dependencies, so it fails to load the isdn module (because the
isdn module depends on the slhc module).
6.4.8. modinfo
As you can see in the screenshot of modinfo below, the isdn module depends on the
slhc module.
6.4.9. modprobe
The big advantage of modprobe over insmod is that modprobe will load all
necessary modules, whereas insmod requires manual loading of dependencies.
Another advantage is that you don't need to supply the full path to the module file.
This screenshot shows how modprobe loads the isdn module, automatically loading
slhc first.
6.4.10. /lib/modules/<kernel>/modules.dep
Module dependencies are stored in modules.dep.
6.4.11. depmod
The modules.dep file can be updated (recreated) with the depmod command. In this
screenshot no modules were added, so depmod generates the same file.
root@barry:/lib/modules/2.6.17-2-686# ls -l modules.dep
-rw-r--r-- 1 root root 310676 2008-03-01 16:32 modules.dep
root@barry:/lib/modules/2.6.17-2-686# depmod
root@barry:/lib/modules/2.6.17-2-686# ls -l modules.dep
-rw-r--r-- 1 root root 310676 2008-12-07 13:54 modules.dep
6.4.12. rmmod
Similar to insmod, the rmmod command is rarely used anymore.
6.4.13. modprobe -r
Contrary to rmmod, modprobe will automatically remove unneeded modules.
6.4.14. /etc/modprobe.conf
The /etc/modprobe.conf file and the /etc/modprobe.d directory can contain aliases
(used by humans) and options (for dependent modules) for modprobe.
6.5.1. extraversion
Enter into /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9/ and change the
extraversion in the Makefile.
6.5.3. .config
Now copy a working .config from /boot to our kernel directory. This file contains the
configuration that was used for your current working kernel. It determines whether
modules are included in compilation or not.
This command will end with telling you the location of the bzImage file (and with
timing info if you also specified the time command).
real 13m59.573s
user 1m22.631s
sys 11m51.034s
[root@RHEL52 linux-2.6.18.i686]#
You can already copy this image to /boot with cp arch/i386/boot/bzImage /boot/
vmlinuz-<kernel-version>.
And here is the same directory after. Notice that make modules_install created a
new directory for the new kernel.
6.5.9. /boot
We still need to copy the kernel, the System.map and our configuration file to /boot.
Strictly speaking the .config file is not obligatory, but it might help you in future
compilations of the kernel.
[root@RHEL52 ]# pwd
/usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.i686
[root@RHEL52 ]# cp System.map /boot/System.map-2.6.18-paul2008
[root@RHEL52 ]# cp .config /boot/config-2.6.18-paul2008
[root@RHEL52 ]# cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.18-paul2008
6.5.10. mkinitrd
The kernel often uses an initrd file at bootup. We can use mkinitrd to generate this
file. Make sure you use the correct kernel name!
[root@RHEL52 ]# pwd
/usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.i686
[root@RHEL52 ]# mkinitrd /boot/initrd-2.6.18-paul2008 2.6.18-paul2008
6.5.11. bootloader
Compilation is now finished; don't forget to create an additional stanza in grub or lilo.
6.6.1. hello.c
A little C program that will be our module.
#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)
{
printk(KERN_INFO "Start Hello World...\n");
return 0;
}
void cleanup_module(void)
{
printk(KERN_INFO "End Hello World... \n");
}
6.6.2. Makefile
The Makefile for this module.
[root@rhel4a kernel_module]# ll
total 16
-rw-rw-r-- 1 paul paul 250 Feb 15 19:14 hello.c
-rw-rw-r-- 1 paul paul 153 Feb 15 19:15 Makefile
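The Makefile itself is small (153 bytes above). A minimal kbuild Makefile for an out-of-tree module like this one could look as follows; this is a sketch, assuming the kernel build tree is available under /lib/modules/$(uname -r)/build, and the indented command lines must start with a tab:
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean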
6.6.3. make
The running of the make command.
[root@rhel4a kernel_module]# ll
total 172
-rw-rw-r-- 1 paul paul 250 Feb 15 19:14 hello.c
-rw-r--r-- 1 root root 64475 Feb 15 19:15 hello.ko
-rw-r--r-- 1 root root 632 Feb 15 19:15 hello.mod.c
-rw-r--r-- 1 root root 37036 Feb 15 19:15 hello.mod.o
-rw-r--r-- 1 root root 28396 Feb 15 19:15 hello.o
-rw-rw-r-- 1 paul paul 153 Feb 15 19:15 Makefile
[root@rhel4a kernel_module]#
6.6.4. hello.ko
Use modinfo to verify that it is really a module.
Chapter 7. Introduction to network sniffing
Table of Contents
7.1. about sniffing ............................................................................................... 133
7.2. wireshark ...................................................................................................... 133
7.3. tcpdump ........................................................................................................ 135
7.4. practice: network sniffing ............................................................................ 136
7.5. solution: network sniffing ............................................................................ 137
7.2. wireshark
On some distributions only root is allowed to sniff the network. You might need to
use sudo wireshark.
You can combine two protocols with a logical or between them. The example below
shows how to filter only arp and bootp (or dhcp) packets.
This example shows how to filter for dns traffic containing a certain ip address.
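As sketches, the corresponding display filters could be typed in the wireshark filter box like this (the ip-address is only illustrative):
arp or bootp
dns and ip.addr == 192.168.1.38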
7.3. tcpdump
Sniffing on the command line can be done with tcpdump. Here are some examples.
Using the tcpdump host $ip command displays all traffic with one host
(192.168.1.38 in this example).
Capturing only ssh (tcp port 22) traffic can be done with tcpdump tcp port $port.
This screenshot is cropped to 76 characters for readability in the pdf.
Same as above, but write the output to a file with the tcpdump -w $filename
command.
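Sketched as commands (the ip-address comes from the example above; the file name is only illustrative):
root@laika:~# tcpdump host 192.168.1.38
root@laika:~# tcpdump tcp port 22
root@laika:~# tcpdump -w sshdump.tcpdump tcp port 22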
4. Display only the ping echoes in the top pane using a filter.
5. Now ping to a name (like www.linux-training.be) and try to sniff the DNS query
and response. Which DNS server was used ? Was it a tcp or udp query and response ?
4. Display only the ping echoes in the top pane using a filter.
type 'icmp' (without quotes) in the filter box, and then click 'apply'
5. Now ping to a name (like www.linux-training.be) and try to sniff the DNS query
and response. Which DNS server was used ? Was it a tcp or udp query and response ?
First start the sniffer.
The details in wireshark will say the DNS query was inside a udp packet.
Chapter 8. Introduction to networking
Table of Contents
8.1. introduction to computer networks .............................................................. 139
8.2. about tcp/ip .................................................................................................. 144
8.3. practice : about tcp/ip ................................................................................... 146
8.4. solution : about tcp/ip .................................................................................. 147
8.5. using tcp/ip ................................................................................................... 148
8.6. practice : using tcp/ip ................................................................................... 154
8.7. solution : using tcp/ip ................................................................................... 155
8.8. multiple ip-addresses .................................................................................... 157
8.9. practice : multiple ip-addresses .................................................................... 157
8.10. solution : multiple ip-addresses .................................................................. 158
8.11. multihomed hosts ....................................................................................... 159
8.12. practice : multihomed hosts ....................................................................... 161
8.13. solution : multihomed hosts ....................................................................... 162
8.14. introduction to iptables .............................................................................. 164
8.15. practice : iptables ....................................................................................... 165
8.16. solution : iptables ....................................................................................... 166
8.17. xinetd and inetd ......................................................................................... 167
8.18. practice : inetd and xinetd .......................................................................... 169
8.19. openssh ....................................................................................................... 170
8.20. practice: ssh ................................................................................................ 174
8.21. network file system .................................................................................... 175
8.22. practice : network file system .................................................................... 177
When talking about protocol layers, people usually mention the seven layers of the
osi protocol (Application, Presentation, Session, Transport, Network, Data Link and
Physical). We will discuss layers 2 and 3 in depth, and focus less on the other layers.
The reason is that these layers are important for understanding networks. You will
hear administrators use words like "this is a layer 2 device" or "this is a layer 3
broadcast", and you should be able to understand what they are talking about.
The DoD (or tcp/ip) model has only four layers, roughly mapping its network access
layer to OSI layers 1 and 2 (Physical and Datalink), its internet (IP) layer to the
OSI network layer, its host-to-host (tcp, udp) layer to OSI layer 4 (transport) and
its application layer to OSI layers 5, 6 and 7.
Below is an attempt to put OSI and DoD layers next to some protocols and devices.
Devices like repeaters and hubs are part of this layer. You cannot use software to
'see' a repeater or hub on the network. The only thing these devices are doing is
amplifying electrical signals on cables. Passive hubs are multiport amplifiers that
amplify an incoming electrical signal on all other connections. Active hubs do this
by reading and retransmitting bits, without interpreting any meaning in those bits.
Network technologies like csma/cd and token ring are defined on this layer.
On this layer we find devices like bridges and switches. A bridge is more intelligent
than a hub because a bridge can make decisions based on the mac address of
computers. A switch also understands mac addresses.
In this book we will discuss commands like arp and ifconfig to explore this layer.
On this layer we find devices like routers and layer 3 switches, devices that know
(and have) an ip address.
Sniffing for ntp (Network Time Protocol) packets gives us this line, which makes us
conclude to put ntp next to bootp in the protocol chart below.
Sniffing an arp broadcast makes us put arp next to ip. All these protocols are
explained later in this chapter.
8.1.3. tcp/ip
In the Sixties development of the tcp/ip protocol stack was started by the US
Department of Defense. In the Eighties a lot of commercial enterprises developed
their own protocol stack: IBM created sna, Novell had ipx/spx, Microsoft completed
netbeui and Apple worked with appletalk. All the efforts from the Eighties failed to
survive the Nineties. By the end of the Nineties, almost all computers in the world
were able to speak tcp/ip.
In my humble opinion, the main reason for the survival of tcp/ip over all the other
protocols is its openness. Everyone is free to develop and use the tcp/ip protocol suite.
The official website for the rfc's is http://www.rfc-editor.org. This website contains
all rfc's in plain text, for example rfc2132 (which defines dhcp and bootp) is accessible
at http://www.rfc-editor.org/rfc/rfc2132.txt.
Careful, a layer 2 broadcast is very different from a layer 3 broadcast. A layer two
broadcast is received by all network cards on the same segment (it does not pass any
router), whereas a layer 3 broadcast is received by all hosts in the same ip subnet.
The origin of the internet is the arpanet. The arpanet was created in 1969; that year
only four computers were connected to the network. In 1971 e-mail was invented,
taking 75 percent of all arpanet traffic in 1973. 1973 was the year ftp was introduced,
and also saw the connection of the first European countries (Norway and UK). In
2009 the internet is available to 25 percent of the world population.
An intranet is a private internet. An intranet uses the same protocols as the internet,
but is only accessible to people from within one organization.
In a couple of years we will all be using ipv6! At least, that is what people say since
1999...
These protocols are visible in the protocol field of the ip header, and are listed in the
/etc/protocols file.
8.2.4. arp
The ip to mac resolution is handled by the layer two broadcast protocol arp. The
arp table can be displayed with the arp tool.
root@barry:~# arp -a
? (192.168.1.191) at 00:0C:29:3B:15:80 [ether] on eth1
agapi (192.168.1.73) at 00:03:BA:09:7F:D2 [ether] on eth1
anya (192.168.1.1) at 00:12:01:E2:87:FB [ether] on eth1
faith (192.168.1.41) at 00:0E:7F:41:0D:EB [ether] on eth1
kiss (192.168.1.49) at 00:D0:E0:91:79:95 [ether] on eth1
laika (192.168.1.40) at 00:90:F5:4E:AE:17 [ether] on eth1
Anya is a Cisco Firewall, Faith is an HP Color printer, Kiss is a Kiss DP600, laika is
a Clevo laptop and Agapi, Shaka and Pasha are SPARC servers. The question mark
is a Red Hat Enterprise Linux server running in vmware.
8.2.5. hostname
Every host receives a hostname, usually placed in a DNS name space forming the
fqdn or Fully Qualified Domain Name.
root@rhel6 ~# hostname
rhel6
root@rhel6 ~# hostname --fqdn
rhel6.classroom.local
8.2.6. ip services
Common application level protocols like SMTP, HTTP, SSH, telnet and FTP have
fixed port numbers. To find a port number, look in /etc/services.
2. Explain why e-mail and websites are sent over tcp and not udp.
2. Explain why e-mail and websites are sent over tcp and not udp.
Because tcp is reliable and udp is not.
Now that we settled this, let's take a look at the files, commands and scripts that
configure your network.
8.5.2. /sbin/ifconfig
You can use the ifconfig command to see the tcp/ip configuration of a network
interface. The first ethernet network card on linux is eth0.
[root@RHEL4b ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:3B:15:80
inet addr:192.168.1.191 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe3b:1580/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:84 errors:0 dropped:0 overruns:0 frame:0
TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9216 (9.0 KiB) TX bytes:8895 (8.6 KiB)
Interrupt:185 Base address:0x1400
You can also disable a network interface with ifconfig eth0 down, or enable it with
ifconfig eth0 up. Using these commands does not change the configuration of this
network card.
Every user has access to /sbin/ifconfig, provided the path is set. Normal users cannot
use it to disable or enable interfaces, or set the ip address.
The ip address change will be valid until the next change, or until reboot. You can
also supply the subnet mask with ifconfig.
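For example (the ip-address and mask are only illustrative):
[root@RHEL4b ~]# ifconfig eth0 192.168.1.42 netmask 255.255.255.0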
Careful, if you try this via an ssh connection, then you might lose your ssh connection.
8.5.3. /etc/init.d/network(ing)
If you have a problem with network interfaces, you can try to restart the network init
script, as shown here on Ubuntu 7.04. The script stops and starts the interfaces, and
renews an ip configuration with the DHCP server.
Listening on LPF/eth0/00:90:f5:4e:ae:17
Sending on LPF/eth0/00:90:f5:4e:ae:17
Sending on Socket/fallback
DHCPRELEASE on eth0 to 192.168.1.1 port 67
There is already a pid file /var/run/dhclient.eth0.pid with pid 134993416
Internet Systems Consortium DHCP Client V3.0.4
Copyright 2004-2006 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/
Listening on LPF/eth0/00:90:f5:4e:ae:17
Sending on LPF/eth0/00:90:f5:4e:ae:17
Sending on Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
DHCPOFFER from 192.168.1.1
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 192.168.1.1
bound to 192.168.1.40 -- renewal in 249143 seconds.
root@laika:~#
8.5.4. /etc/sysconfig
Red Hat derived Linux systems store their network configuration files in the /etc/
sysconfig/ directory. Debian derived systems do not have this directory.
/etc/sysconfig/network
Routing and host information for all network interfaces is specified in the /etc/
sysconfig/network file. Below are two examples; the second one sets 192.168.1.1 as
the router (default gateway). Common options not shown here are GATEWAYDEV
to set one of your network cards as the gateway device, and NISDOMAIN to specify
the NIS domain name.
root@rhel6 ~# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rhel6
NETWORKING=yes
HOSTNAME=RHEL4b
GATEWAY=192.168.1.1
/etc/sysconfig/network-scripts
For every network card in your computer, you should have an interface configuration
file named /etc/sysconfig/network-scripts/ifcfg-$IFNAME. Be careful when
editing these files: your edits will work until you start the system-config-network
(formerly called redhat-config-network) tool. This tool can and will
overwrite your manual edits.
The first ethernet NIC will get ifcfg-eth0, the next one ifcfg-eth1 and so on. Below
is an example.
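A sketch of a statically configured ifcfg-eth0 (all values are only illustrative):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.191
NETMASK=255.255.255.0
ONBOOT=yes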
When the second nic is configured for dhcp, then this is the ifcfg-eth1.
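A sketch of such a dhcp-configured ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes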
Besides dhcp and bootp, the BOOTPROTO variable can be set to static or none, both
meaning that no protocol should be used at boot time to set the interface values. The
BROADCAST variable is no longer needed; it will be calculated.
The HWADDR variable can be used to make sure that the nics get the correct name
when multiple nics are present in the computer. It cannot be used to set the MAC
address of a nic. For this, you need to specify the MACADDR variable. Do not use
HWADDR and MACADDR in the same ifcfg file.
Listening on LPF/eth0/00:90:f5:4e:ae:17
Sending on LPF/eth0/00:90:f5:4e:ae:17
Sending on Socket/fallback
DHCPRELEASE on eth0 to 192.168.1.1 port 67
Listening on LPF/eth0/00:90:f5:4e:ae:17
Sending on LPF/eth0/00:90:f5:4e:ae:17
Sending on Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
DHCPOFFER from 192.168.1.1
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 192.168.1.1
bound to 192.168.1.40 -- renewal in 231552 seconds.
root@laika:~#
8.5.6. /sbin/dhclient
Home and client Linux desktops often have dhclient running. This is a daemon that
enables a network interface to lease an ip configuration from a DHCP server. When
your adapter is configured for DHCP or BOOTP, then /sbin/ifup will start the dhclient
daemon.
8.5.7. /sbin/route
You can see the computer's local routing table with the route command (and also
with netstat -r ).
It appears this computer does not have a gateway configured, so we use route add
default gw to add a default gateway.
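For example, using 192.168.1.1 as the gateway (as elsewhere in this chapter):
[root@RHEL4b ~]# route add default gw 192.168.1.1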
8.5.8. ping
If you can ping to another host, then ip is configured.
For other Linux Systems, take a backup of the relevant portions in /etc.
8.5.11. ethtool
To display or change network card settings, use ethtool. The results depend on the
capabilities of your network card. The example shows a network card that auto-
negotiates its bandwidth.
This example shows how to use ethtool to switch the bandwidth from 1000Mbit to
100Mbit and back. Note that some time passes before the nic is back to 1000Mbit.
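A sketch of those commands (assuming eth0 and a card that supports changing the speed):
root@laika:~# ethtool -s eth0 speed 100 duplex full autoneg off
root@laika:~# ethtool -s eth0 autoneg on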
2. Use the GUI tool of your distro to set a fixed ip address (use the same address as the
one you got from dhcp). Verify with ifconfig and ping to a neighbour that it works.
Also look at the configuration files in /etc/network or /etc/sysconfig to see how the
GUI tool sets a fixed address.
3. Use the GUI tool to enable dhcp again (and verify the changes in the config files).
4. Use ifdown or ifconfig to disable and enable a network card. Verify the results
with ifconfig.
7. Ping the default gateway, then look at the mac-address of the default gateway.
2. Use the GUI tool of your distro to set a fixed ip address (use the same address as the
one you got from dhcp). Verify with ifconfig and ping to a neighbour that it works.
Also look at the configuration files in /etc/network or /etc/sysconfig to see how the
GUI tool sets a fixed address.
3. Use the GUI tool to enable dhcp again (and verify the changes in the config files).
root@rhel55 ~# cat /etc/sysconfig/network-scripts/ifcfg-eth2
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=08:00:27:a7:57:46
root@rhel55 ~# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=08:00:27:40:74:2f
4. Use ifdown or ifconfig to disable and enable a network card. Verify the results
with ifconfig.
root@rhel55 ~# ifconfig eth2 | grep inet
inet addr:192.168.1.35 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fea7:5746/64 Scope:Link
root@rhel55 ~# ifdown eth2
root@rhel55 ~# ifconfig eth2 | grep inet
root@rhel55 ~# ifup eth2
7. Ping the default gateway, then look at the mac-address of the default gateway.
[email protected]:~$ ping -c1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=254 time=4.77 ms
8.11.1. bonding
You can combine (bond) two physical network interfaces as one logical interface.
Having two network cards serve the same ip-address doubles the bandwidth, and
provides hardware redundancy. For bonding to work, you have to load the kernel
module for bonding. You can do this manually with modprobe.
You need two network cards to enable bonding, and add the MASTER and SLAVE
variables. In this case we used eth0 and eth1, configured like this.
And you need to set up a bonding interface. In this case, we call it bond0.
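A sketch of such an ifcfg-bond0 file (the ip-address matches the ifconfig output below; the other values are only illustrative):
DEVICE=bond0
IPADDR=192.168.1.229
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no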
root@RHELv4u2:~# ifconfig
bond0 Link encap:Ethernet HWaddr 00:0C:29:5A:86:D7
inet addr:192.168.1.229 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:3835 errors:0 dropped:0 overruns:0 frame:0
TX packets:1001 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:469645 (458.6 KiB) TX bytes:139816 (136.5 KiB)
8.11.2. /proc/net/bond*
You can verify the proper working of the bonding interfaces by looking at /proc/net/
bonding/. Below is a screenshot of a Red Hat Enterprise 5 server, with eth1 and eth2
in bonding.
Here is a succinct version of the config files to bond eth1 and eth2 on RHEL5.
root@rhel55 network-scripts# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
HWADDR=08:00:27:40:74:2f
MASTER=bond0
SLAVE=yes
USERCTL=no
root@rhel55 network-scripts# cat ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
HWADDR=08:00:27:a7:57:46
MASTER=bond0
SLAVE=yes
USERCTL=no
collisions:0 txqueuelen:1000
RX bytes:95248 (93.0 KiB) TX bytes:26754 (26.1 KiB)
Interrupt:9 Base address:0xd240
The easy way to configure iptables, is to use a graphical tool like KDE's kmyfirewall
or Security Level Configuration Tool. You can find the latter in the graphical menu,
somewhere in System Tools - Security, or you can start it by typing system-config-
securitylevel in bash. These tools allow for some basic firewall configuration. You
can decide whether to enable or disable the firewall, and what typical standard ports
are allowed when the firewall is active. You can even add some custom ports. When
you are done, the configuration is written to /etc/sysconfig/iptables on Red Hat.
To start the service, issue the service iptables start command. You can configure
iptables to start at boot time with chkconfig.
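For example:
root@RHELv4u4:~# service iptables start
root@RHELv4u4:~# chkconfig iptables on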
One of the nice features of iptables is that it displays extensive status information
when queried with the service iptables status command.
Recent Linux distributions like RHEL5 and Ubuntu10.04 do not activate inetd or
xinetd by default, unless an application requires it.
Both daemons have the same functionality (listening to many ports, starting other
daemons when they are needed), but they have different configuration files.
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 25 30
}
includedir /etc/xinetd.d
paul@RHELv4u2:~$
According to the settings in this file, xinetd can handle 60 client requests at once. It
uses the authpriv facility to log the host ip-address and pid of successful daemon
spawns. When a service (aka protocol linked to daemon) gets more than 25 cps
(connections per second), it holds subsequent requests for 30 seconds.
The directory /etc/xinetd.d contains more specific configuration files. Let's also take
a look at one of them.
paul@RHELv4u2:~$ ls /etc/xinetd.d
amanda chargen-udp echo klogin rexec talk
amandaidx cups-lpd echo-udp krb5-telnet rlogin telnet
amidxtape daytime eklogin kshell rsh tftp
auth daytime-udp finger ktalk rsync time
chargen dbskkd-cdb gssftp ntalk swat time-udp
paul@RHELv4u2:~$ cat /etc/xinetd.d/swat
# default: off
# description: SWAT is the Samba Web Admin Tool. Use swat \
# to configure your Samba server. To use SWAT, \
# connect to port 901 with your favorite web browser.
service swat
{
port = 901
socket_type = stream
wait = no
only_from = 127.0.0.1
user = root
server = /usr/sbin/swat
log_on_failure += USERID
disable = yes
}
paul@RHELv4u2:~$
The services should be listed in the /etc/services file. Port determines the service port,
and must be the same as the port specified in /etc/services. The socket_type should be
set to stream for tcp services (and to dgram for udp). The log_on_failure += line appends
the userid to the log message format defined in /etc/xinetd.conf. The last setting disable
can be set to yes or no. Setting this to no means the service is enabled!
Check the xinetd and xinetd.conf manual pages for many more configuration options.
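For example, enabling swat boils down to setting disable = no and telling xinetd to reread its configuration; a sketch (on Red Hat, chkconfig edits the disable line for you):
chkconfig swat on
service xinetd reload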
You can disable a service in inetd.conf above by putting a # at the start of that line.
Here is an example of the disabled vmware web interface (listening on tcp port 902).
3. (If telnet is installable, then replace swat in these questions with telnet) Is
swat installed ? If not, then install swat and look at the changes in the (x)inetd
configuration. Is swat enabled or disabled ?
8.19. openssh
The openssh package is maintained by the OpenBSD people and is distributed with
a lot of operating systems (it may even be the most popular package in the world).
Below is a sample use of ssh to connect from one server (RHELv4u2) to another one
(RHELv4u4).
The second time ssh remembers the connection. It added an entry to the ~/.ssh/
known_hosts file.
When Alice wants to send an encrypted message to Bob, she uses the public key of
Bob. Bob shares his public key with Alice, but keeps his private key private! Since
Bob is the only one to have Bob's private key, Alice is sure that Bob is the only one
that can read the encrypted message.
When Bob wants to verify that the message came from Alice, Bob uses the public
key of Alice to verify that Alice signed the message with her private key. Since Alice
is the only one to have Alice's private key, Bob is sure the message came from Alice.
In the example that follows, we will set up ssh without password between Alice and
Bob. Alice has an account on a Red Hat Enterprise Linux server, Bob is using Ubuntu
on his laptop. Bob wants to give Alice access using ssh and the public and private
key system. This means that even if Bob changes his password on his laptop, Alice
will still have access.
ssh-keygen
The example below shows how Alice uses ssh-keygen to generate a key pair. Alice
does not enter a passphrase.
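The screenshot is not reproduced here; the command itself looks like this (press enter at the passphrase prompts to leave the passphrase empty, the keys end up in ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub):
ssh-keygen -t rsa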
~/.ssh
While ssh-keygen generates a public and a private key, it will also create a hidden
.ssh directory with proper permissions. If you create the .ssh directory manually, then
you need to chmod 700 it! Otherwise ssh will refuse to use the keys (world readable
private keys are not secure!).
As you can see, the .ssh directory is secure in Alice's home directory.
Bob is using Ubuntu at home. He decides to manually create the .ssh directory, so
he needs to manually secure it.
scp
To copy the public key from Alice's server to Bob's laptop, Alice decides to use scp.
Be careful when copying a second key! Do not overwrite the first key, instead append
the key to the same ~/.ssh/authorized_keys file!
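A sketch of both steps, assuming Bob's laptop is reachable as bob@laptop (a hypothetical name):
# Alice copies her public key to Bob's laptop
scp ~/.ssh/id_rsa.pub bob@laptop:~/
# Bob appends it to his authorized_keys file (do not overwrite an existing file!)
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys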
authorized_keys
In your ~/.ssh directory, you can create a file called authorized_keys. This file can
contain one or more public keys from people you trust. Those trusted people can
use their private keys to prove their identity and gain access to your account via ssh
(without password). The example shows Bob's authorized_keys file containing the
public key of Alice.
passwordless ssh
Alice can now use ssh to connect passwordless to Bob's laptop. In combination with
ssh's capability to execute commands on the remote host, this can be useful in pipes
across different machines.
Below an example of X11 forwarding: user paul logs in as user greet on her computer
to start the graphical application mozilla-thunderbird. Although the application will
run on the remote computer from greet, it will be displayed on the screen attached
locally to paul's computer.
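A sketch of such a session (the ip address is an example):
# paul starts thunderbird as user greet on greet's computer, the display comes to paul's screen
ssh -X [email protected] mozilla-thunderbird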
2. Create a bookmark in Firefox, then close your firefox! Use ssh -X to run firefox
on your screen, but on your neighbour's computer. Do you see your neighbour's
bookmark ?
3. Verify in the ssh configuration files that only protocol version 2 is allowed.
4. Use ssh-keygen to create a key pair without passphrase. Setup passwordless ssh
between you and your neighbour. (or between the ubuntu and the Red Hat)
root@RHELv4u2:~# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32769 status
100011 1 udp 985 rquotad
100011 2 udp 985 rquotad
100011 1 tcp 988 rquotad
100011 2 tcp 988 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100021 1 udp 32770 nlockmgr
100021 3 udp 32770 nlockmgr
100021 4 udp 32770 nlockmgr
100021 1 tcp 32789 nlockmgr
100021 3 tcp 32789 nlockmgr
100021 4 tcp 32789 nlockmgr
100005 1 udp 1004 mountd
100005 1 tcp 1007 mountd
100005 2 udp 1004 mountd
100005 2 tcp 1007 mountd
100005 3 udp 1004 mountd
100005 3 tcp 1007 mountd
root@RHELv4u2:~#
nfs version 4 requires tcp (port 2049) and supports Kerberos user authentication as
an option. nfs authentication only takes place when mounting the share. nfs versions
2 and 3 authenticate only the host.
# Only the computers barry and pasha can read and write this one
/var/www pasha(rw) barry(rw)
You don't need to restart the nfs server to start exporting your newly created exports.
You can use the exportfs -va command to do this. It will write the exported directories
to /var/lib/nfs/etab, where they are immediately applied.
Here is another simple example. Suppose the project55 people tell you they only
need a couple of CD-ROM images, and you already have them available on an nfs
server. You could issue the following command to mount this storage on their /home/
project55 mount point.
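A sketch of that mount command (server name and export path are assumptions):
mount -t nfs server1:/srv/iso /home/project55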
Chapter 9. Scheduling
Table of Contents
9.1. about scheduling .......................................................................................... 178
9.2. one time jobs with at ................................................................................... 178
9.3. cron ............................................................................................................... 180
9.4. practice : scheduling .................................................................................... 182
9.5. solution : scheduling .................................................................................... 183
9.2.1. at
Simple scheduling can be done with the at command. This screenshot shows the
scheduling of the date command at 22:01 and the sleep command at 22:03.
root@laika:~# at 22:01
at> date
at> <EOT>
job 1 at Wed Aug 1 22:01:00 2007
root@laika:~# at 22:03
at> sleep 10
at> <EOT>
job 2 at Wed Aug 1 22:03:00 2007
root@laika:~#
In real life you will hopefully be scheduling more useful commands ;-)
9.2.2. atq
It is easy to check when jobs are scheduled with the atq or at -l commands.
root@laika:~# atq
1 Wed Aug 1 22:01:00 2007 a root
2 Wed Aug 1 22:03:00 2007 a root
root@laika:~# at -l
1 Wed Aug 1 22:01:00 2007 a root
2 Wed Aug 1 22:03:00 2007 a root
root@laika:~#
The at command understands English words like tomorrow and teatime to schedule
commands the next day and at four in the afternoon.
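For example (at reads the commands to schedule from standard input):
echo date | at teatime tomorrow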
9.2.3. atrm
Jobs in the at queue can be removed with atrm.
root@laika:~# atq
6 Thu Aug 2 16:00:00 2007 a root
5 Thu Aug 2 10:05:00 2007 a root
root@laika:~# atrm 5
root@laika:~# atq
6 Thu Aug 2 16:00:00 2007 a root
root@laika:~#
The /etc/at.allow file can contain a list of users that are allowed to schedule at jobs.
When /etc/at.allow does not exist, then everyone can use at unless their username
is listed in /etc/at.deny.
9.3. cron
The following example means : run script42 eight minutes after two, every day of the
month, every month and every day of the week.
8 14 * * * script42
Run script8472 every month on the first of the month at 25 past midnight.
25 0 1 * * script8472
Run this script33 every two minutes on Sunday (both 0 and 7 refer to Sunday).
*/2 * * * 0 script33
Instead of these five fields, you can also type one of these: @reboot, @yearly or
@annually, @monthly, @weekly, @daily or @midnight, and @hourly.
The cron.allow and cron.deny files work in the same way as at.allow and at.deny. When
the cron.allow file exists, your username has to be in it, otherwise you cannot use cron.
When the cron.allow file does not exist, your username cannot be in the cron.deny file
if you want to use cron.
9.3.4. /etc/crontab
The /etc/crontab file contains entries for when to run hourly/daily/weekly/monthly
tasks. It will look similar to this output.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
9.3.5. /etc/cron.*
The directories shown in the next screenshot contain the tasks that are run at the times
scheduled in /etc/crontab. The /etc/cron.d directory is for special cases, to schedule
jobs that require finer control than hourly/daily/weekly/monthly.
2. As normal user, use crontab -e to schedule a script to run every four minutes.
5. Take a look at the cron files and directories in /etc and understand them. What is
the run-parts command doing ?
2. As normal user, use crontab -e to schedule a script to run every four minutes.
paul@rhel55 ~$ crontab -e
no crontab for paul - using an empty one
crontab: installing new crontab
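The crontab entry itself is not shown above; it could look like this (the script path is just an example):
*/4 * * * * /home/paul/myscript.sh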
5. Take a look at the cron files and directories in /etc and understand them. What is
the run-parts command doing ?
run-parts runs a script in a directory
Chapter 10. Logging
Table of Contents
10.1. about logging ............................................................................................. 184
10.2. login logging .............................................................................................. 185
10.3. syslogd daemon .......................................................................................... 187
10.4. logger ......................................................................................................... 189
10.5. watching logs ............................................................................................. 190
10.6. rotating logs ............................................................................................... 190
10.7. practice : logging ........................................................................................ 191
10.8. solution : logging ....................................................................................... 192
10.1.1. /var/log
The location for log files according to the FHS is /var/log. You will find a lot of log
files and directories for common applications in /var/log.
10.1.2. /var/log/messages
A typical first file to check when troubleshooting is the /var/log/messages file. By
default this file will contain information on what just happened to the system.
The last command can also be used to get a list of last reboots.
The reason given for this is that users sometimes type their password by mistake
instead of their login, so this world readable file poses a security risk. You can
enable bad login logging by simply creating the file. Doing a chmod o-r /var/log/
btmp improves security.
Failed logins via ssh, rlogin or su are not registered in /var/log/btmp. Failed logins
via tty are.
You can enable this yourself, with a custom log file, by adding the following line to
syslog.conf.
auth.*,authpriv.* /var/log/customsec.log
many Unix applications and was much later written as rfc 3164. The syslog daemon
can receive messages on udp port 514 from many applications (and appliances), and
can append to log files, print, display messages on terminals and forward logs to
other syslogd daemons on other machines. The syslogd daemon is configured in /
etc/syslog.conf.
Each line in the configuration file uses a facility to determine where the message is
coming from. It also contains a level for the severity of the message, and an action
to decide on what to do with the message.
10.3.2. facilities
The man syslog.conf will explain the different default facilities for certain daemons,
such as mail, lpr, news and kern(el) messages. The local0 to local7 facility can be
used for appliances (or any networked device that supports syslog). Here is a list of
all facilities for syslog.conf version 1.3. The security keyword is deprecated.
auth (security)
authpriv
cron
daemon
ftp
kern
lpr
mail
mark (internal use only)
news
syslog
user
uucp
local0-7
10.3.3. levels
The worst severity a message can have is emerg followed by alert and crit. Lowest
priority should go to info and debug messages. Specifying a severity will also log
all messages with a higher severity. You can prefix the severity with = to obtain only
messages that match that severity. You can also specify .none to keep a specific
action from receiving any message from a certain facility.
Here is a list of all levels, in ascending order. The keywords warn, error and panic
are deprecated.
debug
info
notice
warning (warn)
err (error)
crit
alert
emerg (panic)
10.3.4. actions
The default action is to send a message to the username listed as action. When the
action is prefixed with a / then syslog will send the message to the file (which can be
a regular file, but also a printer or terminal). The @ sign prefix will send the message
on to another syslog server. Here is a list of all possible actions.
In addition, you can prefix actions with a - to omit syncing the file after every logging.
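Some typical action lines (the file names and the server ip address are examples):
# append to a file, the leading - omits a sync after every message
mail.info    -/var/log/mail.log
# forward everything from the kernel to another syslog server
kern.*       @192.168.1.100
# write emergencies to the terminal of user root
*.emerg      root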
10.3.5. configuration
Below a sample configuration of custom local4 messages in /etc/syslog.conf.
local4.crit /var/log/critandabove
local4.=crit /var/log/onlycrit
local4.* /var/log/alllocal4
10.4. logger
The logger command can be used to generate syslog test messages. You can also use
it in scripts. An example of testing syslogd with the logger tool.
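The screenshot is not reproduced here; a quick test could look like this, using the local4 configuration shown above:
logger -p local4.info "test message from logger"
tail -1 /var/log/alllocal4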
4. Examine syslog to find the location of the log file containing ssh failed logins.
6. Configure /var/log/Mysu.log, all the su to root messages should go in that log. Test
that it works!
7. Send the local5 messages to the syslog server of your neighbour. Test that it works.
8. Write a script that executes logger to local4 every 15 seconds (different message).
Use tail -f and watch on your local4 log files.
4. Examine syslog to find the location of the log file containing ssh failed logins.
Debian/Ubuntu: /var/log/auth.log
/etc/init.d/syslog restart
cat /var/log/l4e.log
cat /var/log/l4i.log
6. Configure /var/log/Mysu.log, all the su to root messages should go in that log. Test
that it works!
echo authpriv.* /var/log/Mysu.log >> /etc/syslog.conf
7. Send the local5 messages to the syslog server of your neighbour. Test that it works.
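A sketch of a possible solution (the neighbour's ip address is an example). Add a line to your /etc/syslog.conf, restart syslog and send a test message; the neighbour's syslogd must run with the -r option (on Red Hat set in /etc/sysconfig/syslog) to accept remote messages.
local5.*    @192.168.1.75
service syslog restart
logger -p local5.info "test local5 message for the neighbour"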
8. Write a script that executes logger to local4 every 15 seconds (different message).
Use tail -f and watch on your local4 log files.
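A sketch of such a script (message text and interval handling are free to choose):
#!/bin/bash
# send a numbered test message to local4 every 15 seconds
i=0
while true
do
  i=$((i+1))
  logger -p local4.info "local4 test message number $i"
  sleep 15
done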
Chapter 11. Library Management
Table of Contents
11.1. introduction ................................................................................................ 194
11.2. /lib and /usr/lib ........................................................................................... 194
11.3. ldd ............................................................................................................... 194
11.4. ltrace ........................................................................................................... 195
11.5. dpkg -S and debsums ................................................................................. 195
11.6. rpm -qf and rpm -V ................................................................................... 196
11.1. introduction
With libraries we are talking about dynamically linked libraries (aka shared objects).
These are binaries that contain functions and are not started themselves as programs,
but are called by other binaries.
Several programs can use the same library. The name of the library file usually starts
with lib, followed by the actual name of the library, then the characters .so and finally
a version number.
root@rhel53 ~# ls -l /lib/libext*
lrwxrwxrwx 1 root root 16 Feb 18 16:36 /lib/libext2fs.so.2 -> libext2fs.so.2.4
-rwxr-xr-x 1 root root 113K Jun 30 2009 /lib/libext2fs.so.2.4
11.3. ldd
Many programs have dependencies on the installation of certain libraries. You can
display these dependencies with ldd.
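For example:
# list the shared libraries that the su binary needs
ldd /bin/su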
11.4. ltrace
The ltrace program allows you to see all the calls made to library functions by a program.
The example below uses the -c option to get only a summary count (there can be
many calls), and the -l option to only show calls in one library file. All this to see
what calls are made when executing su - serena as root.
You can then verify the integrity of all files in this package using debsums.
You can then use rpm -V to verify all files in this package. In the example below
the output shows that the Size and the Time stamp of the file have changed since
installation.
You can then use yum reinstall $package to overwrite the existing library with an
original version.
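A sketch of the whole sequence, using the libext2fs library shown earlier (the exact package name can differ per release):
# which package owns this library ?
rpm -qf /lib/libext2fs.so.2.4
# verify all files of that package (S = size changed, T = time stamp changed)
rpm -V e2fsprogs-libs
# put the original files back
yum reinstall e2fsprogs-libs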
Chapter 12. Memory management
Table of Contents
12.1. about memory ............................................................................................ 197
12.2. /proc/meminfo ............................................................................................ 197
12.3. swap space ................................................................................................. 198
12.4. practice : memory ...................................................................................... 199
12.2. /proc/meminfo
You will rarely want to look at /proc/meminfo...
...since the free command displays the same information in a more user friendly
output.
The swap space can be a file, a partition, or a combination of files and partitions. You
can see the swap space with the free command, or with cat /proc/swaps.
The amount of swap space that you need depends heavily on the services that the
computer provides.
Now you can see that /proc/swaps displays all swap spaces separately, whereas the
free -om command only makes a human readable summary.
3. On the Red Hat, create a swap partition on one of your new disks, and a swap file
on the other new disk.
4. Put all swap spaces in /etc/fstab and activate them. Use free again to verify that
it works.
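A sketch of a possible solution (device names, mount point and size are assumptions):
# swap partition on the first new disk (partition type 82)
mkswap /dev/sdb1
swapon /dev/sdb1
# swap file of 512 megabyte on the other new disk (mounted on /srv/disk2)
dd if=/dev/zero of=/srv/disk2/swapfile bs=1M count=512
mkswap /srv/disk2/swapfile
swapon /srv/disk2/swapfile
# corresponding lines in /etc/fstab
/dev/sdb1            swap   swap   defaults   0 0
/srv/disk2/swapfile  swap   swap   defaults   0 0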
Chapter 13. Installing Linux
Table of Contents
13.1. about ........................................................................................................... 200
13.2. installation by cdrom ................................................................................. 200
13.3. installation with rarp and tftp .................................................................... 200
13.4. about Red Hat kickstart ............................................................................. 201
13.5. using kickstart ............................................................................................ 202
13.1. about
The past couple of years the installation of linux has become a lot easier than before,
at least for end users installing a distro like Ubuntu, Fedora, Debian or Mandrake
on their home computer. Servers usually come pre-installed, and if not pre-installed,
then setup of a linux server today is very easy.
Linux can be installed in many different ways. End users most commonly use cdrom's
or dvd's for installation, most of the time with a working internet connection to receive
updates. Administrators might prefer network installations using protocols like tftp,
bootp, rarp and/or nfs or response file solutions like Red Hat Kickstart or Solaris
Jumpstart.
00:03:ba:02:c3:82 192.168.1.71
00:03:ba:09:7c:f9 192.168.1.72
00:03:ba:09:7f:d2 192.168.1.73
We need to install the rarpd and tftpd daemons on the (Debian) machine that will be
hosting the install image.
And finally the linux install image must be present in the tftp served directory. The
filename of the image must be the hex ip-address; this is accomplished with symbolic
links.
root@laika:~# ll /srv/tftp/
total 7.5M
lrwxrwxrwx 1 root root 13 2007-03-02 21:49 C0A80147 -> ubuntu610.img
lrwxrwxrwx 1 root root 13 2007-03-03 14:13 C0A80148 -> ubuntu610.img
lrwxrwxrwx 1 root root 13 2007-03-02 21:49 C0A80149 -> ubuntu610.img
-rw-r--r-- 1 paul paul 7.5M 2007-03-02 21:42 ubuntu610.img
Time to enter boot net now in the openboot prompt. Twenty minutes later the three
servers were humming with linux.
You can modify the sample kickstart file RH-DOCS/sample.ks (can be found on the
documentation dvd). Put this file where anaconda can read it.
Anaconda is the Red Hat installer written in python. The name was chosen because
anacondas are lizard-eating pythons. Lizard is the name of the Caldera Linux
installation program.
Do not change the order of the sections inside your kickstart file! The Red Hat System
Administration Guide contains about 25 pages describing all the options; most of
them are easy to understand if you already performed a couple of installations.
filename "/export/kickstart"
next-server remote.installation.server
Leaving out the next-server line will result in the client looking for the file on the
dhcp server itself.
Booting from cdrom with kickstart requires the following command at the boot:
prompt.
linux ks=cdrom:/ks.cfg
When the kickstart file is on the network, use nfs or http like in these examples.
linux ks=nfs:servername:/path/to/ks.cfg
linux ks=http://servername/path/to/ks.cfg
Chapter 14. Package management
Table of Contents
14.1. terminology ................................................................................................ 203
14.2. Red Hat package manager ......................................................................... 204
14.3. yum ............................................................................................................. 205
14.4. rpm2cpio .................................................................................................... 210
14.5. Debian package management ..................................................................... 210
14.6. alien ............................................................................................................ 212
14.7. Downloading software ............................................................................... 213
14.8. Compiling software .................................................................................... 213
14.9. Practice: Installing software ....................................................................... 213
14.10. Solution: Installing software .................................................................... 214
14.1. terminology
14.1.1. repositories
Most software for your Linux distribution is available in a central distributed
repository. This means that applications in the repository are tested for your
distribution and very easy to install with a GUI or command line installer.
The GUI is available via the standard menu (look for Add/Remove Software). The
command line is explained below in detail.
14.1.5. dependency
Some packages need other packages to function. Tools like aptitude and yum will
install all dependencies you need. When using dpkg or the rpm command, or when
building from source, you will need to install dependencies yourself.
14.2.3. rpm -q
To verify whether one package is installed, use rpm -q.
14.2.6. rpm -e
To remove a package, use the -e switch.
rpm -e verifies dependencies, and thus will prevent you from accidentally erasing
packages that are needed by other packages.
14.2.7. /var/lib/rpm
The rpm database is located at /var/lib/rpm. This database contains all meta
information about packages that are installed (via rpm). It keeps track of all files,
which enables complete removal of software.
14.3. yum
Issue yum list $package to get all versions (in different repositories) of one package.
Resolving Dependencies
--> Running transaction check
---> Package sudo.i386 0:1.7.2p1-7.el5_5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================
Package Arch Version Repository Size
=======================================================================
Installing:
sudo i386 1.7.2p1-7.el5_5 rhel-i386-server-5 230 k
Transaction Summary
=======================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Installed:
sudo.i386 0:1.7.2p1-7.el5_5
Complete!
If you only want to update one package, use yum update $package.
Dependencies Resolved
=====================================================================
Package Arch Version Repository Size
=====================================================================
Updating:
sudo i386 1.7.2p1-7.el5_5 rhel-i386-server-5 230 k
Transaction Summary
=====================================================================
Install 0 Package(s)
Upgrade 1 Package(s)
Updated:
sudo.i386 0:1.7.2p1-7.el5_5
Complete!
Available Groups:
Engineering and Scientific
FTP Server
Games and Entertainment
Java Development
KDE (K Desktop Environment)
KDE Software Development
MySQL Database
News Server
OpenFabrics Enterprise Distribution
PostgreSQL Database
Sound and Video
Done
To install a set of applications, brought together via a group, use yum groupinstall
$groupname.
[root@rhel55 ~]# yum groupinstall 'Sound and video'
Loaded plugins: rhnplugin, security
Setting up Group Process
Package alsa-utils-1.0.17-1.el5.i386 already installed and latest version
Package sox-12.18.1-1.i386 already installed and latest version
Package 9:mkisofs-2.01-10.7.el5.i386 already installed and latest version
Package 9:cdrecord-2.01-10.7.el5.i386 already installed and latest version
Package cdrdao-1.2.1-2.i386 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package cdda2wav.i386 9:2.01-10.7.el5 set to be updated
---> Package cdparanoia.i386 0:alpha9.8-27.2 set to be updated
---> Package sound-juicer.i386 0:2.16.0-3.el5 set to be updated
--> Processing Dependency: libmusicbrainz >= 2.1.0 for package: sound-juicer
--> Processing Dependency: libmusicbrainz.so.4 for package: sound-juicer
---> Package vorbis-tools.i386 1:1.1.1-3.el5 set to be updated
--> Processing Dependency: libao >= 0.8.4 for package: vorbis-tools
--> Processing Dependency: libao.so.2 for package: vorbis-tools
--> Running transaction check
---> Package libao.i386 0:0.8.6-7 set to be updated
---> Package libmusicbrainz.i386 0:2.1.1-4.1 set to be updated
--> Finished Dependency Resolution
...
Read the manual page of yum for more information about managing groups in yum.
Configuring yum itself is done in /etc/yum.conf. This file will contain the location
of a log file and a cache directory for yum and can also contain a list of repositories.
Recently yum started accepting several repo files with each file containing a list of
repositories. These repo files are located in the /etc/yum.repos.d/ directory.
One important option for yum is enablerepo. Use it when you want to use a
repository that is not enabled by default.
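For example (the repository name is an assumption):
yum --enablerepo=myrepo install $package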
[$repo]
name=My Repository
baseurl=http://path/to/MyRepo
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-MyRep
14.4. rpm2cpio
We can use rpm2cpio to convert an rpm to a cpio archive.
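For example (the rpm file name is a placeholder):
# list the files inside the rpm
rpm2cpio package.rpm | cpio -t
# extract them into the current directory
rpm2cpio package.rpm | cpio -idmv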
14.5.2. dpkg -l
The low level tool to work with .deb packages is dpkg. Here you see how to obtain
a list of all installed packages. The ii at the beginning means the package is installed.
14.5.3. dpkg
You could use dpkg -i to install a package and dpkg -r to remove a package, but
you'd have to manually keep track of dependencies.
14.5.4. aptitude
Most people use aptitude for package management on Debian and Ubuntu Systems.
To patch and upgrade all software to the latest version on Ubuntu and Mint.
aptitude safe-upgrade
To search the repositories for applications that contain a certain string in their name
or description.
aptitude search $string
14.5.5. apt-get
We could also use apt-get, but aptitude is better at handling dependencies than apt-
get. Whenever you see apt-get in a howto, feel free to type aptitude.
14.5.6. /etc/apt/sources.list
The resource list for both apt-get and aptitude is located in /etc/apt/sources.list.
This file contains a list of http or ftp sources where packages for the distribution can
be downloaded.
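A typical line looks like this (mirror and release name will differ on your system):
deb http://archive.ubuntu.com/ubuntu lucid main restricted universe multiverse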
14.6. alien
alien is experimental software that converts between rpm and deb package formats
(and others).
Below is an example of how to use alien to convert an rpm package to a deb package.
paul@barry:~$ ls -l netcat*
-rw-r--r-- 1 paul paul 123912 2009-06-04 14:58 netcat-0.7.1-1.i386.rpm
paul@barry:~$ alien --to-deb netcat-0.7.1-1.i386.rpm
netcat_0.7.1-2_i386.deb generated
paul@barry:~$ ls -l netcat*
In real life, use the netcat tool provided by your distribution, or use the .deb file from
their website.
Normally the readme will explain what to do after download. You will probably
receive a .tar.gz or a .tgz file. Read the documentation, then put the compressed file in
a directory. You can use the following to find out where the package wants to install.
tar tvzpf $downloadedFile.tgz
You unpack them with tar xzf; it will create a directory called
applicationName-1.2.3
tar xzf $applicationName.tgz
Replace the z with a j when the file ends in .tar.bz2. The tar, gzip and bzip2
commands are explained in detail later.
If you download a .deb file, then you'll have to use dpkg to install it, .rpm's can be
installed with the rpm command.
Usually the steps are the same three: running ./configure, followed by make
(which is the actual compiling) and then by make install to copy the files to their
proper location.
./configure
make
make install
3. Use aptitude or yum to search for and install the 'dict', 'samba' and 'wesnoth'
applications. Did you find them all ?
5. If time permits, uninstall Samba from the ubuntu machine, download the latest
version from samba.org and install it.
3. Use aptitude or yum to search for and install the 'dict', 'samba' and 'wesnoth'
applications. Did you find them all ?
aptitude search wesnoth (Debian, Ubuntu and family)
There are several formats available there; choose .rpm, .deb or .tgz.
5. If time permits, uninstall Samba from the ubuntu machine, download the latest
version from samba.org and install it.
Chapter 15. Backup
Table of Contents
15.1. About tape devices ..................................................................................... 215
15.2. Compression ............................................................................................... 216
15.3. tar ............................................................................................................... 217
15.4. Backup Types ............................................................................................ 219
15.5. dump and restore ....................................................................................... 219
15.6. cpio ............................................................................................................. 220
15.7. dd ................................................................................................................ 220
15.8. split ............................................................................................................. 222
15.9. Practice backup .......................................................................................... 222
By default, SCSI tapes on linux will use the highest hardware compression that is
supported by the tape device. To lower the compression level, append one of the
letters l (low), m (medium) or a (auto) to the tape name.
15.1.3. mt
To manage your tapes, use mt (Magnetic Tape). Some examples.
To rewind a tape...
mt -f /dev/st0 rewind
To erase a tape...
mt -f /dev/st0 erase
15.2. Compression
It can be beneficial to compress files before backup. The two most popular tools for
compression of regular files on linux are gzip/gunzip and bzip2/bunzip2. Below you
can see gzip in action, notice that it adds the .gz extension to the file.
paul@RHELv4u4:~/test$ ls -l allfiles.tx*
-rw-rw-r-- 1 paul paul 8813553 Feb 27 05:38 allfiles.txt
paul@RHELv4u4:~/test$ gzip allfiles.txt
paul@RHELv4u4:~/test$ ls -l allfiles.tx*
-rw-rw-r-- 1 paul paul 931863 Feb 27 05:38 allfiles.txt.gz
paul@RHELv4u4:~/test$ gunzip allfiles.txt.gz
paul@RHELv4u4:~/test$ ls -l allfiles.tx*
-rw-rw-r-- 1 paul paul 8813553 Feb 27 05:38 allfiles.txt
paul@RHELv4u4:~/test$
In general, gzip is much faster than bzip2, but the latter one compresses a lot better.
Let us compare the two.
real 0m0.050s
user 0m0.041s
sys 0m0.009s
paul@RHELv4u4:~/test$ time bzip2 bllfiles.txt
real 0m5.968s
user 0m5.794s
sys 0m0.076s
paul@RHELv4u4:~/test$ ls -l ?llfiles.tx*
-rw-rw-r-- 1 paul paul 931863 Feb 27 05:38 allfiles.txt.gz
-rw-rw-r-- 1 paul paul 708871 May 12 10:52 bllfiles.txt.bz2
paul@RHELv4u4:~/test$
15.3. tar
The tar utility gets its name from Tape ARchive. This tool will receive and send
files to a destination (typically a tape or a regular file). The c option is used to create
a tar archive (or tarfile), the f option to name/create the tarfile. The example below
takes a backup of /etc into the file /backup/etc.tar .
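The command for that example looks like this:
tar cf /backup/etc.tar /etc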
Compression can be achieved without pipes since tar uses the z flag to compress with
gzip, and the j flag to compress with bzip2.
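For example:
tar czf /backup/etc.tar.gz /etc
tar cjf /backup/etc.tar.bz2 /etc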
The t option is used to list the contents of a tar file. Verbose mode is enabled with
v (also useful when you want to see the files being archived during archiving).
To list a specific file in a tar archive, use the t option followed by the filename
(without leading /).
Use the x flag to restore a tar archive, or a single file from the archive. Remember
that by default tar will restore the file in the current directory.
You can preserve file permissions with the p flag. And you can exclude directories
or file with --exclude.
You can also create a text file with names of files and directories to archive, and then
supply this file to tar with the -T flag.
The tar utility can receive filenames from the find command, with the help of xargs.
You can also use tar to copy a directory; this is more efficient than using cp -r.
Another example of tar, this copies a directory securely over the network.
(cd /etc;tar -cf - . )|(ssh user@srv 'cd /backup/cp_of_etc/; tar -xf - ')
tar can be used together with gzip to copy a file to a remote server through ssh.
Compress the tar backup when it is on the network, but leave it uncompressed at the
destination.
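A sketch of that, in the same style as the previous example (user and server names are assumptions):
(cd /etc; tar cf - . ) | gzip | ssh user@srv '(cd /backup/cp_of_etc/; gunzip | tar xf - )'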
Suppose you take a full backup on Monday (level 0) and a level 1 backup on Tuesday,
then the Tuesday backup will contain all changes since Monday. A level 2 backup
on Wednesday will contain all changes since Tuesday (the most recent backup at a
lower level, here the level 1). A level 3 backup on Thursday will contain all changes
since Wednesday (the most recent lower level backup, here the level 2). Another level 3
on Friday will also contain all changes since Wednesday. A level 2 backup on Saturday
would take all changes since the last level 1 from Tuesday.
Restoring files that were backed up with dump is done with the restore command.
In the example below we take a full level 0 backup of two partitions to a SCSI tape.
The no-rewind tape device is mandatory to put the volumes behind each other on the tape.
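A sketch of such a dump (the choice of partitions is an example):
dump 0uf /dev/nst0 /boot
dump 0uf /dev/nst0 /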
Listing files in a dump archive is done with dump -t, and you can compare files with
dump -C.
You can omit files from a dump by changing the dump attribute with the chattr
command. The d attribute on ext will tell dump to skip the file, even during a full
backup. In the following example, /etc/hosts is excluded from dump archives.
chattr +d /etc/hosts
To restore the complete file system with restore, use the -r option. This can be useful
to change the size or block size of a file system. You should have a clean file system
mounted and cd'd into it. Like this example shows.
mke2fs /dev/hda3
mount /dev/hda3 /mnt/data
cd /mnt/data
restore rf /dev/nst0
To extract only one file or directory from a dump, use the -x option.
15.6. cpio
Different from tar and dump is cpio (Copy Input and Output). It receives a list of
filenames (for example from find), but copies the actual files. This makes it an easy companion to find!
Some examples below.
Now pipe it through ssh (backup files to a compressed file on another machine)
find /etc -depth -print|cpio -oaV|gzip -c|ssh server "cat - > etc.cpio.gz"
find sends filenames to cpio, cpio sends the files to ssh, and the remote cpio extracts the files
find /etc -depth -print | cpio -oaV | ssh user@host 'cpio -imVd'
the same but reversed: copy a dir from the remote host to the local machine
ssh user@host "find path -depth -print | cpio -oaV" | cpio -imVd
15.7. dd
15.7.1. About dd
Some people use dd to create backups. This can be very powerful, but dd backups
can only be restored to very similar partitions or devices. There are however a lot of
useful things possible with dd. Some examples.
dd if=/dev/hdb2 of=/image_of_hdb2.IMG
dd if=/dev/hdb2 | gzip > /image_of_hdb2.IMG.gz
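Two more examples: copying the master boot record (the first 512 bytes) of a disk to a file, locally or over ssh to another machine (device and host names are assumptions).
dd if=/dev/hda of=/backup/MBR.img bs=512 count=1
dd if=/dev/hda bs=512 count=1 | ssh user@srv 'cat > /backup/MBR.img'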
15.8. split
The split command is useful to split files into smaller files. This can be useful to fit
the file onto multiple instances of a medium too small to contain the complete file.
In the example below, a file of size 5000 bytes is split into three smaller files, with
maximum 2000 bytes each.
paul@laika:~/test$ ls -l
total 8
-rw-r--r-- 1 paul paul 5000 2007-09-09 20:46 bigfile1
paul@laika:~/test$ split -b 2000 bigfile1 splitfile.
paul@laika:~/test$ ls -l
total 20
-rw-r--r-- 1 paul paul 5000 2007-09-09 20:46 bigfile1
-rw-r--r-- 1 paul paul 2000 2007-09-09 20:47 splitfile.aa
-rw-r--r-- 1 paul paul 2000 2007-09-09 20:47 splitfile.ab
-rw-r--r-- 1 paul paul 1000 2007-09-09 20:47 splitfile.ac
1. Create a directory (or partition if you like) for backups. Link (or mount) it under /
mnt/backup.
2a. Use tar to backup /etc in /mnt/backup/etc_date.tgz, the backup must be gzipped.
(Replace date with the current date)
2c. Choose a file in /etc and /bin and verify with tar that the file is indeed backed up.
3a. Create a backup directory for your neighbour, make it accessible under /mnt/
neighbourName
3b. Combine ssh and tar to put a backup of your /boot on your neighbours computer
in /mnt/YourName
4b. Choose a file in /etc and restore it from the cpio archive into your home directory.
5. Use dd and ssh to put a backup of the master boot record on your neighbours
computer.
6. (On the real computer) Create and mount an ISO image of the ubuntu cdrom.
7. Combine dd and gzip to create a 'ghost' image of one of your partitions on another
partition.
8. Use dd to create a five megabyte file in ~/testsplit and name it biggest. Then split
this file in smaller two megabyte parts.
mkdir testsplit
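The rest of that solution could look like this:
cd testsplit
dd if=/dev/zero of=biggest bs=1M count=5
split -b 2m biggest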
Chapter 16. Performance monitoring
Table of Contents
16.1. About Monitoring ...................................................................................... 224
16.2. top ............................................................................................................... 224
16.3. free ............................................................................................................. 225
16.4. watch .......................................................................................................... 225
16.5. vmstat ......................................................................................................... 225
16.6. iostat ........................................................................................................... 226
16.7. mpstat ......................................................................................................... 227
16.8. sadc and sar ............................................................................................... 227
16.9. ntop ............................................................................................................. 228
16.10. iftop .......................................................................................................... 228
Let us look at some tools that go beyond ps fax, df -h, lspci, fdisk -l and du -sh.
16.2. top
To start monitoring, you can use top. This tool will monitor Memory, CPU and
running processes. Top will automatically refresh. Inside top you can use many
commands, like k to kill processes, or t and m to toggle displaying task and memory
information, or the number 1 to have one line per cpu, or one summary line for all
cpu's.
top - 12:23:16 up 2 days, 4:01, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 61 total, 1 running, 60 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3% us, 0.5% sy, 0.0% ni, 98.9% id, 0.2% wa, 0.0% hi, 0.0% si
Mem: 255972k total, 240952k used, 15020k free, 59024k buffers
Swap: 524280k total, 144k used, 524136k free, 112356k cached
You can customize top to display the columns of your choice, or to display only the
processes that you find interesting.
[paul@RHELv4u3 ~]$ top p 3456 p 8732 p 9654
16.3. free
The free command is common on Linux to monitor free memory. You can use free
to display information every x seconds, but the output is not ideal.
16.4. watch
It might be more interesting to combine free with the watch program. This program
can also run commands with a delay, and can highlight changes (with the -d switch).
16.5. vmstat
To monitor CPU, disk and memory statistics in one line there is vmstat. The
screenshot below shows vmstat running every two seconds 100 times (or until the
Ctrl-C). Below the r, you see the number of processes waiting for the CPU, sleeping
processes go below b. Swap usage (swpd) stayed constant at 144 kilobytes, free
memory dropped from 16.7MB to 12.9MB. See man vmstat for the rest.
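The command for that screenshot is simply:
vmstat 2 100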
16.6. iostat
The iostat tool can display disk and cpu statistics. The -d switch below makes iostat
only display disk information (500 times every two seconds). The first block displays
statistics since the last reboot.
You can have more statistics using iostat -d -x, or display only cpu statistics with
iostat -c.
16.7. mpstat
On multi-processor machines, mpstat can display statistics for all, or for a selected
cpu.
CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
all 1.77 0.03 1.37 1.03 0.02 0.39 0.00 95.40 1304.91
0 1.73 0.02 1.47 1.93 0.04 0.77 0.00 94.04 1304.91
1 1.81 0.03 1.27 0.13 0.00 0.00 0.00 96.76 0.00
paul@laika:~$
You can also use sar to display a portion of the statistics that were gathered. Like this
example for cpu statistics.
There are other useful sar options, like sar -I PROC to display interrupt activity per
interrupt and per CPU, or sar -r for memory related statistics. Check the manual page
of sar for more.
16.9. ntop
The ntop tool is not present in default Red Hat installs. Once run, it will generate a
very extensive analysis of network traffic in html on http://localhost:3000 .
16.10. iftop
The iftop tool will display bandwidth by socket statistics for a specific network
device. Not available on default Red Hat servers.