Linux Professional Institute (LPI) certification 101 (release 2) exam prep, Part 4
Presented by developerWorks, your source for great tutorials
ibm.com/developerWorks
This tutorial is particularly appropriate for someone who may be serving as the primary
sysadmin for the first time, since we cover a lot of low-level issues that all system
administrators should know. If you are new to Linux, we recommend that you start with Part 1
and work through the series from there. For some, much of this material will be new, but
more experienced Linux users may find this tutorial to be a great way of "rounding out" their
foundational Linux system administration skills and preparing for the next LPI certification
level.
By the end of this series of tutorials (eight in all covering the LPI 101 and 102 exams), you
will have the knowledge you need to become a Linux Systems Administrator and will be
ready to attain an LPIC Level 1 certification from the Linux Professional Institute if you so
choose.
If you have taken the release 1 version of this tutorial for reasons other than LPI exam
preparation, you probably don't need to take this one. However, if you do plan to take the
exams, you should strongly consider reading this revised tutorial.
Daniel Robbins lives in Albuquerque, New Mexico, and is the Chief Architect of Gentoo
Technologies, Inc., the creator of Gentoo Linux, an advanced Linux for the PC, and the
Portage system, a next-generation ports system for Linux. He has also served as a
contributing author for the Macmillan books Caldera OpenLinux Unleashed, SuSE Linux
Unleashed, and Samba Unleashed. Daniel has been involved with computers in some
fashion since the second grade, when he was first exposed to the Logo programming
language as well as a potentially dangerous dose of Pac Man. This probably explains why he
has since served as a Lead Graphic Artist at SONY Electronic Publishing/Psygnosis. Daniel
enjoys spending time with his wife, Mary, and their daughter, Hadassah.
Chris Houser, known to his friends as "Chouser," has been a UNIX proponent since 1994
when he joined the administration team for the computer science network at Taylor University in
Indiana, where he earned his Bachelor's degree in Computer Science and Mathematics.
Since then, he has gone on to work in Web application programming, user interface design,
professional video software support, and now Tru64 UNIX device driver programming at
Compaq. He has also contributed to various free software projects, most recently to Gentoo
Linux. He lives with his wife and two cats in New Hampshire.
Aron Griffis graduated from Taylor University with a degree in Computer Science and an
award that proclaimed him the "Future Founder of a Utopian UNIX Commune". Working
towards that goal, Aron is employed by Compaq writing network drivers for Tru64 UNIX, and
spending his spare time plunking out tunes on the piano or developing Gentoo Linux. He
lives with his wife Amy (also a UNIX engineer) in Nashua, NH.
To begin, I'll introduce "block devices." The most famous block device is probably the one
that represents the first IDE drive in a Linux system:
/dev/hda
If your system uses SCSI drives, then your first hard drive will be:
/dev/sda
Layers of abstraction
The block devices above represent an abstract interface to the disk. User programs can use
these block devices to interact with your disk without worrying about whether your disks are
IDE, SCSI, or something else. The program can simply address the storage on the disk as a
bunch of contiguous, randomly-accessible 512-byte blocks.
Partitions
Under Linux, we create filesystems by using a special command called mkfs (or mke2fs,
mkreiserfs, etc.), specifying a particular block device as a command-line argument.
However, although it is theoretically possible to use a "whole disk" block device (one that
represents the entire disk) like /dev/hda or /dev/sda to house a single filesystem, this is
almost never done in practice. Instead, full disk block devices are split up into smaller, more
manageable block devices called partitions. Partitions are created using a tool called fdisk,
which is used to create and edit the partition table that's stored on each disk. The partition
table defines exactly how to split up the full disk.
Introducing fdisk
We can take a look at a disk's partition table by running fdisk, specifying a block device
that represents a full disk as an argument.
Note: Alternate interfaces to the disk's partition table include cfdisk, parted, and
partimage. I recommend that you avoid using cfdisk (despite what the fdisk manual page
may say) because it sometimes calculates disk geometry incorrectly.
# fdisk /dev/hda
or
# fdisk /dev/sda
Important: You should not save or make any changes to a disk's partition table if any of its
partitions contain filesystems that are in use or contain important data. Doing so will
generally cause data on the disk to be lost.
Inside fdisk
Once in fdisk, you'll be greeted with a prompt that looks like this:
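The prompt itself is simply "Command (m for help):". Typing p at this prompt prints the disk's
partition table. The listing below is an illustrative example that matches the layout discussed
next; the geometry, cylinder numbers, and sizes are made up and will differ on your system:

Command (m for help): p

Disk /dev/hda: 255 heads, 63 sectors, 2491 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        14    112423+  83  Linux
/dev/hda2            15       140   1012095   83  Linux
/dev/hda3           141       264    996030   83  Linux
/dev/hda4           265      2491  17888877    5  Extended
/dev/hda5           265       329    522081   82  Linux swap
/dev/hda6           330       454   1004031   83  Linux
/dev/hda7           455       584   1044193+  83  Linux
/dev/hda8           585      1584   8032468+  83  Linux
/dev/hda9          1585      2491   7285446   83  Linux

Command (m for help):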
This particular disk is configured to house seven Linux filesystems (each with a
corresponding partition listed as "Linux") as well as a swap partition (listed as "Linux swap").
All partitions hda5 and higher are logical partitions. The numbers 1 through 4 are reserved
for primary or extended partitions.
In our example, hda1 through hda3 are primary partitions. hda4 is an extended partition that
contains logical partitions hda5 through hda9. You would never actually use /dev/hda4 for
storing any filesystems directly -- it simply acts as a container for partitions hda5 through
hda9.
Partition types
Also, notice that each partition has an "Id," also called a partition type. Whenever you create
a new partition, you should ensure that the partition type is set correctly. 83 is the correct
partition type for partitions that will be housing Linux filesystems, and 82 is the correct
partition type for Linux swap partitions. You set the partition type using the t option in fdisk.
The Linux kernel uses the partition type setting to auto-detect filesystems and swap devices
on the disk at boot-time.
Important: To follow these steps, you need to have a hard drive that does not contain any
important data, since these steps will erase the data on your disk. If this is all new to you,
you may want to consider just reading the steps, or using a Linux boot disk on a test system
so that no data will be at risk.
The first partition (/dev/hda1) at the beginning of the disk is a small partition called the boot partition. The boot
partition's purpose is to hold all the critical data related to booting -- GRUB boot loader
information (if you will be using GRUB) as well as your Linux kernel(s). The boot partition
gives us a safe place to store everything related to booting Linux. During normal day-to-day
Linux use, your boot partition should remain unmounted for safety. If you are setting up a
SCSI system, your boot partition will likely end up being /dev/sda1.
The second partition (/dev/hda2) is used for swap space. The kernel uses swap space as
virtual memory when RAM becomes low. This partition, relatively speaking, isn't very big
either, typically somewhere around 512 MB. If you're setting up a SCSI system, this partition
will likely end up being called /dev/sda2.
The third partition (/dev/hda3) is quite large and takes up the rest of the disk. This partition is
called our root partition and will be used to store the main Linux filesystem. On a SCSI
system, this partition would likely end up being /dev/sda3.
Getting started
Okay, now to create the partitions as in the example and table above. First, enter fdisk by
typing fdisk /dev/hda or fdisk /dev/sda, depending on whether you're using IDE or
SCSI. Then, type p to view your current partition configuration. Is there anything on the disk
that you need to keep? If so, stop now. If you continue with these directions, all existing data
on your disk will be erased.
Important: Following the instructions below will cause all prior data on your disk to be
erased! If there is anything on your drive, please be sure that it is non-critical
information that you don't mind losing. Also make sure that you have selected the
correct drive so that you don't mistakenly wipe data from the wrong drive.
To delete a partition, type d and then, when prompted, the number of the partition to delete
(for example, 1 for the first partition). The partition is then scheduled for deletion. It will no
longer show up if you type p, but it will not be erased until your changes have been saved. If
you made a mistake and want to abort without saving your changes, type q immediately and hit
<enter>, and your partition will not be deleted.
Now, assuming that you do indeed want to wipe out all the partitions on your system,
repeatedly type p to print out a partition listing and then type d and the number of the
partition to delete it. Eventually, you'll end up with a partition table with nothing in it:
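An empty table, with all partitions deleted, will look something like this (the geometry line is
illustrative):

Disk /dev/hda: 255 heads, 63 sectors, 2491 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System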
Next, create the new boot, swap, and root partitions described above using fdisk's n command,
entering appropriate start and end values for each, and use the t command to set the swap
partition's type to 82. When you then type p, you should see a partition printout reflecting the
new layout.
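For illustration, the resulting printout for the three-partition layout described earlier might
look something like this (the sizes and cylinder numbers are made up; your values will differ):

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1             1        14    112423+  83  Linux
/dev/hda2            15        81    538207+  82  Linux swap
/dev/hda3            82      2491  19358325   83  Linux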
Making it bootable
Finally, we need to set the "bootable" flag on our boot partition and then write our changes to
disk. To tag /dev/hda1 as a "bootable" partition, type a at the menu and then type 1 for the
partition number. If you type p now, you'll now see that /dev/hda1 has an "*" in the "Boot"
column. Now, let's write our changes to disk. To do this, type w and hit <enter>. Your disk
partitions are now properly configured for the installation of Linux.
Note: If fdisk instructs you to do so, please reboot to allow your system to detect the new
partition configuration.
While this is a common way to set up a Linux system, there is another approach that you
should be familiar with. This approach uses multiple partitions that house multiple filesystems
and are then "linked" together to form a cohesive filesystem tree. For example, it is common
to put /home and /var on their own filesystems.
We could have made hda2 into an extended rather than a primary partition. Then, we could
have created the hda5, hda6, and hda7 logical partitions (which would technically be
contained "inside" hda2), which would house the /, /home, and /var filesystems respectively.
You can learn more about these types of multi-filesystem configurations by studying the
resources listed on the next page.
Partitioning resources
For more information on partitioning, take a look at the following partitioning tips:
Creating filesystems
Now that the partitions have been created, it's time to set up filesystems on the boot and root
partitions so that they can be mounted and used to store data. We will also configure the
swap partition to serve as swap storage.
Linux supports a variety of different types of filesystems; each type has its strengths and
weaknesses and its own set of performance characteristics. We will cover the creation of
ext2, ext3, XFS, JFS, and ReiserFS filesystems in this tutorial. Before we create filesystems
on our example system, let's briefly review the various filesystems available under Linux.
We'll go into more detail on the filesystems later in the tutorial.
One of the nice things about ext3 is that an existing ext2 filesystem can be upgraded
"in-place" to ext3 quite easily. This allows for a seamless upgrade path for existing Linux
systems that are already using ext2.
Filesystem recommendations
If you're looking for the most rugged journaling filesystem, use ext3. If you're looking for a
good general-purpose high-performance filesystem with journaling support, use ReiserFS;
both ext3 and ReiserFS are mature, refined and recommended for general use.
Based on our example above, we will use the following commands to initialize all our
partitions for use:
# mke2fs -j /dev/hda1
# mkswap /dev/hda2
# mkreiserfs /dev/hda3
We chose ext3 for our /dev/hda1 boot partition because it is a robust journaling filesystem
supported by all major boot loaders. We used mkswap for our /dev/hda2 swap partition -- the
choice is obvious here. And for our main root filesystem on /dev/hda3 we chose ReiserFS,
since it is a solid journaling filesystem offering excellent performance. Now, go ahead and
initialize your partitions.
Making swap
mkswap is the command used to initialize swap partitions:
# mkswap /dev/hda2
Unlike regular filesystems, swap partitions aren't mounted. Instead, they are enabled using
the swapon command:
# swapon /dev/hdc6
Your Linux system's startup scripts will take care of automatically enabling your swap
partitions. Therefore, the swapon command is generally only needed when you need to
immediately add some swap that you just created. To view the swap devices currently
enabled, type cat /proc/swaps.
To create an ext2 filesystem, use the mke2fs command:
# mke2fs /dev/hda1
If you would like to use ext3, you can create ext3 filesystems using mke2fs -j:
# mke2fs -j /dev/hda3
Note: You can find out more about using ext3 under Linux 2.4 on this site maintained by
Andrew Morton.
To create a ReiserFS filesystem, use the mkreiserfs command:
# mkreiserfs /dev/hda3
To create an XFS filesystem, use the mkfs.xfs command:
# mkfs.xfs /dev/hda3
Note: You may want to add a couple of additional flags to the mkfs.xfs command: -d
agcount=3 -l size=32m. The -d agcount=3 option will lower the number of
allocation groups. XFS will insist on using at least one allocation group per 4GB of your
partition, so, for example, if you have a 20GB partition you will need a minimum agcount of
5. The -l size=32m option increases the journal size to 32MB, improving performance.
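Putting those flags together, an illustrative invocation for our example root partition would be:

# mkfs.xfs -d agcount=3 -l size=32m /dev/hda3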
To create a JFS filesystem, use the mkfs.jfs command:
# mkfs.jfs /dev/hda3
Mounting filesystems
Once the filesystem is created, we can mount it using the mount command:
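For example, to mount the ReiserFS filesystem we created on /dev/hda3 at the /mnt mountpoint
(the device and mountpoint simply follow the example above):

# mount /dev/hda3 /mnt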
To mount a filesystem, specify the partition block device as a first argument and a
"mountpoint" as a second argument. The new filesystem will be "grafted in" at the
mountpoint. This also has the effect of "hiding" any files that were in the /mnt directory on
the parent filesystem. Later, when the filesystem is unmounted, these files will reappear.
After executing the mount command, any files created or copied inside /mnt will be stored on
the new ReiserFS filesystem you mounted.
Let's say we wanted to mount our boot partition inside /mnt. We could do this by performing
the following steps:
# mkdir /mnt/boot
# mount /dev/hda1 /mnt/boot
Now, our boot filesystem is available inside /mnt/boot. If we create files inside /mnt/boot, they
will be stored on our ext3 filesystem that physically resides on /dev/hda1. If we create files
inside /mnt but not /mnt/boot, then they will be stored on our ReiserFS filesystem that resides
on /dev/hda3. And if we create files outside of /mnt, they will not be stored on either
filesystem but on the filesystem of our current Linux system or boot disk.
Mounting, continued
To see what filesystems are mounted, type mount by itself. Here is the output of mount on
one of our currently-running Linux systems, which has partitions configured identically to those
in the example above:
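The output will look something like the following; this listing is illustrative, and the exact
devices, types, and options on your system will differ:

/dev/root on / type reiserfs (rw,noatime)
none on /proc type proc (rw)
/dev/hda1 on /boot type ext3 (rw,noatime)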
You can also view similar information by typing cat /proc/mounts. The "root" filesystem,
/dev/hda3, gets mounted automatically by the kernel at boot-time and is given the symbolic
name /dev/root. On our system, both /dev/hda3 and /dev/root point to the same underlying
block device using a symbolic link:
# ls -l /dev/root
lr-xr-xr-x 1 root root 33 Mar 26 20:39 /dev/root -> ide/host0/bus0/target0/lun0/pa
# ls -l /dev/hda3
The mount command attempts to auto-detect the type of the filesystem being mounted.
Sometimes this may not work, and you will need to specify the to-be-mounted filesystem type
manually using the -t option.
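For example, assuming a ReiserFS filesystem on /dev/hdc6 (the device name here is just for
illustration), either of the following forms works:

# mount /dev/hdc6 /mnt -t reiserfs

or

# mount -t reiserfs /dev/hdc6 /mnt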
Mount options
It's also possible to customize various attributes of the to-be-mounted filesystem by
specifying mount options. For example, you can mount a filesystem as "read-only" by using
the "ro" option:
With /dev/hdc6 mounted read-only, no files can be modified in /mnt -- only read. If your
filesystem is already mounted "read/write" and you want to switch it to read-only mode, you
can use the "remount" option to avoid having to unmount and remount the filesystem again:
Notice that we didn't need to specify the partition block device because the filesystem is
already mounted and mount knows that /mnt is associated with /dev/hdc6. To make the
filesystem writeable again, we can remount it as read-write:
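Again, only the mountpoint needs to be given:

# mount /mnt -o remount,rw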
Note that these remount commands will not complete successfully if any process has opened
any files or directories in /mnt. To familiarize yourself with all the mount options available
under Linux, type man mount.
Introducing fstab
So far, we've seen how to partition an example disk and mount filesystems manually from a
boot disk. But once we get a Linux system installed, how do we configure that Linux system
to mount the right filesystems at the right time? For example, let's say that we installed
Gentoo Linux on our current example filesystem configuration. How would our system know
how to find the root filesystem on /dev/hda3? And if any other filesystems -- like swap --
needed to be mounted at boot time, how would it know which ones?
Well, the Linux kernel is told what root filesystem to use by the boot loader, and we'll take a
look at the Linux boot loaders later in this tutorial. But for everything else, your Linux system
has a file called /etc/fstab that tells it about what filesystems are available for mounting. Let's
take a look at it.
A sample fstab
Let's take a look at a sample /etc/fstab file:
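The sample below is illustrative; the exact devices and entries will differ from system to
system, but it contains all the entries discussed next:

# <fs>          <mountpoint>   <type>      <opts>             <dump/pass>
/dev/hda1       /boot          ext3        noauto,noatime     1 2
/dev/hda2       none           swap        sw                 0 0
/dev/hda3       /              reiserfs    noatime            0 1
none            /proc          proc        defaults           0 0
/dev/cdrom      /mnt/cdrom     iso9660     noauto,ro,user     0 0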
Look at the /dev/hda1 line; you'll see that /dev/hda1 is an ext3 filesystem that should be
mounted at the /boot mountpoint. Now, look at /dev/hda1's mount options in the <opts>
column. The noauto option tells the system to not mount /dev/hda1 automatically at boot
time; without this option, /dev/hda1 would be automatically mounted to /boot at system boot
time.
Also note the noatime option, which turns off the recording of atime (last access time)
information on the disk. This information is generally not needed, and turning off atime
updates has a positive effect on filesystem performance.
Now, take a look at the /proc line and notice the defaults option. Use defaults whenever
you want a filesystem to be mounted with just the standard mount options. Since /etc/fstab
has multiple fields, we can't simply leave the option field blank.
Also notice the /etc/fstab line for /dev/hda2. This line defines /dev/hda2 as a swap device.
Since swap devices aren't mounted like filesystems, none is specified in the mountpoint field.
Thanks to this /etc/fstab entry, our /dev/hda2 swap device will be enabled automatically when
the system starts up.
With an /etc/fstab entry for /dev/cdrom like the one above, mounting the CD-ROM drive
becomes easier. Instead of having to specify the device, filesystem type, and mountpoint every
time, we can simply type:
# mount /dev/cdrom
In fact, using /etc/fstab allows us to take advantage of the user option. The user mount
option tells the system to allow this particular filesystem to be mounted by any user. This
comes in handy for removable media devices like CD-ROM drives. Without this fstab mount
option, only the root user would be able to use the CD-ROM drive.
Unmounting filesystems
Generally, all mounted filesystems are unmounted automatically by the system when it is
rebooted or shut down. When a filesystem is unmounted, any cached filesystem data in
memory is flushed to the disk.
However, it's also possible to unmount filesystems manually. Before a filesystem can be
unmounted, you first need to ensure that there are no processes running that have open files
on the filesystem in question. Then, use the umount command, specifying either the device
name or mount point as an argument:
# umount /mnt
or
# umount /dev/hda3
Once unmounted, any files in /mnt that were "covered" by the previously-mounted filesystem
will now reappear.
Introducing fsck
If your system crashes or locks up for some reason, the system won't have an opportunity to
cleanly unmount your filesystems. When this happens, the filesystems are left in an
inconsistent (unpredictable) state. When the system reboots, the fsck program will detect
that the filesystems were not cleanly unmounted and will want to perform a consistency
check of filesystems listed in /etc/fstab.
Sometimes, you may find that after a reboot fsck is unable to fully repair a partially
damaged filesystem. In these instances, all you need to do is to bring your system down to
single-user mode and run fsck manually, supplying the partition block device as an
argument. As fsck performs its filesystem repairs, it may ask you whether to fix particular
filesystem defects. In general, you should say y (yes) to all these questions and allow fsck
to do its thing.
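For example, to manually check the root filesystem from our earlier example, you would run
something like the following (use the checker that matches your filesystem type, such as
e2fsck or reiserfsck, if plain fsck can't determine it on its own):

# fsck /dev/hda3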
On large filesystems, this kind of full consistency check can take a long time to complete. In
order to solve this problem, a new type of filesystem was designed, called a journaling
filesystem. Journaling filesystems record an on-disk log of recent changes to the filesystem
metadata. In the event of a crash, the filesystem driver inspects the log. Because the log
contains an accurate account of recent changes on disk, only these parts of the filesystem
metadata need to be checked for errors. Thanks to this important design difference, checking
a journalled filesystem for consistency typically takes just a matter of seconds, regardless of
filesystem size. For this reason, journaling filesystems are gaining popularity in the Linux
community. For more information on journaling filesystems, see the Advanced filesystem
implementor's guide, part 1: Journaling and ReiserFS.
Let's cover the major filesystems available for Linux, along with their associated commands
and options.
ext2
• In kernels: 2.0+
• journaling: no
• mkfs command: mke2fs
• mkfs example: mke2fs /dev/hdc7
• related commands: debugfs, tune2fs, chattr
• performance-related mount options: noatime
ext3
• In kernels: 2.4.16+
• journaling: metadata, ordered data writes, full metadata+data
• mkfs command: mke2fs -j
• mkfs example: mke2fs -j /dev/hdc7
• related commands: debugfs, tune2fs, chattr
• performance-related mount options: noatime
• other mount options:
• data=writeback (disable journaling)
• data=ordered (the default, metadata journaling and data is written out to disk with
metadata)
• data=journal (full data journaling for data and metadata integrity. Halves write
performance.)
• ext3 resources:
• Andrew Morton's ext3 page
• Andrew Morton's excellent ext3 usage documentation (recommended)
• Advanced filesystem implementor's guide, part 7: Introducing ext3
• Advanced filesystem implementor's guide, part 8: Surprises in ext3
ReiserFS
• journaling: metadata
• mkfs command: mkreiserfs
• mkfs example: mkreiserfs /dev/hdc7
• performance-related mount options: noatime, notail
• ReiserFS Resources:
• The home of ReiserFS
• Advanced filesystem implementor's guide, part 1: Journaling and ReiserFS
• Advanced filesystem implementor's guide, part 2: Using ReiserFS and Linux 2.4
JFS
• In kernels: 2.4.20+
• journaling: metadata
• mkfs command: mkfs.jfs
• mkfs example: mkfs.jfs /dev/hdc7
• performance-related mount options: noatime
• JFS Resources: the JFS project Web site (IBM)
VFAT
The VFAT filesystem isn't really a filesystem that you would choose for storing Linux files.
Instead, it's a DOS-compatible filesystem driver that allows you to mount and exchange data
with DOS and Windows FAT-based filesystems. The VFAT filesystem driver is present in the
standard Linux kernel.
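For example, to mount a FAT partition using the VFAT driver (the device name and mountpoint
here are hypothetical; create the mountpoint directory first):

# mount -t vfat /dev/hda1 /mnt/win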
The MBR
The boot process is similar for all machines, regardless of which distribution is installed.
Consider the following example hard disk:
+----------------+
| MBR |
+----------------+
| Partition 1: |
| Linux root (/) |
| containing |
| kernel and |
| system. |
+----------------+
| Partition 2: |
| Linux swap |
+----------------+
| Partition 3: |
| Windows 3.0 |
| (last booted |
| in 1992) |
+----------------+
First, the computer's BIOS reads the first few sectors of your hard disk. These sectors
contain a very small program, called the "Master Boot Record," or "MBR." The MBR has
stored the location of the Linux kernel on the hard disk (partition 1 in the example above), so
it loads the kernel into memory and starts it.
The first line printed by the kernel when it starts running shows the kernel version, followed by
the identification of the user that built the kernel (usually root), the compiler that built it, and
the timestamp when it was built.
Following that line is a whole slew of output from the kernel regarding the hardware in your
system: the processor, PCI bus, disk controller, disks, serial ports, floppy drive, USB devices,
network adapters, sound cards, and possibly others will each in turn report their status.
/sbin/init
When the kernel finishes loading, it starts a program called init. This program remains
running until the system is shut down. It is always assigned process ID 1, as you can see:
$ ps --pid 1
PID TTY TIME CMD
1 ? 00:00:04 init.system
The init program boots the rest of your distribution by running a series of scripts. These
scripts typically live in /etc/rc.d/init.d or /etc/init.d, and they perform services such as setting
the system's hostname, checking the filesystem for errors, mounting additional filesystems,
enabling networking, starting print services, etc. When the scripts complete, init starts a
program called getty which displays the login prompt, and you're good to go!
Of the two common Linux boot loaders, LILO and GRUB, LILO is the older and more widely used. LILO's presence on your system
is reported at boot, with the short "LILO boot:" prompt. Note that you may need to hold down
the shift key during boot to get the prompt, since often a system is configured to whiz straight
through without stopping.
There's not much fanfare at the LILO prompt, but if you press the <tab> key, you'll be
presented with a list of potential kernels (or other operating systems) to boot. Often there's
only one in the list. You can boot one of them by typing it and pressing <enter>. Alternatively
you can simply press <enter> and the first item on the list will boot by default.
Using LILO
Occasionally you may want to pass an option to the kernel at boot time. Some of the more
common options are root= to specify an alternative root filesystem, init= to specify an
alternative init program (such as init=/bin/sh to rescue a misconfigured system), and
mem= to specify the amount of memory in the system (for example mem=512M in the case
that Linux only autodetects 128M). You could pass these to the kernel at the LILO boot
prompt:
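For example, assuming your lilo.conf defines a kernel image labeled "linux" (the label and the
specific option values here are illustrative):

LILO boot: linux root=/dev/hda3 init=/bin/sh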
If you need to regularly specify command-line options, you might consider adding them to
/etc/lilo.conf. The format of that file is described in the lilo.conf(5) man-page.
Before moving on to GRUB, there is an important gotcha to LILO. Whenever you make
changes to /etc/lilo.conf, or whenever you install a new kernel, you must run lilo. The lilo
program rewrites the MBR to reflect the changes you made, including recording the absolute
disk location of the kernel. The example here makes use of the -v flag for verboseness:
# lilo -v
LILO version 21.4-4, Copyright (C) 1992-1998 Werner Almesberger
'lba32' extensions Copyright (C) 1999,2000 John Coffman
GRUB is usually installed with the grub-install command. Once installed, GRUB's menu
is administered by editing the file /boot/grub/grub.conf. Both of these tasks are beyond the
scope of this document; you should read the GRUB info pages before attempting to install or
administer GRUB.
Using GRUB
To give parameters to the kernel, you can press e at the boot menu. This provides you with
the opportunity to edit (by again pressing e) either the name of the kernel to load or the
parameters passed to it. When you're finished editing, press <enter> then b to boot with your
changes.
A significant difference between LILO and GRUB that bears mentioning is that GRUB does
not need to re-install its boot loader each time the configuration changes or a new kernel is
installed. This is because GRUB understands the Linux filesystem, whereas LILO just stores
the absolute disk location of the kernel to load. This single fact about GRUB alleviates the
frustration system administrators feel when they forget to type lilo after installing a new
kernel!
dmesg
The boot messages from the kernel and init scripts typically scroll by quickly. You might
notice an error, but it's gone before you can properly read it. In that case, there are two
places you can look after the system boots to see what went wrong (and hopefully get an
idea how to fix it).
If the error occurred while the kernel was loading or probing hardware devices, you can
retrieve a copy of the kernel's log using the dmesg command:
# dmesg | head -1
Linux version 2.4.16 ([email protected]) (gcc version 2.95.3 20010315 (release)) #1
Hey, we recognize that line! It's the first line the kernel prints when it loads. Indeed, if you
pipe the output of dmesg into a pager, you can view all of the messages the kernel printed on
boot, plus any messages the kernel has printed to the console in the meantime.
/var/log/messages
The second place to look for information is in the /var/log/messages file. This file is recorded
by the syslog daemon, which accepts input from libraries, daemons, and the kernel. Each
line in the messages file is timestamped. This file is a good place to look for errors that
occurred during the init scripts stage of booting. For example, to see the last few messages
from the nameserver:
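A pipeline along these lines does the job (the daemon name you grep for will vary):

# grep named /var/log/messages | tail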
Additional information
Additional information related to this section can be found here:
Section 4. Runlevels
Single-user mode
Recall from the section regarding boot loaders that it's possible to pass parameters to the
kernel when it boots. One of the most often used parameters is s, which causes the system
to start in "single-user" mode. This mode usually mounts only the root filesystem, starts a
minimal subset of the init scripts, and starts a shell rather than providing a login prompt.
Additionally, networking is not configured, so there is no chance of external factors affecting
your work.
Runlevels
In fact, it's not actually necessary to reboot in order to reach single-user mode. The init
program manages the current mode, or "runlevel," for the system. The standard runlevels for
a Linux system are defined as follows:
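The exact meaning of each runlevel varies somewhat between distributions, but a typical
SysV-style layout looks like this:

0   halt the system
1   single-user mode
2   multi-user mode, without some network services on certain distributions
3   full multi-user mode (text-mode login; the default on many systems)
4   unused or custom
5   full multi-user mode with a graphical login (on many distributions)
6   reboot the system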
telinit
To change to single-user mode, use the telinit command, which instructs init to change
runlevels:
# telinit 1
You can see from the table above that you can also shut down or reboot the system in this
manner. telinit 0 will halt the computer; telinit 6 will reboot the computer. When you
issue the telinit command to change runlevels, a subset of the init scripts will run to either
shut down or start up system services.
Runlevel etiquette
However, note that this is rather rude if there are users on the system at the time (who may
be quite angry with you). The shutdown command provides a method for changing runlevels
in a way that treats users reasonably. Similarly to the kill command's ability to send a
variety of signals to a process, shutdown can be used to halt, reboot, or change to
single-user mode. For example, to change to single-user mode in 5 minutes:
# shutdown 5
Broadcast message from root (pts/2) (Tue Jan 15 19:40:02 2002):
The system is going DOWN to maintenance mode in 5 minutes!
If you press control-c at this point, you can cancel the pending switch to single-user mode.
The message above would appear on all terminals on the system, so users have a
reasonable amount of time to save their work and log off. (Some might argue whether or not
5 minutes is "reasonable.")
The -r option reboots the system rather than switching to single-user mode. To reboot
immediately:
# shutdown -r now
No chance to hit control-c in this case; the system is already on its way down. Finally, the -h
option halts the system:
# shutdown -h 1
Broadcast message from root (pts/2) (Tue Jan 15 19:50:58 2002):
The system is going DOWN for system halt in 1 minute!
On my system, runlevel 3 is the default runlevel, set by the initdefault line in /etc/inittab. It
can be useful to change this value if you prefer your system to boot immediately into a graphical
login (usually runlevel 4 or 5). To do so, simply edit /etc/inittab and change the value on that
line. But be careful! If you change it to something invalid, you'll probably have to employ the
init=/bin/sh trick we mentioned earlier.
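On a system defaulting to runlevel 3, the initdefault line in /etc/inittab looks like this:

id:3:initdefault: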
Additional information
Additional information related to this section can be found at:
Kernel support
Quotas are a feature of the filesystem; therefore, they require kernel support. The first thing
you'll need to do is verify that you have quota support in your kernel. You can do this using
grep:
# cd /usr/src/linux
# grep -i quota .config
CONFIG_QUOTA=y
CONFIG_XFS_QUOTA=y
If this command returns something less conclusive (such as CONFIG_QUOTA is not set)
then you should rebuild your kernel to include quota support. This is not a difficult process,
but is outside of the scope of this section of the tutorial. If you're unfamiliar with the steps to
build and install a new kernel, you might consider referencing this tutorial.
Filesystem support
Before diving into the administration of quotas, please note that quota support on Linux as of
the 2.4.x kernel series is not complete. There are currently problems with quotas in the ext2
and ext3 filesystems, and ReiserFS does not appear to support quotas at all. This tutorial
bases its examples on XFS, which seems to properly support quotas.
Configuring quotas
To begin configuring quotas on your system, you should edit /etc/fstab to mount the affected
filesystems with quotas enabled. For our example, we use an XFS filesystem mounted with
user and group quotas enabled:
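An illustrative /etc/fstab entry for the /dev/hdc1 filesystem used in the examples below might
look like this (the exact quota option names can vary with your kernel version):

/dev/hdc1    /usr/users    xfs    usrquota,grpquota    0 0

With the filesystem mounted using these options, quota accounting can be switched on with the
quotaon command: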
# quotaon /usr/users
There is a corresponding quotaoff command should you desire to disable quotas in the
future:
# quotaoff /usr/users
But for the moment, if you're trying some of the examples in this tutorial, be sure to have
quotas enabled.
# quota -v
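The output will look something like this (the usage numbers here are illustrative):

Disk quotas for user root (uid 0):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/hdc1    2820       0       0               9       0       0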
The first column, blocks, shows how much disk space the root user is currently using on
each filesystem listed. The following columns, quota and limit, refer to the limits currently in
place for disk space. We will explain the difference between quota and limit, and the meaning
of the grace column later on. The files column shows how many files the root user owns on
the particular filesystem. The following quota and limit columns refer to the limits for files.
Viewing quota
Any user can use the quota command to view their own quota report as shown in the
previous example. However only the root user can look at the quotas for other users and
groups. For example, say we have a filesystem, /dev/hdc1 mounted on /usr/users, with two
users: jane and john. First, let's look at jane's disk usage and limits.
# quota -v jane
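Illustrative output (jane's uid and usage numbers are made up):

Disk quotas for user jane (uid 1003):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/hdc1    2594       0       0               5       0       0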
In this example, we see that jane's quotas are set to zero, which indicates no limit.
edquota
Now let's say we want to give the user jane a quota. We do this with the edquota command.
Before we start editing quotas, let's see how much space we have available on /usr/users:
# df /usr/users
This isn't a particularly large filesystem, only 600MB or so. It seems prudent to give jane a
quota so that she can't use more than her fair share. When you run edquota, a temporary
file is created for each user or group you specify on the command line.
edquota, continued
The edquota command puts you in an editor, which enables you to add and/or modify
quotas via this temporary file.
# edquota jane
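The temporary file opened in your editor will look something like the following (the format
varies slightly between quota tool versions, and the usage numbers are illustrative):

Disk quotas for user jane (uid 1003):
  Filesystem        blocks     soft     hard   inodes    soft    hard
  /dev/hdc1           2594        0        0        5       0       0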
Similar to the output from the quota command above, the blocks and inodes columns in this
temporary file refer to the disk space and number of files jane is currently using. You cannot
modify the number of blocks or inodes; any attempt to do so will be summarily discarded by
the system. The soft and hard columns show jane's quota, which we can see is currently
unlimited (again, zero indicates no quota).
Understanding edquota
The soft limit is the maximum amount of disk usage that jane has allocated to her on the
filesystem (in other words, her quota). If jane uses more disk space than is allocated in her
soft limit, she will be issued warnings about her quota violation via e-mail. The hard limit
indicates the absolute limit on disk usage, which a user can't exceed. If jane tries to use more
disk space than is specified in the hard limit, she will get a "Disk quota exceeded" error and
will not be able to complete the operation.
Making changes
So here we change jane's soft and hard limits and save the file:
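For example, we might edit the file to read as follows (the limits chosen here are purely
illustrative):

Disk quotas for user jane (uid 1003):
  Filesystem        blocks     soft     hard   inodes    soft    hard
  /dev/hdc1           2594    10000    11500        5    2000    2500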
# quota jane
Copying quotas
You'll remember that we also have another user, john, on this filesystem. If we want to give
john the same quota as jane, we can use the -p option to edquota, which uses jane's
quotas as a prototype for all following users on the command line. This is an easy way to set
up quotas for groups of users.
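For example, to copy jane's quotas to john (the user names follow our example):

# edquota -p jane john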
Group restrictions
We can also use edquota to restrict the allocation of disk space based on the group
ownership of files. For example, to edit the quotas for the users group:
# edquota -g users
Disk quotas for group users (gid 100):
  Filesystem        blocks     soft     hard   inodes    soft    hard
  /dev/hdc1           4100   500000   510000        7  100000  125000
Then to view the modified quotas for the users group:
# quota -g users
Disk quotas for group users (gid 100):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/hdc1    4100  500000  510000               7  100000  125000
Repquota options
There are a couple of other options to repquota that are worth mentioning. repquota -a
will report on all currently mounted read-write filesystems that have quotas enabled.
repquota -n will not resolve uids and gids to names. This can speed up the output for
large lists.
Monitoring quotas
If you are a system administrator, you will want to have a way to monitor quotas to ensure
that they are not being exceeded. An easy way to do this is to use warnquota. The
warnquota command sends e-mail to users who have exceeded their soft limit. Typically
warnquota is run as a cron-job.
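For example, a crontab entry along these lines would check quotas every morning (the path to
warnquota is an assumption and may differ on your system):

0 6 * * * /usr/sbin/warnquota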
When a user exceeds his or her soft limit, the grace column in the output from the quota
command will indicate the grace period -- how long before the soft limit is enforced for that
filesystem.
By default, the grace period for blocks and inodes is seven days.
# edquota -t
This puts you in an editor of a temporary file that looks like this:
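The file will look something like the following (the exact wording varies between quota tool
versions):

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem         Block grace period     Inode grace period
  /dev/hdc1                7days                  7days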
The text in the file is nicely explanatory. Be sure to leave your users enough time to receive
their warning e-mail and find some files to delete!
Also remember what we mentioned previously regarding quotaon and quotaoff. You
should incorporate quotaon into your boot script so that quotas are enabled. To enable
quotas on all filesystems where quotas are supported, use the -a option:
# quotaon -a
Reading logs
Let's jump right in and look at the contents of a syslog-recorded log file. Afterward, we can
come back to syslog configuration. The FHS (see Part 2 of this tutorial series) mandates that
log files be placed in /var/log. Here we use the tail command to display the last 10 lines in
the "messages" file:
# cd /var/log
# tail messages
Jan 12 20:17:39 bilbo init: Entering runlevel: 3
Jan 12 20:17:40 bilbo /usr/sbin/named[337]: starting BIND 9.1.3
Jan 12 20:17:40 bilbo /usr/sbin/named[337]: using 1 CPU
Jan 12 20:17:41 bilbo /usr/sbin/named[350]: loading configuration from '/etc/bind/named.
Jan 12 20:17:41 bilbo /usr/sbin/named[350]: no IPv6 interfaces found
Jan 12 20:17:41 bilbo /usr/sbin/named[350]: listening on IPv4 interface lo, 127.0.0.1#53
Jan 12 20:17:41 bilbo /usr/sbin/named[350]: listening on IPv4 interface eth0, 10.0.0.1#5
Jan 12 20:17:41 bilbo /usr/sbin/named[350]: running
Jan 12 20:41:58 bilbo gnome-name-server[11288]: starting
Jan 12 20:41:58 bilbo gnome-name-server[11288]: name server starting
You may remember from the text-processing whirlwind that the tail command displays the
last lines in a file. In this case, we can see that the nameserver named was recently started
on this system, which is named bilbo. If we were deploying IPv6, we might notice that
named found no IPv6 interfaces, indicating a potential problem. Additionally, we can see that
a user may have recently started GNOME, indicated by the presence of
gnome-name-server.
You can use tail's -f option to "follow" a log file, displaying new lines as they are appended:
# tail -f /var/log/messages
For example, in the case of debugging our theoretical IPv6 problem, running the above
command in one terminal while stopping and starting named would immediately display the
messages from that daemon. This can be a useful technique when debugging. Some
administrators even like to keep a constantly running tail -f messages in a terminal
where they can keep an eye on system events.
Grepping logs
Another useful technique is to search a log file using the grep utility, described in Part 2 of
this tutorial series. In the above case, we might use grep to find where "named" behavior
has changed:
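A simple invocation like this one works (pipe it through tail or a pager if the file is large):

# grep named /var/log/messages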
Log overview
The following summarizes the log files typically found in /var/log and maintained by syslog:
• messages: Informational and error messages from general system programs and
daemons.
• secure: Authentication messages and errors, kept separate from "messages" for extra
security.
• maillog: Mail-related messages and errors.
• cron: Cron-related messages and errors.
• spooler: UUCP and news-related messages and errors.
syslog.conf
As a matter of fact, now would be a good time to investigate the syslog configuration file,
/etc/syslog.conf. (Note: If you don't have syslog.conf, keep reading for the sake of
information, but you may be using an alternative syslog daemon.) Browsing that file, we see
there are entries for each of the common log files mentioned above, plus possibly some
other entries. The file has the format facility.priority action, where those fields are
defined as follows:
syslog.conf, continued
facility
Specifies the subsystem that produced the message. The valid keywords for facility are auth,
authpriv, cron, daemon, kern, lpr, mail, news, syslog, user, uucp and local0 through local7.
priority
Specifies the minimum severity of the message, meaning that messages of this priority and
higher will be matched by this rule. The valid keywords for priority are debug, info, notice,
warning, err, crit, alert, and emerg.
action
The action field should be either a filename, a tty (such as /dev/console), a remote machine
prefixed by @, a comma-separated list of users, or * to send the message to everybody logged
on. The most common action is a simple filename.
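Putting the three fields together, a few illustrative syslog.conf lines might look like this
(actual defaults vary between distributions):

*.info;mail.none;authpriv.none          /var/log/messages
authpriv.*                              /var/log/secure
mail.*                                  /var/log/maillog
cron.*                                  /var/log/cron
*.emerg                                 *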
Note that you need to inform the syslog daemon of changes to the configuration file before
they are put into effect. Sending it a SIGHUP is the right method, and you can use the
killall command to do this easily:
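For example:

# killall -HUP syslogd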
A security note
You should be aware that the log files written to by syslogd will be created by the program if
they don't exist. Regardless of your current umask setting, the files will be created
world-readable. If you're concerned about security, you should chmod the files to be
read-write by root only. Additionally, the logrotate program (described below) can be
configured to create new log files with the appropriate permissions. The syslog daemon
always preserves the current attributes of an existing log file, so you don't need to worry
about it once the file is created.
logrotate
The log files in /var/log will grow over time, and potentially could fill the filesystem. It is
advisable to employ a program such as logrotate to manage the automatic archiving of
the logs. The logrotate program usually runs as a daily cron job, and can be configured to
rotate, compress, remove, or mail the log files.
For example, a default configuration of logrotate might rotate the logs weekly, keeping 4
weeks worth of backlogs (by appending a sequence number to the filename), and compress
the backlogs to save space. Additionally, the program can be configured to deliver a SIGHUP
to syslogd so that the daemon will notice the now-empty log files and append to them
appropriately.
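A logrotate configuration stanza implementing that policy might look something like this; the
log file path and the killall location are assumptions, so adapt them to your system:

/var/log/messages {
    weekly
    rotate 4
    compress
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}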
For more information on logrotate, see the logrotate(8) man page, which contains a
description of the program and the syntax of the configuration file.
First, the syslog daemon is actually part of the sysklogd package, which contains a second
daemon called klogd. It's klogd's job to receive information and error messages from the
kernel, and pass them on to syslogd for categorization and logging. The messages received
by klogd are exactly the same as those you can retrieve using the dmesg command. The
difference is that dmesg prints the current contents of a ring buffer in the kernel, whereas
klogd is passing the messages to syslogd so that they won't be lost when the ring wraps
around.
Third, you can log messages in your scripts using the logger command. See the logger(1)
man page for more information.
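For example, a script could record a status message with a command like this (the tag and
message text are, of course, up to you):

$ logger -t mybackup "nightly backup completed"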
We didn't have quite enough room to cover the important topic of system backups in this
tutorial. Fortunately, IBM developerWorks already has a tutorial on this subject, called
Backing up your Linux machines. In this tutorial, you'll learn how to back up Linux systems
using a tar variant called star. You'll also learn how to use the mt command to control
tape functions.
The second topic that we weren't quite able to fit in was periodic scheduling. Fortunately,
there's some good cron documentation available at Indiana University. cron is used to
schedule jobs to be executed at a specific time, and is an important tool for any system
administrator.
On the next page, you'll find a number of resources that you will find helpful in learning more
about the subjects presented in this tutorial.
Resources
To find out more about quota support under Linux, be sure to check out the Linux Quota
mini-HOWTO . Also be sure to consult the quota(1), edquota(8), repquota(8), quotacheck(8),
and quotaon(8) man pages on your system.
Additional information on the system boot process and boot loaders can be found at:
• IBM developerWorks' Getting to know GRUB tutorial
• LILO Mini-HOWTO
• GRUB home
• Kernel command-line options in /usr/src/linux/Documentation/kernel-parameters.txt
• Sysvinit docs at Redhat
To learn more about Linux filesystems, read the multi-part advanced filesystem
implementor's guide on the IBM developerWorks Linux zone, covering:
• The benefits of journalling and ReiserFS (Part 1)
• Setting up a ReiserFS system (Part 2)
• Using the tmpfs virtual memory filesystem and bind mounts (Part 3)
• The benefits of devfs, the device management filesystem (Part 4)
• Beginning the conversion to devfs (Part 5)
• Completing the conversion to devfs using an init wrapper (Part 6)
• The benefits of the ext3 filesystem (Part 7)
For more information on partitioning, take a look at the following partitioning tips on the IBM
developerWorks Linux zone:
• Partition planning tips
• Partitioning in action: consolidating data
• Partitioning in action: moving /home
ReiserFS Resources:
• The home of ReiserFS
• Advanced filesystem implementor's guide, Part 1: Journalling and ReiserFS on
developerWorks
• Advanced filesystem implementor's guide, Part 2: Using ReiserFS and Linux 2.4 on
developerWorks
ext3 resources:
• Andrew Morton's ext3 page
• Andrew Morton's excellent ext3 usage documentation (recommended)
Don't forget linuxdoc.org. You'll find linuxdoc's collection of guides, HOWTOs, FAQs, and
man pages to be invaluable. Be sure to check out Linux Gazette and LinuxFocus as well.
The Linux System Administrators guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, Daniel shows you how to use
bash programming constructs to write your own bash scripts. This bash series (particularly
Parts 1 and 2) will be excellent additional preparation for the LPIC Level 1 exam:
• Bash by example, part 1: Fundamental programming in the Bourne-again shell
• Bash by example, part 2: More bash programming fundamentals
• Bash by example, part 3: Exploring the ebuild system
We highly recommend the Technical FAQ for Linux Users by Mark Chapman, a 50-page
in-depth list of frequently-asked Linux questions, along with detailed answers. The FAQ itself
is in PDF (Adobe Acrobat) format. If you're a beginning or intermediate Linux user, you really
owe it to yourself to check this FAQ out. We also recommend the Linux glossary for Linux users.
If you're not familiar with the vi editor, we strongly recommend that you check out Daniel's Vi
intro -- the cheat sheet method tutorial. This tutorial will give you a gentle yet fast-paced
introduction to this powerful text editor. Consider this must-read material if you don't know
how to use vi.
Feedback
Please send any tutorial feedback you may have to the authors:
• Daniel Robbins, at [email protected]
• Chris Houser, at [email protected]
• Aron Griffis, at [email protected]
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This particular tutorial (Part 2) is ideal for those who have a good basic knowledge of bash
and want to receive a solid introduction to basic Linux administration tasks. If you are new to
Linux, we recommend that you complete Part 1 of this tutorial series first before continuing.
For some, much of this material will be new, but more experienced Linux users may find this
tutorial to be a great way of "rounding out" their basic Linux administration skills.
By the end of this tutorial, you'll have a solid grounding in basic Linux administration and will
be ready to begin learning some more advanced Linux system administration skills. By the
end of this series of tutorials (eight in all), you'll have the knowledge you need to become a
Linux Systems Administrator and will be ready to attain an LPIC Level 1 certification from the
Linux Professional Institute if you so choose.
If you have taken the release 1 version of this tutorial for reasons other than LPI exam
preparation, you probably don't need to take this one. However, if you do plan to take the
exams, you should strongly consider reading this revised tutorial.
Residing in Albuquerque, New Mexico, Daniel Robbins is the Chief Architect of Gentoo
Linux, an advanced ports-based Linux metadistribution. Besides writing articles, tutorials, and
tips for the developerWorks Linux zone and Intel Developer Services, he has also served as
a contributing author for several books, including Samba Unleashed and SuSE Linux
Unleashed. Daniel enjoys spending time with his wife, Mary, and his daughter, Hadassah.
You can contact Daniel at [email protected].
Chris Houser, known to his friends as "Chouser," has been a UNIX proponent since 1994
when he joined the administration team for the computer science network at Taylor
University in Indiana, where he earned his Bachelor's degree in Computer Science and
Mathematics. Since then, he has gone on to work in Web application programming, user
interface design, professional video software support, and now Tru64 UNIX device driver
programming at Compaq. He has also contributed to various free software projects, most
recently to Gentoo Linux. He lives with his wife and two cats in New Hampshire. You can
contact Chris at [email protected].
Aron Griffis graduated from Taylor University with a degree in Computer Science and an
award that proclaimed, "Future Founder of a Utopian UNIX Commune." Working towards that
goal, Aron is employed by Compaq writing network drivers for Tru64 UNIX, and spending his
spare time plunking out tunes on the piano or developing Gentoo Linux. He lives with his wife
Amy (also a UNIX engineer) in Nashua, New Hampshire.
Glob comparison
As we take a look at regular expressions, you may find that regular expression syntax looks
similar to the filename "globbing" syntax that we looked at in Part 1. However, don't let this
fool you; their similarity is only skin deep. Both regular expressions and filename globbing
patterns, while they may look similar, are fundamentally different beasts.
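Here's the simplest kind of regular expression, a fixed substring, used with grep (the output
line shown is illustrative; your /etc/passwd will contain different entries):

$ grep bash /etc/passwd
drobbins:x:1000:1000:Daniel Robbins:/home/drobbins:/bin/bash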
Above, the first parameter to grep is a regex; the second is a filename. Grep read each line
in /etc/passwd and applied the simple substring regex bash to it, looking for a match. If a
match was found, grep printed out the entire line; otherwise, the line was ignored.
Metacharacters
With regular expressions, you can perform much more complex searches than the examples
we've looked at so far by taking advantage of metacharacters. One of these metacharacters
is the . (a period), which matches any single character:
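For example (the matching lines shown here come from an illustrative /etc/fstab; yours will
differ):

$ grep dev.hda /etc/fstab
/dev/hda1       /boot          ext3        noauto,noatime     1 2
/dev/hda2       none           swap        sw                 0 0
/dev/hda3       /              reiserfs    noatime            0 1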
In this example, the literal text dev.hda didn't appear on any of the lines in /etc/fstab.
However, grep wasn't scanning them for the literal dev.hda string, but for the dev.hda
pattern. Remember that the . will match any single character. As you can see, the .
metacharacter is functionally equivalent to how the ? metacharacter works in "glob"
expansions.
Using []
If we wanted to match a character a bit more specifically than ., we could use [ and ]
(square brackets) to specify a subset of characters that should be matched:
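For example, to match dev/hda1 or dev/hda2 but not dev/hda3 (illustrative output again):

$ grep dev.hda[12] /etc/fstab
/dev/hda1       /boot          ext3        noauto,noatime     1 2
/dev/hda2       none           swap        sw                 0 0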
As you can see, this particular syntactical feature works identically to the [] in "glob"
filename expansions. Again, this is one of the tricky things about learning regular
expressions -- the syntax is similar but not identical to "glob" filename expansion syntax,
which often makes regexes a bit confusing to learn.
Using [^]
You can reverse the meaning of the square brackets by putting a ^ immediately after the [.
In this case, the brackets will match any character that is not listed inside the brackets.
Again, note that we use [^] with regular expressions, but [!] with globs:
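For example (illustrative output):

$ grep dev.hda[^12] /etc/fstab
/dev/hda3       /              reiserfs    noatime            0 1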
Differing syntax
It's important to note that the syntax inside square brackets is fundamentally different from
that in other parts of the regular expression. For example, if you put a . inside square
brackets, it allows the square brackets to match a literal ., just like the 1 and 2 in the
examples above. In comparison, a literal . outside the square brackets is interpreted as a
metacharacter unless prefixed by a \. We can take advantage of this fact to print a list of all
lines in /etc/fstab that contain the literal string dev.hda by typing:
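Either of the following should work; the first puts the . inside square brackets, and the second
escapes it with a backslash (the single quotes keep the shell from touching the backslash):

$ grep dev[.]hda /etc/fstab
$ grep 'dev\.hda' /etc/fstab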
Neither regular expression is likely to match any lines in your /etc/fstab file.
The next metacharacter is the * (asterisk), which matches zero or more repeated occurrences of
the previous character or expression. Here are some examples of how * behaves in a regex,
with comparisons to how the same pattern would behave as a glob:
• ab*c matches abbbbc but not abqc (if a glob, it would match both strings -- can you
figure out why?)
• ab*c matches abc but not abbqbbc (again, if a glob, it would match both strings)
• ab*c matches ac but not cba (if a glob, ac would not be matched, nor would cba)
• b[cq]*e matches bqe and be (if a glob, it would match bqe but not be)
• b[cq]*e matches bccqqe but not bccc (if a glob, it would match the first but not the
second as well)
• b[cq]*e matches bqqcce but not cqe (if a glob, it would match the first but not the
second as well)
• b[cq]*e matches bbbeee (this would not be the case with a glob)
• .* will match any string. (if a glob, it would match any string starting with .)
• foo.* will match any string that begins with foo (if a glob, it would match any string
starting with the four literal characters foo..)
Now, for a quick brain-twisting review: the line ac matches the regex ab*c because the
asterisk also allows the preceding expression (c) to appear zero times. Again, it's critical to
note that the * regex metacharacter is interpreted in a fundamentally different way than the *
glob character.
Beginning and end of line
The last regex metacharacters we will look at are ^ and $, which anchor a match to the
beginning and end of a line, respectively. For example, using a ^ at the start of your regex
matches your pattern only at the beginning of a line. In the following example, the ^# regex
matches any line in /etc/fstab that begins with the # character:
$ grep ^# /etc/fstab
# /etc/fstab: static file system information.
#
Full-line regexes
^ and $ can be combined to match an entire line. For example, the following regex will match
a line that starts with the # character and ends with the . character, with any number of other
characters in between:
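For example (the matching line shown comes from an illustrative /etc/fstab):

$ grep '^#.*\.$' /etc/fstab
# /etc/fstab: static file system information.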
In the above example, we surrounded our regular expression with single quotes to prevent $
from being interpreted by the shell. Without the single quotes, the $ will disappear from our
regex before grep even has a chance to take a look at it.
The following grid summarizes the four possible combinations, with examples of directories
that would fall into those categories. Again, this table is straight from the FHS specification:
+---------+-----------------+-------------+
| | shareable | unshareable |
+---------+-----------------+-------------+
|static | /usr | /etc |
| | /opt | /boot |
+---------+-----------------+-------------+
|variable | /var/mail | /var/run |
| | /var/spool/news | /var/lock |
+---------+-----------------+-------------+
/usr, for example, might be mounted over the network from another host (shareable) or
mounted from a CD-ROM (static). Most Linux setups don't make use of sharing /usr, but it's
valuable to understand the usefulness of distinguishing between the primary hierarchy at the
root directory and the secondary hierarchy at /usr.
This is all we'll say about the Filesystem Hierarchy Standard. The document itself is quite
readable, so you should go take a look at it. You'll understand a lot more about the Linux
filesystem if you read it. Find it at http://www.pathname.com/fhs/.
Finding files
Linux systems often contain hundreds of thousands of files. Perhaps you are savvy enough
to never lose track of any of them, but it's more likely that you will occasionally need help
finding one. There are a few different tools on Linux for finding files. This introduction will
help you choose the right tool for the job.
The PATH
When you run a program at the command line, bash actually searches through a list of
directories to find the program you requested. For example, when you type ls, bash doesn't
intrinsically know that the ls program lives in /usr/bin. Instead, bash refers to an
environment variable called PATH, which is a colon-separated list of directories. We can
examine the value of PATH:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin
Given this value of PATH (yours may differ), bash would first check /usr/local/bin, then
/usr/bin for the ls program. Most likely, ls is kept in /usr/bin, so bash would stop at that
point.
Modifying PATH
You can augment your PATH by assigning to it on the command line:
$ PATH=$PATH:~/bin
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/home/agriffis/bin
You can also remove elements from PATH, although it isn't as easy since you can't refer to
the existing $PATH. Your best bet is to simply type out the new PATH you want:
$ PATH=/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:~/bin
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/agriffis/bin
To make your PATH changes available to any future processes you start from this shell,
export your changes using the export command:
$ export PATH
which
You can use the which command to check whether a given program exists in your PATH. In the example below, there is no program called sense on this system, but ls is found in /usr/bin:
$ which sense
which: no sense in (/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin)
$ which ls
/usr/bin/ls
"which -a"
Finally, you should be aware of the -a flag, which causes which to show you all of the
instances of a given program in your PATH:
$ which -a ls
/usr/bin/ls
/bin/ls
whereis
If you're interested in finding more information than purely the location of a program, you
might try the whereis program:
$ whereis ls
ls: /bin/ls /usr/bin/ls /usr/share/man/man1/ls.1.gz
Here we see that ls occurs in two common binary locations, /bin and /usr/bin. Additionally,
we are informed that there is a manual page located in /usr/share/man. This is the man-page
you would see if you were to type man ls.
The whereis program also has the ability to search for sources, to specify alternate search
paths, and to search for unusual entries. Refer to the whereis man-page for further
information.
find
The find command is another handy tool for your toolbox. With find you aren't restricted to
programs; you can search for any file you want, using a variety of search criteria. For
example, to search for a file by the name of README, starting in /usr/share/doc:
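$ find /usr/share/doc -name README
find can also match paths against full regular expressions using the -regex option (or -iregex for a case-insensitive match); for example, something like the following would locate paths under /etc that contain xt:
$ find /etc -iregex '.*xt.*'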
Note that unlike many programs, find requires that the regex specified matches the entire
path, not just a part of it. For that reason, specifying the leading and trailing .* is necessary;
purely using xt as the regex would not be sufficient.
$ ls -l ?
-rw------- 1 root root 0 Jan 7 18:00 a
-rw------- 1 root root 0 Jan 6 18:00 b
-rw------- 1 root root 0 Jan 5 18:00 c
-rw------- 1 root root 0 Jan 4 18:00 d
$ date
Mon Jan 7 18:14:52 EST 2002
You could search for files that were created in the past 24 hours:
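One way might be the following (find's -mtime test actually checks the modification time, counted in 24-hour periods; the ? pattern restricts the search to our single-character test files):
$ find . -name \? -mtime -1
./a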
Or you could search for files that were created prior to the current 24-hour period:
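Along the same lines, something like this would do it (the order of the output may vary):
$ find . -name \? -mtime +0
./b
./c
./d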
For example, to find regular files in /usr/bin that are smaller than 50 bytes:
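Using the -type and -size tests (the trailing c means the size is measured in bytes), the command might look like:
$ find /usr/bin -type f -size -50c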
As you can see, find is a very powerful command. It has grown through the years of UNIX
and Linux development. There are many other useful options to find. You can learn about
them in the find manual page.
locate
We have covered which, whereis, and find. You might have noticed that find can take
a while to execute, since it needs to read each directory that it's searching. It turns out that
the locate command can speed things up by relying on an external database generated by
updatedb (which we'll cover in the next panel.)
The locate command matches against any part of a pathname, not just the file itself. For
example:
$ locate bin/ls
/var/ftp/bin/ls
/bin/ls
/sbin/lsmod
/sbin/lspci
/usr/bin/lsattr
/usr/bin/lspgpot
/usr/sbin/lsof
Using updatedb
Most Linux systems have a "cron job" to update the database periodically. If your locate
returned an error such as the following, then you will need to run updatedb as root to
generate the search database:
$ locate bin/ls
locate: /var/spool/locate/locatedb: No such file or directory
$ su
Password:
# updatedb
The updatedb command may take a long time to run. If you have a noisy hard disk, you will
hear a lot of racket as the entire filesystem is indexed. :)
slocate
On many Linux distributions, the locate command has been replaced by slocate. There
is typically a symbolic link to "locate" so that you don't need to remember which you have.
slocate stands for "secure locate." It stores permissions information in the database so that
normal users can't pry into directories they would otherwise be unable to read. The usage
information for slocate is essentially the same as for locate, although the output might be
different depending on the user running the command.
You will notice that an xeyes window pops up, and the red eyeballs follow your mouse
around the screen. You may also notice that you don't have a new prompt in your terminal.
Stopping a process
To get a prompt back, you could type Control-C (often written as Ctrl-C or ^C):
^C
$
You get a new bash prompt, but the xeyes window disappeared. In fact, the entire process
has been killed. Instead of killing it with Control-C, we could have just stopped it with
Control-Z:
This time you get a new bash prompt, and the xeyes windows stays up. If you play with it a
bit, however, you will notice that the eyeballs are frozen in place. If the xeyes window gets
covered by another window and then uncovered again, you will see that it doesn't even
redraw the eyes at all. The process isn't doing anything. It is, in fact, "Stopped."
fg and bg
To get the process "un-stopped" and running again, we can bring it to the foreground with the
bash built-in fg:
$ fg
^Z
[1]+ Stopped xeyes -center red
$
$ bg
[1]+ xeyes -center red &
$
Great! The xeyes process is now running in the background, and we have a new, working
bash prompt.
Using "&"
If we wanted to start xeyes in the background from the beginning (instead of using Control-Z
and bg), we could have just added an "&" (ampersand) to the end of the xeyes command line:
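$ xeyes -center blue &
[2] 16224
With the red xeyes from earlier still running in the background, we now have two background jobs, which we can list with the jobs built-in: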
$ jobs -l
[1]- 16217 Running xeyes -center red &
[2]+ 16224 Running xeyes -center blue &
The numbers in the left column are the job numbers bash assigned when they were started.
Job 2 has a + (plus) to indicate that it's the "current job," which means that typing fg will
bring it to the foreground. You could also foreground a specific job by specifying its number;
for example, fg 1 would make the red xeyes the foreground task. The next column is the
process id or pid, included in the listing courtesy of the -l option to jobs. Finally, both jobs
are currently "Running," and their command lines are listed to the right.
Introducing signals
To kill, stop, or continue processes, Linux uses a special form of communication called
"signals." By sending a certain signal to a process, you can get it to terminate, stop, or do
other things. This is what you're actually doing when you type Control-C, Control-Z, or use
the bg or fg built-ins -- you're using bash to send a particular signal to the process. These
signals can also be sent using the kill command and specifying the pid (process id) on the
command line:
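For instance, we might stop the blue xeyes from the earlier example (pid 16224) by sending it a SIGSTOP:
$ kill -s SIGSTOP 16224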
As you can see, kill doesn't necessarily "kill" a process, although it can. Using the "-s"
option, kill can send any signal to a process. Linux kills, stops or continues processes when
they are sent the SIGINT, SIGSTOP, or SIGCONT signals respectively. There are also other
signals that you can send to a process; some of these signals may be interpreted in an
application-dependent way. You can learn what signals a particular process recognizes by
looking at its man-page and searching for a SIGNALS section.
$ kill 16217
$ jobs -l
[1]- 16217 Terminated xeyes -center red
[2]+ 16224 Stopped (signal) xeyes -center blue
$ kill 16224
$ jobs -l
[2]+ 16224 Stopped (signal) xeyes -center blue
$ kill -s SIGKILL 16224
$ jobs -l
[2]+ 16224 Killed xeyes -center blue
nohup
The terminal where you start a job is called the job's controlling terminal. Some shells (not
bash by default) will deliver a SIGHUP signal to backgrounded jobs when you log out, causing
them to quit. To protect processes from this behavior, use the nohup command when you start
the process:
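For example (make here stands in for any long-running command):
$ nohup make &
The jobs built-in only knows about processes that were started from your current shell. To get a listing of every process on the system, use ps with the a and x options: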
$ ps ax
PID TTY STAT TIME COMMAND
1 ? S 0:04 init [3]
2 ? SW 0:11 [keventd]
3 ? SWN 0:13 [ksoftirqd_CPU0]
4 ? SW 2:33 [kswapd]
5 ? SW 0:00 [bdflush]
I've only listed the first few because it is usually a very long list. This gives you a snapshot of
what the whole machine is doing, but is a lot of information to sift through. If you were to
leave off the ax, you would see only processes that are owned by you, and that have a
controlling terminal. The command ps x would show you all your processes, even those
without a controlling terminal. If you were to use ps a, you would get the list of everybody's
processes that are attached to a terminal.
$ ps x --forest
PID TTY STAT TIME COMMAND
927 pts/1 S 0:00 bash
6690 pts/1 S 0:00 \_ bash
26909 pts/1 R 0:00 \_ ps x --forest
19930 pts/4 S 0:01 bash
25740 pts/4 S 0:04 \_ vi processes.txt
$ ps au
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
agriffis 403 0.0 0.0 2484 72 tty1 S 2001 0:00 -bash
chouser 404 0.0 0.0 2508 92 tty2 S 2001 0:00 -bash
root 408 0.0 0.0 1308 248 tty6 S 2001 0:00 /sbin/agetty 3
agriffis 434 0.0 0.0 1008 4 tty1 S 2001 0:00 /bin/sh /usr/X
chouser 927 0.0 0.0 2540 96 pts/1 S 2001 0:00 bash
$ ps al
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
100 1001 403 1 9 0 2484 72 wait4 S tty1 0:00 -bash
100 1000 404 1 9 0 2508 92 wait4 S tty2 0:00 -bash
000 0 408 1 9 0 1308 248 read_c S tty6 0:00 /sbin/ag
000 1001 434 403 9 0 1008 4 wait4 S tty1 0:00 /bin/sh
000 1000 927 652 9 0 2540 96 wait4 S pts/1 0:00 bash
Using "top"
If you find yourself running ps several times in a row, trying to watch things change, what you
probably want is top. top displays a continuously updated process listing, along with some
useful summary information:
$ top
10:02pm up 19 days, 6:24, 8 users, load average: 0.04, 0.05, 0.00
75 processes: 74 sleeping, 1 running, 0 zombie, 0 stopped
CPU states: 1.3% user, 2.5% system, 0.0% nice, 96.0% idle
Mem: 256020K av, 226580K used, 29440K free, 0K shrd, 3804K buff
Swap: 136544K av, 80256K used, 56288K free 101760K cached
PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
628 root 16 0 213M 31M 2304 S 0 1.9 12.5 91:43 X
26934 chouser 17 0 1272 1272 1076 R 0 1.1 0.4 0:00 top
652 chouser 11 0 12016 8840 1604 S 0 0.5 3.4 3:52 gnome-termin
641 chouser 9 0 2936 2808 1416 S 0 0.1 1.0 2:13 sawfish
nice
Each process has a priority setting that Linux uses to determine how CPU timeslices are
shared. You can set the priority of a process by starting it with the nice command:
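For example (the .wav file name is purely illustrative):
$ nice -n 10 oggenc /tmp/song.wav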
Since the priority setting is called nice, it should be easy to remember that a higher value
will be nice to other processes, allowing them to get priority access to the CPU. By default,
processes are started with a setting of 0, so the setting of 10 above means oggenc will
readily give up the CPU to other processes. Generally, this means that oggenc will allow
other processes to run at their normal speed, regardless of how CPU-hungry oggenc
happens to be. You can see these niceness levels under the NI column in the ps and top
listings above.
renice
The nice command can only change the priority of a process when you start it. If you want
to change the niceness setting of a running process, use renice:
$ ps l 641
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
000 1000 641 1 9 0 5876 2808 do_sel S ? 2:14 sawfish
$ renice 10 641
641: old priority 0, new priority 10
$ ps l 641
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
000 1000 641 1 9 10 5876 2808 do_sel S ? 2:14 sawfish
In addition to redirecting output to a file, we can also take advantage of a powerful shell
feature called pipes. Using pipes, we can pass the output of one command to the input of
another command. Consider the following example:
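$ echo "hi there" | wc
      1       2       9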
The | character is used to connect the output of the command on the left to the input of the
command on the right. In the example above, the echo command prints out the string hi
there followed by a linefeed. That output would normally appear on the terminal, but the
pipe redirects it into the wc command, which displays the number of lines, words, and
characters in its input.
A pipe example
Here is another simple example:
$ ls -s | sort -n
In this case, ls -s would normally print a listing of the current directory on the terminal,
preceding each file with its size. But instead we've piped the output into sort -n, which
sorts the output numerically. This is a really useful way to find large files in your home
directory!
The following examples are more complex, but they demonstrate the power that can be
harnessed using pipes. We're going to throw out some commands we haven't covered yet,
but don't let that slow you down. Concentrate instead on understanding how pipes work so
you can employ them in your daily Linux tasks.
$ bzip2 -d linux-2.4.16.tar.bz2
$ tar xvf linux-2.4.16.tar
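That works, but it leaves an intermediate, uncompressed tar file lying around. Since tar can read from its standard input (by giving it - as the file name), we can do the whole job in one pipeline:
$ bzip2 -dc linux-2.4.16.tar.bz2 | tar xvf -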
Woo hoo! Our compressed tarball has been extracted and we didn't need an intermediate
file.
A longer pipeline
Here's another pipeline example:
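$ cat myfile.txt | sort | uniq | wc -l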
We use cat to feed the contents of myfile.txt to the sort command. When the sort
command receives the input, it sorts all input lines so that they are in alphabetical order, and
then sends the output to uniq. uniq removes any duplicate lines (and, by the way, requires its
input to be sorted), sending the scrubbed output to wc -l. We've seen the wc command
earlier, but without command-line options. When given the -l option, it only prints the
number of lines in its input, instead of also including words and characters. You'll see that
this pipeline will print out the number of unique (non-identical) lines in a text file.
Try creating a couple of test files with your favorite text editor and use this pipeline to see
what results you get.
echo
echo prints its arguments to the terminal. Use the -e option if you want to embed backslash
escape sequences; for example echo -e "foo\nfoo" will print foo, then a newline, and
then foo again. Use the -n option to tell echo to omit the trailing newline that is appended to
the output by default.
cat
cat will print the contents of the files specified as arguments to the terminal. Handy as the
first command of a pipeline, for example, cat foo.txt | blah.
sort
sort will print the contents of the file specified on the command line in alphabetical order. Of
course, sort also accepts piped input. Type man sort to familiarize yourself with its various
options that control sorting behavior.
uniq
uniq takes an already-sorted file or stream of data (via a pipeline) and removes duplicate
lines.
wc
wc prints out the number of lines, words, and bytes in the specified file or in the input stream
(from a pipeline). Type man wc to learn how to fine-tune what counts are displayed.
head
Prints out the first ten lines of a file or stream. Use the -n option to specify how many lines
should be displayed.
tail
Prints out the last ten lines of a file or stream. Use the -n option to specify how many lines
should be displayed.
tac
tac is like cat, but prints all lines in reverse order; in other words, the last line is printed first.
expand
Convert input tabs to spaces. Use the -t option to specify the tabstop.
unexpand
Convert input spaces to tabs. Use the -t option to specify the tabstop.
cut
cut is used to extract character-delimited fields from each line of an input file or stream.
nl
The nl command adds a line number to every line of input. Useful for printouts.
pr
pr is used to break files into multiple pages of output; typically used for printing.
tr
tr is a character translation tool; it's used to map certain characters in the input stream to
certain other characters in the output stream.
sed
sed is a powerful stream-oriented text editor. You can learn more about sed in the following
IBM developerWorks articles:
Sed by example, Part 1
Sed by example, Part 2
Sed by example, Part 3
If you're planning to take the LPI exam, be sure to read the first two articles of this series.
awk
awk is a handy line-oriented text-processing language. To learn more about awk, read the
following IBM developerWorks articles:
Awk by example, Part 1
Awk by example, Part 2
Awk by example, Part 3
od
od is designed to transform the input stream into an octal or hex "dump" format.
split
split is a command used to split a larger file into many smaller-sized, more manageable
chunks.
fmt
fmt will reformat paragraphs so that wrapping is done at the margin. These days it's less
useful since this ability is built into most text editors, but it's still a good one to know.
paste
paste takes two or more files as input, concatenates each sequential line from the input
files, and outputs the resulting lines. It can be useful to create tables or columns of text.
join
join is similar to paste, but it uses a field (by default the first) in each input line to match up
what should be combined on a single line.
tee
The tee command will print its input both to a file and to the screen. This is useful when you
want to create a log of something, but you also want to see it on the screen.
Bash and other shells support the concept of a "herefile" (more commonly called a "here
document"). This allows you to specify the input to a command in the lines following the
command invocation, terminating it with a sentinel value. This is most easily shown with an
example:
$ sort <<END
apple
cranberry
banana
END
apple
banana
cranberry
In the example above, we typed the words apple, cranberry and banana, followed by
"END" to signify the end of the input. The sort program then returned our words in
alphabetical order.
Using >>
You would expect >> to be somehow analogous to <<, but it isn't really. It simply means to
append the output to a file, rather than overwrite as > would. For example:
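Suppose we send two lines to a scratch file called myfile, using > both times:
$ echo Hi > myfile
$ echo there. > myfile
$ cat myfile
there.
Because the second > truncated the file, we lost the first line. Using >> for the second command appends instead:
$ echo Hi > myfile
$ echo there. >> myfile
$ cat myfile
Hi
there.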
Much better!
$ uname -a
Linux inventor 2.4.20-gaming-r1 #1 Fri Apr 11 18:33:35 MDT 2003 i686 AMD Athlon(tm) XP 2
Now, look in the /lib/modules directory and --presto!-- I bet you'll find a directory with that
exact name! OK, not quite magic, but now may be a good time to talk about the significance
of the directories in /lib/modules and explain what kernel modules are.
The kernel
The Linux kernel is the heart of what is commonly referred to as "Linux" -- it's the piece of
code that accesses your hardware directly and provides abstractions so that regular old
programs can run. Thanks to the kernel, your text editor doesn't need to worry about whether
it is writing to a SCSI or IDE disk -- or even a RAM disk. It just writes to a filesystem, and the
kernel takes care of the rest.
Kernel modules are chunks of kernel code that are stored on disk (under /lib/modules) in a
special format. At your command, they can be loaded into the running kernel and provide
additional functionality.
Because the kernel modules are loaded on demand, you can have your kernel support a lot
of additional functionality that you may not ordinarily want to be enabled. But once in a blue
moon, those kernel modules are likely to come in quite handy and can be loaded -- often
automatically -- to support that odd filesystem or hardware device that you rarely use.
lsmod
To see what modules are currently loaded on your system, use the "lsmod" command:
# lsmod
Module Size Used by Tainted: PF
vmnet 20520 5
vmmon 22484 11
nvidia 1547648 10
mousedev 3860 2
hid 16772 0 (unused)
usbmouse 1848 0 (unused)
input 3136 0 [mousedev hid usbmouse]
usb-ohci 15976 0 (unused)
ehci-hcd 13288 0 (unused)
emu10k1 64264 2
ac97_codec 9000 0 [emu10k1]
sound 51508 0 [emu10k1]
usbcore 55168 1 [hid usbmouse usb-ohci ehci-hcd]
Several of the modules listed above (hid, usbmouse, usb-ohci, ehci-hcd, usbcore, and friends)
provide USB support; when a USB device such as a mouse is plugged in, the kernel can have
the system automatically load the appropriate modules to enable that device. It's a handy
way to do things.
Third-party modules
Rounding out my list of modules are "emu10k1," "ac97_codec," and "sound," which together
provide support for my SoundBlaster Audigy sound card.
It should be noted that some of my kernel modules come from the kernel sources
themselves. For example, all the USB-related modules are compiled from the standard Linux
kernel sources. However, the nvidia, emu10k1 and VMWare-related modules come from
other sources. This highlights another major benefit of kernel modules -- allowing third parties
to provide much-needed kernel functionality and allowing this functionality to "plug in" to a
running Linux kernel. No reboot necessary.
$ ls /lib/modules/2.4.20-gaming-r1/modules.*
/lib/modules/2.4.20-gaming-r1/modules.dep
/lib/modules/2.4.20-gaming-r1/modules.generic_string
/lib/modules/2.4.20-gaming-r1/modules.ieee1394map
/lib/modules/2.4.20-gaming-r1/modules.isapnpmap
/lib/modules/2.4.20-gaming-r1/modules.parportmap
/lib/modules/2.4.20-gaming-r1/modules.pcimap
/lib/modules/2.4.20-gaming-r1/modules.pnpbiosmap
/lib/modules/2.4.20-gaming-r1/modules.usbmap
These files contain lots of information. For one, they record *dependency* information for the
modules -- some modules require other modules to be loaded first before they will run.
Using depmod
If you ever install a new module, this dependency information may become out of date. To
make it fresh again, simply type "depmod -a". The "depmod" program will then scan all the
modules in your directories in /lib/modules and refresh the dependency information. It does
this by scanning the module files in /lib/modules and looking at what are called "symbols"
inside the modules:
# depmod -a
# insmod /lib/modules/2.4.20-gaming-r1/kernel/fs/fat/fat.o
# lsmod | grep fat
fat 29272 0 (unused)
However, one normally loads modules by using the "modprobe" command. One of the nice
things about the "modprobe" command is that it automatically takes care of loading any
dependent modules. Also, you don't need to specify the path to the module you wish to
load, nor the trailing ".o".
# rmmod fat
# lsmod | grep fat
# modprobe fat
# lsmod | grep fat
fat 29272 0 (unused)
As you can see, the "rmmod" command works similarly to modprobe, but has the opposite
effect -- it unloads the module you specify.
You can use the "modinfo" command to learn interesting things about your favorite modules:
# modinfo fat
filename: /lib/modules/2.4.20-gaming-r1/kernel/fs/fat/fat.o
description: <none>
author: <none>
license: "GPL"
And make special note of the /etc/modules.conf file. This file contains configuration
information for modprobe. It allows you to tweak the functionality of modprobe by telling it to
load modules before/after loading others, run scripts before/after modules load, and more.
modules.conf gotchas
The syntax and functionality of modules.conf is quite complicated, and we won't go into its
syntax now (type "man modules.conf" for all the gory details), but here are some things that
you *should* know about this file.
For one, many distributions generate this file automatically from a bunch of files in another
directory, like /etc/modules.d/. For example, Gentoo Linux has an /etc/modules.d/ directory,
and running the update-modules command will take all the files in /etc/modules.d/ and
concatenate them to produce a new /etc/modules.conf. Therefore, make your changes to the
files in /etc/modules.d/ and run update-modules if you are using Gentoo. If you are using
Debian, the procedure is similar except that the directory is called /etc/modutils/.
Resources
Speaking of LPIC certification, if this is something you're interested in, then we recommend
that you study the following resources, which have been carefully selected to augment the
material covered in this tutorial:
There are a number of good regular expression resources on the 'net. Here are a couple that
we've found:
In the Bash by example article series, I show you how to use bash programming constructs
to write your own bash scripts. This series (particularly parts one and two) will be good
preparation for the LPIC Level 1 exam:
You can learn more about sed in the following IBM developerWorks articles:
Sed by example, Part 1
Sed by example, Part 2
Sed by example, Part 3
If you're planning to take the LPI exam, be sure to read the first two articles of this series.
To learn more about awk, read the following IBM developerWorks articles:
Awk by example, Part 1
Awk by example, Part 2
Awk by example, Part 3
We highly recommend the Technical FAQ for Linux users, a 50-page in-depth list of
frequently-asked Linux questions, along with detailed answers. The FAQ itself is in PDF
(Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it to yourself
to check this FAQ out.
If you're not too familiar with the vi editor, I strongly recommend that you check out my Vi --
the cheat sheet method tutorial. This tutorial will give you a gentle yet fast-paced introduction
to this powerful text editor. Consider this must-read material if you don't know how to use vi.
Your feedback
Please let us know whether this tutorial was helpful to you and how we could make it better.
We'd also like to hear about other tutorial topics you'd like to see covered in developerWorks
tutorials.
For questions about the content of this tutorial, contact the authors:
• Daniel Robbins, at [email protected]
• Chris Houser, at [email protected]
• Aron Griffis, at [email protected]
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
If you are new to Linux, we recommend that you start with Part 1 and Part 2. For some, much
of this material will be new, but more experienced Linux users may find this tutorial to be a
great way of "rounding out" their foundational Linux system administration skills.
By the end of this series of tutorials (eight in all covering the LPI 101 and 102 exams), you
will have the knowledge you need to become a Linux Systems Administrator and will be
ready to attain an LPIC Level 1 certification from the Linux Professional Institute if you so
choose.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
Residing in Albuquerque, New Mexico, Daniel Robbins is the Chief Architect of Gentoo
Linux an advanced ports-based Linux metadistribution. Besides writing articles, tutorials, and
tips for the developerWorks Linux zone and Intel Developer Services, he has also served as
a contributing author for several books, including Samba Unleashed and SuSE Linux
Unleashed. Daniel enjoys spending time with his wife, Mary, and his daughter, Hadassah.
You can contact Daniel at [email protected].
Chris Houser, known to his friends as "Chouser," has been a UNIX proponent since 1994
when he joined the administration team for the computer science network at Taylor
University in Indiana, where he earned his Bachelor's degree in Computer Science and
Mathematics. Since then, he has gone on to work in Web application programming, user
interface design, professional video software support, and now Tru64 UNIX device driver
programming at Compaq. He has also contributed to various free software projects, most
recently to Gentoo Linux. He lives with his wife and two cats in New Hampshire. You can
contact Chris at [email protected].
Aron Griffis graduated from Taylor University with a degree in Computer Science and an
award that proclaimed, "Future Founder of a Utopian UNIX Commune." Working towards that
goal, Aron is employed by Compaq writing network drivers for Tru64 UNIX, and spending his
spare time plunking out tunes on the piano or developing Gentoo Linux. He lives with his wife
Amy (also a UNIX engineer) in Nashua, New Hampshire.
Manual pages
Manual pages, or "man pages", are the classic form of UNIX and Linux reference
documentation. Ideally, you can look up the man page for any command, configuration file,
or library routine. In practice, Linux is free software, and some pages haven't been written or
are showing their age. Nonetheless, man pages are the first place to look when you need
help.
To access a man page, simply type man followed by your topic of inquiry. A pager will be
started, so you will need to press q when you're done reading. For example, to look up
information about the ls command, you would type:
$ man ls
Man pages are grouped into numbered sections, and sometimes a topic has a page in more
than one section. To find out which sections cover a given topic, use the whatis command:
$ whatis printf
printf (1) - format and print data
printf (3) - formatted output conversion
In this case, man printf would default to the page in section 1 ("User Programs"). If we
were writing a C program, we might be more interested in the page from section 3 ("Library
functions"). You can call up a man page from a certain section by specifying it on the
command line, so to ask for printf(3), we would type:
$ man 3 printf
The -k option to man searches the whatis database for a keyword and lists every matching
page (the apropos command does the same thing):
$ man -k whatis
apropos (1) - search the whatis database for strings
makewhatis (8) - Create the whatis database
whatis (1) - search the whatis database for complete words
The whatis database is normally rebuilt periodically by a cron job. If these searches turn up
nothing on your system, you may need to build the database yourself by running makewhatis
as root:
# makewhatis
For more information on "man" and friends, you should start with its man page:
$ man man
The MANPATH
By default, the man program will look for man pages in /usr/share/man, /usr/local/man,
/usr/X11R6/man, and possibly /opt/man. Sometimes, you may find that you need to add an
additional item to this search path. If so, simply edit /etc/man.conf in a text editor and add a
line that looks like this:
MANPATH /opt/man
From that point forward, any man pages in the /opt/man/man* directories will be found.
Remember that you'll need to rerun makewhatis to add these new man pages to the whatis
database.
GNU info
One shortcoming of man pages is that they don't support hypertext, so you can't jump easily
from one to another. The GNU folks recognized this shortcoming, so they invented another
documentation format: "info" pages. Many of the GNU programs come with extensive
documentation in the form of info pages. You can start reading info pages with the info
command:
$ info
Calling info in this way will bring up an index of the available pages on the system. You can
move around with the arrow keys, follow links (indicated with a star) using the Enter key, and
quit by pressing q. The keys are based on Emacs, so you should be able to navigate easily if
you're familiar with that editor. For an intro to the Emacs editor, see the developerWorks
tutorial, Living in Emacs.
To jump straight to the documentation for a particular topic, pass its name on the command
line:
$ info diff
For more information on using the info reader, try reading its info page. You should be able
to navigate primitively using the few keys I've already mentioned:
$ info info
/usr/share/doc
There is a final source for help within your Linux system. Many programs are shipped with
additional documentation in other formats: text, PDF, PostScript, HTML, to name a few. Take
a look in /usr/share/doc (or /usr/doc on older systems). You'll find a long list of directories,
each of which came with a certain application on your system. Searching through this
documentation can often reveal some gems that aren't available as man pages or info pages,
such as tutorials or additional technical documentation. A quick check reveals there's a lot of
reading material available:
$ cd /usr/share/doc
$ find . -type f | wc -l
7582
Whew! Your homework this evening is to read just half (3791) of those documents. Expect a
quiz tomorrow. ;-)
An LDP overview
The LDP is made up of the following areas:
• Guides - longer, more in-depth books, such as The Linux Programmer's Guide
(http://www.tldp.org/LDP/lpg/)
• HOWTOs - subject-specific help, such as the DSL HOWTO
(http://www.tldp.org/HOWTO/DSL-HOWTO/)
• FAQs - Frequently Asked Questions with answers, such as the Brief Linux FAQ
(http://www.tldp.org/FAQ/faqs/BLFAQ)
• man pages - help on individual commands (these are the same manual pages you get on
your Linux system when you use the man command)
If you aren't sure which section to peruse, you can take advantage of the search box, which
allows you to find things by topic.
The LDP additionally provides a list of Links and Resources, such as Linux Gazette and
LinuxFocus (see links in Resources at the end of this tutorial), as well as links to mailing lists
and news archives.
Mailing lists
Mailing lists provide probably the most important point of collaboration for Linux developers.
Often projects are developed by contributors who live far apart, possibly even on opposite
sides of the globe. Mailing lists provide a method for each developer on a project to contact
all the others, and to hold group discussions via e-mail. One of the most famous is the
"Linux Kernel Mailing List" (LKML).
If you have read the LKML FAQ (there's a link in the Resources section at the end of this
tutorial), you might have
noticed that mailing list subscribers often don't take kindly to questions being asked
repeatedly. It's always wise to search the archives for a given mailing list before writing your
question. Chances are, it will save you time, too!
Newsgroups
Internet "newsgroups" are similar to mailing lists, but are based on a protocol called NNTP
("Network News Transfer Protocol") instead of e-mail. To participate, you need to use an
NNTP client such as slrn or pan. The primary advantage is that you only take part in the
discussion when you want, instead of having it continually arrive in your inbox. :-)
The newsgroups of primary interest start with comp.os.linux. You can browse the list on the
LDP site (http://www.tldp.org/linux/#ng).
As with mailing lists, newsgroup discussion is often archived. A popular newsgroup archiving
site is Deja News.
Linux consultancies
Some Linux consultancies, such as Linuxcare and Mission Critical Linux, provide some free
documentation as well as pay-for support contracts. There are many Linux consultancies;
below are a couple of the larger examples:
• LinuxCare (http://www.linuxcare.com)
• Mission Critical Linux (http://www.missioncriticallinux.com)
Developer resources
In addition, many hardware and software vendors have developed wonderful resources for
Linux developers and administrators. At the risk of sounding self-promoting, one of the most
valuable Linux resources run by a hardware/software vendor is the IBM developerWorks
Linux zone (http://www.ibm.com/developerworks/linux).
Introducing permissions and ownership
Every file and directory on a Linux system has an owner, an associated group, and a set of
permission flags, all of which you can see in a long listing:
$ ls -l /bin/bash
-rwxr-xr-x 1 root wheel 430540 Dec 23 18:27 /bin/bash
In this particular example, the /bin/bash executable is owned by root and is in the wheel
group. The Linux permissions model works by allowing three independent levels of
permission to be set for each filesystem object -- those for the file's owner, the file's group,
and all other users.
$ ls -l /bin/bash
-rwxr-xr-x 1 root wheel 430540 Dec 23 18:27 /bin/bash
This first field -rwxr-xr-x contains a symbolic representation of this particular file's
permissions. The first character (-) in this field specifies the type of this file, which in this
case is a regular file. Other possible first characters:
'd' directory
'l' symbolic link
'c' character special device
'b' block special device
'p' fifo
's' socket
Three triplets
$ ls -l /bin/bash
-rwxr-xr-x 1 root wheel 430540 Dec 23 18:27 /bin/bash
The rest of the field consists of three character triplets. The first triplet represents
permissions for the owner of the file, the second represents permissions for the file's group,
and the third represents permissions for all other users:
"rwx"
"r-x"
"r-x"
Above, the r means that reading (looking at the data in the file) is allowed, the w means that
writing (modifying the file, as well as deletion) is allowed, and the x means that "execute"
(running the program) is allowed. Putting together all this information, we can see that
everyone is able to read the contents of and execute this file, but only the owner (root) is
allowed to modify this file in any way. So, while normal users can copy this file, only root is
allowed to update it or delete it.
Who am I?
Before we take a look at how to change the user and group ownership of a file, let's first take
a look at how to learn your current user id and group membership. Unless you've used the
su command recently, your current user id is the one you used to log in to the system. If you
use su frequently, however, you may not remember your current effective user id. To view it,
type whoami:
# whoami
root
# su drobbins
$ whoami
drobbins
To see what groups you belong to, use the groups command:
$ groups
drobbins wheel audio
In the above example, I'm a member of the drobbins, wheel, and audio groups. If you
want to see what groups other user(s) are in, specify their usernames as arguments:
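$ groups root daemon
root : root bin daemon sys adm disk wheel
daemon : daemon bin adm
(Your output will differ depending on how your system's groups are set up.)

chown and chgrp
To change the owner of a file, the superuser can use the chown command; to change a file's group, use chgrp. For example, to give a file called myfile to root and place it in the wheel group:
# chown root myfile
# chgrp wheel myfile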
You can also set the owner and group simultaneously with an alternate form of the chown
command:
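# chown root.wheel myfile
(Recent versions of chown also accept a colon as the separator, as in root:wheel.)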
You may not use chown unless you are the superuser, but chgrp can be used by anyone to
change the group ownership of a file to a group to which they belong.
Introducing chmod
chown and chgrp can be used to change the owner and group of a filesystem object, but
another program -- called chmod -- is used to change the rwx permissions that we can see in
an ls -l listing. chmod takes two or more arguments: a "mode", describing how the
permissions should be changed, followed by a file or list of files that should be affected:
$ chmod +x scriptfile.sh
In the above example, our "mode" is +x. As you might guess, a +x mode tells chmod to
make this particular file executable for the user, the group, and anyone else. To remove the
execute bits again, we would use -x instead:
$ chmod -x scriptfile.sh
User/group/other granularity
So far, our chmod examples have affected permissions for all three triplets -- the user, the
group, and all others. Often, it's handy to modify only one or two triplets at a time. To do this,
simply specify the symbolic character for the particular triplets you'd like to modify before the
+ or - sign. Use u for the "user" triplet, g for the "group" triplet, and o for the "other/everyone"
triplet:
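$ chmod go-w scriptfile.sh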
We just removed write permissions for the group and all other users, but left "owner"
permissions untouched.
Resetting permissions
In addition to flipping permission bits on and off, we can also reset them altogether. By using
the = operator, we can tell chmod that we want the specified permissions and no others:
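$ chmod =rx scriptfile.sh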
Above, we just set all "read" and "execute" bits, and unset all "write" bits. If you just want to
reset a particular triplet, you can specify the symbolic name for the triplet before the = as
follows:
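$ chmod u=rx scriptfile.sh
This resets only the owner's triplet to r-x, leaving the group and other triplets untouched.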
Numeric modes
Up until now, we've used what are called symbolic modes to specify permission changes to
chmod. However, there's another common way of specifying permissions: using a 4-digit
octal number. Using this syntax, called numeric permissions syntax, each digit represents a
permissions triplet. For example, in 1777, the 777 sets the "owner", "group", and "other"
flags that we've been discussing in this section. The 1 is used to set the special permissions
bits, which we'll cover later (see "The elusive first digit" at the end of this section).
This chart shows how the second through fourth digits (777) are interpreted:
Mode Digit
rwx 7
rw- 6
r-x 5
r-- 4
-wx 3
-w- 2
--x 1
--- 0
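For example, to set our script's permissions numerically, we might type:
$ chmod 0755 scriptfile.sh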
In this example, we used a mode of 0755, which expands to a complete permissions setting
of -rwxr-xr-x.
The umask
When a process creates a new file, it specifies the permissions that it would like the new file
to have. Often, the mode requested is 0666 (readable and writable by everyone), which is
more permissive than we would like. Fortunately, Linux consults something called a "umask"
whenever a new file is created. The system uses the umask value to reduce the originally
specified permissions to something more reasonable and secure. You can view your current
umask setting by typing umask at the command line:
$ umask
0022
On Linux systems, the umask normally defaults to 0022, which allows others to read your
new files (if they can get to them) but not modify them.
If you'd like new files to be more private by default, you can tighten your umask:
$ umask 0077
This umask will make sure that the group and others will have absolutely no permissions for
any newly created files. So, how does the umask work? Unlike "regular" permissions on files,
the umask specifies which permissions should be turned off. Let's consult our mode-to-digit
mapping table so that we can understand what a umask of 0077 means:
Mode Digit
rwx 7
rw- 6
r-x 5
r-- 4
-wx 3
-w- 2
--x 1
--- 0
Using our table, the last three digits of 0077 expand to ---rwxrwx. Now, remember that the
umask tells the system which permissions to disable. Putting two and two together, we can
see that all "group" and "other" permissions will be turned off, while "user" permissions will
remain untouched.
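As a quick (illustrative) demonstration, with a umask of 0077 a new file that would normally be created with mode 0666 ends up with mode 0600 instead:
$ umask 0077
$ touch newfile
$ ls -l newfile
-rw------- 1 drobbins drobbins 0 Jan 7 18:14 newfile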
Notice that many system files can be modified only by root. /etc/passwd, for example, is
world-readable but writable only by its owner:
$ ls -l /etc/passwd
-rw-r--r-- 1 root wheel 1355 Nov 1 21:16 /etc/passwd
However, normal users do need to be able to modify /etc/passwd (at least indirectly)
whenever they need to change their password. But, if the user is unable to modify this file,
how exactly does this work?
suid
Thankfully, the Linux permissions model has two special bits called suid and sgid. When
an executable program has the suid bit set, it will run on behalf of the owner of the
executable, rather than on behalf of the person who started the program.
Now, back to the /etc/passwd problem. If we take a look at the passwd executable, we can
see that it's owned by root:
$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root wheel 17588 Sep 24 00:53 /usr/bin/passwd
You'll also note that in place of an x in the user's permission triplet, there's an s. This
indicates that, for this particular program, the suid and executable bits are set. Because of
this, when passwd runs, it will execute on behalf of the root user (with full superuser access)
rather than that of the user who ran it. And because passwd runs with root access, it's able
to modify the /etc/passwd file with no problem.
suid/sgid caveats
We've seen how suid works, and sgid works in a similar way. It allows a program to run with
the group of the program file rather than with the group of the user who started it.
Here's some miscellaneous yet important information about suid and sgid. First, suid and
sgid bits occupy the same space as the x bits in a ls -l listing. If the x bit is also set, the
respective bits will show up as s (lowercase). However, if the x bit is not set, it will show up
as a S (uppercase).
Another important note: suid and sgid come in handy in many circumstances, but improper
use of these bits can allow the security of a system to be breached. It's best to have as few
suid programs as possible. The passwd command is one of the few that must be suid.
Here's how we would remove the sgid bit from a directory (the directory name below is just
an example). We'll see how the sgid bit affects directories in just a few panels:
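# chmod g-s /home/somedir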
For a directory, if the "read" flag is set, you may list the contents of the directory; "write"
means you may create files in the directory; and "execute" means you may enter the
directory and access any sub-directories inside. Without the "execute" flag, the filesystem
objects inside a directory aren't accessible. Without a "read" flag, the filesystem objects
inside a directory aren't viewable, but objects inside the directory can still be accessed as
long as someone knows the full path to the object on disk.
# mkdir /home/groupspace
# chgrp mygroup /home/groupspace
# chmod g+s /home/groupspace
Now, any users in the group mygroup can create files or directories inside
/home/groupspace, and they will be automatically assigned a group ownership of mygroup
as well. Depending on the users' umask setting, new filesystem objects may or may not be
readable, writable, or executable by other members of the mygroup group.
However, for directories to which many users need write access, especially /tmp and /var/tmp,
wide-open write permissions can be bad news. Since anyone can write to these directories,
anyone can delete or
rename anyone else's files -- even if they don't own them! Obviously, it's hard to use /tmp for
anything meaningful when any other user can type rm -rf /tmp/* at any time and destroy
everyone's files.
Thankfully, Linux has something called the sticky bit. When /tmp has the sticky bit set (with a
chmod +t), the only people who are able to delete or rename files in /tmp are the directory's
owner (typically root), the file's owner, or root. Virtually all Linux distributions enable /tmp's
sticky bit by default, but you may find that the sticky bit comes in handy in other situations.
Here's an example of how to use a 4-digit numeric mode to set permissions for a directory
that will be used by a workgroup:
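One reasonable choice (reusing the /home/groupspace directory from the earlier example) would be:
# chmod 1775 /home/groupspace
Here, the leading 1 turns on the sticky bit discussed above.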
As homework, figure out the meaning of the 1755 numeric permissions setting. :)
Each line in /etc/passwd defines a user account. Here's an example line from my /etc/passwd
file:
drobbins:x:1000:1000:Daniel Robbins:/home/drobbins:/bin/bash
As you can see, there is quite a bit of information on this line. In fact, each /etc/passwd line
consists of multiple fields, each separated by a :.
The first field defines the username (drobbins), and the second field contains an x. On
ancient Linux systems, this field contained an encrypted password to be used for
authentication, but virtually all Linux systems now store this password information in another
file.
The third field (1000) defines the numeric user id associated with this particular user, and the
fourth field (1000) associates this user with a particular group; in a few panels, we'll see
where group 1000 is defined.
The fifth field contains a textual description of this account -- in this case, the user's name.
The sixth field defines this user's home directory, and the seventh field specifies the user's
default shell -- the one that will be automatically started when this user logs in.
/etc/shadow
So, user accounts themselves are defined in /etc/passwd. Linux systems contain a
companion file to /etc/passwd that's called /etc/shadow. This file, unlike /etc/passwd, is
readable only by root and contains encrypted password information. Let's look at a sample
line from /etc/shadow:
drobbins:$1$1234567890123456789012345678901:11664:0:-1:-1:-1:-1:0
Each line defines password information for a particular account, and again, each field is
separated by a :. The first field defines the particular user account with which this shadow
entry is associated. The second field contains an encrypted password. The remaining fields
are described in the following table:
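(The meanings below follow the standard shadow password file layout.)
field 3 - number of days since 01 Jan 1970 that the password was last changed
field 4 - minimum number of days that must pass before the password may be changed again
field 5 - maximum number of days the password remains valid before it must be changed
field 6 - number of days before expiration that the user is warned
field 7 - number of days after expiration that the account is disabled
field 8 - number of days since 01 Jan 1970 that the account has been disabled
field 9 - reserved for future use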
/etc/group
Next, we take a look at the /etc/group file, which defines all the groups on a Linux system.
Here's a sample line:
drobbins:x:1000:
The /etc/group field format is as follows. The first field defines the name of the group; the
second field is a vestigial password field that now simply holds an x, and the third field
defines the numeric group id of this particular group. The fourth field (empty in the above
example) defines any users that are members of this group.
You'll recall that our sample /etc/passwd line referenced a group id of 1000. This has the
effect of placing the drobbins user in the drobbins group, even though the drobbins
username isn't listed in the fourth field of /etc/group.
Group notes
A note about associating users with groups: on some systems, you'll find that every new
login account is associated with an identically named (and usually identically numbered)
group. On other systems, all login accounts will belong to a single users group. The
approach that you use on the system(s) you administrate is up to you. Creating matching
groups for each user has the advantage of allowing users to more easily control access to
their own files by placing trusted friends in their personal group.
Before editing the system passwd and group files by hand, make sure your EDITOR
environment variable is set to your preferred text editor:
# echo $EDITOR
vim
# export EDITOR=/usr/bin/emacs
Now, type:
# vipw
You should now find yourself in your favorite text editor with the /etc/passwd file loaded up on
the screen. When modifying system passwd and group files, it's very important to use the
vipw and vigr commands. They take extra precautions to ensure that your critical passwd
and group files are locked properly so they don't become corrupted.
Editing /etc/passwd
Now that you have the /etc/passwd file up, go ahead and add the following line:
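testuser:x:3000:3000:LPI tutorial test user:/home/testuser:/bin/false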
We've just added a "testuser" user with a UID of 3000. We've added him to a group with a
GID of 3000, which we haven't created just yet. Alternatively, we could have assigned this
user to the GID of the users group if we wanted. This new user has a comment that reads
LPI tutorial test user; the user's home directory is set to /home/testuser, and the
user's shell is set to /bin/false for security purposes. If we were creating a non-test account,
we would set the shell to /bin/bash. OK, go ahead and save your changes and exit.
Editing /etc/shadow
Now, we need to add an entry in /etc/shadow for this particular user. To do this, type vipw
-s. You'll be greeted with your favorite editor, which now contains the /etc/shadow file. Now,
go ahead and copy the line of an existing user account (one that has a password and is
longer than the standard system account entries):
drobbins:$1$1234567890123456789012345678901:11664:0:-1:-1:-1:-1:0
Now, change the username on the copied line to the name of your new user, and ensure that
all fields (particularly the password aging ones) are set to your liking:
testuser:$1$1234567890123456789012345678901:11664:0:-1:-1:-1:-1:0
Setting a password
You'll be back at the prompt. Now, it's time to set a password for your new user:
# passwd testuser
Enter new UNIX password: (enter a password for testuser)
Retype new UNIX password: (enter testuser's new password again)
Editing /etc/group
Now that /etc/passwd and /etc/shadow are set up, it's now time to get /etc/group configured
properly. To do this, type:
# vigr
Your /etc/group file will appear in front of you, ready for editing. Now, if you chose to assign a
default group of users for your particular test user, you do not need to add any groups to
/etc/group. However, if you chose to create a new group for this user, go ahead and add the
following line:
testuser:x:3000:
All that remains is to create the new user's home directory and give it sensible ownership and
permissions:
# cd /home
# mkdir testuser
# chown testuser.testuser testuser
# chmod o-rwx testuser
Our user's home directory is now in place and the account is ready for use. Well, almost
ready. If you'd like to use this account, you'll need to use vipw to change testuser's default
shell to /bin/bash so that the user can log in.
newgrp
By default, any files that a user creates are assigned to the user's group specified in
/etc/passwd. If the user belongs to other groups, he or she can type newgrp thisgroup to
set current default group membership to the group thisgroup. Then, any new files created will
inherit thisgroup membership.
chage
The chage command is used to view and change the password aging setting stored in
/etc/shadow.
gpasswd
gpasswd is a general-purpose group administration tool; among other things, it can be used to
add users to and remove users from groups.
groupadd/groupdel/groupmod
These commands are used to add, delete, and modify the groups defined in /etc/group.
More commands
useradd/userdel/usermod
These commands are used to add, delete, and modify user accounts, automating much of the
manual procedure we just walked through.
pwconv/grpconv
Used to convert passwd and group files to "new-style" shadow passwords. Virtually all Linux
systems already use shadow passwords, so you should never need to use these commands.
pwunconv/grpunconv
Used to convert passwd, shadow, and group files to "old-style" non-shadow passwords. You
should never need to use these commands.
First, let's add a friendly message for when you first log in. To see an example message, run
fortune:
$ fortune
No amount of careful planning will ever replace dumb luck.
.bash_profile
Now, let's set up fortune so that it gets run every time you log in. Use your favorite text
editor to edit a file named .bash_profile in your home directory. If the file doesn't exist
already, go ahead and create it. Insert a line at the top:
fortune
Try logging out and back in. Unless you're running a display manager like xdm, gdm, or kdm,
you should be greeted cheerfully when you log in:
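The session might look something like this (the hostname and the fortune itself will of course vary):
login: chouser
Password:
No amount of careful planning will ever replace dumb luck.
$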
Bash acts somewhat differently depending on how it is started. If it is started as a login shell,
it will act as it did above -- first sourcing the system-wide /etc/profile, and then your personal
~/.bash_profile.
There are two ways to tell bash to run as a login shell. One way is used when you first log in:
bash is started with a process name of -bash. You can see this in your process listing:
$ ps u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
chouser 404 0.0 0.0 2508 156 tty2 S 2001 0:00 -bash
You will probably see a much longer listing, but you should have at least one COMMAND
with a dash before the name of your shell, like -bash in the example above. This dash is
used by the shell to determine if it's being run as a login shell.
Understanding --login
The second way to tell bash to run as a login shell is with the --login command-line option.
This is sometimes used by terminal emulators (like xterm) to make their bash sessions act
like initial login sessions.
After you have logged in, more copies of your shell will be run. Unless they are started with
--login or have a dash in the process name, these sessions will not be login shells. If they
give you a prompt, however, they are called interactive shells. If bash is started as
interactive, but not login, it will ignore /etc/profile and ~/.bash_profile and will instead source
~/.bashrc.
if [ -n "$PS1" ]; then
fortune
fi
However, there are some settings that you may want new users to have as defaults, but also
allow them to change easily. This is where the /etc/skel directory comes in. When you use
the useradd command to create a new user account, it copies all the files from /etc/skel into
the user's new home directory. That means you can put helpful .bash_profile and .bashrc
files in /etc/skel to get new users off to a good start.
export
Variables in bash can be marked so that they are set the same in any new shells that it
starts; this is called being marked for export. You can have bash list all of the variables that
are currently marked for export in your shell session:
$ export
declare -x EDITOR="vim"
declare -x HOME="/home/chouser"
declare -x MAIL="/var/spool/mail/chouser"
declare -x PAGER="/usr/bin/less"
declare -x PATH="/bin:/usr/bin:/usr/local/bin:/home/chouser/bin"
declare -x PWD="/home/chouser"
declare -x TERM="xterm"
declare -x USER="chouser"
$ FOO=foo
$ BAR=bar
$ export BAR
$ echo $FOO $BAR
foo bar
$ bash
$ echo $FOO $BAR
bar
In this example, the variables FOO and BAR were both set, but only BAR was marked for
export. When a new bash was started, it had lost the value for FOO. If you exit this new bash,
you can see that the original one still has values for both FOO and BAR:
$ exit
$ echo $FOO $BAR
foo bar
$ set -x
The -x option causes bash to print out each command it is about to run:
$ echo $FOO
+ echo foo
foo
This can be very useful for understanding unexpected quoting behavior or similar
strangeness. To turn off the -x option, do set +x. See the bash man page for all of the
options to the set built-in.
To remove a variable from your shell entirely, use the unset built-in:
$ FOO=bar
$ echo $FOO
bar
$ unset FOO
$ echo $FOO
Note that assigning an empty value to a variable is not the same as unsetting it. You can see
the difference using the set built-in, which lists all of your shell's variables:
$ FOO=bar
$ set | grep ^FOO
FOO=bar
$ FOO=
$ set | grep ^FOO
FOO=
$ unset FOO
$ set | grep ^FOO
Using set with no parameters like this is similar to using the export built-in, except that
set lists all variables instead of just those marked for export.
Exported variables are often used to change the behavior of other programs. For example,
the man command consults the PAGER variable to decide which program should display its
output one page at a time:
$ PAGER=less
$ export PAGER
$ man man
With PAGER set to less, you will see one page at a time, and pressing the space bar moves
on to the next page. If you change PAGER to cat, the text will be displayed all at once,
without stopping.
$ PAGER=cat
$ man man
Using "env"
Unfortunately, if you forget to set PAGER back to less, man (as well as some other
commands) will continue to display all their text without stopping. If you wanted to have
PAGER set to cat just once, you could use the env command:
$ PAGER=less
$ env PAGER=cat man man
$ echo $PAGER
less
This time, PAGER was exported to man with a value of cat, but the PAGER variable itself
remained unchanged in the bash session.
Summary
Congratulations on finishing Part 3 of this tutorial series! At this point, you should know
how to locate information in system and Internet documentation, and you should have
a good grasp of the Linux permissions model, user account management, and login
environment tuning.
Resources
Be sure to check out the various Linux documentation resources covered in this tutorial --
particularly the Linux Documentation Project (http://www.tldp.org). You'll find its collection of
guides, HOWTOs, FAQs, and man pages to be invaluable. Be sure to check out Linux
Gazette (http://www.tldp.org/LDP/LG/current/) and LinuxFocus
(http://www.linuxfocus.org) as well.
The Linux System Administrators guide (available from the "Guides" section at www.tldp.org)
is a good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO
(http://www.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/) to be helpful.
You can read the GNU Project's online documentation for the GNU info system (also called
"texinfo") at GNU's texinfo documentation page (http://www.gnu.org/manual/texinfo/).
One of the most famous development mailing lists is the "Linux Kernel Mailing List"
(http://www.tux.org/lkml/).
Browse the Linux newsgroup list on the LDP site (http://www.tldp.org/linux/#ng), and the
newsgroup archives at Deja News
(http://groups.google.com/googlegroups/deja_announcement.html).
In the Bash by example article series on developerWorks, Daniel shows you how to use bash
programming constructs to write your own bash scripts. This bash series (particularly Parts 1
and 2) is good preparation for the LPIC Level 1 exam and reinforces the concepts covered in
this tutorial's "Tuning the user environment" section:
We highly recommend the Technical FAQ for Linux users by Mark Chapman, a 50-page
in-depth list of frequently-asked Linux questions, along with detailed answers. The FAQ itself
is in PDF (Adobe Acrobat) format. If you're a beginning or intermediate Linux user, you really
owe it to yourself to check this FAQ out. We also recommend the Linux glossary for Linux
users, also from Mark.
If you're not familiar with the vi editor, check out Daniel's Vi intro -- the cheat sheet method
tutorial. This tutorial will give you a gentle yet fast-paced introduction to this powerful text
editor. Consider this must-read material if you don't know how to use vi.
For an intro to the Emacs editor, see the developerWorks tutorial, Living in Emacs.
Feedback
Please let us know whether this tutorial was helpful to you and how we could make it better.
We'd also like to hear about other tutorial topics you'd like to see covered in developerWorks
tutorials.
For questions about the content of this tutorial, contact the authors:
• Daniel Robbins, at [email protected]
• Chris Houser, at [email protected]
• Aron Griffis, at [email protected]
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This particular tutorial (Part 1) is ideal for those who are new to Linux, or those who want to
review or improve their understanding of fundamental Linux concepts like copying and
moving files, creating symbolic and hard links, and using Linux's standard text-processing
commands along with pipelines and redirection. Along the way, we'll share plenty of hints,
tips, and tricks to keep the tutorial "meaty" and practical, even for those with a good amount
of previous Linux experience. For beginners, much of this material will be new, but more
experienced Linux users may find this tutorial to be a great way of "rounding out" their
fundamental Linux skills.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
For technical questions about the content of this tutorial, contact the author, Daniel Robbins,
at [email protected].
The particular prompt that you see may look quite different. It may contain your system's
hostname, the name of the current working directory, or both. But regardless of what your
prompt looks like, there's one thing that's certain. The program that printed that prompt is
called a "shell," and it's very likely that your particular shell is a program called bash.
$ echo $SHELL
/bin/bash
If the above line gave you an error or didn't respond similarly to our example, then you may
be running a shell other than bash. In that case, most of this tutorial should still apply, but it
would be advantageous for you to switch to bash for the sake of preparing for the 101 exam.
(The next tutorial in this series, on basic administration, covers changing your shell using the
chsh command.)
About bash
Bash, an acronym for "Bourne-again shell," is the default shell on most Linux systems. The
shell's job is to obey your commands so that you can interact with your Linux system. When
you're finished entering commands, you may instruct the shell to exit or logout, at which
point you'll be returned to a login prompt.
By the way, you can also log out by pressing control-D at the bash prompt.
Using "cd"
As you've probably found, staring at your bash prompt isn't the most exciting thing in the
world. So, let's start using bash to navigate around our filesystem. At the prompt, type the
following (without the $):
$ cd /
We've just told bash that you want to work in /, also known as the root directory; all the
directories on the system form a tree, and / is considered the top of this tree, or the root. cd
sets the directory where you are currently working, also known as the "current working
directory".
Paths
To see bash's current working directory, you can type:
$ pwd
/
In the above example, the / argument to cd is called a path. It tells cd where we want to go.
In particular, the / argument is an absolute path, meaning that it specifies a location relative
to the root of the filesystem tree.
Absolute paths
Here are some other absolute paths:
/dev
/usr
/usr/bin
/usr/local/bin
As you can see, the one thing that all absolute paths have in common is that they begin with
/. With a path of /usr/local/bin, we're telling cd to enter the / directory, then the usr directory
under that, and then local and bin. Absolute paths are always evaluated by starting at / first.
Relative paths
The other kind of path is called a relative path. Bash, cd, and other commands always
interpret these paths relative to the current directory. Relative paths never begin with a /. So,
if we're in /usr:
$ cd /usr
$ cd local/bin
$ pwd
/usr/local/bin
Using ..
Relative paths may also contain one or more .. directories. The .. directory is a special
directory that points to the parent directory. So, continuing from the example above:
$ pwd
/usr/local/bin
$ cd ..
$ pwd
/usr/local
As you can see, our current directory is now /usr/local. We were able to go "backwards" one
directory, relative to the current directory that we were in.
$ pwd
/usr/local
$ cd ../share
$ pwd
/usr/share
Here are a few more relative path examples. As an exercise, try to figure out where each of the following command sequences will leave you before reading on:
$ cd /bin
$ cd ../usr/share/zoneinfo
$ cd /usr/X11R6/bin
$ cd ../lib/X11
$ cd /usr/bin
$ cd ../bin/../bin
Now, try them out and see if you got them right :)
Understanding .
Before we finish our coverage of cd, there are a few more things I need to mention. First,
there is another special directory called ., which means "the current directory". While this
directory isn't used with the cd command, it's often used to execute some program in the
current directory, as follows:
$ ./myprog
In the above example, the myprog executable residing in the current working directory will
be executed.
$ cd
With no arguments, cd will change to your home directory, which is /root for the superuser
and typically /home/username for a regular user. But what if we want to specify a file in our
home directory? Maybe we want to pass a file argument to the myprog command. If the file
lives in our home directory, we can type:
$ ./myprog /home/drobbins/myfile.txt
However, using an absolute path like that isn't always convenient. Thankfully, we can use the
~ (tilde) character to do the same thing:
$ ./myprog ~/myfile.txt
The tilde can also be followed by a username to refer to that user's home directory. For example, to refer to a file in fred's home directory, we could type:
$ ./myprog ~fred/fredsfile.txt
Now, let's take a quick look at the ls command, which lists the contents of a directory:
$ cd /usr
$ ls
X11R6 doc i686-pc-linux-gnu lib man sbin ssl
bin gentoo-x86 include libexec portage share tmp
distfiles i686-linux info local portage.old src
By specifying the -a option, you can see all of the files in a directory, including hidden files: those whose names begin with a period. As you can see in the following example, ls -a reveals the . and .. special directory links:
$ ls -a
. bin gentoo-x86 include libexec portage share tmp
.. distfiles i686-linux info local portage.old src
X11R6 doc i686-pc-linux-gnu lib man sbin ssl
$ ls -l /usr
drwxr-xr-x 7 root root 168 Nov 24 14:02 X11R6
drwxr-xr-x 2 root root 14576 Dec 27 08:56 bin
drwxr-xr-x 2 root root 8856 Dec 26 12:47 distfiles
lrwxrwxrwx 1 root root 9 Dec 22 20:57 doc -> share/doc
drwxr-xr-x 62 root root 1856 Dec 27 15:54 gentoo-x86
drwxr-xr-x 4 root root 152 Dec 12 23:10 i686-linux
drwxr-xr-x 4 root root 96 Nov 24 13:17 i686-pc-linux-gnu
drwxr-xr-x 54 root root 5992 Dec 24 22:30 include
lrwxrwxrwx 1 root root 10 Dec 22 20:57 info -> share/info
drwxr-xr-x 28 root root 13552 Dec 26 00:31 lib
drwxr-xr-x 3 root root 72 Nov 25 00:34 libexec
drwxr-xr-x 8 root root 240 Dec 22 20:57 local
lrwxrwxrwx 1 root root 9 Dec 22 20:57 man -> share/man
lrwxrwxrwx 1 root root 11 Dec 8 07:59 portage -> gentoo-x86/
drwxr-xr-x 60 root root 1864 Dec 8 07:55 portage.old
drwxr-xr-x 3 root root 3096 Dec 22 20:57 sbin
drwxr-xr-x 46 root root 1144 Dec 24 15:32 share
drwxr-xr-x 8 root root 328 Dec 26 00:07 src
The first column displays permissions information for each item in the listing. I'll explain how
to interpret this information in a bit. The next column lists the number of links to each
filesystem object, which we'll gloss over now but return to later. The third and fourth columns
list the owner and group, respectively. The fifth column lists the object size. The sixth column
is the "last modified" time or "mtime" of the object. The last column is the object's name. If
the file is a symbolic link, you'll see a trailing -> and the path to which the symbolic link
points.
Looking at directories
Sometimes, you'll want to look at a directory, rather than inside it. For these situations, you can specify the -d option, which tells ls to list the directories themselves rather than their contents:
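For example, the following lists the /usr directory entry itself rather than its contents (your listing details will differ):
$ ls -dl /usr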
Finally, the -i ls option can be used to display the inode numbers of the filesystem objects in
the listing:
$ ls -i /usr
1409 X11R6 314258 i686-linux 43090 libexec 13394 sbin
1417 bin 1513 i686-pc-linux-gnu 5120 local 13408 share
8316 distfiles 1517 include 776 man 23779 src
43 doc 1386 info 93892 portage 36737 ssl
70744 gentoo-x86 1585 lib 5132 portage.old 784 tmp
$ ls -id /usr/local
5120 /usr/local
The /usr/local directory has an inode number of 5120. Now, let's take a look at the inode
number of /usr/local/bin/..:
$ ls -id /usr/local/bin/..
5120 /usr/local/bin/..
$ ls -dl /usr/local
drwxr-xr-x 8 root root 240 Dec 22 20:57 /usr/local
If we take a look at the second column from the left, we see that the directory /usr/local
(inode 5120) is referenced eight times. On my system, here are the various paths that
reference this inode:
/usr/local
/usr/local/.
/usr/local/bin/..
/usr/local/games/..
/usr/local/lib/..
/usr/local/sbin/..
/usr/local/share/..
/usr/local/src/..
mkdir
Let's take a quick look at the mkdir command, which can be used to create new directories.
The following example creates three new directories, tic, tac, and toe, all under /tmp:
$ cd /tmp
$ mkdir tic tac toe
By default, the mkdir command doesn't create parent directories for you; the entire path up
to the next-to-last element needs to exist. So, if you want to create the directories
won/der/ful, you'd need to issue three separate mkdir commands:
$ mkdir won/der/ful
mkdir: cannot create directory `won/der/ful': No such file or directory
$ mkdir won
$ mkdir won/der
$ mkdir won/der/ful
mkdir -p
However, mkdir has a handy -p option that tells mkdir to create any missing parent
directories, as you can see here:
$ mkdir -p easy/as/pie
All in all, pretty straightforward. To learn more about the mkdir command, type man mkdir
to read the manual page. This will work for nearly all commands covered here (for example,
man ls), except for cd, which is built-in to bash.
touch
Now, we're going to take a quick look at the cp and mv commands, used to copy, rename,
and move files and directories. To begin this overview, we'll first use the touch command to
create a file in /tmp:
$ cd /tmp
$ touch copyme
The touch command updates the "mtime" of a file if it exists (recall the sixth column in ls
-l output). If the file doesn't exist, then a new, empty file will be created. You should now
have a /tmp/copyme file with a size of zero.
echo
Now that the file exists, let's add some data to the file. We can do this using the echo
command, which takes its arguments and prints them to standard output. First, the echo
command by itself:
$ echo "firstfile"
firstfile
Now, let's redirect echo's output to a file:
$ echo "firstfile" > copyme
The greater-than sign tells the shell to write echo's output to a file called copyme. This file
will be created if it doesn't exist, and will be overwritten if it does exist. By typing ls -l, we
can see that the copyme file is 10 bytes long, since it contains the word firstfile and the
newline character:
$ ls -l copyme
-rw-r--r-- 1 root root 10 Dec 28 14:13 copyme
cat and cp
To display the contents of the file on the terminal, use the cat command:
$ cat copyme
firstfile
Now, we can use a basic invocation of the cp command to create a copiedme file from the
original copyme file:
$ cp copyme copiedme
Upon investigation, we find that they are truly separate files; their inode numbers are
different:
$ ls -i copyme copiedme
648284 copiedme 650704 copyme
mv
Now, let's use the mv command to rename "copiedme" to "movedme". The inode number will
remain the same; however, the filename that points to the inode will change.
$ mv copiedme movedme
$ ls -i movedme
648284 movedme
A moved file's inode number will remain the same as long as the destination file resides on
the same filesystem as the source file. We'll take a closer look at filesystems in Part 3 of this
tutorial series.
While we're talking about mv, let's look at another way to use this command. mv, in addition
to allowing us to rename files, also allows us to move one or more files to another location in
the directory hierarchy. For example, to move /var/tmp/myfile.txt to /home/drobbins (which
happens to be my home directory), I could type:
$ mv /var/tmp/myfile.txt /home/drobbins
As you might guess, when myfile.txt is moved between filesystems, the myfile.txt at the new location
will have a new inode number. This is because every filesystem has its own independent set
of inode numbers.
We can also use the mv command to move multiple files to a single destination directory. For
example, to move myfile1.txt and myarticle3.txt to /home/drobbins, I could type:
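For example (assuming, as in the previous example, that the files live in /var/tmp):
$ mv /var/tmp/myfile1.txt /var/tmp/myarticle3.txt /home/drobbins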
Hard links
A hard link is an additional directory entry that refers to the same inode as an existing file; you create one with the ln command. In the following example, firstlink and secondlink both point to the same underlying data:
$ cd /tmp
$ touch firstlink
$ ln firstlink secondlink
$ ls -i firstlink secondlink
15782 firstlink 15782 secondlink
Symbolic links
In practice, symbolic links (or symlinks) are used more often than hard links. Symlinks are a
special file type where the link refers to another file by name, rather than directly to the inode.
Symlinks do not prevent a file from being deleted; if the target file disappears, then the
symlink will just be unusable, or broken.
$ ln -s secondlink thirdlink
$ ls -l firstlink secondlink thirdlink
-rw-rw-r-- 2 agriffis agriffis 0 Dec 31 19:08 firstlink
-rw-rw-r-- 2 agriffis agriffis 0 Dec 31 19:08 secondlink
lrwxrwxrwx 1 agriffis agriffis 10 Dec 31 19:39 thirdlink -> secondlink
Symbolic links can be distinguished in ls -l output from normal files in three ways. First,
notice that the first column contains an l character to signify the symbolic link. Second, the
size of the symbolic link is the number of characters in the target (secondlink, in this
case). Third, the last column of the output displays the target filename preceded by a cute
little ->.
$ ln -s /usr/local/bin bin1
$ ls -l bin1
lrwxrwxrwx 1 root root 14 Jan 1 15:42 bin1 -> /usr/local/bin
Or alternatively:
$ ln -s ../usr/local/bin bin2
$ ls -l bin2
lrwxrwxrwx 1 root root 16 Jan 1 15:43 bin2 -> ../usr/local/bin
However, because bin2 contains a relative path, the link will break if we move it to another directory:
$ mkdir mynewdir
$ mv bin2 mynewdir
$ cd mynewdir
$ cd bin2
bash: cd: bin2: No such file or directory
Because the directory /tmp/usr/local/bin doesn't exist, we can no longer change directories
into bin2; in other words, bin2 is now broken.
Here's another example, this time creating a symlink that uses an absolute path:
# cd /usr/bin
# ln -s /usr/bin/keychain kc
# ls -l /usr/bin/keychain
-rwxr-xr-x 1 root root 10150 Dec 12 20:09 /usr/bin/keychain
# ls -l kc
lrwxrwxrwx 1 root root 17 Mar 27 17:44 kc -> /usr/bin/keychain
# mv /usr/bin/keychain /usr/local/bin
In this example, we created a symbolic link called kc that points to the file /usr/bin/keychain.
Because we used an absolute path in our symbolic link, our kc symlink is still pointing to
/usr/bin/keychain, which no longer exists since we moved /usr/bin/keychain to /usr/local/bin.
That means that kc is now a broken symlink. Both relative and absolute paths in symbolic
links have their merits, and you should use a type of path that's appropriate for your
particular application. Often, either a relative or absolute path will work just fine. The
following example would have worked even after both files were moved:
# cd /usr/bin
# ln -s keychain kc
# ls -l kc
lrwxrwxrwx 1 root root 8 Jan 5 12:40 kc -> keychain
# mv keychain kc /usr/local/bin
# ls -l /usr/local/bin/keychain
-rwxr-xr-x 1 root root 10150 Dec 12 20:09 /usr/local/bin/keychain
# ls -l /usr/local/bin/kc
lrwxrwxrwx 1 root root 17 Mar 27 17:44 kc -> keychain
rm
Now that we know how to use cp, mv, and ln, it's time to learn how to remove objects from
the filesystem. Normally, this is done with the rm command. To remove files, simply specify
them on the command line:
$ cd /tmp
$ touch file1 file2
$ ls -l file1 file2
-rw-r--r-- 1 root root 0 Jan 1 16:41 file1
-rw-r--r-- 1 root root 0 Jan 1 16:41 file2
$ rm file1 file2
$ ls -l file1 file2
ls: file1: No such file or directory
ls: file2: No such file or directory
Note that under Linux, once a file is rm'd, it's typically gone forever. For this reason, many
junior system administrators will use the -i option when removing files. The -i option tells
rm to work in interactive mode -- that is, to prompt before removing any file. For
example:
$ rm -i file1 file2
rm: remove regular empty file `file1'? y
rm: remove regular empty file `file2'? y
In the above example, the rm command prompted whether or not the specified files should
*really* be deleted. In order for them to be deleted, I had to type "y" and Enter twice. If I had
typed "n", the file would not have been removed. Or, if I had done something really wrong, I
could have typed Control-C to abort the rm -i command entirely -- all before it is able to do
any potential damage to my system.
If you are still getting used to the rm command, it can be useful to add the following line to
your ~/.bashrc file using your favorite text editor, and then log out and log back in. Then, any
time you type rm, the bash shell will convert it automatically to an rm -i command. That
way, rm will always work in interactive mode:
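alias rm='rm -i'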
rmdir
To remove directories, you have two options. You can remove all the objects inside the
directory and then use rmdir to remove the directory itself:
$ mkdir mydir
$ touch mydir/file1
$ rm mydir/file1
$ rmdir mydir
This method is commonly referred to as "directory removal for suckers." All real power users
and administrators worth their salt use the much more convenient rm -rf command,
covered next.
rm and directories
The best way to remove a directory is to use the recursive force options of the rm command
to tell rm to remove the directory you specify, as well as all objects contained in the directory:
$ rm -rf mydir
Generally, rm -rf is the preferred method of removing a directory tree. Be very careful
when using rm -rf, since its power can be used for both good and evil :)
Introducing wildcards
Now let's look at wildcards. Say that you wanted to remove the files file1 through file8 in the current directory. Rather than typing out all eight names, it's much easier to type:
$ rm file[1-8]
Or if you simply wanted to remove all files whose names begin with file as well as any file
named file, you could type:
$ rm file*
The * wildcard matches any character or sequence of characters, or even "no character." Of
course, glob wildcards can be used for more than simply removing files, as we'll see in the
next panel.
Understanding non-matches
If you wanted to list all the filesystem objects in /etc beginning with g as well as any file called
g, you could type:
$ ls -d /etc/g*
/etc/gconf /etc/ggi /etc/gimp /etc/gnome /etc/gnome-vfs-mime-magic /etc/gpm /etc/g
Now, what happens if you specify a pattern that doesn't match any filesystem objects? In the
following example, we try to list all the files in /usr/bin that begin with asdf and end with jkl,
including potentially the file asdfjkl:
$ ls -d /usr/bin/asdf*jkl
ls: /usr/bin/asdf*jkl: No such file or directory
Here's what happened. Normally, when we specify a pattern, that pattern matches one or
more files on the underlying filesystem, and bash replaces the pattern with a
space-separated list of all matching objects. However, when the pattern doesn't produce any
matches, bash leaves the argument, wildcards and all, as-is. So, then ls can't find the file
/usr/bin/asdf*jkl and it gives us an error. The operative rule here is that glob patterns are
expanded only if they match objects in the filesystem. Otherwise they remain as is and are
passed literally to the program you're calling.
Wildcard syntax: *
Now that we've seen how globbing works, we should look at wildcard syntax. You can use
special characters for wildcard expansion:
* will match zero or more characters. It means "anything can go here, including nothing".
Examples:
• /etc/g* matches all files in /etc that begin with g, or a file called g.
• /tmp/my*1 matches all files in /tmp that begin with my and end with 1, including the file
my1.
Wildcard syntax: ?
? matches any single character. Examples:
• myfile? matches any file whose name consists of myfile followed by a single
character.
• /tmp/notes?txt would match both /tmp/notes.txt and /tmp/notes_txt, if they
exist.
Wildcard syntax: []
This wildcard is like a ?, but it allows more specificity. To use this wildcard, place any
characters you'd like to match inside the []. The resultant expression will match a single
occurrence of any of these characters. You can also use - to specify a range, and even
combine ranges. Examples:
• myfile[12] will match myfile1 and myfile2. The wildcard will be expanded as long
as at least one of these files exists in the current directory.
• [Cc]hange[Ll]og will match Changelog, ChangeLog, changeLog, and changelog.
As you can see, using bracket wildcards can be useful for matching variations in
capitalization.
• ls /etc/[0-9]* will list all files in /etc that begin with a number.
• ls /tmp/[A-Za-z]* will list all files in /tmp that begin with an upper or lower-case letter.
The [!] construct is similar to the [] construct, except rather than matching any characters
inside the brackets, it'll match any character, as long as it is not listed between the [! and ].
Example:
• rm myfile[!9] will remove all files named myfile plus a single character, except for
myfile9.
Wildcard caveats
Here are some caveats to watch out for when using wildcards. Since bash treats
wildcard-related characters (?, [, ], and *) specially, you need to take special care when
typing in an argument to a command that contains these characters. For example, if you
want to create a file that contains the string [fo]*, the following command may not do what
you want:
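$ echo [fo]* > /tmp/mynewfile.txt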
If the pattern [fo]* matches any files in the current working directory, then you'll find the
names of those files inside /tmp/mynewfile.txt rather than a literal [fo]* like you were
expecting. The solution? Well, one approach is to surround your characters with single
quotes, which tell bash to perform absolutely no wildcard expansion on them:
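$ echo '[fo]*' > /tmp/mynewfile.txt
Another approach is to escape each special character individually with a backslash:
$ echo \[fo\]\* > /tmp/mynewfile.txt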
Both approaches (single quotes and backslash escaping) have the same effect. Since we're
talking about backslash expansion, now would be a good time to mention that in order to
specify a literal \, you can either enclose it in single quotes as well, or type \\ instead (it will
be expanded to \).
By continuing in this tutorial series, you'll soon be ready to attain your LPIC Level 1
Certification from the Linux Professional Institute. Speaking of LPIC certification, if this is
something you're interested in, then we recommend that you study the Resources in the next
panel, which have been carefully selected to augment the material covered in this tutorial.
Resources
In the "Bash by example" article series on developerWorks, Daniel shows you how to use
bash programming constructs to write your own bash scripts. This series (particularly Parts
1 and 2) will be good preparation for the LPIC Level 1 exam:
If you're a beginning or intermediate Linux user, you really owe it to yourself to check out the
Technical FAQ for Linux users, a 50-page in-depth list of frequently-asked Linux questions,
along with detailed answers. The FAQ itself is in PDF (Acrobat) format.
If you're not too familiar with the vi editor, see the developerWorks tutorial Intro to vi. This
tutorial gives you a gentle yet fast-paced introduction to this powerful text editor. Consider
this must-read material if you don't know how to use vi.
Your feedback
Please let us know whether this tutorial was helpful to you and how we could make it better.
We'd also like to hear about other topics you'd like to see covered in developerWorks
tutorials.
For questions about the content of this tutorial, contact the author, Daniel Robbins, at
[email protected].
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This tutorial on compiling sources and managing packages is ideal for those who want to
learn about or improve their Linux package management skills. This tutorial is particularly
appropriate for those who will be setting up applications on Linux servers or desktops. For
many readers, much of this material will be new, but more experienced Linux users may find
this tutorial to be a great way to "round out" their important Linux system administration skills.
If you are new to Linux, we recommend that you start with Part 1 and work through the series
from there.
By the end of this series of tutorials (eight in all, covering the LPI 101 and 102 exams), you'll
have the knowledge you need to become a Linux Systems Administrator and will be ready to
attain an LPIC Level 1 certification from the Linux Professional Institute if you so choose.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
The second type is dynamically linked executables. We'll get into those in the next panel.
You can use the ldd command to find out whether a given executable is static or dynamic:
# ldd /sbin/sln
not a dynamic executable
"not a dynamic executable" is ldd's way of saying that sln is statically linked. Now, let's take
a look at sln's size in comparison to its non-static cousin, ln:
# ls -l /bin/ln /sbin/sln
-rwxr-xr-x 1 root root 23000 Jan 14 00:36 /bin/ln
-rwxr-xr-x 1 root root 381072 Jan 14 00:31 /sbin/sln
As you can see, sln is over ten times the size of ln. ln is so much smaller than sln
because it is a dynamic executable. Dynamic executables are incomplete programs that
depend on external shared libraries to provide many of the functions that they need to run.
# ldd /bin/ln
libc.so.6 => /lib/libc.so.6 (0x40021000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
As you can see, ln depends on the external shared libraries libc.so.6 and
ld-linux.so.2. As a rule, dynamically linked programs are much smaller than their
statically-linked equivalents. However, statically-linked programs come in handy for certain
low-level maintenance tasks. For example, sln is the perfect tool to modify various library
symbolic links that exist in /lib. But in general, you'll find that nearly all executables on a
Linux system are of the dynamically linked variety.
So, if dynamically linked executables don't contain everything they need to run, what part of Linux has
the job of loading them along with any necessary shared libraries so that they can execute
correctly? The answer is something called the dynamic loader, which is actually the
ld-linux.so.2 library that you see listed as a shared library dependency in ln's ldd
listing. The dynamic loader takes care of loading the shared libraries that dynamically linked
executables need in order to run. Now, let's take a quick look at how the dynamic loader
finds the appropriate shared libraries on your system.
ld.so.conf
The dynamic loader finds shared libraries thanks to two files -- /etc/ld.so.conf and
/etc/ld.so.cache. If you cat your /etc/ld.so.conf file, you'll probably see a listing
that looks something like this:
$ cat /etc/ld.so.conf
/usr/X11R6/lib
/usr/lib/gcc-lib/i686-pc-linux-gnu/2.95.3
/usr/lib/mozilla
/usr/lib/qt-x11-2.3.1/lib
/usr/local/lib
The ld.so.conf file contains a listing of all directories (besides /lib and /usr/lib,
which are automatically included) in which the dynamic loader will look for shared libraries.
ld.so.cache
But before the dynamic loader can "see" this information, it must be converted into an
ld.so.cache file. This is done by running the ldconfig command:
# ldconfig
When ldconfig completes, you now have an up-to-date /etc/ld.so.cache file that reflects
any changes you've made to /etc/ld.so.conf. From this point forward, the dynamic
loader will look in any new directories that you specified in /etc/ld.so.conf when looking
for shared libraries.
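For example, if you installed libraries into a new directory such as /usr/local/mylib (a hypothetical path), you would add that directory to the configuration and re-run ldconfig:
# echo "/usr/local/mylib" >> /etc/ld.so.conf
# ldconfig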
ldconfig tips
To view all the shared libraries that ldconfig can "see," type:
# ldconfig -p | less
There's one other handy trick you can use to configure your shared library paths. Sometimes,
you'll want to tell the dynamic loader to try to use shared libraries in a specific directory
before trying any of your /etc/ld.so.conf paths. This can be handy in situations where
you are running an older application that doesn't work with the currently-installed versions of
your libraries.
LD_LIBRARY_PATH
To instruct the dynamic loader to check a certain directory first, set the LD_LIBRARY_PATH
variable to the directories that you would like searched. Separate multiple paths using
colons; for example:
# export LD_LIBRARY_PATH="/usr/lib/old:/opt/lib"
After LD_LIBRARY_PATH has been exported, any executables started from the current shell
will use libraries in /usr/lib/old or /opt/lib if possible, falling back to the directories
specified in /etc/ld.so.conf if some shared library dependencies are still unsatisfied.
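One quick way to see the effect (assuming suitable libraries actually exist in the directories you listed) is to re-run ldd on a dynamic executable after exporting the variable; any libraries found in the new directories will be listed with those paths:
# ldd /bin/ln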
We've completed our coverage of Linux shared libraries. To learn more about shared
libraries, type man ldconfig and man ld.so.
Whatever the reason, whether of necessity or simply just because you want to compile the
program from its sources, this section will show you how.
Downloading
Your first step will be to locate and download the sources that you want to compile. They'll
probably be in a single archive with a trailing .tar.gz, .tar.Z, .tar.bz2, or .tgz extension. Go
ahead and download the archive with your favorite browser or ftp program. If the program
happens to have a Web page, this would be a good time to visit it to familiarize yourself with
any installation documentation that may be available.
The program you're installing could depend on the existence of any number of other
programs that may or may not be currently installed on your system. If you know for sure that
your program depends on other programs or libraries that are not currently installed, you'll
need to get these packages installed first (either from a binary package format such as rpm, or by
compiling them from source as well). Then, you'll be in a great position to get your original
source file successfully installed.
Unpacking
Unpacking the source archive is relatively easy. If the name of your archive ends with .tar.gz,
.tar.Z, or .tgz, you should be able to unpack the archive by typing:
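$ tar xzvf archivename.tar.gz
(Replace archivename.tar.gz with the actual name of your archive.)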
(x is for extract, z is for gzip decompression, v is for verbose (print the files that are
extracted), and f means that the filename will appear next on the command line.)
Nearly all "source tarballs" will create one main directory that contains all the program's
sources. This way, when you unpack the archive, your current working directory isn't
cluttered with lots of files -- instead, all files are neatly organized in a single, new directory
and don't get in the way.
Listing archives
Every now and then, you may come across an archive that, when decompressed, creates
tons of files in your current working directory. While most tarballs aren't created this way, it's
been known to happen. If you want to verify that your particular tarball was put together
correctly and creates a main directory to hold the sources, you can view its contents by
typing:
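$ tar tzvf archivename.tar.gz | less
(The t option tells tar to list the archive's contents rather than extract them.)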
If there is no common directory listed on the left-hand side of the archive listing, you'll want to
create a new directory, move the tarball inside it, enter the directory, and only then extract
the tarball. Otherwise, you'll be in for a mess!
Because bzip2 has been gaining popularity, modern versions of GNU tar recognize the j
option to mean "this tarball is compressed with bzip2." When tar encounters the j option, it
will auto-decompress the tarball (by calling the "bzip2" program) before it tries to open the
tarball. For example, here's the command to view the contents of a .tar.bz2 file:
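$ tar tjvf archivename.tar.bz2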
And here is the command to view the contents of a .tar (uncompressed) file:
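$ tar tvf archivename.tar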
bzip2 pipelines
So, your version of tar doesn't recognize those handy bzip2 shortcuts -- what can be done?
Fortunately, there's an easy way to extract the contents of bzip2 tarballs that will work on
nearly all UNIX systems, even if the system in question happens to have a non-GNU version
of tar. To view the contents of a bzip2 file, we can create a shell pipeline:
In this pipeline, bzip2 decompresses the data from our archive file and passes the result to tar. Since tar was called with the f - option, it reads its tar data from stdin,
rather than trying to read data from a file on disk.
If you used the pipeline method to try to extract the contents of your archive and your system
complained that bzip2 couldn't be found, it's possible that bzip2 isn't installed on your system.
You can download the sources to bzip2 from http://sources.redhat.com/bzip2. After installing
the bzip2 sources (by following this tutorial), you'll then be able to unpack and install the
application you wanted to install in the first place :)
Inspecting sources
Once you've unpacked your sources, you'll want to enter the unpacked directory and check
things out. It's always a good idea to locate any installation-related documentation. Typically,
this information can be found in a README or INSTALL file located in the main source
directory. Additionally, look for README.platform and INSTALL.platform files, where platform
is the name of your particular operating system -- in this case "Linux."
Configuration
Many modern sources contain a configure script in the main source directory. This script
(typically generated by the developers using the GNU autoconf program) is specially
designed to set up the sources so that they compile perfectly on your system. When run, the
configure script probes your system, determining its capabilities, and creates Makefiles,
which contain instructions for building and installing the sources on your system.
The configure script is almost always called "configure." If you find a configure script in the
main source directory, odds are good that it was put there for your use. If you can't find a
configure script, then your sources probably come with a standard Makefile that has been
designed to work across a variety of systems -- this means that you can skip the following
configuration steps, and resume this tutorial where we start talking about "make."
Using configure
Before running the configure script, it's a good idea to get familiar with it. By typing
./configure --help, you can view all the various configuration options that are available
for your program. Many of the options you see, especially the ones listed at the top of the
--help printout, are standard options that will be found in nearly every configure script. The
options listed near the end are often related to the particular package you're trying to
compile. Take a look at them and note any you'd like to enable or disable.
Using --prefix
Most autoconf-generated configure scripts install files under /usr/local by default. If you'd like the sources to install somewhere else, say in /usr, you'll want to pass the
--prefix=/usr option to configure. Likewise, you could also tell configure to install to your
/opt tree, by using the --prefix=/opt option.
This capability comes in very handy, since most source archives aren't yet FHS-compliant.
Often, you'll need to add a --mandir=/usr/share/man and a
--infodir=/usr/share/info to the configure command line in order to make your
source package configure itself to eventually install its files in the "correct" locations.
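Put together, such a configure invocation might look like this (adjust the directories to suit your system):
$ ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info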
Time to configure
Once you've taken a look at the various configure options and determined which ones you'd
like to use, it's time to run configure. Please note that you may not need to include any
command-line options when you run configure -- in the majority of situations, the defaults will
work (but may not be exactly what you want).
$ ./configure
or
$ ./configure <options>
The options you need will depend on the particular package you're configuring. When you
run configure, it will spend a minute or two detecting what features or tools are available on
your system, printing out the results of its various configuration checks as it goes.
config.cache
Once the configuration process completes, the configure script stores all its configuration
data in a file called config.cache. This file lives in the same directory as the configure
script itself. If you ever need to run ./configure again after you've updated your system
configuration, make sure you rm config.cache first; otherwise, configure will simply use
the old settings without rechecking your system.
Makefile intro
Makefiles are typically named makefile or Makefile. There will normally be one makefile in
each directory that contains source files, in addition to one that sits in the main source
directory. The autoconf-generated Makefiles contain instructions (officially called rules) that
specify how to build certain targets, like the program you want to install. make figures out the
order in which all the rules should run.
Invoking make
Invoking make is easy; just type "make" in the current directory. The make program will then
find and interpret a file called makefile or Makefile in the current directory. If you type "make"
all by itself, it will build the default target. Developers normally set up their makefiles so that
the default target compiles all the sources:
$ make
Some makefiles won't have a default target, and you'll need to specify one in order to get the
compilation started:
$ make all
After typing one of these commands, your computer will spend several minutes compiling
your program into object code. Presuming it completes with no errors, you'll be ready to
install the compiled program onto your system.
Installation
After the program is compiled, there's one more important step: installation. Although the
program is compiled, it's not yet ready for use. All its components need to be copied from the
source directory to the correct "live" locations on your filesystem. For example, all binaries
need to be copied to /usr/local/bin, and all man pages need to be installed into
/usr/local/man, etc.
Before you can install the software, you'll need to become the root user. This is typically done
by either logging in as root on a separate terminal or typing su, at which point you'll be
prompted for root's password. After typing it in, you'll have root privileges until you exit from
your current shell session by typing "exit" or hitting control-D. If you're already root, you're
ready to go!
make install
Installing sources is easy. In the main source directory, simply type:
# make install
Typing "make install" will tell make to satisfy the "install" target; this target is traditionally used
to copy all the freshly created source files to the correct locations on disk so that your
program can be used. If you didn't specify a --prefix option, it's very likely that quite a few
files and directories will be copied to your /usr/local tree. Depending on the size of the
program, the install target may take anywhere from several seconds to a few minutes to
complete.
In addition to simply copying files, make install will also make sure the installed files have the
correct ownership and permissions. After make install completes successfully, the
program is installed and ready (or almost ready) for use!
If the package installed a man page, you can read it by typing:
$ man programname
It's possible that a program may require additional configuration steps. For example, if you
installed a Web server, you'll need to configure it to start automatically when your system
boots. You may also need to customize a configuration file in /etc before your application will
run.
Ta da!
Now that you've fully installed a software package from its sources, you can run it! To start
the program, type:
$ programname
Congratulations!
Possible problems
It's very possible that configure or make, or possibly even make install, aborted with some
kind of error code. The next several panels will help you correct common problems.
Missing libraries
Every now and then, you may experience a problem where configure bombs out because
you don't have a certain library installed. In order for you to continue the build process, you'll
need to temporarily put your current program configuration on hold and track down the
sources or binary package for the library that your program needs. Once the correct library is
installed, configure or make should be happy and complete successfully.
Other problems
Sometimes, you'll run into some kind of error that you simply don't know how to fix. As your
experience with UNIX/Linux grows, you'll be able to diagnose more and more seemingly
cryptic error conditions that you encounter during the configure and make process.
Sometimes, errors occur because an installed library is too old (or possibly even too new!).
Other times, the problem you're having is actually the fault of the developers, who may not
have anticipated their program running on a system such as yours -- or maybe they just
made a typo :)
There is some truth to these statements, but the general consensus among Linux users is
that the advantages outweigh the disadvantages. Additionally, each stumbling block listed
above has a corresponding rebuttal: Multiple packages can be built to optimize for different
systems; package managers can be augmented to resolve dependencies automatically;
databases can be rebuilt based on other files; and the initial effort expended in creating a
package is mitigated by the ease of upgrading or removing that package later.
The rpm program has a command-line interface by default, although there are GUIs and
Web-based tools to provide a friendlier interface. In this section we'll introduce the most
common command-line operations, using the Xsnow program for the examples. If you would
like to follow along, you can download the Xsnow rpm below, which should work on most
rpm-based distributions.
• xsnow-1.41-1.i386.rpm
Note: If you find the various uses of the term "rpm" confusing in this section, keep in mind
that "rpm" usually refers to the program, whereas "an rpm" or "the rpm" usually refers to an
rpm package.
Installing an rpm
To get started, let's install our Xsnow rpm using rpm -i:
# rpm -i xsnow-1.41-1.i386.rpm
If this command produced no output, then it worked! You should be able to run Xsnow to
enjoy a blizzard on your X desktop. Personally, we prefer to have some visual feedback
when we install an rpm, so we like to include the -h (hash marks to indicate progress) and
-v (verbose) options:
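# rpm -ivh xsnow-1.41-1.i386.rpm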
Re-installing an rpm
If you were following along directly, you might have seen rpm complain in the previous
example that the package was already installed.
There may be occasions when you wish to re-install an rpm, for instance if you were to
accidentally delete the binary /usr/X11R6/bin/xsnow. In that case, you should first remove the
rpm with rpm -e, then re-install it. Note that the information message from rpm in the
following example does not hinder the removal of the package from the system:
# rpm -e xsnow
removal of /usr/X11R6/bin/xsnow failed: No such file or directory
# rpm -e xsnow
error: removing these packages would break dependencies:
/usr/X11R6/bin/xsnow is needed by x-amusements-1.0-1
In that case, you could re-install Xsnow using the --force option:
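# rpm -ivh --force xsnow-1.41-1.i386.rpm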
You can also use --nodeps when installing an rpm. To reiterate what was said above,
using --nodeps is not recommended; however, it is sometimes necessary:
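# rpm -ivh --nodeps xsnow-1.41-1.i386.rpm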
Upgrading packages
There is now an rpm of Xsnow version 1.42 on the Xsnow author's Website. You may want
to upgrade your existing Xsnow installation for your particular version of Linux. If you were to
use rpm -ivh --force, it would appear to work, but rpm's internal database would list
both versions as being installed. Instead, you should use rpm -U to upgrade your
installation:
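# rpm -Uvh xsnow-1.42-1.i386.rpm
(The filename shown is what the 1.42 package would typically be called; adjust it to match your download.)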
Here's a little trick: we rarely use rpm -i at all, because rpm -U will simply install an rpm if
it doesn't exist yet on the system. This is especially useful if you specify multiple packages on
the command-line, where some are currently installed and some are not:
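For example (the second package name here is purely illustrative):
# rpm -Uvh xsnow-1.42-1.i386.rpm some-other-package-2.0-1.i386.rpm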
Querying packages
You can ask rpm's database whether a package is installed, and at what version, using rpm -q:
# rpm -q xsnow
xsnow-1.41-1
In fact, rpm knows even more about the installed package than just the name and version.
We can ask for a lot more information about the Xsnow rpm using rpm -qi:
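# rpm -qi xsnow
(We omit the lengthy output here; it includes the package description, build date, and the author's Website, among other details.)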
You can also list the files that belong to a package with rpm -ql. Combined with the -c option or the -d option, the query can be restricted to configuration or documentation files, respectively. This type of query is more useful for larger rpms with long
file lists, but we can still demonstrate using the Xsnow rpm:
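# rpm -qd xsnow
This lists the package's documentation files -- for Xsnow, that includes its man page, /usr/X11R6/man/man1/xsnow.1x.gz.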
The -a option queries every package installed on the system; adding -l as well lists every file that those packages own:
# rpm -qa | wc -l
287
# rpm -qal | wc -l
45706
Here's a quick tip: Using rpm -qa can ease the administration of multiple systems. If you
redirect the sorted output to a file on one machine, then do the same on the other machine,
you can use the diff program to see the differences.
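For example (the file names here are arbitrary):
# rpm -qa | sort > /tmp/rpmlist-host1.txt
Run the same command on the second machine (saving to rpmlist-host2.txt), copy that file over, and then compare:
# diff /tmp/rpmlist-host1.txt /tmp/rpmlist-host2.txt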
Sometimes you'd like to know which package owns a particular file. You could find out by brute force, looping over every installed package:
# rpm -qa | while read p; do rpm -ql $p | grep -q '^/usr/X11R6/bin/xsnow$' && echo $p; done
xsnow-1.41-1
Since this takes a long time to type, and even longer to run (1m50s on one of our Pentiums),
the rpm developers thoughtfully included the capability in rpm. You can query for the owner
of a given file using rpm -qf:
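# rpm -qf /usr/X11R6/bin/xsnow
xsnow-1.41-1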
Even on the Pentium, that only takes 0.3s to run. And even fast typists will enjoy the
simplicity of rpm -qf compared to the complex shell construction :)
Showing dependencies
Unless you employ options such as --nodeps, rpm normally won't allow you to install or
remove packages that break dependencies. For example, you can't install Xsnow without first
having the X libraries on your system. Once you have Xsnow installed, you can't remove the
X libraries without removing Xsnow first (and probably half of your installed packages).
This is a strength of rpm, even if it's frustrating sometimes. It means that when you install an
rpm, it should just work. You shouldn't need to do much extra work, since rpm has already
verified that the dependencies exist on the system.
You can also query the installed database for the same information by omitting the -p:
Wait a minute! According to that output, the GPG signature is NOT OK. Let's add some
verbosity to see what's wrong:
So, the problem is that we couldn't retrieve the author's public key. After we retrieve the
public key from the package author's Website (shown in the output from rpm -qi), the
signature checks out.
Verifying packages
The rpm -V command verifies that a package's installed files are still present and unmodified on disk:
# rpm -V xsnow
Normally this command displays no output to indicate a clean bill of health. Let's spice things
up and try again:
# rm /usr/X11R6/man/man1/xsnow.1x.gz
# cp /bin/sh /usr/X11R6/bin/xsnow
# rpm -V xsnow
S.5....T /usr/X11R6/bin/xsnow
missing /usr/X11R6/man/man1/xsnow.1x.gz
This output shows us that the Xsnow binary fails MD5 sum, file size, and mtime tests. And
the man page is missing altogether! Let's repair this broken installation:
# rpm -e xsnow
removal of /usr/X11R6/man/man1/xsnow.1x.gz failed: No such file or directory
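Then re-install the package to restore the missing and modified files:
# rpm -ivh xsnow-1.41-1.i386.rpm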
Configuring rpm
Rpm rarely needs configuring. It simply works out of the box. In older versions of rpm, you
could change things in /etc/rpmrc to affect run-time operation. In recent versions, that file
has been moved to /usr/lib/rpm/rpmrc, and is not meant to be edited by system
administrators. Mostly it just lists flags and compatibility information for various platforms
(e.g. i386 is compatible with all other x86 architectures).
If you wish to configure rpm, you can do so by editing /etc/rpm/macros. Since this is
rarely necessary, we'll let you read about it in the rpm bundled documentation. You can find
the right documentation file with the following command:
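One way to do this (using rpm's own query options, and assuming your distribution ships the documentation with the rpm package itself) is to list the documentation files belonging to the rpm package and look for the one covering macros:
# rpm -qd rpm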
To learn more about rpm, check out the following resources:
• Xsnow Website
• rpm Home Page
• Maximum RPM - an entire book
• The RPM HOWTO at the Linux Documentation Project
• Red Hat's chapter on package management with rpm
• developerWorks article on creating rpms
• rpmfind.net -- a huge collection of rpms
Skimming through this output, you can see that Xsnow was to be installed, then it was
fetched from the Web, unpacked, and finally set up.
Simulated install
If apt-get notices that the package you are trying to install depends on other packages, it will
automatically fetch and install those as well. In the last example, only Xsnow was installed,
because all of its dependencies were already satisfied.
Sometimes, however, the list of packages apt-get needs to fetch can be quite large, and it is
often useful to see what is going to be installed before you let it start. The -s option does
exactly this. For example, on one of our systems if we try to install the graphical e-mail
program balsa:
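# apt-get -s install balsa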
It then goes on to list the order in which the packages will be installed and configured (or set
up).
# apt-setup
This tool walks you through the process of finding places to get Debian packages, such as
CDROMs, Web sites, and ftp sites. When you're done, it writes out changes to your
/etc/apt/sources.list file, so that apt-get can find packages when you ask for them.
apt-get also has many other commands besides the install command we've used so far.
One of these is apt-get dselect-upgrade, which obeys the Status set for each package
on your Debian system.
Starting dselect
The Status for each package is stored in the file /var/lib/dpkg/status, but it is best
updated using another interactive tool:
# dselect
Debian GNU/Linux `dselect' package handling frontend.
Move around with ^P and ^N, cursor keys, initial letters, or digits;
Press <enter> to confirm selection. ^L redraws screen.
You can view and change each package's Status by choosing the Select option. It will then
display a screenful of help. When you're done reading this, press space. Now you will see a
list of packages that looks something like this:
To change the Mark, just press the key for the code you want (equal, dash, or underline), but
if you want to change the Mark to * (asterisk), you have to press + (plus).
When you are done, use an upper-case Q to save your changes and exit the Select screen. If
you need help at any time in dselect, type ? (question mark). Type a space to get back out
of a help screen.
There are other ways to run these steps. For example, you can choose each step individually
from the main dselect menu.
Some packages use a system called debconf for their Configure step. Those that do can
ask their setup questions in a variety of ways, such as in a text terminal, through a graphical
interface, or through a Web page. To configure one of these packages, use the
dpkg-reconfigure command. You can even use it to make sure all debconf packages
have been completely configured:
# dpkg-reconfigure --all
debconf: package "3c5x9utils" is not installed or does not use debconf
debconf: package "3dchess" is not installed or does not use debconf
debconf: package "9menu" is not installed or does not use debconf
debconf: package "9wm" is not installed or does not use debconf
debconf: package "a2ps" is not installed or does not use debconf
debconf: package "a2ps-perl-ja" is not installed or does not use debconf
debconf: package "aalib-bin" is not installed or does not use debconf
This will produce a very long list of packages that do not use debconf, but it will also find
some that do and present easy-to-use forms for you to answer the questions that each
package asks.
For example, to get the complete status and description of a package, use the -s option:
# dpkg -s xsnow
Package: xsnow
Status: install ok installed
Priority: optional
Section: non-free/x11
Installed-Size: 41
Maintainer: Martin Schulze <[email protected]>
Version: 1.40-6
Depends: libc6, xlib6g (>= 3.3-5)
Description: Brings Christmas to your desktop
Xsnow is the X-windows application that will let it snow on the
root window, in between and on windows. Santa and his reindeer
will complete your festive-season feeling.
To see the list of files that a package has installed on the system, use the -L option:
# dpkg -L xsnow
/.
/usr
/usr/doc
/usr/doc/xsnow
/usr/doc/xsnow/copyright
/usr/doc/xsnow/readme.gz
/usr/doc/xsnow/changelog.Debian.gz
/usr/X11R6
/usr/X11R6/bin
/usr/X11R6/bin/xsnow
/usr/X11R6/man
/usr/X11R6/man/man6
/usr/X11R6/man/man6/xsnow.6.gz
To go the other way around, and find which package contains a specific file, use the -S
option:
# dpkg -S /usr/doc/xsnow/copyright
xsnow: /usr/doc/xsnow/copyright
The name of the package is listed just to the left of the colon.
If you do find and download a .deb file, you can install it using the -i option:
# dpkg -i /tmp/dl/xsnow_1.40-6_i386.deb
If you can't find the package you're looking for as a .deb file, but you find a .rpm or some
other type of package, you may be able to use alien. The alien program can convert
packages from various formats into .debs.
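For example, assuming you had downloaded a hypothetical xsnow RPM, the conversion and installation might look something like this (alien normally increments the release number of the .deb it generates):
# alien xsnow-1.40-6.i386.rpm
# dpkg -i xsnow_1.40-7_i386.deb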
On the next page, you'll find a number of resources that will help you learn more about the
subjects presented in this tutorial.
Resources
At linuxdoc.org, you'll find linuxdoc's collection of guides, HOWTOs, FAQs and man-pages to
be invaluable. Be sure to check out Linux Gazette and LinuxFocus as well.
The Linux System Administrators guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, Daniel shows you how to use
bash programming constructs to write your own bash scripts. This bash series (particularly
Parts 1 and 2) will be excellent additional preparation for the LPIC Level 1 exam:
• Bash by example, Part 1: Fundamental programming in the Bourne-again shell
• Bash by example, Part 2: More bash programming fundamentals
• Bash by example, Part 3: Exploring the ebuild system
We highly recommend the Technical FAQ for Linux users by Mark Chapman, a 50-page
in-depth list of frequently asked Linux questions, along with detailed answers. The FAQ itself
is in PDF (Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it
to yourself to check this FAQ out. We also recommend Linux glossary for Linux users, also
from Mark.
If you're not familiar with the vi editor, we strongly recommend that you check out Daniel's Vi
intro -- the cheat sheet method tutorial . This tutorial will give you a gentle yet fast-paced
introduction to this powerful text editor. Consider this must-read material if you don't know
how to use vi.
Feedback
Please send any tutorial feedback you may have to the authors:
• Daniel Robbins, at [email protected]
• Chris Houser, at [email protected]
• Aron Griffis, at [email protected]
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This tutorial is ideal for those who want to learn about or improve their Linux kernel
compilation and configuration skills. This tutorial is especially appropriate for those who will
be setting up Linux servers or desktops. For many readers, much of this material will be new,
but more experienced Linux users may find this tutorial to be a great way of rounding out
their important Linux kernel skills. If you are new to Linux, we recommend you start with Part
1 and work through the series from there.
By the end of this series of tutorials (eight in all; this is part six), you'll have the knowledge
you need to become a Linux Systems Administrator and will be ready to attain an LPIC Level
1 certification from the Linux Professional Institute if you so choose.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
Daniel lives in Albuquerque, New Mexico, and is the Chief Architect of Gentoo Technologies,
Inc., the creator of Gentoo Linux, an advanced Linux for the PC, and the Portage system, a
next-generation ports system for Linux. He has also served as a contributing author for the
Macmillan books Caldera OpenLinux Unleashed, SuSE Linux Unleashed, and Samba
Unleashed. Daniel has been involved with computers in some fashion since the second
grade, when he was first exposed to the Logo programming language as well as a potentially
dangerous dose of Pac Man. This probably explains why he has since served as a Lead
Graphic Artist at SONY Electronic Publishing/Psygnosis. Daniel enjoys spending time with
his wife, Mary, and their daughter, Hadassah.
CPU abstraction
The Linux kernel also provides a level of abstraction on top of the processor(s) in your
system -- allowing for multiple programs to appear to run simultaneously. The kernel takes
care of giving each process a fair and timely share of the processors' computing resources.
If you're running Linux right now, then the kernel that you are using now is either UP
(uniprocessor) or SMP (symmetric multiprocessor) aware. If you happen to have an SMP
motherboard, but you're using a UP kernel, Linux won't "see" your extra processors! To fix
this, you'll want to compile a special SMP kernel for your hardware. Currently, SMP kernels
will also work on uniprocessor systems, but with a slight performance hit.
Abstracting IO
The kernel also handles the much-needed task of abstracting all forms of file IO. Imagine
what would happen if every program had to interface with your specific disk hardware directly
-- if you changed disk controllers, all your programs would stop working! Fortunately, the
Linux kernel follows the UNIX model of providing a simple data storage and access
abstraction that all programs can use. That way, your favorite database doesn't need to be
concerned whether it is storing data on an IDE disk, on a SCSI RAID array, or on a
network-mounted file system.
Networking central
One of Linux's main claims to fame is its robust networking, especially TCP/IP support. And,
if you guessed that the TCP/IP stack is in the Linux kernel, you're right! The kernel provides a
standards-compliant, high-level interface for programs that want to send data over the
network. Behind the scenes, the Linux kernel interfaces directly with your particular Ethernet
card or pppd daemon and handles the low-level Internet communication details. Note that
the next tutorial in this series, Part 7, will deal with TCP/IP and networking.
Networking goodies
One of the greatest things about Linux is the wealth of useful optional features that are
available in the kernel, especially related to networking. For example, you can configure a
kernel that will allow your entire home network to access the Internet by way of your Linux
modem -- this is called IP Masquerading, or IP NAT.
Additionally, the Linux kernel can be configured to export or mount network-based NFS file
systems, allowing for other UNIX machines on your LAN to easily share data with your Linux
system. There are a lot of goodies in the kernel, as you'll learn once you begin exploring the
Linux kernel's many configuration options.
Booting review
Now would be a good time for a quick refresher of the Linux boot process. When you turn on
your Linux-based system, the kernel image (stored in a single binary file) is loaded from disk
to memory by a boot loader, such as LILO or GRUB. At this point, the kernel takes control of
your system. One of the first things it does is detect and initialize all the hardware that it finds
and has been configured to support. Once the hardware has been initialized properly, the
kernel is ready to start normal user-space programs (also known as "processes").
The first process run by the kernel is /sbin/init. It, in turn, starts additional processes as
specified in /etc/inittab. Within seconds, your Linux system is up and running, ready for you
to use. Although you never interact with the kernel directly, the Linux kernel is always running
"above" all normal processes, providing the necessary virtualization and abstractions that
your various programs and libraries require to function.
Introducing... modules!
All recent Linux kernels support kernel modules. Kernel modules are really neat things --
they're pieces of the kernel that reside in relatively small binary files on disk. As soon as the
kernel needs the functionality of a particular module, the kernel can load that specific module
from disk and automatically integrate it into itself, thus dynamically extending its capabilities.
If the features of a loaded kernel module haven't been used for several minutes, the kernel
can voluntarily disassociate it from the rest of the kernel and unload it from memory --
something that's called autocleaning. Without kernel modules, you'd need to ensure that your
running kernel (which exists on disk as a single binary file) contains absolutely all the
functionality you could possibly need. Without modules, you'd need to build a completely
new kernel to add important new functionality to it.
Typically, users build a single kernel image that contains all essential functionality, and then
build a bunch of modules that correspond to features that they may need in the future. If and
when that time comes, the appropriate module can be loaded into the kernel as needed. This
also helps to conserve RAM, since a module uses RAM only when it has been loaded into
the running kernel. When a module is removed from the kernel, that memory can be freed
and used for other purposes.
While the kernel modules for the 2.4 and earlier kernels end in ".o", kernel modules for the
2.5 and 2.6 kernels end in ".ko".
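Working with modules by hand takes only a few commands: lsmod lists the modules that are currently loaded, modprobe loads a module along with anything it depends on, and rmmod unloads a module that is no longer in use. For example, using the standard 2.4 ide-cd module as a stand-in for whichever module you care about:
# lsmod
# modprobe ide-cd
# rmmod ide-cd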
There are also several 2.5 series kernels available, but you should not use them on
production systems. The "5" in "2.5" is an odd number, indicating that these kernels are
experimental in nature and intended for kernel developers. When the "2.5" kernels are ready
for production use, a "2.6" (even second number) series will begin.
However, there may be times when you want to install a new kernel. Generally, the best
approach is to simply install a new or updated version of your distribution's kernel source
package. This package will contain kernel sources that have been patched and tweaked to
run optimally on your Linux system. The latest versions of Red Hat Linux require a special
"Red Hat" kernel in order to function. While other distributions don't specifically require that
you use their patched kernel sources, doing so is still recommended.
At kernel.org, you'll find the kernel sources organized into several different directories, based
on kernel version (v2.2, v2.4, etc.) Inside each directory, you'll find files labeled
"linux-x.y.z.tar.gz" and "linux-x.y.z.tar.bz2". These are the Linux kernel source tarballs. You'll
also see files labeled "patch-x.y.z.gz" and "patch-x.y.z.bz2". These files are patches that can
be used to update the previous version of complete kernel sources. If you want to compile a
new kernel release, you'll need to download one of the "linux" files.
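If you already have the full sources for the previous release extracted, a patch can be applied from inside the source tree; the paths and version numbers below are placeholders:
# cd /usr/src/linux
# bzip2 -dc /path/to/patch-x.y.z.bz2 | patch -p1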
Now, it's time to extract the new kernel. While still in /usr/src, type tar xzvf
/path/to/my/kernel-x.y.z.tar.gz or cat
/path/to/my/kernel-x.y.z.tar.bz2 | bzip2 -d | tar xvf -, depending on
whether your sources are compressed with gzip or bzip2. After typing this, your new kernel
sources will be extracted into a new "linux" directory. Beware -- the full kernel sources
typically occupy more than 50 MB on disk!
The old-fashioned way of configuring a kernel was a tremendous pain, and involved entering
/usr/src/linux and typing make config. While make config still works, please don't try to use
this method to configure your kernel -- unless you like answering hundreds (yes, hundreds!)
of yes/no questions on the command-line.
When using "make menuconfig," options that have a "< >" to their left can be compiled as a
module. When the option is highlighted, hit the space bar to toggle whether the option is
deselected ("< >"), selected to be compiled into the kernel image ("<*>"), or selected to be
compiled as a module ("<M>"). You can also hit "y" to enable an option, "n" to disable it, or
"m" to select it to be compiled as a module if possible. Fortunately, most kernel configuration
options have verbose help that you can view by typing h.
Configuration tips
Unfortunately, there are so many kernel configuration options that we simply don't have room
to cover them all here (but you can check the options(4) man page for a more complete list of
options, if you're curious).
In the following panels, I'll give you an overview of the important categories you'll find when
you do a "make menuconfig" or "make xconfig," pointing out essential or important kernel
configuration options along the way.
Code maturity level options: This configuration category contains a single option: "Prompt
for development and/or incomplete code/drivers." If enabled, many options that are
considered experimental (such as ReiserFS, devfs, and others) will be visible under other
category menus. If this option isn't selected, the only options that will be visible will be those
that are considered "stable." Generally, it's a good idea to enable this option so that you can
see all that the kernel has to offer. Some of the "experimental" options are truly experimental,
but many are experimental in name only and are in wide production use.
Processor type and features: This section includes various CPU-specific configuration
options. Of particular importance is the "Symmetric multiprocessing support" option, which
should be enabled if your system has more than one CPU. Otherwise, only the first CPU in
your system will be utilized. The "MTRR Support" option should generally be enabled, since it
will result in better performance in X on modern systems.
Parallel port support: The "Parallel port support" section should be of interest to
anyone with parallel port devices, including printers. Note that in order to have full printer
support, you must also enable "Parallel printer support" under the "Character devices"
section in addition to the appropriate parallel port support here.
Network device support: The second requirement for getting Linux networking to work is to
compile in support for your particular networking hardware. You should select support for the
card(s) that you'd like your kernel to support. The options you want are most likely hiding
under the "Ethernet (10 or 100Mbit)" sub-category.
IDE support
ATA/IDE/MFM/RLL support: This section contains important options for those using IDE
drives, CD-ROMs, DVD-ROMs, and other peripherals. If your system has IDE disks, be sure
to enable "Enhanced IDE/MFM/RLLdisk/cdrom/tape/floppy support," "Include IDE/ATA-2
DISK support", and the chipset appropriate to your particular motherboard (built-in to the
kernel, not as modules -- so your system can boot!). If you have an IDE CD-ROM, be sure to
enable "Include IDE/ATAPI CD-ROM support" as well. Note: without specific chipset support,
IDE will still work but may not take advantage of all the performance-enhancing features of
your particular motherboard.
Also note that the "Enable PCI DMA by default if available" is a highly recommended option
for nearly all systems. Without DMA (direct memory access) enabled by default, your IDE
peripherals will run in PIO mode and may perform up to 15 times slower than normal! You
can verify that DMA is enabled on a particular disk by typing hdparm -d 1 /dev/hdx at
the shell prompt as root, where /dev/hdx is the block special device corresponding to the disk
on which you'd like to enable DMA.
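To check the current setting, run hdparm -d with no value; output along these lines indicates that DMA is on:
# hdparm -d /dev/hda

/dev/hda:
 using_dma    =  1 (on)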
SCSI support
SCSI support: This contains all the options related to SCSI disks and peripherals. If you
have a SCSI-based system, be sure to enable "SCSI support," "SCSI disk support," "SCSI
CD-ROM support," and "SCSI tape support" as necessary. If you are booting from a SCSI
disk, ensure that both "SCSI support" and "SCSI disk support" are compiled-in to your kernel
and are not selected to be compiled as loadable modules. For SCSI to work correctly, you
need to perform one additional step: head over to the "SCSI low-level drivers" sub-category
and ensure that support for your particular SCSI card is enabled and configured to be
compiled directly into the kernel.
File systems: This contains options related to file system drivers, as you might guess. You'll
need to ensure that the file system used for "/" (the root directory) is compiled into your
kernel. Typically, this is ext2, but it may also be ext3, JFS, XFS, or ReiserFS. Be sure to also
enable the "/proc file system support" option, as most distributions require it. Typically, you
should also enable "/dev/pts file system support for Unix98 PTYs," unless you're planning to
use "/dev file system support," (also known as "devfs") in which case you should leave the
"/dev/pts" option disabled, since devfs contains a superset of this capability.
Console drivers: Typically, most people will enable "VGA text console" (normally required
on x86 systems) and optionally "Video mode selection support." It's also possible to use
"Frame-buffer support," which will cause your text console to be rendered on a graphics
rather than a text screen. Some of these drivers can negatively interact with X, so it's best to
stick with the VGA text console, at least in the beginning.
make bzImage
Now it's time to compile the actual binary kernel image. Type make bzImage. After several
minutes, compilation will complete and you'll find the bzImage file in
/usr/src/linux/arch/i386/boot (for an x86 PC kernel). You'll see how to install the new kernel
image in a bit, but now it's time for the modules.
Compiling modules
Now that the bzImage is done, it's time to compile the modules. Even if you didn't enable any
modules when you configured the kernel, don't skip this step -- it's good to get into the habit
of compiling modules immediately after a bzImage. And, if you really have no modules
enabled for compilation -- this step will go really quickly for you. Type make modules &&
make modules_install. This will cause the modules to be compiled and then installed
into /lib/modules/<kernelversion>.
Congratulations! Your kernel is now fully compiled, and your modules are all compiled and
installed. Now it's time to reconfigure LILO so that you can boot the new kernel.
Configuring LILO
To configure LILO to boot the new kernel, you have two options. The first is to overwrite your
existing kernel -- this is risky unless you have some kind of emergency boot method, such as
a boot disk with this particular kernel on it.
The safer option is to configure LILO so that it can boot either the new or the old kernel. LILO
can be configured to boot the new kernel by default, but still provide a way for you to select
your older kernel if you happen to run into problems. This is the recommended option, and
the one we'll show you how to perform.
LILO code
Your lilo.conf may look like this:
boot=/dev/hda
delay=20
vga=normal
root=/dev/hda1
read-only
image=/vmlinuz
label=linux
To add a new boot entry to your lilo.conf, do the following. First, copy
/usr/src/linux/arch/i386/boot/bzImage to a file on your root partition, such as /vmlinuz2. Once
it's there, duplicate the last two lines of your lilo.conf (the image= and label= lines) and add
them again to the end of the file... we're almost there...
Tweaking LILO
Now, your lilo.conf should look like this:
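boot=/dev/hda
delay=20
vga=normal
root=/dev/hda1
read-only
image=/vmlinuz
label=linux
image=/vmlinuz
label=linux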
Now, change the first "image=" line to read image=/vmlinuz2" Next, change the second
"label=" line to read label=oldlinux. Also, make sure there is a "delay=20" line near the
top of the file -- if not, add one. If there is, make sure the number is at least twenty.
After doing all this, you'll need to run "lilo" as root. This is very important! If you don't do this,
the booting process won't work. Running "lilo" will give it an opportunity to update its boot
map.
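A successful run lists each boot entry as it is added, with an asterisk marking the default; the output will look something like this:
# lilo
Added linux *
Added oldlinux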
If, for some reason, you need to boot the old kernel, simply reboot your computer and hold
down the shift key. LILO will detect this, and allow you to type in the label of the image you'd
like to boot. To boot your old kernel, you'd type oldlinux, and hit Enter. To see a list of
possible labels, you'd hit TAB.
The only additional step required is to enable the specific driver for the type of card you're
installing into your system. For example, you'd enable "Creative SBLive!" support (under the
"Sound" category) if you were installing a SoundBlaster Live!" card, and you'd enable
"3c590/3c900 series (592/595/597) "Vortex/Boomerang" support" under the "Network device
support/Ethernet (10 or 100Mbit)" category/subcategory if you were installing a 3Com 3c905c
Fast Ethernet card.
The pciutils package also contains a program called "setpci" that can be used to change
various PCI device settings, including PCI device latency. To learn more about PCI device
latency and the effects it can have on your system, see the developerWorks article Linux
hardware stability guide, Part 2.
The first column lists an IRQ number; the second column displays how many interrupts have
been processed by the kernel for this particular IRQ; and the last column identifies the "short
name" of the hardware device(s) associated with the IRQ. As you can see, multiple devices
are capable of sharing the same IRQ if necessary.
/proc also contains additional information about your hardware in files such as /proc/cpuinfo,
/proc/ioports, and /proc/dma.
This 2nd edition of the LPI 102 tutorials also includes a more thorough look at USB in Part 4,
which will guide you through the steps to enable Linux support for several popular USB
devices.
Enabling USB
To enable Linux USB support, first go inside the "USB support" section and enable the
"Support for USB" option. While that step is fairly obvious, the following Linux USB setup
steps can be confusing. In particular, you now need to select the proper USB host controller
driver for your system. Your options are "EHCI," "UHCI," "UHCI (alternate driver)," and
"OHCI." This is where a lot of people start getting confused about USB for Linux.
The Linux USB drivers have three different USB host controller options because there are
three different types of USB chips found on motherboards and PCI cards. The "EHCI" driver
is designed to provide support for chips that implement the new high-speed USB 2.0
protocol. The "OHCI" driver is used to provide support for USB chips found on non-PC
systems, as well as those on PC motherboards with SiS and ALi chipsets. The "UHCI" driver
is used to provide support for the USB implementations you'll find on most other PC
motherboards, including those from Intel and Via. You simply need to select the "?HCI" driver
that corresponds to the type of USB support you'd like to enable. If in doubt, you can enable
"ECHI," "UHCI" (pick either of the two, there's no significant difference between them), and
"OHCI" just to be safe.
After choosing a host controller driver, you'll also need to enable support for the actual USB
peripherals that you will be using with Linux. For example, in order to enable
support for my USB game controller, I enabled the "USB Human Interface Device (full HID)
support". I also enabled "Input core support" and "Joystick support" under the main "Input
core support" section.
Mounting usbdevfs
Once you've rebooted with your new USB-enabled kernel, you should mount the USB device
file system to /proc/bus/usb by typing the following command:
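# mount -t usbdevfs none /proc/bus/usb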
In order to have the USB device file system mounted automatically when your system boots,
add the following line to /etc/fstab after the /proc mount line:
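none /proc/bus/usb usbdevfs defaults 0 0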
These steps are unnecessary for many Linux distributions, which will auto-detect if usbdevfs
support is enabled in your kernel at boot time and automatically mount the usbdevfs file
system if possible. For more information about USB, visit the USB sites I've listed in
"Resources," which follow.
On the next page, you'll find a number of resources that will help you learn more about the
topics in this tutorial.
Resources
The Linux Kernel HOWTO is another good resource for kernel compilation instructions.
The LILO, Linux Crash Rescue HOW-TO shows you how to create an emergency Linux boot
disk.
Don't forget The Linux Documentation Project. You'll find its collection of guides, HOWTOs,
FAQs, and man-pages to be invaluable. Be sure to check out Linux Gazette and LinuxFocus
as well.
The Linux System Administrators guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, Daniel shows you how to use
bash programming constructs to write your own bash scripts. This series (particularly Parts
1 and 2) will be excellent additional preparation for the LPI exam:
• Bash by example, Part 1: Fundamental programming in the Bourne-again shell
• Bash by example, Part 2: More bash programming fundamentals
• Bash by example, Part 3: Exploring the ebuild system
We highly recommend the Technical FAQ for Linux users by Mark Chapman, a 50-page
in-depth list of frequently asked Linux questions, along with detailed answers. The FAQ itself
is in PDF (Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it
to yourself to check this FAQ out. We also recommend Linux glossary for Linux users, also
from Mark.
If you're not familiar with the vi editor, we strongly recommend that you check out Daniel's Vi
intro -- the cheat sheet method tutorial . This tutorial will give you a gentle yet fast-paced
introduction to this powerful text editor. Consider this must-read material if you don't know
how to use vi.
You can find more information on software RAID in Daniel's developerWorks software RAID
series: Part 1 and Part 2. Logical volume management adds an additional storage
management layer to the kernel that allows you to easily grow, shrink, and span file systems
across multiple disks. To learn more about LVM, see Daniel's articles on the subject: Part 1
and Part 2. Both software RAID and LVM require additional user-land tools and setup.
For more information on using the iptables command to set up your own stateful firewall,
see the developerWorks tutorial Linux 2.4 stateful firewall design.
For more information about USB, visit linux-usb.org . For additional USB setup and
configuration instructions, be sure to read the Linux-USB guide .
For more information on the Linux Professional Institute, visit the LPI home page.
Feedback
We look forward to getting your feedback on this tutorial. Additionally, you are welcome to
contact the author, Daniel Robbins, directly at [email protected].
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This tutorial is ideal for those who want to learn about or improve their foundational Linux
USB, networking, and file sharing skills. It is particularly appropriate for those who will be
setting up applications or USB hardware on Linux servers or desktops. For many, much of
this material will be new, but more experienced Linux users may find this tutorial to be a
great way of rounding out their important Linux system administration skills. If you are new to
Linux, we recommend you start with Part 1 and work through the series from there.
By studying this series of tutorials (eight in all for the 101 and 102 exams; this is the eighth
and last installment), you'll have the knowledge you need to become a Linux Systems
Administrator and will be ready to attain an LPIC Level 1 certification (exams 101 and 102)
from the Linux Professional Institute if you so choose.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
Daniel Robbins lives in Albuquerque, New Mexico, and is the Chief Architect of Gentoo
Technologies, Inc., the creator of Gentoo Linux, an advanced Linux for the PC, and the
Portage system, a next-generation ports system for Linux. He has also served as a
contributing author for the Macmillan books Caldera OpenLinux Unleashed, SuSE Linux
Unleashed, and Samba Unleashed. Daniel has been involved with computers in some
fashion since the second grade, when he was first exposed to the Logo programming
language as well as to a potentially dangerous dose of Pac Man. This probably explains why
he has since served as a Lead Graphic Artist at Sony Electronic Publishing/Psygnosis.
Daniel enjoys spending time with his wife, Mary, and their daughter, Hadassah.
John Davis lives in Cleveland, Ohio, and is the Senior Documentation Coordinator for
Gentoo Linux, as well as a full-time computer science student at Mount Union College. Ever
since his first dose of Linux at age 11, John has become a religious follower and has not
looked back. When he is not writing, coding, or doing the college "thing," John can be found
mountain biking or spending time with his family and close friends.
Setting up USB under GNU/Linux has always been a fairly easy, but undocumented, task.
Users are often confused about whether or not to use modules, what the difference between
UHCI, OHCI, and EHCI is, and why in the world their specific USB device is not working.
This section should help clarify the different aspects of the Linux USB system.
This section assumes that you are familiar with how to compile your kernel, as well as the
basic operation of a GNU/Linux system. For more information on these subjects, please visit
the other LPI tutorials in this series, starting with Part 1, or The Linux Documentation Project
homepage (see the Resources at the end of this tutorial for links).
Grab a kernel
If you do not already have Linux kernel sources installed on your system, it is recommended
that you download the latest 2.4 series kernel from kernel.org or one of its many mirrors (see
the Resources at the end of this tutorial for a link).
Enter lspci
Running lspci should produce output similar to this:
# lspci
00:00.0 Host bridge: Advanced Micro Devices [AMD] AMD-760 [IGD4-1P] System Controller (r
00:01.0 PCI bridge: Advanced Micro Devices [AMD] AMD-760 [IGD4-1P] AGP Bridge
00:07.0 ISA bridge: VIA Technologies, Inc. VT82C686 [Apollo Super South] (rev 40)
00:07.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT8233/A/C/VT8235
00:07.2 USB Controller: VIA Technologies, Inc. USB (rev 1a)
00:07.3 USB Controller: VIA Technologies, Inc. USB (rev 1a)
00:07.4 SMBus: VIA Technologies, Inc. VT82C686 [Apollo Super ACPI] (rev 40)
00:08.0 Serial controller: US Robotics/3Com 56K FaxModem Model 5610 (rev 01)
00:0b.0 VGA compatible controller: nVidia Corporation NV11DDR [GeForce2 MX 100 DDR/200 D
00:0d.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 78)
00:0f.0 Multimedia audio controller: Creative Labs SB Live! EMU10k1 (rev 08)
00:0f.1 Input device controller: Creative Labs SB Live! MIDI/Game Port (rev 08)
01:05.0 VGA compatible controller: nVidia Corporation NV25 [GeForce4 Ti 4400] (rev a2)
Driver               Chipset
EHCI                 USB 2.0 support
UHCI                 All Intel, all VIA chipsets
JE (alternate        If UHCI does not work, and you have an Intel or VIA
  to UHCI)           chipset, try JE
OHCI                 Compaq, most PowerMacs, iMacs, and PowerBooks, OPTi, SiS, ALi
# cd /usr/src/linux
# make menuconfig
# make modules && make modules_install
# modprobe usbcore
# modprobe ehci-hcd
# modprobe usb-uhci
# modprobe uhci
# modprobe usb-ohci
Before you can start using your USB mouse, you need to compile USB mouse support into
your kernel. Enable these two options:
# cd /usr/src/linux
# make menuconfig
# make modules && make modules_install
Once these options are compiled as modules, you are ready to load the usbmouse module
and proceed:
# modprobe usbmouse
When the module finishes loading, go ahead and plug in your USB mouse. If you already
had the mouse plugged in while the machine was booting, no worries, as it will still work.
Once you plug in the mouse, use dmesg to see if it was detected by the kernel:
# dmesg
hub.c: new USB device 10:19.0-1, assigned address 2
input4: USB HID v0.01 Mouse [Microsoft Microsoft IntelliMouse Optical] on usb2:2.0
When you have confirmed that your mouse is recognized by the kernel, it is time to configure
XFree86 to use it. That's next.
Section "InputDevice"
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "IMPS/2"
#The next line enables mouse wheel support
Option "ZAxisMapping" "4 5"
#The next line points XFree86 to the USB mouse device
Option "Device" "/dev/input/mice"
EndSection
Now, restart XFree86, and your USB mouse should be working just fine. Once everything is
working, go ahead and compile your USB modules into the kernel statically. Of course, this is
completely optional, so if you would like to keep your modules as modules, make sure they
are loaded at boot time so that you can use your mouse after you reboot.
Before any digital picture editing can take place, you'll need to retrieve the pictures that are
going to be edited. Many times, digital cameras will have a USB port, but if yours does not,
these instructions will work for your media card reader as long as the file system on your
media card is supported in the Linux kernel.
USB Mass Storage works for anything that uses USB to access an internal drive of some
sort. Feel free to experiment with other hardware, such as USB MP3 players, as these
instructions will work the same. Additionally, note that older cameras with built-in serial ports
are not compatible with these instructions.
USB Mass Storage devices are presented as SCSI devices to the Linux kernel. Therefore,
kernel support must be enabled for SCSI support, SCSI disk support, SCSI generic support,
and USB Mass Storage support.
# cd /usr/src/linux
# make menuconfig
Please note that your third-party modules, such as NVIDIA drivers and ALSA drivers, may be
overwritten by the module installation. You might want to reinstall those right after running
make modules_install.
Did it work?
Once your modules are rebuilt, plug in your camera or media card reader and load the SCSI
disk support module:
# modprobe sd_mod
The following line loads the USB Mass Storage support module:
# modprobe usb-storage
# dmesg
Initializing USB Mass Storage driver...
usb.c: registered new driver usb-storage
scsi1 : SCSI emulation for USB Mass Storage devices
Vendor: SanDisk Model: ImageMate CF-SM Rev: 0100
Type: Direct-Access ANSI SCSI revision: 02
Vendor: SanDisk Model: ImageMate CF-SM Rev: 0100
Type: Direct-Access ANSI SCSI revision: 02
WARNING: USB Mass Storage data integrity not assured
USB Mass Storage device found at 2
USB Mass Storage support registered.
On our machine, the card reader was mapped to /dev/sda1; yours might be different.
To mount your device, do the following (and note that your media's file system might not be
vfat, so substitute as needed):
# mkdir /mnt/usb-storage
# mount -t vfat /dev/sda1 /mnt/usb-storage
Not only that, but authentication (the sending of your password to the server) is performed in
plain text, making it a trivial matter for someone capturing your network data to get instant
access to your password. In fact, using a network sniffer it's possible for someone to
reconstruct your entire telnet session, seeing everything on the screen that you see!
Obviously, tools such as telnet were designed with the assumption that the network was
secure and unsniffable and are inappropriate for today's distributed and public networks.
Secure shell
A better solution was needed, and that solution came in the form of a tool called ssh. A
popular modern incarnation of this tool is available in the openssh package, available for
virtually every Linux distribution, not to mention many other systems.
What sets ssh apart from its insecure cousins is that it encrypts all communications between
the client and the server using strong encryption. By doing this, it becomes very difficult or
impossible to monitor the communications between the client and server. In this way, ssh
provides its service as advertised -- it is a secure shell. In fact, ssh has excellent "all-around"
security -- even authentication takes advantage of encryption and various key exchange
strategies to ensure that the user's password cannot be easily grabbed by anyone monitoring
data being transmitted over the network.
In this age of Internet popularity, ssh is a valuable tool for enhancing network security when
using Linux systems. Most security-savvy network admins discourage or disallow the use of
telnet and rsh on their systems because ssh is such a capable and secure replacement.
Using ssh
Generally, most distributions' openssh packages can be used without any manual
configuration. After installing openssh, you'll have a couple of binaries. One is, of course,
ssh, the secure shell client that can be used to connect to any system running sshd, the
secure shell server. To use ssh, you typically start a session by typing something like:
$ ssh drobbins@remotebox
Above, you instruct ssh to log in as the "drobbins" user account on remotebox. Like telnet,
you'll be prompted for a password; after entering it, you'll be presented with a new login
session on the remote system.
Starting sshd
If you want to allow ssh connections to your machine, you'll need to start the sshd server.
To start the sshd server, you would typically use the rc-script that came with your
distribution's openssh package by typing something like:
# /etc/init.d/sshd start
or
# /etc/rc.d/init.d/sshd start
If necessary, you can adjust configuration options for sshd by modifying the
/etc/ssh/sshd_config file. For more information on the various options available, type man
sshd.
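For instance, two commonly adjusted settings (shown purely as an illustration) control the port sshd listens on and whether root may log in directly:
# excerpt from /etc/ssh/sshd_config
Port 22
PermitRootLogin no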
Secure copy
The openssh package also comes with a handy tool called scp (secure copy). You can use
this command to securely copy files to and from various systems on the network. For
example, if you wanted to copy ~/foo.txt to your home directory on remotebox, you could type:
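$ scp ~/foo.txt drobbins@remotebox: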
Note the trailing colon -- without it, scp would have created a local file in the current working
directory called "drobbins@remotebox." However, with the colon, the intended action is
taken. After being prompted for the password on remotebox, the copy will be performed.
If you wanted to copy a file called bar.txt in remotebox's /tmp directory to the current working
directory on your local system, you could type:
$ scp drobbins@remotebox:/tmp/bar.txt .
Again, the ever-important colon separates the user and host name from the file path.
Section 4. NFS
Introducing NFS
The Network File System (NFS) is a technology that allows the transparent sharing of files
between UNIX and Linux systems connected via a Local Area Network, or LAN. NFS has
been around for a long time; it's well known and used extensively in the Linux and UNIX
worlds. In particular, NFS is often used to share home directories among many machines on
the network, providing a consistent environment for a user when he or she logs in to a
machine (any machine) on the LAN. Thanks to NFS, it's possible to mount remote file system
trees and have them fully integrated into a system's local file system. NFS' transparency and
maturity make it a useful and popular choice for network file sharing under Linux.
NFS basics
To share files using NFS, you first need to set up an NFS server. This NFS server can then
"export" file systems. When a file system is exported, it is made available for access by other
systems on the LAN. Then, any authorized system that is also set up as an NFS client can
mount this exported file system using the standard mount command. After the mount
completes, the remote file system is "grafted in" in the same way that a locally mounted file
system (such as /mnt/cdrom) would be after it is mounted. The fact that all of the file data is
being read from the NFS server rather than from a disk is not an issue to any standard Linux
application. Everything simply works.
Attributes of NFS
Shared NFS file systems have a number of interesting attributes. The first is a result of NFS'
stateless design. Because client access to the NFS server is stateless in nature, it's possible
for the NFS server to reboot without causing client applications to crash or fail. All access to
remote NFS files will simply "pause" until the server comes back online. Also, because of
NFS' stateless design, NFS servers can handle large numbers of clients without any
additional overhead besides that of transferring the actual file data over the network. In other
words, NFS performance is dependent on the amount of NFS data being transferred over the
network, rather than on the number of machines that happen to be requesting that data.
Securing NFS
It's important to mention that NFS versions 2 and 3 have some very clear security limitations.
They were designed to be used in a specific environment: a secure, trusted LAN. In
particular, NFS 2 and 3 were designed to be used on a LAN where "root" access to the
machine is only allowed by administrators. Due to the design of NFS 2 and NFS 3, if a
malicious user has "root" access to a machine on your LAN, he or she will be able to bypass
NFS security and very likely be able to access or even modify files on the NFS server that he
or she wouldn't normally be able to otherwise. For this reason, NFS should not be deployed
casually. If you're going to use NFS on your LAN, great -- but set up a firewall first. Make
sure that people outside your LAN won't be able to access your NFS server. Then, make
sure that your internal LAN is relatively secure and that you are fully aware of all the hosts
participating in your LAN.
Once your LAN's security has been thoroughly reviewed and (if necessary) improved, you're
ready to safely use NFS (see Part 3 of the 102 series for more on this).
NFS identifies file owners by numeric user and group IDs rather than by name. Because of
this, using mismatched user and group IDs with NFS can result in security breaches --
particularly if two different users on different systems happen to be sharing the same
numerical UID.
So, before getting NFS set up on a larger LAN, it's a good idea to first set up NIS or NIS+.
NIS(+), which stands for "Network Information Service," allows you to have a user and group
database that can be centrally managed and shared throughout your entire LAN, ensuring
NFS ownership consistency as well as reducing administrative headaches.
While NIS+ is important, we don't have room to cover it in this tutorial. If you are planning to
take the LPI exams -- or want to be able to use NFS to its full potential -- be sure to study the
"Linux NFS HOWTO" by Thorsten Kukuk (see the Resources on page 17 for a link).
Now that our NFS server has support for NFS in the kernel, it's time to set up an /etc/exports
file. The /etc/exports file describes the local file systems that will be made available for
export, as well as which hosts may access each export and with what options.
But before we look at the format of the /etc/exports file, a big implementation warning is
needed! The NFS implementation in the Linux kernel only allows the export of one local
directory per file system. This means that if both /usr and /home are on the same ext3 file
system (on /dev/hda6, for example), then you can't have both /usr and /home export lines in
/etc/exports. If you try to add these lines, you'll see an error when your /etc/exports file
gets reread (which will happen if you type exportfs -ra after your NFS server is up and
running).
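With that caveat in mind, a simple /etc/exports file might look something like this:
# /etc/exports: NFS file systems being exported. See exports(5).
/ 192.168.1.9(rw,no_root_squash)
/mnt/backup 192.168.1.9(rw,no_root_squash)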
As you can see, the first line in the /etc/exports file is a comment. On the second line, we
select our root ("/") file system for export. Note that while this exports everything under "/", it
will not export any other local file system. For example, if our NFS server has a CD-ROM
mounted at /mnt/cdrom, the contents of the CDROM will not be available unless they are
exported explicitly in /etc/exports. Now, notice the third line in our /etc/exports file. On this
line, we export /mnt/backup; as you might guess, /mnt/backup is on a separate file system
from /, and it contains a backup of our system.
Each line also has a "192.168.1.9(rw,no_root_squash)" on it. This information tells nfsd to
only make these exports available to the NFS client with the IP address of 192.168.1.9. It
also tells nfsd to make these file systems writeable as well as readable by NFS client
systems ("rw",) and instructs the NFS server to allow the remote NFS client to allow a
superuser account to have true "root" access to the file systems ("no_root_squash".)
You can also specify a network rather than a single host by adding a host mask such as /24,
which masks out the last eight bits of the IP address you specify. It's very important that there
is no space between the IP address specification and the "(", or NFS will interpret your
information incorrectly. And, as you might guess, there are other options that you can specify
besides "rw" and "no_root_squash"; type "man exports" for a complete list.
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32802 status
100024 1 tcp 46049 status
100011 1 udp 998 rquotad
100011 2 udp 998 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100021 1 udp 32804 nlockmgr
100021 3 udp 32804 nlockmgr
100021 4 udp 32804 nlockmgr
100021 1 tcp 48026 nlockmgr
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32768 status
You can also perform this check from a remote system by typing rpcinfo -p myhost, as
follows:
# rpcinfo -p sidekick
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32768 status
Once both client and server are set up correctly (and assuming that the NFS server is
configured to allow connections from the client), you can go ahead and mount an exported
NFS file system on the client. In this example, "inventor" is the NFS server and "sidekick" (IP
address 192.168.1.9) is the NFS client. Inventor's /etc/exports file contains a line that looks
like this, allowing connections from any machine on the 192.168.1 network:
/ 192.168.1.1/24(rw,no_root_squash)
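On sidekick, you could then mount inventor's root file system with a command along these lines (assuming the /mnt/nfs mount point already exists):
# mount inventor:/ /mnt/nfs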
Inventor's root file system will now be mounted on sidekick at /mnt/nfs; you should now be
able to type cd /mnt/nfs and look around inside and see inventor's files. Again, note that if
inventor's /home tree is on another file system, then /mnt/nfs/home will not contain anything
-- another mount (as well as another entry in inventor's /etc/exports file) will be required to
access that data.
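You can also mount just part of an export. For example, to make inventor's /usr tree available on sidekick, you could type:
# mount inventor:/usr /mnt/usr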
Inventor's /usr tree will now be NFS mounted to the pre-existing /mnt/usr directory. It's
important to again note that inventor's /etc/exports file didn't need to explicitly export /usr; it
was included "for free" in our "/" export line.
Resources
Although the tutorial is over, learning never ends and we recommend you check out the
following resources, particularly if you plan to take the LPI 102 exam:
For more information on USB under GNU/Linux, please check out the official Linux USB
project page for more information.
If you do not have pciutils installed on your system, you can find the source at the pciutils
project homepage.
Visit the project homepage for the venerable GIMP, or GNU Image Manipulation Program.
Also be sure to visit the home of openssh, which is an excellent place to continue your study
of this important tool.
The best thing you can do to improve your NFS skills is to try setting up your own NFS 3
server and client(s) -- the experience will be invaluable. The second-best thing you can do is
to read the quite good Linux NFS HOWTO, by Thorsten Kukuk.
We didn't have room to cover another important networked file-sharing technology: Samba.
For more information about Samba, we recommend that you read Daniel's Samba articles on
developerWorks:
• Part 1 on key concepts
• Part 2 on compiling and installing Samba
• Part 3 on Samba configuration
Once you're up to speed on Samba, we recommend that you spend some time studying the
Linux DNS HOWTO. The LPI 102 exam is also going to expect that you have some
familiarity with Sendmail. We didn't have enough room to cover Sendmail, but (fortunately for
us!) Red Hat has a good Sendmail HOWTO that will help to get you up to speed.
In addition, we recommend the following general resources for learning more about Linux
and preparing for LPI certification in particular:
Linux kernels and more can be found at the Linux Kernel Archives.
You'll find a wealth of guides, HOWTOs, FAQs, and man pages at The Linux Documentation
Project. Be sure to check out Linux Gazette and LinuxFocus as well.
The Linux System Administrators guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, learn how to use bash
programming constructs to write your own bash scripts. This series (particularly parts 1 and
2) is excellent additional preparation for the LPI exam:
• Part 1 on fundamental programming in the Bourne-again shell
• Part 2 on more bash programming fundamentals
• Part 3 on the ebuild system
The Technical FAQ for Linux Users by Mark Chapman is a 50-page in-depth list of
frequently-asked Linux questions, along with detailed answers. The FAQ itself is in PDF
(Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it to yourself
to check this FAQ out. The Linux glossary for Linux users, also from Mark, is excellent as
well.
If you're not very familiar with the vi editor, you should check out Daniel's tutorial on vi. This
developerWorks tutorial will give you a gentle yet fast-paced introduction to this powerful text
editor. Consider this must-read material if you don't know how to use vi.
For more information on the Linux Professional Institute, visit the LPI home page.
Feedback
Please send any tutorial feedback you may have to the authors:
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)
This tutorial is ideal for those who want to learn about or improve their basic Linux
networking and security skills. It's especially appropriate for those who will be setting up
applications on Linux servers or desktops. For many, much of this material will be new, but
more experienced Linux users may find this tutorial to be a great way of rounding out their
important Linux system administration skills. If you are new to Linux, we recommend you
start with Part 1 and work through the series from there.
By the end of this series of tutorials (eight in all; this is part seven), you'll have the knowledge
you need to become a Linux Systems Administrator and will be ready to attain an LPIC Level
1 certification (exams 101 and 102) from the Linux Professional Institute if you so choose.
For those who have taken the release 1 version of this tutorial for reasons other than LPI
exam preparation, you probably don't need to take this one. However, if you do plan to take
the exams, you should strongly consider reading this revised tutorial.
Daniel Robbins lives in Albuquerque, New Mexico, and is the Chief Architect of Gentoo
Technologies, Inc., the creator of Gentoo Linux, an advanced Linux for the PC, and the
Portage system, a next-generation ports system for Linux. He has also served as a
contributing author for the Macmillan books Caldera OpenLinux Unleashed, SuSE Linux
Unleashed, and Samba Unleashed. Daniel has been involved with computers in some
fashion since the second grade, when he was first exposed to the Logo programming
language as well as to a potentially dangerous dose of Pac Man. This probably explains why
he has since served as a Lead Graphic Artist at Sony Electronic Publishing/Psygnosis.
Daniel enjoys spending time with his wife, Mary, and their daughter, Hadassah.
Chris Houser, known to his friends as "Chouser," has been a UNIX proponent since 1994
when he joined the administration team for the computer science network at Taylor
University in Indiana, where he earned his Bachelor's degree in Computer Science and
Mathematics. Since then, he has gone on to work in Web application programming, user
interface design, professional video software support, and now Tru64 UNIX device driver
programming at Compaq. He has also contributed to various free software projects, most
recently to Gentoo Linux. He lives with his wife and two cats in New Hampshire.
Aron Griffis graduated from Taylor University with a degree in Computer Science and an
award that proclaimed him to be the "Future Founder of a Utopian UNIX Commune." Working
towards that goal, Aron is employed by Compaq writing network drivers for Tru64 UNIX, and
spending his spare time plunking out tunes on the piano or developing Gentoo Linux. He
lives with his wife Amy (also a UNIX engineer) in Nashua, New Hampshire.
00:01:02:CB:57:3C
Introducing IP addresses
These hardware addresses are used as unique addresses for individual systems on your
Ethernet LAN. Using hardware addresses, one machine can, for example, send an Ethernet
frame addressed to another machine. The problem with this approach is that TCP/IP-based
communication uses a different kind of addressing scheme, using what are called IP
addresses instead. IP addresses look something like this:
192.168.1.1
Your distribution's network initialization scripts very likely have a command in them that looks something like this:
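# ifconfig eth0 192.168.1.1 broadcast 192.168.1.255 netmask 255.255.255.0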
Above, the ifconfig command is used to associate eth0 (and thus eth0's hardware
address) with the 192.168.1.1 IP address. In addition, various other IP-related information is
specified, including a broadcast address (192.168.1.255) and a netmask (255.255.255.0).
When this command completes, your eth0 interface will be enabled and have an associated
IP address.
Using ifconfig -a
You can view all network devices that are currently running by typing ifconfig -a,
resulting in output that looks something like this:
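eth0      Link encap:Ethernet  HWaddr 00:01:02:CB:57:3C
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING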
Above, you can see a configured eth0 interface, as well as a configured lo (localhost)
interface. The lo interface is a special virtual interface that's configured so that you can run
TCP/IP applications locally, even without a network.
TCP/IP is working!
Once all of your network interfaces are brought up and associated with corresponding IP
addresses (probably done automatically by your distribution's startup scripts), your Ethernet
network can be used to carry TCP/IP traffic as well. The systems on your LAN can now
address each other using IP addresses, and common commands such as ping, telnet,
and ssh will work properly between your machines.
The /etc/hosts file maps host names to IP addresses: each line lists an IP address followed
by the name(s) associated with that address. So if I had a network with three nodes, my
/etc/hosts file might look something like
this:
127.0.0.1 localhost
192.168.1.1 mybox.gentoo.org mybox
192.168.1.2 testbox.gentoo.org testbox
192.168.1.3 mailbox.gentoo.org mailbox
There's one important thing to note about the syntax of the /etc/hosts file -- the fully qualified
name must appear first after the IP address, then it's followed by a list of optional aliases. So,
while "testbox.gentoo.org testbox" is correct, "testbox testbox.gentoo.org" is not. Getting this
wrong can cause name resolution to not work properly, so be careful.
After copying this /etc/hosts file to each of my systems, I'll now be able to refer to my
systems by name, rather than simply by IP address: ping mybox (as well as ping
mybox.gentoo.org) will now work!
Using DNS
While this approach works for small LANs, it isn't very convenient for larger LANs with many
systems on them. For such configurations, it's generally much better to store all your
IP-to-hostname mapping information on a single machine, and set up what is called a "DNS
server" (domain name service server) on it. Then, you can configure each machine to contact
this particular machine to receive up-to-date IP-to-name mappings. This is done by creating
an /etc/resolv.conf file on every machine that looks something like this:
domain gentoo.org
nameserver 192.168.1.1
nameserver 192.168.1.2
In the above /etc/resolv.conf, I tell the system that any host names that are not qualified
(such as "testbox" as opposed to "testbox.gentoo.org," etc.) should be considered to be part
of the gentoo.org domain. I also specify that I have a DNS server running on 192.168.1.1, as
well as a backup one running on 192.168.1.2. Actually, nearly all network-connected Linux
PCs already have a nameserver specified in their resolv.conf file, even if they aren't on a
LAN. This is because they are configured to use a DNS server at their Internet Service
Provider, in order to map hostnames to IP addresses. That way, names not listed in
/etc/hosts (such as ibm.com, for example) will be forwarded to one of the DNS servers for
resolution. In this way, /etc/hosts and /etc/resolv.conf can work in tandem to provide both
local and Internet-wide name-to-IP mappings, respectively.
To reach systems beyond our LAN, we would need a router connected to an outside
line. This router would be configured with an IP address so that it could communicate with
the systems on our LAN. In turn, we would configure every system on our LAN to use this
router as its default route, or gateway. What this means is that any network data addressed
to a system that isn't on our LAN would be routed to our router, which would take care of
forwarding it to remote systems outside our LAN. Generally, your distribution's system
initialization scripts handle the setting of a default route for you. The command they use to do
this probably looks something like this:
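# route add default gw 192.168.1.80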
In the above route command, the default route is set to 192.168.1.80 -- the IP address of the
router. To view all the routes configured on your system, you can type route -n. The route
with a destination of "0.0.0.0" is the default route.
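For example, on the network we've been describing, typing route -n might produce output
something like this (the last line is the default route):
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         192.168.1.80    0.0.0.0         UG    0      0        0 eth0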
Homework
So far, we've given you a very brief introduction to Linux networking concepts. Unfortunately,
we simply don't have the room to cover everything you need to know, such as how to select
appropriate IP addresses, network masks, broadcast addresses, etc. In fact, there's quite a
bit more information you'll need to learn in order to prepare for the LPI Level 102 exam.
Fortunately, the topic of Linux networking is one of the most comprehensively documented
Linux topics around. In particular, we recommend that you read the Linux Network
Administrator's Guide, especially sections 2 through 6. Combined with our gentle
introduction to Linux networking, the Linux Network Administrator's Guide should get you up
to speed in no time!
In order to provide these services, the remote system either runs an instance of each server
to accept connections (for example /usr/sbin/in.telnetd and /usr/sbin/in.ftpd), or runs inetd.
What's the difference? While the in.telnetd and in.ftpd programs continually wait to handle
incoming requests for specific services, the inetd program is able to listen for a variety of
incoming connections and dynamically start the appropriate service to handle the connection
based on its type. For this reason, inetd is also known as the "Internet superserver."
On a typical Linux installation, inetd handles most incoming connections. Only a few
programs (such as sshd and lpd) handle their own network communication without relying on
inetd to accept incoming connections. Often, services can be configured to run in one of two
ways -- either dynamically via inetd, or in a "standalone" mode (like sshd and lpd) where they
hang around listening for incoming connections themselves.
The mapping between port number, protocol, and the official service that runs on it is defined
in the /etc/services file. Typically, this information is identical on every Linux machine, since
these mappings are world-wide standards. Each line lists the official service name, the port
number and protocol, and then any optional aliases:
ftp 21/tcp
fsp 21/udp fspd
ssh 22/tcp # SSH Remote Login Protocol
ssh 22/udp # SSH Remote Login Protocol
Above, we can see that the ftp service runs on TCP port 21, and that the fsp service runs on
UDP port 21. We can also see that the ssh service can listen on either TCP or UDP port 22.
It's important to recognize that "ftp," "fsp," and "ssh" -- the first column of /etc/services -- are
official service names. In a few panels, we will use these official service names to configure
inetd.
Note: the IANA maintains the official mappings of assigned port numbers that are recorded in
your /etc/services file.
Since services are specified in inetd.conf by service name rather than port, they must be
listed in /etc/services in order to be eligible for handling by inetd.
Let's look at some common lines from /etc/inetd.conf. For example, the telnet and ftp
services:
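ftp     stream  tcp  nowait  root  /usr/sbin/in.ftpd     in.ftpd
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd
(Your distribution's entries may differ slightly; many distributions invoke these daemons
through the tcpd wrapper, which we discuss a bit later in this tutorial.)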
For both of these services, the configuration is to use the TCP protocol, and run the server
(in.telnetd or in.ftpd) as the root user. For a complete explanation of the fields in
/etc/inetd.conf, see the inetd(8) man page.
Disabling services
Disabling a service in inetd is simple: Just comment out the line in /etc/inetd.conf. You
probably don't want to remove the line entirely, just in case you need to reference it later. For
example, some system administrators prefer to disable telnet for security reasons (since the
connection is entirely cleartext):
# vi /etc/inetd.conf
[comment out undesired line]
# /etc/rc.d/init.d/inet stop
Stopping INET services: [ OK ]
# /etc/rc.d/init.d/inet start
Starting INET services: [ OK ]
# /etc/rc.d/init.d/inet restart
Stopping INET services: [ OK ]
Starting INET services: [ OK ]
Alternatively, you can stop inetd directly:
# killall inetd
You can start it again simply by invoking it on the command line. It will automatically run in
the background:
# /usr/sbin/inetd
And there's a shortcut to instruct inetd to reread the configuration file without actually
stopping it: just send it the HUP signal:
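# killall -HUP inetd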
At this point you shouldn't be able to telnet or ftp into this system, since telnet and ftp are
disabled. Try telnet localhost to check. If you need telnet or ftp access, all you need to
do is to re-enable it!
# telnet localhost
telnet: Unable to connect to remote host: Connection refused
Once telnet has been re-enabled in /etc/inetd.conf -- this time invoked through tcpd, the
TCP wrappers program -- and inetd has been restarted, telnet connections succeed again,
and each one is logged:
# telnet localhost
login: (press <ctrl-d> to abort)
# tail -1 /var/log/secure
Feb 12 23:33:05 firewall in.telnetd[440]: connect from 127.0.0.1
The telnet attempt was logged by tcpd, so it looks like we have things working. Since tcpd
provides a consistent connection logging service, that frees up the individual service
daemons from each needing to log connections on their own. In fact, it's similar to inetd doing
the work of accepting connections, since that frees up each of the individual daemons from
needing to accept their own connections. Isn't the simplicity of Linux marvelous?
Access is granted or denied in the following order, and the search stops at the first match:
access is granted if the daemon/client pair matches an entry in /etc/hosts.allow; otherwise,
access is denied if the pair matches an entry in /etc/hosts.deny; otherwise, access is granted.
For example, to allow telnet access only to our internal network, we start by setting policy
(reject all connections with a source other than localhost) in /etc/hosts.deny:
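# /etc/hosts.deny: refuse telnet connections from everywhere except the local machine
# (LOCAL matches hostnames without a dot, such as localhost)
in.telnetd: ALL EXCEPT LOCAL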
# telnet box.yourdomain.com
Trying 10.0.0.1...
Connected to box.yourdomain.com.
Escape character is '^]'.
Connection closed by foreign host.
Slap! Rejected! (This is one of the few times in life that rejection is indicative of success.) To
re-enable access from our own network, we insert the exception in /etc/hosts.allow:
in.telnetd: .yourdomain.com
At this point we're able to successfully telnet into our system again. This is just scraping the
surface of the capabilities of tcp_wrappers. There's lots more information on tcp_wrappers in
the tcpd(8) and hosts_access(5) man pages.
xinetd configuration
The configuration file for xinetd is /etc/xinetd.conf. Most often, that file contains just a few
lines that set default configuration parameters for the rest of the services:
# cat /etc/xinetd.conf
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST RECORD
}
includedir /etc/xinetd.d
The last line in that file instructs xinetd to read additional configuration from file snippets in
the /etc/xinetd.d directory. Let's take a quick glance at the telnet snippet:
# cat /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
}
As you can see, it's not hard to configure xinetd, and it's more intuitive than inetd. You can
get lots more information about xinetd in the xinetd(8), xinetd.conf(5), and xinetd.log(5) man
pages.
There's also lots of information on the Web regarding inetd, tcp_wrappers, and xinetd. Be
sure to check out some of the links we've provided for these tools in Resources on page 28 ;
they will give you a much better feel for the capability and configuration of these tools.
Linux system security can be divided into two parts: internal security and external security.
Internal security refers to guarding against users accidentally or maliciously disrupting the
system. External security refers to preventing unauthorized users from gaining access to the
system.
This section will cover internal security first, then external security, and we'll finish with some
general guidelines and tips.
Regarding file permissions, you may want to modify permissions for the following three
cases:
First, log files in /var/log need not be world-readable. There is no reason for anybody other
than root to be snooping in the logs. See Part 4 of the LPI 101 series for more information on
syslog, plus the logrotate(8) man page for information on configuring that program to create
logs with appropriate permissions.
# cd
# pwd
/root
# chmod 700 .
if [ "$UID" = 0 ]; then
# root user; set world-readable by default so that
# installed files can be read by normal users.
umask 022
else
# make user files secure unless they explicitly open them
# for reading by other users
umask 077
fi
You should consult the umask(2) and bash(1) man pages for more information on setting the
umask. Note that the umask(2) man page refers to the C function, but the information it
contains applies to the bash command as well. See Part 3 of the LPI 101 series for additional
details on umask.
You should consider each program carefully to determine if it needs to have its SUID or
SGID bits on. There may be SUID/SGID programs on your system that you don't need at all.
To search for programs of this nature, use the find command. For example, we could start
searching for SUID/SGID programs in the /usr directory:
# cd /usr
# find . -type f -perm +6000 -xdev -exec ls -l {} \;
-rwsr-sr-x 1 root root 593972 11-09 12:47 ./bin/gpg
-r-xr-sr-x 1 root man 38460 01-27 22:13 ./bin/man
-rwsr-xr-x 1 root root 15576 09-29 22:51 ./bin/rcp
-rwsr-xr-x 1 root root 8256 09-29 22:51 ./bin/rsh
-rwsr-xr-x 1 root root 29520 01-17 19:42 ./bin/chfn
-rwsr-xr-x 1 root root 27500 01-17 19:42 ./bin/chsh
-rwsr-xr-x 1 lp root 8812 01-15 23:21 ./bin/lppasswd
-rwsr-x--- 1 root cron 10476 01-15 22:16 ./bin/crontab
In this list, I've already found a candidate for closer inspection: lppasswd is part of the CUPS
printing software distribution. Since I don't provide print services on my system, I might
consider removing CUPS, which will also remove the lppasswd program. There may be no
security-compromising bugs in lppasswd, but why take the chance on a program I'm not
using?
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 3071
virtual memory (kbytes, -v) unlimited
It can be quite tricky to set these limits in a way that actually increases the security of your
system without causing problems for legitimate users, so be careful when adjusting these
settings.
# time bash
# ulimit -t 1
# while true; do true; done
Killed
real 0m28.941s
user 0m1.990s
sys 0m0.017s
In the example above, "user" time plus "sys" time equals total CPU time used by the process.
When the bash process reached the 2-second mark, Linux judged that it had exceeded the
1-second limit, so the process was killed. Cool, eh?
Note: One second was just an example. Don't do this to your users! Even multiple hours is
bad, since X can really rack up the time (my current session has used 69+ hours of CPU
time). For a real implementation, you may want to ulimit something other than CPU time.
• Clobberd monitors user activity, and meters resources such as time and network activity.
• Idled can log out users that have been idle for too long or who have been logged on for too
long. It can also prevent users from being logged in too many times, and refuse users from
Intrusion prevention
External security can be split into two categories: intrusion prevention and intrusion
detection. Intrusion prevention measures are taken to prevent unauthorized access to a
system. If these measures fail, intrusion detection may prove useful in determining when
unauthorized access has occurred, and what damage has been done.
A full Linux installation is a large and complex system. It's difficult to keep track of everything
that's installed, and even harder to configure each package's security features. The problem
becomes simpler when fewer packages are installed. A first step to intrusion prevention is to
remove packages you don't need. Take a look back at Part 1 of the LPI 102 series for a
review of packaging systems.
To disable services in inetd, simply comment out the appropriate line in /etc/inetd.conf by
prepending "#"; then restart inetd. (This was described previously in this tutorial, so glance
back a few panels if you need a refresher.)
To disable services in xinetd, you can do something similar with the appropriate snippet in
/etc/xinetd.d. For example, to disable telnet, either comment out the entire content of the file
/etc/xinetd.d/telnet, or simply delete the file. Restart xinetd to complete the procedure.
If you're using tcpd in conjunction with inetd, or if you're using xinetd, you also have the
option of limiting incoming connections to trusted hosts. For tcpd, see the earlier sections in
this tutorial. For xinetd, search for "only_from" in the xinetd.conf(5) man page.
Standalone servers are usually started by the init system when the system boots up or
changes runlevels. If you don't remember how runlevels work, take a look at Part 4 of the LPI
101 series.
To stop the init system from starting a server, find the symlinks to its startup script in each
runlevel directory, and delete them. The runlevel directories are usually named /etc/rc3.d or
/etc/rc.d/rc3.d (for runlevel 3). You'll also want to check the other runlevels.
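For example, removing the symlinks for sshd might look something like this (the S50sshd
name is only an illustration -- the exact names vary between distributions):
# ls /etc/rc?.d/*sshd*
/etc/rc3.d/S50sshd  /etc/rc4.d/S50sshd  /etc/rc5.d/S50sshd
# rm /etc/rc3.d/S50sshd /etc/rc4.d/S50sshd /etc/rc5.d/S50sshd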
Once the runlevel symlinks for the service are removed, you will still need to shut down the
currently running server. It is best to do this with the service's init script, usually found in
/etc/init.d or /etc/rc.d/init.d. For example, to shut down sshd:
# /etc/init.d/sshd stop
* Stopping sshd... [ ok ]
In addition to the standard telnet client, you should look into the possibility of using utilities for
testing the "openness" of your system. We recommend netcat and nmap.
netcat (often installed as nc) is the network Swiss Army knife: a simple UNIX utility that reads
and writes data across network connections using the TCP or UDP protocol. nmap is a utility
for network exploration and security auditing. Specifically, nmap scans ports to determine
what's open.
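For example, you could scan your own machine with nmap to see which ports are open, and
then use netcat to connect to one of them by hand (localhost is just a convenient target here;
only scan machines you administer):
# nmap localhost
# nc -v localhost 25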
A simple way to lock out user logins temporarily -- during system maintenance, for example --
is to create the file /etc/nologin. While that file exists, only root can log in; everyone else is
turned away:
# telnet localhost
login: agriffis
Password:
Login incorrect
Be sure to delete the file when you're done with maintenance, otherwise nobody will be able
to log in until you remember! Not that I've ever done this, no, not me... ;-)
The packet filter rules can be set up to do both firewall and router activities. You can inspect
your current rules with the -L option to iptables:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
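As a trivial illustration (not a complete firewall!), a rule like the following would drop incoming
telnet connections arriving on eth0:
# iptables -A INPUT -i eth0 -p tcp --dport telnet -j DROP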
Until you're comfortable building your own ruleset, there are many scripts out there that can
get you started, as long as you trust their authors. One of the most complete is gShield (see
Resources on page 28 ). You can adjust its well-commented and fairly simple configuration file
to set up most normal forms of packet filter rules.
The most basic form of intrusion detection is to pay attention to your syslogs. These usually
appear in the /var/log directory, although the actual filenames will vary depending on your
distribution and configuration.
# less /var/log/messages
Feb 17 21:21:38 [kernel] Vendor: SONY Model: CD-RW CRX140E Rev: 1.0n
Feb 17 21:21:39 [kernel] eth0: generic NE2100 found at 0xe800, Version 0x031243,
DMA 3 (autodetected), IRQ 11 (autodetected).
Feb 17 21:21:39 [kernel] ne.c:v1.10 9/23/94 Donald Becker ([email protected])
Feb 17 21:22:11 [kernel] NVRM: AGPGART: VIA MVP3 chipset
Feb 17 21:22:11 [kernel] NVRM: AGPGART: allocated 16 pages
Feb 17 22:20:05 [PAM_pwdb] authentication failure; (uid=1000)
-> root for su service
Feb 17 22:20:06 [su] pam_authenticate: Authentication failure
Feb 17 22:20:06 [su] - pts/3 chouser-root
It can take some practice to understand all these messages, but most of the important ones
are fairly clear. For example, at the end of this log we can see that user "chouser" tried to use
su to become root and failed.
Tripwire is one of the most popular of these intrusion detection packages (see Resources on
page 28 at the end of this tutorial for a link). Once you have installed tripwire, you must
customize its configuration file so that it knows which files should change and which should
not. You will also need to tell it how to send you reports of what has changed, and how often
it should run (usually once per day).
There are several Web sites that can help you keep your software up to date. These include
the security-conscious CERT and SecurityFocus' BugTraq list, as well as your normal
software update sites, such as freshmeat.net and your distribution's home page. We'll repeat
these URLs in Resources on page 28 as well, but security is such an important issue that, if
you are not already familiar with these sites, we recommend that you take a few minutes to
visit the first two now.
"encouraging" (that is, mandating that) your users to do the same, is a cornerstone of good
security. Remember to avoid common words and names, especially any that are related to
you such as the name of a friend, where you work, or your pet's name. Also avoid guessable
numbers such as your birthday or anniversary. Instead try to use random combinations of
letters, numbers, and punctuation.
We already mentioned nmap and netcat for testing network security. It is also a good idea to
check for weak passwords, especially if your system has multiple users. There are many
tools available, such as the ones we've put into Resources on page 28 at the end of this
tutorial.
Section 5. Printing
Introduction
This section will walk you through the set-up and use of the classic UNIX printing system on
Linux, sometimes called Berkeley LPD. There are other systems available for Linux that are
not covered here; see the Resources on page 28 section at the end of the tutorial for
information on these.
Physically installing a printer is beyond the scope of this tutorial. Once it's correctly hooked
up, you'll want to install a print spooler daemon so that machines on the network (including
the one housing the spooler) can send print jobs to the printer.
Once it's installed, the print spooler daemon (officially the Line Printer Daemon) can be run
from the command line. Log in as a normal user and try:
$ /usr/sbin/lpd --help
--X option form illegal
usage: lpd [-FV] [-D dbg] [-L log]
Options
-D dbg - set debug level and flags
Example: -D10,remote=5
set debug level to 10, remote flag = 5
-F - run in foreground, log to STDERR
Example: -D10,remote=5
-L logfile - append log information to logfile
-V - show version info
Now that the daemon is installed, you should make sure that it's set up to run automatically.
Your distribution's LPRng package may have set this up for you already, but if not, see Part 4
of the LPI 101 series for information on using runlevels to start daemons such as lpd
automatically.
When printing on the local printer, both ends of this "pipeline" are described by the
configuration file /etc/printcap (sometimes located at /etc/lprng/printcap). Each entry in the
printcap (which is short for printer capabilities) describes one print spool:
$ more /etc/printcap
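lp|Generic dot-matrix printer entry:\
        :lp=/dev/lp0:\
        :sd=/var/spool/lpd/lp:\
        :pl#66:\
        :pw#80:\
        :sh:
(This is only a representative entry; the exact options will vary with your printer.)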
Note that the last line of the entry does not have a trailing backslash (\).
Your distribution may have other entries, and they may be more complex, but they should all
have roughly this form. The name of this entry comes first, lp, followed by a longer
description of this spool. The keyword/value pair lp=/dev/lp0 specifies the Linux device where
print jobs in this spool will be printed, and the sd keyword gives the directory where jobs will
be held until they can be printed.
The rest of the keyword/value pairs provide details about what type of printer is hooked up to
/dev/lp0. They are described in the printcap man page, and we'll cover some of them a little
bit later.
# mkdir -p /var/spool/lpd/lp
# chown lp /var/spool/lpd/lp
# chmod 700 /var/spool/lpd/lp
# checkpc -f
# /etc/init.d/lprng restart
LPRng includes a helpful tool for checking your printcap. It will even set up the spool
directory for you, if you forget to do it manually:
# checkpc -f
Finally, restart the lpd. You'll need to do this any time you change the printcap, in order for
your changes to take effect. You may need to use lpd instead of lprng:
# /etc/init.d/lprng restart
The older Berkeley printing system doesn't include a checkpc tool, so you'll have to try
printing pages to your various printers yourself to make sure the printcap and spool
directories are correct.
The lpr command submits a print job: the file you name is queued up in a print spool and
then printed. To try it out, first find or make a small sample text file. Then:
$ lpr sample.txt
If it worked, you shouldn't see any response on the screen, but your printer should start
going, and soon you'll have a hard copy of your sample text. Don't worry if it doesn't come
out looking quite right; we'll set up filters a bit later that should ensure that all sorts of file
formats print correctly.
You can examine the list of print jobs in the print spool queue with the lpq command. The
-P option specifies the name of the queue to display; if you leave it off, lpq will use the
default spool, just like lpr did before:
$ lpq -Plp
Printer: lp@localhost 'Generic dot-matrix printer entry'
Queue: 1 printable job
Server: pid 1671 active
Unspooler: pid 1672 active
Rank Owner/ID Class Job Files Size Time
active chouser@localhost+670 A 670 sample.txt 8 21:57:30
If you want to stop a job from printing, use the lprm command. You might want to do this if a
job is taking too long, or if a user accidentally sends the same file more than once. Just copy
the job id from the lpq listing above:
$ lprm chouser@localhost+670
Printer lp@localhost:
checking perms 'chouser@localhost+670'
dequeued 'chouser@localhost+670'
You can do many other operations on a print spool using the interactive tool lpc. See the lpc
man page for details.
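To print to a printer attached to another machine on your network, you add a printcap entry
that points at the remote host instead of a local device. A minimal sketch (using the names
discussed below) might look something like this:
farawaylp|Remote printer on host faraway:\
        :rm=faraway:\
        :rp=lp:\
        :sd=/var/spool/lpd/farawaylp: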
Here the name of the machine where we want to print is faraway, and the name of the
printer on that machine is lp. The spool directory, /var/spool/lpd/farawaylp, is where print jobs
will be held locally until they can be sent to the remote print spooler, where they may be
spooled again before being sent to the printer. Again, you will need to create this spool
directory:
# mkdir -p /var/spool/lpd/farawaylp
# chown lp /var/spool/lpd/farawaylp
# chmod 700 /var/spool/lpd/farawaylp
# checkpc -f
# /etc/init.d/lprng restart
Locally, we have given this remote printer the name farawaylp, so we can send print jobs to
it thusly:
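$ lpr -Pfarawaylp sample.txt

Printing to a printer shared by a Windows machine (or a Samba server) works in much the
same way, except that the printcap entry uses an input filter to hand each job to Samba's
smbprint script. A sketch of such an entry, assuming smbprint is installed at
/usr/bin/smbprint, might look like this:
smb|Printer on a Windows server:\
        :lp=/dev/null:\
        :sd=/var/spool/lpd/smb:\
        :if=/usr/bin/smbprint: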
The new key here is if, the input filter. Pointing this to the smbprint script will cause the print
job to be sent to a Windows server instead of the lp device. We still have to list a device
(/dev/null in this case) which the print daemon uses for locking. But no print jobs will actually
be sent there.
# mkdir -p /var/spool/lpd/smb
# chown lp /var/spool/lpd/smb
# chmod 700 /var/spool/lpd/smb
# checkpc -f
# /etc/init.d/lprng restart
In your favorite editor, create a .config file in the spool directory named above. In this case,
/var/spool/lpd/smb/.config:
server="WindowsServerName"
service="PrinterName"
password=""
user=""
Adjust these values to point to the Windows machine and printer name that you want to use,
and you're done:
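$ lpr -Psmb sample.txt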
The smbprint script should come with Samba, but it isn't included in all distributions. If you
can't find it on your system, you can get it from the Samba HOWTO.
Magicfilter
So far we've only tried printing text files, which isn't terribly exciting. Generally any one printer
can only print one format of graphics file -- and yet there are dozens of different formats that
we would like to print: PostScript, gif, jpeg, and many more. A program named Magicfilter
acts as an input filter, much like smbprint does. It doesn't convert file formats itself; rather, it
provides a framework for identifying the type of document you're trying to print and running it
through an appropriate conversion tool. The conversion tools must be installed separately. By far,
the most important of these is Ghostscript, which can convert files from Postscript format to
the native format of many printers.
There are filters for dozens and dozens of different printers and printer settings in
/usr/share/magicfilter, so be sure you use the right one for your printer. Each of these filters
is a text file, and usually the full name of the printer is at the top. This may help you if the file
name of the filter isn't clear.
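To use Magicfilter, point the if keyword in your printcap entry at the filter that matches your
printer. For example, an entry for a printer on /dev/lp0 might look something like this (the
filter name below is only an illustration):
lp|Local printer with Magicfilter:\
        :lp=/dev/lp0:\
        :sd=/var/spool/lpd/lp:\
        :gqfilter:\
        :if=/usr/share/magicfilter/StylusColor-II-filter: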
I also added a gqfilter flag to this printcap entry, which will cause the input filter to be used
even when the print job is coming from a remote client. This only works with LPRng.
Since I set up the /var/spool/lpd/lp print spool directory earlier, I only need to check my
printcap syntax and restart the server:
# checkpc -f
# /etc/init.d/lprng restart
Now you are able to print all sorts of documents, including Postscript files. In other words,
choosing "Print" from a menu in your favorite web browser should now work.
Apsfilter is another input filter that can automate much of this setup for you, creating spool
directories, printcap entries, and so on. You will still need to make sure you have Ghostscript
installed, but then you can follow the very complete instructions provided in the Apsfilter
handbook.
To expand your Linux knowledge even further, see the Resources section on the next panel.
Resources
You can learn more about configuring inetd and xinetd from the xinetd home page. When
adding your own service names and ports to /etc/services, remember to check first that they
don't conflict with the assigned port numbers.
The netfilter home page is a good place to start learning more about iptables and the Linux
packet filter. Until you're comfortable building your own ruleset, you might want to use an
existing script for this. We recommend gShield.
Tripwire is one of the more popular intrusion detection packages. Also see Wietse Venema's
TCP Wrappers, which allow monitoring of and control over connections to your system.
Authenticating users can be much easier with Pluggable Authentication Modules (also known
as PAM).
Is your network wide open? nmap is a utility for network exploration or security auditing.
Specifically, nmap scans ports to determine what's open.
Password checkers and other tools that will test how easy it is to guess your passwords (and
those of your users) include John the Ripper from the Openwall Project, built for just this
purpose. You may also want to try a comprehensive checker like SAINT.
These security sites rate among the most-visited by any systems administrator: CERT is a
federally funded center operated by Carnegie Mellon University. They study Internet security
and vulnerabilities, publish security alerts, and research other security issues. BugTraq,
hosted by Security Focus, is a full-disclosure moderated mailing list for the detailed
discussion and announcement of computer security vulnerabilities. Even if you aren't
particularly interested in this aspect of administration, a subscription to this list can be quite
valuable, as simply scanning subject lines may alert you to vulnerabilities on your own
systems that you might otherwise discover much later, or not at all.
Other recommended security sites that will help you get a better grip on the security of your
Linux machines are the Linux Security HOWTO and O'Reilly's Security Page.
Printing and spooling resources you'll want to check out include the LPRng print spooler
home page and the Spooling Software overview from the Printing HOWTO. Of course, the
Printing HOWTO itself is a valuable resource, as is LinuxPrinting.org.
For help with specific printers, consult the Serial HOWTO. Also the USB guide offers
valuable information on (you guessed it) USB printers.
Samba is a great help in heterogeneous networks. When setting up printing for this kind of
environment, you will want to check out the Samba home page as well as the Samba
HOWTO, with good printer sharing details.
The two printer filters we discussed were Magicfilter and Apsfilter. Remember that both need
a conversion program (we recommend Ghostscript), as they do not do conversion
themselves. If you are using the latter filter, you may also find the Apsfilter handbook to be
quite useful.
In addition, we recommend the following general resources for learning more about Linux
and preparing for LPI certification in particular:
You'll find a wealth of guides, HOWTOs, FAQs, and man pages at The Linux Documentation
Project. Be sure to check out Linux Gazette and LinuxFocus as well.
The Linux Network Administrator's guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, learn how to use bash
programming constructs to write your own bash scripts. This series (particularly parts 1 and
2) is excellent additional preparation for the LPI exam.
The Technical FAQ for Linux Users by Mark Chapman is a 50-page in-depth list of
frequently-asked Linux questions, along with detailed answers. The FAQ itself is in PDF
(Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it to yourself
to check this FAQ out. The Linux glossary for Linux users, also from Mark, is excellent as well.
If you're not familiar with the vi editor, you should check out Daniel's tutorial on vi. This
developerWorks tutorial will give you a gentle yet fast-paced introduction to this powerful text
editor. Consider this must-read material if you don't know how to use vi.
For more information on the Linux Professional Institute, visit the LPI home page.
Feedback
We look forward to getting your feedback on this tutorial. Additionally, you are welcome to
contact the authors directly:
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)