Rute PDF
Version 0.9.1
http://rute.sourceforge.net/
http://rute.sourceforge.net/rute.pdf
http://rute.sourceforge.net/rute.ps.gz
http://rute.sourceforge.net/rute.dvi.gz
http://rute.sourceforge.net/rute-HTML.zip
http://rute.sourceforge.net/rute-HTML.tar.gz
Copyright © 2000, 2001
Paul Sheer
May 5, 2001
Copying
This license dictates the conditions under which you may copy, modify and distribute
this work. Email the author for purchasing information.
1. This work may not be reproduced in hard copy except for personal use. Further, it
may not be reproduced in hard copy for training material, nor for commercial gain,
nor for public or organisation-wide distribution. Further, it may not be reproduced in
hard copy except where the intended reader of the hard copy initiates the process of
converting the work to hard copy.
2. The work may not be modified except by a generic format translation utility, as
may be appropriate for viewing the work using an alternative electronic media. Such
a modified version of the work must clearly credit the author, display this license, and
include all copyright notices. Such a modified version of the work must clearly state
the means by which it was translated, as well as where an original copy may be
obtained.
3. Verbatim copies of the work may be redistributed through any electronic media.
Modified versions of the work as per 2. above may be redistributed in the same way, provided
that they can reasonably be said to include, albeit in translated form, all the original
source files.
NO WARRANTY
THE POSSIBILITY OF SUCH DAMAGES.
Several people have requested that they may be permitted to make translations into
other languages. If you are interested in translating Rute, please contact the author.
You will be sent several chapters in source form for you to translate. These will then
be reviewed. Should your translation be acceptable, an agreement will be drawn up
which will outline your commitments and remuneration to the project. Note that
translation work will be ongoing, since Rute is continually being added to and
improved.
[email protected]
+27 21 761-7224
Linux
Linux development — cryptography
installations — support — training
Changes
0.9.1 Acknowledgements added.
0.6.0
0.5.0
0.4.0
Contents
1 Introduction 1
1.1 What this book covers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Read this next. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 What do I need to get started . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.4 More about this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.5 I get very frustrated with UNIX documentation that I don’t understand . . 2
1.6 LPI and RHCE requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.7 Not RedHat: RedHat-like . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Computing Sub-basics 5
2.1 Binary, octal, decimal and hexadecimal . . . . . . . . . . . . . . . . . . . . 5
2.2 Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Logging in and changing your password . . . . . . . . . . . . . . . . . . 8
2.5 Listing and creating files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6 Command line editing keys . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.7 Console switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.8 Creating files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.9 Allowable characters for filenames . . . . . . . . . . . . . . . . . . . . . . 11
2.10 Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 PC Hardware 13
3.1 Motherboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Master/Slave IDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 CMOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4 Basic Commands 19
4.1 The ls command, hidden files, command-line options . . . . . . . . . . 19
4.2 Error messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3 Wildcards, names, extensions and glob expressions . . . . . . . . . . . . . 23
4.4 Usage summaries and the copy command . . . . . . . . . . . . . . . . . . 27
4.5 Manipulating directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.6 Relative vs. absolute pathnames . . . . . . . . . . . . . . . . . . . . . . . . 29
4.7 System manual pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.8 System info pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.9 Some basic commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.10 Multimedia commands for fun . . . . . . . . . . . . . . . . . . . . . . . . 34
4.11 Terminating commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.12 Compressed files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.13 Searching for files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.14 Searching within files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.15 Copying to MSDOS and Windows formatted floppy disks . . . . . . . . 38
4.16 Archives and backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.17 The PATH where commands are searched for . . . . . . . . . . . . . . . . 40
5 Regular Expressions 43
5.1 Basic regular expression exposition . . . . . . . . . . . . . . . . . . . . . . 43
5.2 The fgrep command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.3 Regular expression \{ \} notation . . . . . . . . . . . . . . . . . . . . . . . 45
7 Shell Scripting 55
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2 Looping: the while and until statements . . . . . . . . . . . . . . . . . 56
7.3 Looping: the for statement . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.4 breaking out of loops and continueing . . . . . . . . . . . . . . . . . . 59
7.5 Looping over glob expressions . . . . . . . . . . . . . . . . . . . . . . . . 60
7.6 The case statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.7 Using functions: the function keyword . . . . . . . . . . . . . . . . . . 61
7.8 Properly processing command line args: shift . . . . . . . . . . . . . . 62
7.9 More on command-line arguments: $@ and $0 . . . . . . . . . . . . . . . 64
7.10 Single forward quote notation . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.11 Double quote notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.12 Backward quote substitution . . . . . . . . . . . . . . . . . . . . . . . . . 65
10 Mail 91
10.1 Sending and reading mail . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
10.2 The SMTP protocol — sending mail raw to port 25 . . . . . . . . . . . . . 93
25 Introduction to IP 233
25.1 Internet Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
25.2 Special IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
25.3 Network Masks and Addresses . . . . . . . . . . . . . . . . . . . . . . . . 235
28 NFS 273
28.1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
28.2 Configuration example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
28.3 Access permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
28.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
28.5 Kernel NFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
43 X 469
43.1 The X protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
43.2 Widget libraries and desktops . . . . . . . . . . . . . . . . . . . . . . . . . 474
43.2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
43.2.2 Qt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
43.2.3 Gtk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
43.2.4 GNUStep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
43.3 XFree86 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
43.4 The X distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
43.5 X documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
43.6 Configuring X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
43.7 Visuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
43.8 The startx and xinit commands . . . . . . . . . . . . . . . . . . . . . . . 488
43.9 Login screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
43.10 X Font naming conventions . . . . . . . . . . . . . . . . . . . . . . . . 489
43.11 Font configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
43.12 The font server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Strangely, the thing that least intrigued me was how they’d managed to get it all
done. I suppose I sort of knew. If I’d learnt one thing from travelling, it was that
the way to get things done was to go ahead and do them. Don’t talk about going to
Borneo. Book a ticket, get a visa, pack a bag, and it just happens.
The Beach
Alex Garland
Acknowledgments
A special thanks goes to Abraham van der Merwe for his careful reviewing of the
text. Thanks to Jonathan Maltz, Jarrod Cinman and Alan Tredgold for introducing
me to GNU/Linux back in 1994 or so. Credits are owed to the developers of all the Free
software that went into LaTeX, TeX, GhostScript, GhostView, Autotrace, XFig, XV,
Gimp, the Palatino font, the various LaTeX extension styles, DVIPS, DVIPDFM,
ImageMagick, XDVI and LaTeX2HTML, without which this document would scarcely be
possible. To name a few: John Bradley, David Carlisle, Eric Cooper, John Cristy, Peter
Deutsch, Nikos Drakos, Mark Eichin, Carsten Heinz, Spencer Kimball, Paul King,
Donald Knuth, Peter Mattis, Frank Mittelbach, Johannes Plass, Sebastian Rahtz, Tomas
Rokicki, Bob Scheifler, Rainer Schoepf, Brian Smith, Supoj Sutanthavibul, Tim Theisen,
Paul Vojta, Martin Weber, Mark Wicks, Ken Yap, Hermann Zapf. Thanks of course go to
the countless developers of Free software, and the many readers that gave valuable
feedback on the web site.
Chapter 1
Introduction
While books shelved beside this one will get your feet wet, this one lets you actually
paddle for a bit, then thrusts your head underwater while feeding you oxygen.
This book covers GNU/Linux system administration, for popular distributions like
RedHat and Debian, as a tutorial for new users and a reference for advanced
administrators. It aims to give concise, thorough explanations and practical
examples of each aspect of a UNIX system. Anyone who wants a comprehensive text
on (what is commercially called) “Linux” can look no further — there is little that is
not covered here.
The ordering of the chapters is carefully designed to allow them to be read in sequence
without missing anything. You should hence read from beginning to end, in order that
later chapters do not reference unseen material. I have also packed in useful examples
which must be practised as you read.
You will need to install a basic Linux system. There are a number of vendors now
shipping point-and-click-install CDs: you should try to get a Debian or “RedHat-like”
distribution. One hint: try to install as much as possible so that when I mention a
software package in this text, you are likely to have it installed already and can use
it immediately. Most cities with a sizable IT infrastructure will have a Linux user
group to help you source a cheap CD. These are getting really easy to install, and there
is no longer much of a need to read lengthy installation instructions.
It also aims to satisfy the requirements for course notes for a GNU/Linux
training course. Here in South Africa &I wrote with South African users in mind, and hence give
links to South African resources and examples derived from a South African context. I hope to keep the
identity of the text in this way. Of course this does not detract from the applicability of the text in other
countries.-, I use the initial chapters as part of a 36 hour GNU/Linux training
course given in 12 lessons. The details of the layout for this course are given in Appendix
A.
Note that all “Linux” systems are really composed mostly of GNU software,
but from now on I will refer to the GNU system as “Linux” in the way
almost everyone (incorrectly) does.
Any system reference will require you to read it at least three times before you get
a reasonable picture of what to do. If you need to read it more than three times, then
there is probably some other information that you really should be reading first. If you
are only reading a document once, then you are being too impatient with yourself.
It is very important to identify the exact terms that you fail to understand in a
document. Always try to back-trace to the precise word before you continue.
It’s also probably not a good idea to learn new things according to deadlines. Your
UNIX knowledge should evolve by grace and fascination, rather than pressure.
The difference between being able to pass an exam and actually doing something
useful is, of course, huge.
The LPI and RHCE are two certifications that will give you an introduction to
Linux. This book covers far more than both of these certifications in most places, but
occasionally leaves out minor items as an exercise. It certainly covers in excess of what
you need to know to pass both of them.
The LPI and RHCE requirements are given in Appendix B and C.
These two certifications are merely introductions to UNIX. They do not expect
a user to write nifty shell scripts to do tricky things, or understand the subtle or
advanced features of many standard services, let alone expect a wide overview of the
enormous numbers of non-standard and very useful applications out there. To be
blunt: you can pass these courses and still be considered quite incapable by the
standards of companies that do system integration &System integration is my own term. It refers to
the act of getting Linux to do non-basic functions, like writing complex shell scripts; setting up wide area
dialup networks; creating custom distributions; or interfacing database, web and email services together.-.
In fact, these certifications make no reference to computer programming whatsoever.
Throughout this book I refer to “RedHat” and “Debian” specific examples. What I
actually mean by this are systems that use .rpm (RedHat package manager) packages
as opposed to systems that use .deb (Debian) packages — there are lots of both. This
just means that there is no reason to avoid using a distribution like Mandrake, which
is .rpm based, and possibly viewed by many as being better than RedHat.
In short, brand names no longer have any meaning in the Free software community.
Chapter 2
Computing Sub-basics
This chapter explains some basics that most computer users of any sort will already
be familiar with. If you are new to UNIX, however, you may want to gloss over the
commonly used key bindings for reference.
The best way of thinking about how a computer stores and manages information
is to ask yourself how you would. Most often the way a computer works is exactly
the way one would expect it to work if you were inventing it for the first time. The only
limitations on this are those imposed by logical feasibility and imagination, but most
anything else is allowed.
When you first learned to count, you did so with 10 digits, and hence ordinary numbers
(like telephone numbers) are called “base ten” numbers. Postal codes that include letters
and digits are called “base 36” numbers because of the addition of 26 letters onto the
usual 10 digits. The simplest base possible is “base two” using only two digits: 0 and
1. Now a 7 digit telephone number has 10 × 10 × 10 × 10 × 10 × 10 × 10 = 10^7 =
10000000 possible combinations. A postal code with four characters has 36^4 = 1679616
possible combinations. However an 8 digit binary number only has 2^8 = 256 possible
combinations.
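These combination counts are easy to verify at a shell prompt. This is a brief aside assuming the bash shell (the $(( )) arithmetic notation and the ** exponent operator are covered properly in the shell chapters):

```shell
# Number of combinations for each numbering system:
echo $((10 ** 7))   # a 7 digit base ten telephone number: 10000000
echo $((36 ** 4))   # a 4 character base 36 postal code:   1679616
echo $((2 ** 8))    # an 8 digit binary number:            256
```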
Since the internal representation of numbers within a computer is binary, and
since it is rather tedious to convert between decimal and binary, computer scientists
have come up with new bases to represent numbers: these are “base sixteen” and
“base eight” known as hexadecimal and octal respectively. Hexadecimal numbers use
the digits 0 through 9 and the letters A through F, while octal numbers use only the
digits 0 through 7. Hexadecimal is often abbreviated as hex.
UNIX makes heavy use of 8, 16 and 32 digit binary numbers, often representing
them as 2, 4 and 8 digit hex numbers. You should get used to seeing numbers like 0xffff
(or FFFFh) which in decimal is 65535 and in binary is 1111111111111111.
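To get comfortable converting between these bases you can use the standard printf command, which understands the 0x hex prefix; this is an illustrative aside assuming a bash or similar shell:

```shell
# Convert between bases with printf:
printf '%d\n' 0xffff    # hex ffff in decimal: 65535
printf '%x\n' 65535     # decimal 65535 in hex: ffff
printf '%o\n' 65535     # the same number in octal: 177777
```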
2.2 Files
Common to every computer system invented is the file. A file holds a single contiguous
block of data. Any kind of data can be stored in a file and there is no data that cannot
be stored in a file. Furthermore, there is no kind of data that is stored anywhere else
except in files. A file holds data of the same type, for instance a single picture will be
stored in one file. During production, this book had each chapter stored in a file. It is
uncommon for different types of data (say text and pictures) to be stored together in
the same file because it is inconvenient. A computer will typically contain about 10000
files that have a great many purposes. Each file will have its own name. The file name
on a Linux or UNIX machine can be up to 255 characters long and may contain almost any
character.
The file name is usually explanatory — you might call a letter you wrote to
your friend something like Mary Jones.letter (from now on, whenever you see
the typewriter font &A style of print: here is typewriter font.-, it means that those are
words that might be read off the screen of the computer). The name you choose has
no meaning to the computer, and could just as well be any other combination of let-
ters or digits, however you will refer to that data with that file name whenever you
give an instruction to the computer regarding that data, hence you would like it to be
descriptive. &It is important to internalise the fact that computers do not have an interpretation for
anything. A computer operates with a set of interdependent logical rules. Interdependent means that the rules
have no apex, in the sense that computers have no fixed or single way of working; for example, the reason
a computer has files at all is because computer programmers have decided that this is the most universal and
-
convenient way of storing data, and if you think about it, it really is.
The data in each file is merely a long list of numbers. The size of the file is
just the length of the list of numbers. Each number is called a byte. Each byte con-
tains 8 bits. Each bit is either a one or a zero and therefore, once again, there are
2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 2^8 = 256 possible combinations (8 bits making 1 byte). Hence a byte can only
hold a number as large as this. There is no type of data which cannot be represented as
a list of bytes. Bytes are sometimes also called octets. Your letter to Mary will be encoded
into bytes in order to be stored on the computer. We all know that a television picture
is just a sequence of dots on the screen that scan from left to right — it is in this way
that a picture might be represented in a file: i.e. as a sequence of bytes where each byte
is interpreted as a level of brightness; 0 for black, and 255 for white. For your letter, the
convention is to store an A as 65, a B as 66, and so on. Each punctuation character also
has a numerical equivalent.
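You can see these byte values for yourself with the od (octal dump) command, covered properly in a later chapter; here it is told to print each byte of its input as an unsigned decimal number:

```shell
# The letters A and B are stored as the byte values 65 and 66:
printf 'AB' | od -An -tu1
```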
2.3 Commands
The second thing common to every computer system invented is the command. You tell
the computer what to do with single words — one at a time — typed into the computer.
Modern computers appear to have done away with the typing in of commands by having
beautiful graphical displays that work with a mouse, but, fundamentally, all that is
happening is that commands are being secretly typed in for you. Using commands is
still the only way to have complete power over the computer. You don’t really know
anything about a computer until you get to grips with the commands it uses. Using a
computer will very much involve typing in a word, pressing the enter key, and then
waiting for the computer screen to spit something back at you. Most commands are
typed in to do something useful to a file.
Turn on your Linux box. After a few minutes of initialisation, you will see the
login prompt. A prompt is one or more characters displayed on the screen that you are
expected to type something in after. Here this may state the name of the computer
(each computer has a name — typically consisting of about eight lowercase letters)
and then the word login:. Linux machines now come with a graphical desktop
by default (most of the time), so you might get a pretty graphical login with the same
effect. Now you should type your login name — a sequence of about eight lower case
letters that would have been assigned to you by your computer administrator — and
then press the ‘Enter’ (or ‘Return’) key. A password prompt will appear
after which you should type your password. Your password may be the same as your
login name. Note that your password will not be shown on the screen as you type it,
but will be invisible. After typing your password, press the ‘Enter’ or ‘Return’ key
again. The screen might show some message and prompt you for a login again — in
this case you have probably typed something incorrectly and should give it another
try. From now on, you will be expected to know that the ‘Enter’ or ‘Return’ key should
be pressed at the end of every line you type in, analogous to the mechanical typewriter.
You will also be expected to know that human error is very common — when you type
something incorrectly the computer will give an error message, and you should try
again until you get it right. It is very uncommon for a person to understand computer
concepts after a first reading, or to get commands to work on the first try.
Now that you have logged in, you will see a shell prompt — a shell is a place where
you are able to type commands. This is where you will spend most of your
time as a system administrator &Computer manager.-, but it needn’t look as bland as you
see now. Your first exercise is to change your password. Type the command passwd.
You will then be asked for a new password and then asked to confirm that password.
The password you choose should consist of letters, numbers and punctuation — you will see
later on why this security measure is a good idea. Take good note of your password for
the next time you log in. Then you will arrive back in the shell. The password you have
chosen will take effect immediately, replacing the previous password that you used to
log in. The password command might also have given some message indicating what
effect it actually had. You may not understand the message, but you should try to get
an idea of whether the connotation was positive or negative.
When you are using a computer, it is useful to imagine yourself as being in dif-
ferent places within the computer, rather than just typing commands into it. After you
entered the passwd command, you were no longer in the shell, but moved into the
password place. You could not use the shell until you had moved out of the passwd
command.
Type in the command ls. ls is short for list, abbreviated to two letters like most other
U NIX commands. ls will list all your current files. You may find that ls does nothing,
but just returns you back to the shell. This would be because you have no files just yet.
Most UNIX commands do not give any kind of message unless something went wrong
(the passwd command above was an exception). If there were files, you would see
their names listed rather blandly in columns with no indication of what they are for.
The following keys are useful for editing the command line. Note that UNIX has had a
long and twisted evolution from the mainframe, and the Home, End etc. keys may not
work properly. The following key bindings are, however, common throughout many
Linux applications.
Note that the prefixes Alt, Ctrl, and Shift indicate to hold the key down through
the pressing and releasing of the letter key. These are known as key modifiers. Note
also, that the Ctrl key is always case insensitive; hence Ctrl-D and Ctrl-d are
identical. The Alt modifier is in fact a short way of pressing and releasing Esc before
entering the key combination; hence Esc then f is the same as Alt-f — UNIX is different
to other operating systems’ use of Esc. The Alt modifier is not case insensitive although
some applications will make a special effort to respond insensitively. The Alt key is
also sometimes referred to as the Meta key. All of these keys are sometimes referred to
by their abbreviations: for example C-a for Ctrl-a, or M-f for Meta-f and Alt-f.
Your command-line keeps a history of all the commands you have typed in.
Ctrl-p and Ctrl-n will cycle through previous commands entered. New users seem to
gain tremendous satisfaction from typing in lengthy commands over and over. Never
type in anything more than once — use your command history instead.
Ctrl-r will allow you to search your command history. Hitting Ctrl-r in the mid-
dle of a search will find the next match while Ctrl-s will revert to the previous match.
The Tab command is tremendously useful for saving key strokes. Typing a par-
tial directory name, file name or command, and then pressing Tab once or twice in
sequence will complete the word for you without you having to type it all in full.
If you are in text mode, you can type Alt-F2 to switch to a new independent login.
Here you can login again and run a separate session. There are six of these virtual
consoles — Alt-F1 through Alt-F6 — to choose from; also called virtual terminals. If you
are in graphical mode, you will have to instead press Ctrl-Alt-F? because the Alt-F?
keys are often used by applications. The convention is that the seventh virtual console
is graphical, hence Alt-F7 will always get you back to graphical mode.
There are many ways of creating a file. Type cat > Mary_Jones.letter and then
type out a few lines of text. You will use this file in later examples. The cat command
is used here to write from the keyboard into a file Mary_Jones.letter. At the end of
the last line, press Enter one more time and then press Ctrl-D. Now, if you type
ls again, you will see the file Mary_Jones.letter listed with any other files. Type
cat Mary_Jones.letter without the >. You will see that the command cat writes
the contents of a file to the screen, allowing you to view your letter. It should match
exactly what you typed in.
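As an aside, the same letter can be created non-interactively using a “here document” (the << notation is explained properly in the shell scripting chapter; it feeds the lines up to EOF into cat in place of the keyboard). The underscore is used here to avoid a space in the filename, for reasons the next section explains:

```shell
# Create the letter without typing it in interactively:
cat > Mary_Jones.letter << 'EOF'
Dear Mary,
This is a test letter.
EOF

# View the contents, exactly as typed:
cat Mary_Jones.letter
```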
Although UNIX filenames can contain almost any character, standards dictate that only
the following characters are preferred in filenames:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
a b c d e f g h i j k l m n o p q r s t u v w x y z
0 1 2 3 4 5 6 7 8 9 . _ - ˜
Hence never use other punctuation characters, brackets or control characters to name
files. Also, never use the space or tab character in a filename, and never begin a file
name with a - character.
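To see why a leading - is troublesome: the shell hands the name to a command, which then mistakes it for an option. A sketch of creating and then removing such a file (the -- convention and the ./ prefix are both widely supported):

```shell
# Accidentally create a file whose name starts with a dash:
touch -- -oops      # -- tells touch that what follows is not an option

# Remove it by giving a path that does not begin with a dash:
rm ./-oops
```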
2.10 Directories
Earlier we mentioned that a system may typically contain 10000 files. It would be
cumbersome if you were to see all 10000 of them whenever you typed ls, hence files
are placed in different “cabinets” so that files of the same type get placed together
and can be easily isolated from other files. For instance your letter above might go
in a separate “cabinet” with other letters. A “cabinet” in computer terms is actually
called a directory. This is the third commonality between all computer systems: all
files go in one or other directory. To get an idea of how this works, type the command
mkdir letters where mkdir stands for make directory. Now type ls. This will show
the file Mary Jones.letter as well as a new file letters. The file letters is not
really a file at all, but the name of a directory in which a number of other files can be
placed. To go into the directory letters you can type cd letters where cd stands
for change directory. Since the directory is newly created, you would not expect it to
contain any files, and typing ls will verify this by not listing anything. You can now
create a file using the cat command as you did before (try this). To go back into the
original directory that you were in you can use the command cd .. where the .. has
a special meaning of taking you out of the current directory. Type ls again to verify
that you have actually gone up a directory.
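The whole sequence above can be summarised as a short session you can type in (pwd, introduced at the end of this chapter, prints the directory you are currently in):

```shell
mkdir letters     # make a new directory
cd letters        # go into it
pwd               # prints a path ending in /letters
cd ..             # go back up out of it
```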
It is however bothersome that one cannot tell the difference between files and
directories. The way to do this is with the ls -l command. -l stands for long format.
If you enter this command you will see a lot of details about the files that may not
yet be comprehensible to you. The three things you can watch for are the filename on
the far right, the file size (i.e. the number of bytes that the file contains) in the fifth
column from the left, and the file type on the far left. The file type is a string of letters
of which you will only be interested in one: the character on the far left is either a
- or a d. A - indicates a regular file, and a d indicates a directory. The command
ls -l Mary_Jones.letter will list only the single file Mary_Jones.letter and
is useful for finding out the size of a single file.
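As an aside, that file type character can be picked out mechanically with the cut command (covered later); this sketch assumes a throwaway example file and directory:

```shell
mkdir -p letters            # an example directory
touch note.txt              # an example (empty) regular file

ls -l note.txt | cut -c1    # prints - : a regular file
ls -ld letters | cut -c1    # prints d : a directory (-d lists the directory itself)
```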
In fact, there is no limitation on how many directories you can create within each
other. In what follows, you will get a glimpse of the layout of all the directories on the
computer.
Type the command cd /, where the / has the special meaning of taking you to the
topmost directory on the computer, called the root directory. Now type ls -l. The listing
may be quite long and may go off the top of the screen — in this case try ls -l |
less (then use PgUp and PgDn, and press q when done). You will see that most, if
not all, are directories. You can now practice moving around the system with the cd
command, not forgetting that cd .. takes you up and cd / takes you to the root
directory.
At any time you can type pwd (print working directory) to show you the directory
you are currently in.
When you have finished, log out of the computer using the logout command.
Chapter 3
PC Hardware
This chapter explains a little about PC hardware. Anyone who has built their own PC,
or has experience configuring devices on Windows, can probably skip this section. It
is added purely for completeness. This chapter actually comes under the subject of
Microcomputer Organisation, i.e. how your machine is electronically structured.
3.1 Motherboard
Inside your machine you will find a single large circuit board called the motherboard. It
is powered by a humming power supply and has connector leads to the keyboard
and other peripheral devices &Anything that is not the motherboard, not the power supply and not
purely mechanical.-.
The motherboard has several large microchips on it and many small
ones. The important ones are:
RAM Random Access Memory or just memory. The memory is a single linear sequence
of bytes that get erased when there is no power. It contains sequences of simple
coded instructions of 1 to several bytes in length, that do things like: add this
number to that; move this number to this device; go to another part of RAM to
get other instructions; copy this part of RAM to this other part etc. When your
machine has “64 megs” (64 MegaBytes) it means it has 64 × 1024 × 1024 bytes
of RAM. Locations within that space are then called memory addresses, so that saying
“memory address 1000” means the 1000th byte in memory.
ROM A small part of RAM does not reset when the computer switches off. It is called
ROM, Read Only Memory. It is factory fixed and usually never changes through
the life of a PC, hence the name. It overlaps the area of RAM close to the end of
the first megabyte of memory, so that area of RAM is not physically usable. ROM
contains instructions to start up the PC and access certain peripherals.
CPU Stands for Central Processing Unit. It is the thing that is called 80486, 80586, Pen-
tium or whatever. On startup, it jumps to memory address 1040475 (0xFE05B)
and starts reading instructions. The first instructions it gets are actually to fetch
more instructions from disk, and give a Boot failure message to the screen if
it finds nothing useful. The CPU requires a timer to drive it. The timer operates
at a high speed of hundreds of millions of ticks per second (Hertz), and causes
the machine to be named, for example, a “400MHz” (400 MegaHertz) machine.
The MHz of the machine is roughly proportional to the number of instructions it
can process from RAM per second.
IO-ports Stands for Input/Output ports. This is a block of address space that sits in parallel to
the normal RAM. There are 65536 IO-ports, hence the IO space is small compared to RAM.
IO-ports are used to write to peripherals. When the CPU writes a byte to IO-port
632 (0x278), it is actually sending out a byte through your parallel port. Most
IO-ports are not used. There is no specific IO-port chip though.
ISA slots ISA (eye-sah) is a shape of socket for plugging in peripheral devices like mo-
dem cards and sound cards. Each card expects to be talked to via an IO-port (or
several consecutive IO-ports). What IO-port the card uses is sometimes config-
ured by the manufacturer, and other times is selectable on the card using jumpers
&Little pin bridges that you can pull off with your fingers.- or switches on the card. Other
times still, it can be set by the CPU using a system called Plug and Pray &This
means that you plug the device in, then beckon your favourite deity for spiritual assistance. Actually,
some people complained that this might be taken seriously — no, it's a joke: the real term is Plug ’n
Play- or PnP. The cards also sometimes need to signal the CPU to indicate that they
are ready to send or receive more bytes through an IO-port. They do this through
one of 16 connectors inside the ISA slot called Interrupt Request lines or IRQ
lines (or sometimes just Interrupts), numbered 0 through 15. Like IO-ports, the
IRQ your card uses is sometimes also jumper selectable, sometimes not. If you
unplug an old ISA card, you can often see the actual copper thread that goes from
the IRQ jumper to the edge connector. Finally, ISA cards can also access memory
directly through one of eight Direct Memory Access Channels or DMA Channels,
which are also possibly selectable by jumpers. Not all cards use DMA however.
In summary there are three things that the peripheral and the CPU need to coop-
erate on: The IO Port, the IRQ, and the DMA. If any two cards clash by using
either the same IO-port, IRQ number or DMA channel then they won’t work
(at worst your machine will crash &Come to a halt and stop responding.-).
“8 bit” ISA slots Old motherboards have shorter ISA slots. You will notice yours is
a double slot (called “16 bit” ISA) with a gap between the two parts. The larger slot can
still take an older “8 bit” ISA card, like many modem cards.
PCI slots PCI (pee-see-eye) slots are like ISA, but are a new standard aimed at high
performance peripherals like networking cards and graphics cards. They also use
an IRQ, IO-port and possibly a DMA channel. These however are automatically
configured by the CPU as a part of the PCI standard, hence there will rarely be
jumpers on the card.
AGP slots AGP slots are even higher performance slots for Accelerated Graphics Pro-
cessors. In other words, cards that do 3D graphics for games. They are also auto-
configured.
Serial ports A serial port connection may come straight off your motherboard to a
socket on your case. There are usually two of these. They may drive an external
modem, and some kinds of mice and printers. Serial is a simple and cheap way to
connect a machine where relatively slow (less than 10 kilobytes per second) data
transfer speeds are needed. Serial ports have their own “ISA card” built into the
motherboard that uses IO-port 0x3F8–0x3FF and IRQ 4 for the first serial port
(also called COM1 under DOS/Windows) and IO-port 0x2F8–0x2FF and IRQ 3
for COM2.
Parallel port Normally only your printer would plug in here. Parallel ports are how-
ever extremely fast (being able to transfer 50 kilobytes per second), and hence
many types of parallel port devices are available (like CDROM drives that plug
into a parallel port). Parallel port cables however can only be a few meters in
length before you start getting transmission errors. The parallel port uses IO-port
0x378–0x37A and IRQ 7. A second parallel port usually would use 0x278–0x27A
and not use an IRQ at all.
USB port The Universal Serial Bus aims to allow any type of hardware to plug into one
plug. The idea is that one day all serial and parallel ports will be scrapped in
favour of a single USB socket that all external peripherals will daisy chain from.
IDE ribbon The IDE ribbon plugs into your hard disk drive or C: drive on
Windows/DOS and also into your CDROM drive (sometimes called an IDE
CDROM). The IDE cable actually attaches to its own PCI card internal to the
motherboard. There are two IDE connectors that use IO-ports 0xF000–0xF007
and 0xF008–0xF00F, and IRQs 14 and 15, respectively. Most IDE CDROMs are
also ATAPI CDROMs. ATAPI is a standard (similar to SCSI, below) that allows
many different kinds of devices to plug into an IDE ribbon cable. You get special
floppy drives, tape drives and other devices that plug into the same ribbon. They
will all be called ATAPI-(this or that).
SCSI ribbon Another ribbon may be present coming out of a card (called the SCSI
host adaptor or SCSI card) or your motherboard. Home PCs will rarely have
this, since these are expensive and used mostly for high end servers. SCSI cables
are more densely wired than an IDE cable. They will also end in a disk drive,
tape drive, CDROM, or some other device. SCSI cables are not allowed to just-be-
plugged-in: they have to be connected end on end with the last device connected
in a special way called SCSI termination. There are however a few SCSI devices
that are automatically terminated. More on this on page 461.
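The arithmetic quoted in the descriptions above (64 megabytes of RAM, IO-port 632 being hex 0x278) can be checked directly from a shell. This is only an illustrative sketch using standard shell arithmetic and printf conversions:

```shell
# Convert the figures quoted in the text between decimal and hex.
echo $((64 * 1024 * 1024))   # 67108864, the bytes in "64 megs" of RAM
printf '%d\n' 0x278          # 632, the parallel-port IO-port in decimal
printf '0x%X\n' 632          # 0x278, and back to hex
# On Linux, the kernel's allocation of IO-ports, IRQ lines and DMA
# channels can be browsed in /proc/ioports, /proc/interrupts and /proc/dma.
```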
3.2 Master/Slave IDE
Two IDE hard drives can be connected to a single IDE ribbon. The ribbon alone has
nothing to distinguish which is which, so the drive itself has jumper pins on it (usually
close to the power supply) that can be set to one of several options. These are either
Master (MA), Slave (SL), Cable Select (CS) or Master-only/Single-Drive/etc. The MA
option means that your drive is the “first” drive of two on this IDE ribbon. The SL
option means that your drive is the “second” drive of two on this IDE ribbon. The CS
option means that your machine will make its own decision (some boxes only work
with this setting), and the Master-only option means that there is no second drive on
this ribbon.
There may also be a second IDE ribbon, giving you a total of four possible drives.
The first ribbon is known as IDE1 (labeled on your motherboard) or the primary ribbon,
while the second is known as IDE2 or the secondary ribbon. Your four drives are then called
primary master, primary slave, secondary master and secondary slave. Their labeling under
LINUX is discussed in Section 18.4.
3.3 CMOS
The “CMOS” & Stands for Complementary Metal Oxide Semiconductor, which has to do with the tech-
nology used to store setup information through power downs. - is a small application built into
ROM. It is also known as the ROM BIOS configuration. You can start it instead of your
operating system by pressing F2 or Del (or something else) just after you switch your
machine on. There will usually be a message Press <key> to enter setup. to
explain this. Inside you will be able to change your machine’s configuration. CMOS
programs are different for each motherboard.
Inside the CMOS, you will be able to enable or disable built-in devices (like your
mouse and serial ports); set your machine’s “hardware clock” (so that your machine has
the correct time and date); and select the boot sequence (whether to load the operating
system off the hard drive or CDROM — which you will need for installing LINUX
from a bootable CDROM). Boot means to start up the computer &The term comes from
the lack of resources with which to begin: the operating system is on disk, but you might need the operating
system to load from the disk — like trying to lift yourself up by your own “bootstraps”.-. You
will also be able to configure your hard drive. You should always select Hard drive
auto-detection whenever installing a new machine, or adding/removing disks.
Different CMOSs will have different ways of doing this, so browse through all the
menus to see what your CMOS can do.
The CMOS is important when it comes to configuring certain devices built into
the motherboard. Modern CMOSs allow you to set the IO-ports and IRQ numbers that
you would like particular devices to use. For instance, you can make your CMOS
switch COM1 with COM2, or use a non-standard IO-port for your parallel port.
When it comes to getting such devices to work under LINUX, you will often have to
power down your machine to see what the CMOS has to say about that device. More
on this in Chapter 42.
Chapter 4
Basic Commands
In addition to directories and ordinary text files, there are other types of files, although
all files contain the same kind of data (i.e. a list of bytes). The hidden file is a file
that will not ordinarily appear when you type the command ls to list the contents
of a directory. To see a hidden file you have to use the command ls -a. The -a
option means to list all files as well as hidden files. Another variant is ls -l, which
lists the contents in long format. The - is used in this way to indicate variations on a
command. These are called command-line options or command-line arguments, and most
UNIX commands can take a number of them. They can be strung together in any way
that is convenient &Commands under the GNU free software license are superior in this way: they
have a greater number of options than traditional UNIX commands and are therefore more flexible.-, for
example ls -a -l, ls -l -a or ls -al — any of these will list all files in long
format.
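The points above can be sketched in a short session. This is a hypothetical example run in a scratch directory; the file names are invented for the demonstration.

```shell
# Hidden files and option stringing, in a fresh temporary directory.
cd "$(mktemp -d)"
touch .hidden visible
ls            # lists only: visible
ls -a         # also lists .hidden (and the . and .. entries)
ls -a -l      # long format, all files
ls -al        # exactly the same as ls -a -l
```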
All GNU commands take the additional arguments -h and --help. You can
type a command with just this on the command line and get a usage summary. This is
some brief help that will summarise options that you may have forgotten if you are
already familiar with the command — it will never be an exhaustive description of the
usage. See the later explanation about man pages.
The difference between a hidden file and an ordinary file is merely that the file
name of a hidden file starts with a period. Hiding files in this way is not for security,
but for convenience.
The option ls -l is somewhat cryptic for the novice. Its more explanatory ver-
sion is ls --format=long. Similarly, the all option can be given as ls --all, and
will mean the same thing as ls -a.
Although commands usually do not display a message when they execute suc-
cessfully &The computer accepted and processed the command.-, commands do report errors in a
consistent format. The format will vary from one command to another, but will often
appear as follows: command-name: what was attempted: error message. For example,
the command ls -l qwerty will give an error ls: qwerty: No such file or
directory. What actually happened was that the command ls attempted to read the
file qwerty. Since this file does not exist, an error code 2 arose. This error code corre-
sponds to the situation in which a file or directory is not found. The error code is automat-
ically translated into the sentence No such file or directory. It is important
to understand the distinction between an explanatory message that a command gives
(such as the messages reported by the passwd command in the previous chapter) and
an error code that was just translated into a sentence. This is because a lot of different
kinds of problems can result in an identical error code (there are only about a hundred
different error codes). Experience will teach you that error messages do not tell you
what to do, only what went wrong, and should not be taken as gospel.
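The ls example above can be reproduced as follows. This is a sketch run in an empty scratch directory so that qwerty is guaranteed not to exist; the exact wording of the message varies between versions of ls, but the behaviour is the same.

```shell
# Provoke the error deliberately in a fresh, empty directory.
cd "$(mktemp -d)"
ls -l qwerty || echo "ls failed with a non-zero exit status"
```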
A complete list
of basic error codes can be found in /usr/include/asm/errno.h. In addition to
these, several other header files &Files ending in .h- may define their own error codes.
Under UNIX however, these are 99% of all the errors you are ever likely to get. Most of
them will be meaningless to you at the moment, but are included here as a reference:
#ifndef _I386_ERRNO_H
#define _I386_ERRNO_H
#endif
ls can produce a lot of output if there are a large number of files in a directory. Now
say that we are only interested in files that end with the letters tter. To list only
these files you can use ls *tter. The * matches any number of any other char-
acters. So, for example, the files Tina.letter, Mary_Jones.letter and the file
splatter would all be listed if they were present, while a file Harlette would not
be listed. Whereas the * matches any length of characters, the ? matches only one char-
acter. For example, the command ls ?ar* would list the files Mary_Jones.letter
and Harlette.
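These wildcards can be sketched in a session. The file names below follow the examples in the text (written with an underscore rather than a space, since file names containing spaces need quoting on the command line):

```shell
# Create the example files in a fresh directory, then match them.
cd "$(mktemp -d)"
touch Tina.letter Mary_Jones.letter splatter Harlette
ls *tter      # Mary_Jones.letter  Tina.letter  splatter
ls ?ar*       # Harlette  Mary_Jones.letter
```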
When naming files, it is a good idea to choose names that group files of the
same type together. We do this by adding an extension to the file name that de-
scribes the type of file it is. We have already demonstrated this by calling a file
Mary_Jones.letter instead of just Mary_Jones. If you keep this convention, you
will be able to easily list all the files that are letters by entering ls *.letter. The
file name Mary_Jones.letter is then said to be composed of two parts: the name,
Mary_Jones, and the extension, letter.
Some common UNIX extensions you may see are:
.log Log file of a system service. This file grows with status messages of some system
program.
.pcf PCF font file — an intermediate representation for fonts under the X Window System.
.php PHP program source code (used for web page design).
.so Shared object file. A file named lib*.so is a Dynamically Linked
Library &Executable program code shared by more than one program to save disk space and
memory.-.
.texi, .texinfo Texinfo source — the format from which info pages are compiled.
.tex TEX or LATEX document. LATEX is for document processing and typesetting.
.tga TARGA image file.
.tgz tarred and gzipped directory tree. Also a package for the Slackware distribu-
tion.
.tiff TIFF image file.
.tfm LATEX font metric file.
.ttf TrueType font.
.txt Plain English text file.
.voc Audio format (Sound Blaster’s own format).
.wav Audio format (sound files common to Microsoft Windows).
.xpm XPM image file.
.y yacc source file.
.Z File compressed with the compress compression program.
.zip File compressed with the pkzip (or PKZIP.EXE for DOS) compression pro-
gram.
.1, .2 . . . Man page.
In addition, files that have no extension and a capitalised descriptive name are usually
plain English text meant for your reading. These come bundled with packages
and are for documentation purposes. You will see them hanging around all over the
place.
Some full file names you may see are
NEWS Information for the layman about new features and changes in this package.
Glob expressions
There is also a way to restrict characters of a file name to within certain ranges. If
you only want to list the files that begin with A through M, you can use ls [A-M]*.
Here the brackets have a special meaning — they match a single character like a ?,
but only one from the given range. You can use this in a variety of ways; for example,
[a-dJW-Y]* matches all files beginning with a, b, c, d, J, W, X or Y; while *[a-d]id
matches all files ending with aid, bid, cid or did; and *.{cpp,c,cxx} matches all
files ending in .cpp, .c or .cxx. This way of specifying a file-name is called a glob
expression. Glob expressions are used in many different contexts as you will see later.
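A range can be sketched as below. The file names are invented, and LC_ALL=C is exported so that [A-M] means the ASCII range A..M rather than a locale-dependent collation order:

```shell
# Glob ranges in a fresh directory, pinned to the C locale.
cd "$(mktemp -d)"
export LC_ALL=C
touch Alpha Mike Zulu
ls -d [A-M]*     # Alpha  Mike  (Zulu falls outside the range)
```

(The *.{cpp,c,cxx} form in the text relies on brace expansion, a feature of shells such as bash rather than of globbing itself.)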
4.4 Usage summaries and the copy command
The command cp stands for copy and is used to make a duplicate of one file or a num-
ber of files. The format is
cp <file> <newfile>
cp <file> [<file> ...] <dir>
or
cp file newfile
cp file [file ...] dir
The above lines are called a usage summary. The < and > signs mean that you don’t
actually type out these characters but replace <file> with a file-name of your own.
These are also sometimes written in italics like, cp file newfile. In rare cases they are
written in capitals like, cp FILE NEWFILE. <file> and <dir> are called parameters.
Sometimes they are obviously numeric, like a command that takes <ioport> &Anyone
emailing me to ask why typing in literal <, i, o, p, o, r, t and > characters did not work will get a rude
reply.-. These are common conventions used to specify the usage of a command. The
[ and ] brackets are also not actually typed but mean that the contents between them
are optional. The ellipses ... mean that <file> can be given repeatedly, and these
also are never actually typed. From now on you will be expected to substitute your
own parameters by interpreting the usage summary. You can see that the second of
the above lines is actually just saying that one or more file names can be listed with a
directory name last.
From the above usage summary it is obvious that there are two ways to use the
cp command. If the last name is not a directory, then cp copies that file and
renames it to the file name given. If the last name is a directory, then cp copies all the
files listed into that directory.
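Both forms can be sketched in a session; the file and directory names below are invented for the demonstration.

```shell
# The two forms of cp from the usage summary, in a scratch directory.
cd "$(mktemp -d)"
echo 'hello' > myfile
cp myfile newfile            # first form: duplicate under a new name
mkdir backups
cp myfile newfile backups    # second form: last argument is a directory
ls backups                   # myfile  newfile
```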
The usage summary of the ls command would be as follows:
ls [-l, --format=long] [-a, --all] <file> <file> ...
ls -al
where the comma indicates that either option is valid. Similarly with the passwd
command:
passwd [<username>]
You should practice using the cp command now by copying some of your files from
place to place.
4.6 Relative vs. absolute pathnames
Commands can be given file name arguments in two ways. If you are in the same di-
rectory as the file (i.e. the file is in the current directory), then you can just enter the
file name on its own (e.g. cp my_file new_file). Otherwise, you can enter the full
path name, like cp /home/jack/my_file /home/jack/new_file. Very often ad-
ministrators use the notation ./my_file to be clear about the distinction, for instance:
cp ./my_file ./new_file. The leading ./ makes it clear that both files are relative
to the current directory. File names not starting with a / are called relative pathnames,
and otherwise, absolute pathnames.
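The distinction can be sketched in a session (the file names are invented; $work stands in for whatever your current directory happens to be):

```shell
# Relative and absolute pathnames naming the same files.
work=$(mktemp -d)
cd "$work"
touch my_file
cp ./my_file ./new_file               # both pathnames relative to the current directory
cp "$work/my_file" "$work/abs_copy"   # the same kind of copy using absolute pathnames
```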
(See Chapter 16 for a complete overview of all documentation on the system, and also
how to print manual pages out in a properly typeset format.)
The command man [<section>|-a] <command> is used to get help on a partic-
ular topic and stands for manual. Every command on the entire system is documented
in so-named man pages. In the past few years a new format of documentation has
evolved called info. These are considered the modern way to document commands,
but most system documentation is still available only through man. There are very few
packages that are not documented in man however. Man pages are the authoritative
reference on how a command works because they are usually written by the very pro-
grammer who created the command. Under UNIX, any printed documentation should
be considered to be secondhand information. Man pages, however, will often not
contain the underlying concepts needed to understand in what context a command is
used. Hence it is not possible for a person to learn about UNIX purely from man pages.
However once you have the necessary background for a command, then its man page
becomes an indispensable source of information and other introductory material may
be discarded.
Now, man pages are divided into sections, numbered 1 through 9. Section 1 con-
tains all man pages for system commands like the ones you have been using. Sections
2–7 also exist, but contain information for programmers and the like, which you will
probably not have to refer to just yet. Section 8 contains pages specifically for system
administration commands. There are some additional sections labelled with letters;
other than these, there are no manual pages outside of sections 1 through 9.
You should now use the man command to look up the manual pages for all
the commands that you have learned. Type man cp, man mv, man rm, man mkdir,
man rmdir, man passwd, man cd, man pwd and of course man man. Much of the
information may be incomprehensible to you at this stage. Skim through the pages to
get an idea of how they are structured, and what headings they usually contain. Man
pages are referenced using a notation like, cp(1), for the cp command in Section 1,
which can be read with man 1 cp.
4.8 System info pages
info pages contain some excellent reference and tutorial information in hypertext-
linked format. Type info on its own to go to the top-level menu of the entire info
hierarchy. You can also type info <command> for help on many basic commands.
Some packages will, however, not have info pages, and other UNIX systems do not sup-
port info at all.
info is an interactive program with keys to navigate and search documentation. Typ-
ing info will bring you to a top-level menu. Typing h will then invoke the help screen.
4.9 Some basic commands
df Stands for disk free. This tells you how much free space is left on your system. The
available space usually has units of kilobytes (1024 bytes) (although on some
other UNIX systems this will be 512 bytes or 2048 bytes). The rightmost column
tells the directory (in combination with any directories below that) under which
that much space is available.
dircmp Directory compare. This command compares directories to see if changes
have been made between them. You will often want to see where two trees
differ (e.g. check for missing files), possibly on different computers. Do a
man dircmp. (This is a System 5 command and is not present on LINUX. You
can, however, do directory comparisons with the Midnight Commander, mc.)
free Prints out available free memory. You will notice two listings: swap space and
physical memory. These are contiguous as far as the user is concerned. The swap
space is a continuation of your installed memory that exists on the disk. It is obvi-
ously slow to access but provides the illusion of having much more RAM, which
avoids ever running out of memory (which can be quite fatal).
file <filename> This command prints out the type of data contained in
a file. file portrait.jpg will tell you that portrait.jpg is
JPEG image data, JFIF standard. file detects an enormous number of
file types, across every platform. file works by checking whether the first few
bytes of a file match certain tell-tale byte sequences. The byte sequences are called
magic numbers. Their complete list is stored in /usr/share/magic &The word
“magic” under UNIX normally refers to byte sequences or numbers that have a specific meaning or
implication. So-called magic numbers are invented for source code, file formats and file systems.-.
more Displays a long file by stopping at the end of each page. Do the following:
ls -l /bin > bin-ls, and then more bin-ls. The first command creates
a file with the contents of the output of ls. This will be a long file because the
directory /bin has a great many entries. The second command views the file.
The space bar can be used to page through the file. When you get bored, just
press q. You can also try ls -l /bin | more, which will do the same thing in
one go.
less This is the GNU version of more, but has extra features. On your system the
two commands may be the same. With less, you can use the arrow keys to page
up and down through the file. You can do searches by pressing / then typing in
a word to search for and then pressing Enter. Found words will be highlighted,
and the text will be scrolled to the first found word. The important commands
are:
F Scroll forward and keep trying to read more of the file in case some other pro-
gram is appending to it — useful for log files.
nnng Goes to line nnn of the file.
q Quit. (Used by many UNIX text-based applications.)
(less can be made to stop beeping in the irritating way that it does by editing
the file /etc/profile and adding the lines,
LESS=-Q
export LESS
and then logging out and logging in again. But this is an aside that will make
more sense later.)
lynx <url> Opens up a URL &URL stands for Uniform Resource Locator — a web address.- at
the console. Try lynx http://lwn.net/.
nohup <command> & Runs a command in the background, appending any output
the command may produce to the file nohup.out in your home directory.
nohup has the useful feature that the command will continue to run even after
you have logged out. Uses for nohup will become obvious later.
sort <filename> Prints out a file with lines sorted in alphabetical order. Create a
file called telephone with each line containing a short telephone book entry.
Then type sort telephone, or sort telephone | less and see what hap-
pens. sort takes many interesting options to sort in reverse (sort -r), to elim-
inate duplicate entries (sort -u), ignore leading whitespace (sort -b), and so
on. See the man page for details.
strings [-n <len>] <filename> Prints out a binary file, but strips any unread-
able characters. Readable groups of characters are placed on separate lines. If
you have a binary file that you think may contain something interesting, but
looks completely garbled when viewed normally, use strings to sift out the in-
teresting stuff: try less /bin/cp and then try strings /bin/cp. By default
strings does not print sequences shorter than 4 characters. The -n option can alter this
limit.
split ... Splits a file into many separate files. This might have been used when
a file was too big to be copied onto a floppy disk and needed to be split into,
say, 360 kB pieces. Its sister, csplit, can split files along specified lines of text
within the file. These commands are seldom used but are actually very useful when
writing programs that manipulate text.
tac <filename> [<filename> ...] Writes the contents of all the files listed to
the screen, reversing the order of the lines — i.e. printing the last line of the file
first. tac is cat backwards and behaves similarly.
tail [-f] [-n <lines>] <filename> Prints out the last <lines> lines of a file
or 10 lines if the -n option is not given. The -f option means to watch the file for
lines being appended to the end of it.
head [-n <lines>] <filename> Prints out the first <lines> lines of a file or 10
lines if the -n option is not given.
uname Prints out the name of the UNIX operating system1 you are currently using.
uniq <filename> Prints out a file with duplicate lines deleted. The file must first
be sorted.
wc [-c] [-w] [-l] <filename> Counts the number of characters/bytes (with -c),
words (with -w) or lines (with -l) in a file.
whoami Prints out your login name.
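Several of the commands above can be exercised together in a short session. This is a hypothetical sketch: the telephone file and its contents are invented, following the sort example in the text.

```shell
# A small telephone file, then sort, uniq, head, tail, tac and wc on it.
cd "$(mktemp -d)"
printf 'Mary 123\nAlice 456\nMary 123\n' > telephone
sort telephone           # lines in alphabetical order: Alice first
sort -u telephone        # the duplicate Mary line is removed
sort telephone | uniq    # same effect: uniq needs sorted input
head -n 1 telephone      # the first line of the file
tail -n 1 telephone      # the last line of the file
tac telephone            # the whole file, last line first
wc -l telephone          # counts 3 lines
```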
4.10 Multimedia commands for fun
You should practice using each of these commands if you have your sound card con-
figured &I don't want to give the impression that LINUX does not have graphical applications to do
all the functions in this section, but you should be aware that for every graphical application, there is a
text-mode one that works better and consumes less resources.-. You may also find that some of these
packages are not installed, in which case you can come back to this later.
play [-v <volume>] <filename> Plays linear audio formats out through your
sound card. These formats are .8svx, .aiff, .au, .cdr, .cvs, .dat, .gsm,
.hcom, .maud, .sf, .smp, .txw, .vms, .voc, .wav, .wve, .raw, .ub, .sb,
.uw, .sw or .ul files. In other words, it plays almost every type of “basic” sound
file there is: most often this will be a simple Windows .wav file. <volume> is in
percent.
rec <filename> Records from your microphone into a file. play and rec are from
the same package.
mpg123 <filename> Plays audio from MPEG files level 1, 2 or 3. Useful options
are -b 1024 (for increasing the buffer size to prevent jumping) and --2to1
(down samples by a factor of 2 for reducing CPU load). MPEG files contain sound
and/or video, stored very compactly using digital processing techniques that the
software industry seems to think are very sophisticated.
1 The brand of UNIX — there are a number of different vendors, each with versions for different hardware platforms.
aumix Sets your sound card’s volume, gain, recording volume, etc. You can use it
interactively, or just enter aumix -v <volume> to immediately set the volume
in percent. Note that this is a dedicated mixer program and is considered to be a
separate application from any that play music. It is better not to set the volume from
within a sound-playing application, even if it claims this feature — you have
much better control with aumix.
Files typically contain a lot of data that one can imagine might be represented with a
smaller number of bytes. Take for example the letter you typed out. The word ‘the’ was
probably repeated many times. You were probably also using lowercase letters most
of the time. The file was far from a completely random set of bytes; it repeatedly
used spaces as well as using some letters more than others &English text in fact contains,
on average, only about 1.3 useful bits (there are eight bits in a byte) of data per byte.-. Because of this
the file can be compressed to take up less space. Compression involves representing
the same data using a smaller number of bytes, in such a way that the original data
can be reconstructed exactly. This usually involves finding patterns in the data. The
command to compress a file is gzip <filename>, which stands for GNU zip. gzip a
file in your home directory and then do an ls to see what happened. Now use more to
view the compressed file. To uncompress the file you can use gzip -d <filename>.
Now use more to view the file again. There are many files on the system which are
stored in compressed format. For example man pages are often stored compressed
and are uncompressed automatically when you read them.
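The compress-and-uncompress cycle described above can be tried directly; the file name sample.txt and its contents here are just an example:

```shell
# A hypothetical session: compress a file, inspect the result, then
# uncompress it again.
echo "the cat sat on the mat. the cat sat." > sample.txt
gzip sample.txt          # produces sample.txt.gz and removes sample.txt
ls -l sample.txt.gz      # note the changed size
gzip -d sample.txt.gz    # restores the original sample.txt
cat sample.txt
```
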
You used the command cat previously to view a file. You can use the com-
mand zcat to do the same thing with a compressed file. Gzip a file and then type
zcat <filename>. You will see that the contents of the file are written to the screen.
Generally, when commands and files have a z in them they have something to do with
compression — z stands for zip. You can use zcat <filename> | less to view a
compressed file properly. You can also use the command zless <filename>, which
does the same as zcat <filename> | less. (Note that your less may actually
have the functionality of zless combined.)
The command find can be used to search for files. Change to the root directory, and
enter find. It will spew out all the files it can see by recursively descending &Goes into
each subdirectory and all its subdirectories, and repeats the command find. - into all subdirectories.
In other words, find, when executed while in the root directory, will print out all the
files on the system. find will work for a long time if you enter it as you have — press
Ctrl-C to stop it.
Now change back to your home directory and type find again. You will see all
your personal files. There are a number of options find can take to look for specific
files.
find -type d will show only directories and not the files they contain.
find -type f will show only files and not the directories that contain them, even
though it will still descend into all directories.
find -name <filename> will find only files that have the name <filename>. For
instance, find -name ’*.c’ will find all files that end in a .c extension
(find -name *.c without the quote characters will not work; you will see
why later). find -name Mary_Jones.letter will find the file with the name
Mary_Jones.letter.
find -size [+|-]<size> will find only files that have a size larger (for +) or
smaller (for -) than <size> kilobytes, or the same as <size> kilobytes if the
sign is not specified.
find <directory> [<directory> ...] will start find in each of the directo-
ries given.
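The options above can be combined; here is a hypothetical session on a small directory tree created just for the purpose:

```shell
# Build a small directory tree, then exercise the find options above
mkdir -p demo/sub
touch demo/one.c demo/sub/two.c demo/notes.txt
find demo -type d               # prints demo and demo/sub only
find demo -type f -name '*.c'   # prints the two .c files only
```
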
There are many more of these options for doing just about any type of search for a
file. See the find man page for more details. find however has the deficiency of
actively reading directories to find files. This is slow, especially when you start from
the root directory. An alternative command is locate <filename>. This searches
through a previously created database of all the files on the system, and hence finds
files instantaneously. Its counterpart updatedb is used to update the database of files
used by locate. On some systems updatedb runs automatically every day at 04h00.
Try these (updatedb will take several minutes):
¨ ¥
updatedb
locate rpm
locate passwd
locate HOWTO
locate README
§ ¦
Very often one would like to search through a number of files to find a particular word
or phrase. An example might be where a number of files contain lists of telephone
numbers with peoples names and addresses. The command grep does a line by line
search through a file and prints out only those lines that contain a word that you have
specified. grep has the command summary:
¨ ¥
grep [options] <pattern> <filename> [<filename> ...]
§ ¦
&The words word, string or pattern are used synonymously in this context, basically meaning a short length
of letters and/or numbers that you are trying to find matches for. A pattern can also be a string with kinds
of wildcards in it that match different characters, as we shall see later.-
Do a grep for the word “the” to display all lines containing it: grep
’the’ Mary_Jones.letter. Now try grep ’the’ *.letter.
grep -n <pattern> <filename> will show the line number in the file where the
word was found.
grep -<num> <pattern> <filename> will print out <num> of the lines that
came before and after each of the lines in which the word was found.
grep -A <num> <pattern> <filename> will print out <num> of the lines that
came After each of the lines in which the word was found.
grep -B <num> <pattern> <filename> will print out <num> of the lines that
came Before each of the lines in which the word was found.
grep -v <pattern> <filename> will print out only those lines that do not con-
tain the word you are searching for &You may think that the -v option is no longer doing
the same kind of thing that grep is advertised to do: i.e. searching for strings. In fact UNIX commands
often suffer from this — they have such versatility that their functionality often overlaps with that
of other commands. One actually never stops learning new and nifty ways of doing things hidden in
the dark corners of man pages.-.
grep -i <pattern> <filename> does the same as an ordinary grep but is case
insensitive.
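Here is a small hypothetical file on which the options above can be tried; the file name and contents are arbitrary:

```shell
# Create a four-line file, then try some of the grep options above
printf 'one\nThe cat\nthree\nthe dog\n' > pets.txt
grep -n 'the' pets.txt    # only line 4 contains a lowercase "the"
grep -i 'the' pets.txt    # case insensitive: matches "The cat" too
grep -v 'the' pets.txt    # the lines that do not contain "the"
```
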
There is a package called mtools that enables reading and writing to MS-
DOS/Windows floppy disks. These are not standard UNIX commands but are pack-
aged with most LINUX distributions. It supports long (VFAT) filenames.
Put an MSDOS disk in your A: drive. Try
¨ ¥
mdir A:
touch myfile
mcopy myfile A:
mdir A:
§ ¦
Note that there is no such thing as an A: disk under LINUX . Only the mtools package
understands A:, in order to retain familiarity for MSDOS users. The complete list of
commands is
¨ ¥
mattrib mdeltree mlabel mrd
mbadblocks mdir minfo mren
mcd mdu mmd mshowfat
mcopy mformat mmount mtoolstest
mdel mkmanifest mmove mzip
mpartition xcopy
§ ¦
and can be gotten by typing info mtools. In general you can take any MSDOS
command, put it into lower case and add an m in front of it, to give you a command
that you can use on LINUX .
Create a directory with a few files in it and use the command tar -c -f
<filename> <directory> to back it up. A file <filename> will be created. Take
careful note of any error messages that tar reports. List the file and check that its
size is appropriate for the size of the directory
you are archiving. You can also use the verify option (see the man page) of the tar
command to check the integrity of <filename>. Now remove the directory, and then
restore it with the extract option of the tar command:
¨ ¥
tar -x -f <filename>
§ ¦
You should see your directory recreated with all its files intact. A nice option to give to
tar is -v. This will list all the files that are being added to or extracted from the archive
as they are processed, and is useful to watch the progress of archiving. It is obvious
that you can call your archive anything you like, however the common practice is to
call it <directory>.tar, which makes it clear to all exactly what it is.
Once you have your tar file, you would probably want to compress it with
gzip. This will create a file <directory>.tar.gz, which is sometimes also called
<directory>.tgz for brevity.
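The whole backup-and-restore cycle can be sketched as follows; the directory name mydir and its contents are just an example:

```shell
# Archive a directory, compress it, remove the original, then restore it
mkdir -p mydir
echo hello > mydir/a.txt
tar -c -f mydir.tar mydir     # -c creates the archive
gzip mydir.tar                # produces mydir.tar.gz
rm -r mydir
gzip -d mydir.tar.gz
tar -x -f mydir.tar           # -x extracts, recreating mydir
cat mydir/a.txt
```
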
A second kind of archiving utility is cpio. cpio is actually more powerful than
tar, but is considered to be more cryptic to use. The principles of cpio are quite similar
and its usage is left as an exercise.
When you type a command at the shell prompt, it has to be read off disk out of one or
another directory. On UNIX all such executable commands are located in one of about four
directories. A file is located in the directory tree according to its type, rather than ac-
cording to what software package it belongs to. Hence, for example, a word processor
may have its actual executable stored in a directory with all other executables, while
its font files are stored in a directory with other fonts from all other packages.
The shell has a procedure for searching for executables when you type them in.
If you type in a command with slashes, like /bin/cp then it tries to run the named
program, cp, out of the /bin directory. If you just type cp on its own, then it tries
to find the cp command in each of the subdirectories of your PATH. To see what your
PATH is, just type
¨ ¥
echo $PATH
§ ¦
You will see a colon separated list of four or more directories. Note that the current
directory . is not listed. It is very important that the current directory not be
listed for security reasons. To execute a command in the current directory, we hence
always type ./<command>.
To append for example a new directory /opt/gnome/bin to your PATH, do
¨ ¥
PATH="$PATH:/opt/gnome/bin"
export PATH
§ ¦
The command which <command> searches your PATH and prints the full path of the
command. which is also useful in shell scripts to tell if there is a command at all, and
hence check if a particular package is installed, for example which netscape.
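For example (the exact path printed depends on your system):

```shell
# which prints the full path of a command found along your PATH,
# and prints nothing (and fails) if the command does not exist
which cp
which no-such-command || echo "not installed"
```
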
Chapter 5
Regular Expressions
In the previous chapter you learned that the ? character can be used to signify
that any character can take its place. This is said to be a wildcard and works with
filenames. With regular expressions, the wildcard to use is the . character. So you can
use the command grep .3....8 <filename> to find the seven character telephone
number that you are looking for in the above example.
Regular expressions are used for line by line searches. For instance, if the seven
characters were spread over two lines (i.e. they had a line break in the middle), then
grep wouldn’t find them. In general a program that uses regular expressions will
consider searches one line at a time.
Here are some examples that will teach you the regular expression basics. We
will use the grep command to show the use of regular expressions (remember that the
-w option matches whole words only). Here the expression itself will be enclosed in ’
quotes for reasons which will be explained later.
grep -w ’t[a-i]e’ Matches the words tee, the and tie. The brackets have a
special significance. They mean to match one character that can be anything
from a to i.
grep -w ’kr.*n’ Matches the words kremlin and krypton, because the .
matches any character and the * means to match the dot any number of times.
egrep -w ’(th|sh).*rt’ Matches the words shirt, short, and thwart. The
| means to match either the th or the sh. egrep is just like grep but supports
extended regular expressions which allow for the | feature & The | character often de-
notes a logical OR, meaning that either the thing on the left or the right of the | is applicable. This is
true of many programming languages.-. Note how the square brackets mean one-of-
several-characters while the round brackets with |’s mean one-of-several-words.
grep -w ’thr[aeiou]*t’ Matches the words threat and throat. As you can
see, a list of possible characters may be placed inside the square brackets.
grep -w ’thr[^a-f]*t’ Matches the words throughput and thrust. The ^ af-
ter the first bracket means to match any character except the characters listed.
Hence, for example, the word thrift is not matched because it contains an f.
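These matches can be verified against a small word list; the file words.txt and its contents are of course just an example:

```shell
# Check the bracket-expression examples above against a word list
printf 'tee\nthe\ntie\ntoe\nthrust\nthrift\n' > words.txt
grep -w 't[a-i]e' words.txt       # tee, the, tie (not toe: o is past i)
grep -w 'thr[^a-f]*t' words.txt   # thrust only (thrift contains an f)
```
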
The above regular expressions all match whole words (because of the -w option). If the
-w option was not present they might match parts of words which would result in a far
greater number of matches. Also note that although the * means to match any number
of characters, it also will match no characters as well, for example: t[a-i]*e could
actually match the letter sequence te, i.e. a t and an e with zero characters between
them.
Usually, you will use regular expressions to search for whole lines that match, and
sometimes you would like to match a line that begins or ends with a certain string.
The ^ character is used to specify the beginning of a line and the $ character for the
end of the line. For example ^The matches all lines that start with a The, while hack$
matches all lines that end with hack, and ’^ *The.*hack *$’ matches all lines that
begin with The and end with hack, even if there is whitespace at the beginning or end
of the line.
Because regular expressions use certain characters in a special way (these are . \
[ ] * + ?), these characters cannot be used to match themselves. This provides a severe
limitation when trying to match, say, file-names which often use the . character. To
match a . you can use the sequence \. which forces interpretation as an actual . and
not as a wildcard. Hence the regular expression myfile.txt might match the let-
ter sequence myfileqtxt or myfile.txt, but the regular expression myfile\.txt
will match only myfile.txt.
fgrep is an alternative to grep. The difference is that while grep (the more commonly
used command) matches regular expressions, fgrep matches literal strings. In other
words you can use fgrep where you would like to search for an ordinary string that
is not a regular expression, instead of preceding special characters with \.
x* matches zero or more instances of a character x. You can specify other ranges for the
number of characters to be matched: for example x\{3,5\} will match at
least three, but not more than five x’s, that is xxx, xxxx, or xxxxx.
x\{4\} can then be used to match exactly four x’s, no more and no less. x\{7,\}
will match seven or more x’s — the upper limit is omitted to mean that there is no
maximum number of x’s.
As in all the examples above, the x can be a range of characters (like [a-k]) just
as well as a single character.
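grep’s -x option (match the whole line) gives a quick way to test these interval expressions; the file xs.txt is just an example:

```shell
# x\{3,5\} matches three, four or five x's, and nothing else
printf 'xx\nxxx\nxxxx\nxxxxx\nxxxxxx\n' > xs.txt
grep -x 'x\{3,5\}' xs.txt    # xxx, xxxx and xxxxx match; xx and xxxxxx do not
```
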
There is an enhanced version of regular expressions that allows for a few more useful
features. Where these conflict with existing notation, they are only available through
the egrep command.
+ is analogous to \{1,\}. It does the same as *, but matches one or more characters
instead of zero or more characters.
The following examples should make the last two notations clearer.
grep ’trot’ Matches the words electrotherapist, betroth and so on, but
Chapter 6
Editing Text Files
6.1 vi
To edit a text file means to interactively modify its content. The creation and modifi-
cation of an ordinary text file is known as text-editing. A word processor is a kind of
editor, but more basic than that is the UNIX or DOS text editor. The important editor
to learn how to use is vi. After that you can read why, and a little more about other,
more user friendly editors.
Type simply,
¨ ¥
vi <filename>
§ ¦
To exit out of vi, press <ESC>, then the key sequence :q! and then press <ENTER>.
vi has a short tutorial which should get you going in 20 minutes. If you get bored in
the middle, you can skip it and learn vi as you need to edit things. To read the tutorial,
enter:
¨ ¥
vimtutor
§ ¦
Vim is a very powerful editor that has many commands, too many to
explain in a tutor such as this. This tutor is designed to describe
enough of the commands that you will be able to easily use Vim as
an all-purpose editor.
You are supposed to edit the tutor file itself as practice, following through 6 lessons.
Copy it first to your home directory.
The following table is a quick reference for vi. It contains only a few of the many
hundreds of commands under vi, but is enough to do all basic editing operations.
vi has several modes of operation. If you type the i key you enter insert-mode. You
then enter text as you would in a normal DOS text editor, but you cannot arbitrarily move
the cursor and delete characters while in insert-mode. Pressing <ESC> will get you out of
insert-mode, where you are not able to insert characters, but can now do things like
arbitrary deletions, moves etc.
Typing : gets you into command-line-mode, where you can do operations like import
a file, save the current file etc. Typically, you type : then some text, and then hit
<ENTER>.
The word register is used below. A register is a hidden clipboard.
A useful tip is to enter :set ruler before doing anything. This shows you what line
and column you are on in the bottom right corner of the screen.
6.2 Syntax highlighting
Something all UNIX users are used to (and have come to expect) is syntax highlighting.
This basically means that a bash (explained later) script will be displayed with its
keywords, strings, and comments in different colours, instead of as plain monochrome text.
6.3 Editors
Although UNIX has had full graphics capability for a long time now, most administration
of low-level services still takes place inside text configuration files. Word processing
is also best accomplished with typesetting systems that require creation of ordinary
text files &This is in spite of all the hype regarding the WYSIWYG (what you see is what you get) word
processor. This document itself was typeset using LATEX and the Cooledit text editor. -.
Historically the standard text editor used to be ed. ed allows the user to see only
one line of text of a file at a time (primitive by today’s standards). Today ed is mostly
used in its streaming version, sed. ed has long since been superseded by vi.
The editor is the place you will probably spend most of your time, whether you
are doing word processing, creating web pages, programming or administrating. It is
your primary interactive application.
6.3.2 vi and vim
Today vi is considered the standard. It is the only editor that will be installed by
default on any UNIX system. vim is the GNU version that (as usual) improves upon
the original vi with a host of features. It is important to learn the basics of vi even
if your day-to-day editor is not going to be vi. This is because every administrator is
bound to one day have to edit a text file over some really slow network link and vi is
the best for this.
On the other hand, new users will probably find vi unintuitive and tedious, and
will spend a lot of time learning and remembering how to do all the things they need
to. I myself cringe at the thought of vi pundits recommending it to new U NIX users.
In defence of vi, it should be said that many people use it exclusively, and it is
probably the only editor that really can do absolutely everything. It is also one of the
few editors that has working versions and consistent behaviour across all UNIX and
non-UNIX systems. vim works on AmigaOS, AtariMiNT, BeOS, DOS, MacOS, OS/2,
RiscOS, VMS, and Windows (95/98/NT4/NT5/2000) as well as all UNIX variants.
6.3.3 Emacs
Emacs stands for Editor MACroS. It is the monster of all editors and can do almost
everything one could imagine a single software package being able to do.
Other editors to watch out for are joe, jed, nedit, pico, nano, and many others that
try to emulate the look and feel of well known DOS, Windows or AppleMac develop-
ment environments, or bring better interfaces using Gtk/Gnome or Qt/KDE. The list
gets longer each time I look. In short, don’t think that the text editors that your vendor
has chosen to put on your CD are the best or only free ones out there. The same goes
for other applications.
Chapter 7
Shell Scripting
7.1 Introduction
This chapter will introduce you to the concept of computer programming. So far, you
have entered commands one at a time. Computer programming is merely the idea of
getting a number of commands to be executed, that in combination perform some
unique, powerful function.
To see a number of commands get executed in sequence, create a file with a .sh
extension, into which you will enter your commands. The .sh extension is not strictly
necessary, but serves as a reminder that the file contains special text called a shell script.
From now on, the word script will be used to describe any sequence of commands
placed in a text file. Now do a,
¨ ¥
chmod 0755 myfile.sh
§ ¦
which makes the file executable. The script will accept the same sort of commands
that you have normally been typing at the prompt. Now enter a number of commands
that you would like to be executed; a few echo commands will do to start with.
Now exit out of your editor and type ./myfile.sh. This will execute &Cause the
computer to read and act on your list of commands, also called running the program.- the file. Note
that typing ./myfile.sh is no different from typing any other command at the shell
prompt. Your file myfile.sh has in fact become a new UNIX command all of its own.
Now note what the read command is doing. It creates a pigeon-hole called NM,
and then inserts text read from the keyboard into that pigeon hole. Thereafter, when-
ever the shell encounters NM, its contents are written out instead of the letters NM
(provided you write a $ in front of it). We say that NM is a variable because its contents
can vary.
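A sketch of that read mechanism, fed here from a here-document so that it runs non-interactively (the name Heidi is arbitrary):

```shell
# read places a line of input into the variable NM; $NM then expands
# to that text wherever it appears
read NM << EOF
Heidi
EOF
echo "Hello $NM"
```
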
You can use shell scripts like a calculator. Try,
¨ ¥
echo "I will work out X*Y"
echo "Enter X"
read X
echo "Enter Y"
5 read Y
echo "X*Y = $X*$Y = $[X*Y]"
§ ¦
The [ and ] mean that everything between must be evaluated &Substituted, worked-out, or
reduced to some simplified form.-as a numerical expression &Sequence of numbers with +, -, * etc.
between them. - . You can in fact do a calculation at any time by typing it at the prompt:
¨ ¥
echo $[3*6+2*8+9]
§ ¦
&Note that the Bash shell that you are using allows such [ ] notation. On some UNIX systems you will
have to use the expr command to get the same effect.-
The shell reads each line in succession from top to bottom: this is called program flow.
Now suppose you would like a command to be executed more than once — you would
like to alter the program flow so that the shell reads particular commands repeatedly.
The while command executes a sequence of commands many times. Here is an ex-
ample (-le stands for less than or equal):
¨ ¥
N=1
while test "$N" -le "10"
do
echo "Number $N"
N=$[N+1]
done
§ ¦
The N=1 creates a variable called N and places the number 1 into it. The while com-
mand executes all the commands between the do and the done repeatedly until the
“test” condition is no longer true (i.e. until N is greater than 10). The -le stands for
less-than-or-equal-to. Do a man test to see the other types of tests you can do on vari-
ables. Also be aware of how N is replaced with a new value that becomes 1 greater
with each repetition of the while loop.
You should note here that each line is a distinct command — the commands are
newline-separated. You can also have more than one command on a line by separating
them with a semicolon as follows:
¨ ¥
N=1 ; while test "$N" -le "10"; do echo "Number $N"; N=$[N+1] ; done
§ ¦
(Try counting down from 10 with -ge (greater-than-or-equal).) It is easy to see that
shell scripts are extremely powerful, because any kind of command can be executed
with conditions and loops.
The until statement is identical to while except that the reverse logic is ap-
plied. The same functionality can be achieved using -gt (greater-than):
¨ ¥
N=1 ; until test "$N" -gt "10"; do echo "Number $N"; N=$[N+1] ; done
§ ¦
The for command also allows execution of commands multiple times. It works like
this:
¨ ¥
for i in cows sheep chickens pigs
do
echo "$i is a farm animal"
done
echo -e "but\nGNUs are not farm animals"
§ ¦
The for command takes each string after the in, and executes the lines between do
and done with i substituted for that string. The strings can be anything (even numbers).
Now let us create a script that interprets its arguments. Create a new script called
backup-lots.sh, containing:
¨ ¥
#!/bin/sh
for i in 0 1 2 3 4 5 6 7 8 9 ; do
cp $1 $1.BAK-$i
done
§ ¦
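A self-contained sketch of saving and running that script; letter.txt is an arbitrary test file:

```shell
# Write the backup-lots.sh script from the text to disk, make it
# executable, and run it on a sample file
cat > backup-lots.sh <<'EOF'
#!/bin/sh
for i in 0 1 2 3 4 5 6 7 8 9 ; do
        cp $1 $1.BAK-$i
done
EOF
chmod 0755 backup-lots.sh
echo data > letter.txt
./backup-lots.sh letter.txt
ls letter.txt.BAK-*          # letter.txt.BAK-0 through letter.txt.BAK-9
```
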
A loop that requires premature termination can include the break statement within it:
¨ ¥
#!/bin/sh
for i in 0 1 2 3 4 5 6 7 8 9 ; do
NEW_FILE=$1.BAK-$i
if test -e $NEW_FILE ; then
echo "backup-lots.sh: **error** $NEW_FILE"
echo " already exists - exiting"
break
else
cp $1 $NEW_FILE
fi
done
§ ¦
which causes program execution to continue from after the done. If two loops are
nested within each other, then the command break 2 will cause program execution
to break out of both loops; and so on for values above 2.
The continue statement is also useful for terminating the current iteration of
the loop. This means that if a continue statement is encountered, execution will
immediately continue from the top of the loop, thus ignoring the remainder of the
body of the loop:
¨ ¥
#!/bin/sh
for i in 0 1 2 3 4 5 6 7 8 9 ; do
NEW_FILE=$1.BAK-$i
if test -e $NEW_FILE ; then
echo "backup-lots.sh: **warning** $NEW_FILE"
echo " already exists - skipping"
continue
fi
cp $1 $NEW_FILE
done
§ ¦
Note that both break and continue work inside for, while and until loops.
We know that the shell can expand file names when given wildcards. For instance, we
can type ls *.txt to list all files ending with .txt. This applies equally well in any
situation: for instance:
¨ ¥
#!/bin/sh
for i in *.txt ; do
echo "found a file:" $i
done
§ ¦
The *.txt is expanded to all matching files. These files are searched for in the current
directory. If you include an absolute path then the shell will search in that directory:
¨ ¥
#!/bin/sh
for i in /usr/doc/*/*.txt ; do
echo "found a file:" $i
done
§ ¦
which demonstrates the shell’s ability to search for matching files and expand to an
absolute path.
The case statement can make a potentially complicated program very short. It is best
explained with an example.
¨ ¥
#!/bin/sh
case $1 in
--test|-t)
echo "you used the --test option"
exit 0
;;
--help|-h)
echo "Usage:"
echo " myprog.sh [--test|--help|--version]"
exit 0
;;
--version|-v)
echo "myprog.sh version 0.0.1"
exit 0
;;
-*)
echo "No such option $1"
echo "Usage:"
echo " myprog.sh [--test|--help|--version]"
exit 1
;;
esac
§ ¦
Above you can see that we are trying to process the first argument to a program. It can
be one of several options, so using if statements would make for a long program. The
case statement allows us to specify several possible statement blocks depending on
the value of a variable. Note how each statement block is separated by ;;. The strings
before the ) are glob expression matches. The first successful match causes that block
to be executed. The | symbol allows us to enter several possible glob expressions.
So far our programs execute mostly from top to bottom. Often, code needs to be re-
peated, but it is considered bad programming practice to repeat groups of statements
that have the same functionality. Function definitions provide a way to group state-
ment blocks into one. A function groups a list of commands and assigns it a name. For
example:
¨ ¥
#!/bin/sh

function usage ()
{
echo "Usage:"
echo " myprog.sh [--test|--help|--version]"
}

case $1 in
--test|-t)
echo "you used the --test option"
exit 0
;;
--help|-h)
usage
;;
--version|-v)
echo "myprog.sh version 0.0.2"
exit 0
;;
-*)
echo "Error: no such option $1"
usage
exit 1
;;
esac
§ ¦
Wherever the usage keyword appears, it is effectively replaced by the two lines
inside the { and }. There are obvious advantages to this approach: if you would like
to change the program usage description, you only need to change it in one place in
the code. Good programs use functions so liberally that they never have more than 50
lines of program code in a row.
Most programs we have seen can take many command-line arguments, sometimes in
any order. Here is how we can make our own shell scripts with this functionality. The
command-line arguments can be reached with $1, $2, etc. The script,
¨ ¥
#!/bin/sh
echo "The first argument is: $1, second argument is: $2, third argument is: $3"
§ ¦
when run with the arguments dogs cats birds, prints,
¨ ¥
The first argument is: dogs, second argument is: cats, third argument is: birds
§ ¦
Now we need to loop through each argument and decide what to do with it. A script
like
¨ ¥
for i in $1 $2 $3 $4 ; do
<statements>
done
§ ¦
doesn’t give us much flexibility. The shift keyword is meant to make things easier.
It shifts up all the arguments by one place so that $1 gets the value of $2, $2 gets the
value of $3 and so on. (The != below tests that "$1" is not equal to "", i.e. that we
have not yet moved past the last argument.) Try:
¨ ¥
while test "$1" != "" ; do
echo $1
shift
done
§ ¦
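The same loop can be tried in a standalone sketch by filling the positional parameters explicitly with set -- (the three words are arbitrary):

```shell
# set -- fills $1, $2, $3 as if they were command-line arguments;
# shift then consumes them one at a time
set -- red green blue
while test "$1" != "" ; do
        echo $1
        shift
done
```
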
and run the program with lots of arguments. Now we can put any sort of condition
statements within the loop to process the arguments in turn:
¨ ¥
#!/bin/sh
function usage ()
{
echo "Usage:"
echo " myprog.sh [--test|--help|--version] [--echo <text>]"
}
Whereas $1, $2, $3 etc. expand to the individual arguments passed to the program,
$@ expands to all arguments. This is useful for passing all remaining arguments onto
a second command. For instance,
¨ ¥
if test "$1" = "--special" ; then
shift
myprog2.sh "$@"
fi
§ ¦
$0 means the name of the program itself and not any command line argument. It
is the command used to invoke the current program. In the above cases it will be
./myprog.sh. Note that $0 is not affected by shift.
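A small sketch showing both behaviours; the script name showargs.sh is arbitrary:

```shell
# $1 changes after a shift, but $0 (the program name) does not
cat > showargs.sh <<'EOF'
#!/bin/sh
echo "program: $0 first: $1"
shift
echo "program: $0 first: $1"
EOF
chmod 0755 showargs.sh
./showargs.sh one two
```
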
Single forward quotes ’ protect the enclosed text from the shell. In other words,
you can place any odd characters inside forward quotes and the shell will treat them
literally and reproduce your text exactly. For instance you may want to echo an actual
$ to the screen to produce an output like costs $1000. You can use echo ’costs
$1000’ instead of echo "costs $1000".
Double quotes " have the opposite sense to single quotes. They allow all shell
interpretations to take place inside them. The reason they are used at all is only to
group text containing whitespace into a single word, because the shell will usually
break up text along whitespace boundaries. Try,
¨ ¥
for i in "henry john mary sue" ; do
echo "$i is a person"
done
§ ¦
compared to,
¨ ¥
for i in henry john mary sue ; do
echo $i is a person
done
§ ¦
The backward quote ‘ has a special meaning to the shell. When a command is inside
backward quotes it means that the command should be run and its output substituted
in place of the backquotes. Take for example the cat command. Create a small file
called to_be_catted with only the word daisy inside it. Create a shell script
¨ ¥
X=‘cat to_be_catted‘
echo $X
§ ¦
The value of X is set to the output of the cat command, which in this case is the word
daisy. This is a powerful tool. Consider the expr command:
¨ ¥
X=‘expr 100 + 50 ’*’ 3‘
echo $X
§ ¦
Hence we can use expr and backquotes to do mathematics inside our shell script.
Here is a function to calculate factorials. Note how we enclose the * in forward quotes.
This is to prevent the shell from thinking that we want it to expand the * into matching
file-names:
¨ ¥
function factorial ()
{
N=$1
A=1
while test $N -gt 0 ; do
A=‘expr $A ’*’ $N‘
N=‘expr $N - 1‘
done
echo $A
}
§ ¦
We can see that the square braces used further above can actually suffice for most of
the cases where we would like to use expr. (However, the $[] notation is an extension
of the GNU shells, and is not a standard feature on all variants of UNIX.) We can
now run factorial 20 and see the output. To assign the output to a variable, use
X=‘factorial 20‘.
Note that another notation for the backward quote is $(command) instead of
‘command‘. Here I will always use the older backward quote style.
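As a sketch, here is the same factorial function written with the $( ) notation (plain POSIX sh; the behaviour is identical to the backquote version):

```shell
factorial ()
{
    N=$1
    A=1
    while test $N -gt 0 ; do
        A=$(expr $A '*' $N)    # multiply the accumulator by N
        N=$(expr $N - 1)       # count N down to zero
    done
    echo $A
}

factorial 5    # prints 120
```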
Chapter 8
Streams and sed — the stream editor
8.1 Introduction
&The ability to use pipes is one of the powers of UNIX. This is one of the principal deficiencies of some
non-UNIX systems. Pipes used on the command line as explained in this section are a neat trick, but pipes
used inside C programs enormously simplify program interaction. Without pipes, huge amounts of complex
and buggy code usually needs to be written to perform simple tasks. It is hoped that these ideas will give
the reader an idea of why UNIX is such a ubiquitous and enduring standard.-The commands grep,
echo, df and so on print some output to the screen. In fact, what is happening on a
lower level is that they are printing characters one by one into a theoretical data stream
(also called a pipe) called the stdout pipe. The shell itself performs the action of reading
those characters one by one and displaying them on the screen. The word pipe itself
means exactly that: a program places data in one end of a funnel while another
program reads that data from the other end. The reason for pipes is to allow two
separate programs to perform simple communications with each other. In this case,
the program is merely communicating with the shell in order to display some output.
The same is true with the cat command explained previously. This command
run with no arguments reads from the stdin pipe. By default this is the keyboard. One
further pipe is the stderr pipe, to which a program writes its error messages. It is not
possible to tell whether a program's message came from stdout or from stderr, because
usually both are directed to the screen. Good programs, however, always write to the
appropriate pipe so that output can be specially separated for diagnostic purposes if
need be.
8.2 Tutorial
Create a text file with lots of lines that contain the word GNU and one line that contains
the word GNU as well as the word Linux. Then do grep GNU myfile.txt. The result
is printed to stdout as usual. Now try grep GNU myfile.txt > gnu_lines.txt.
What is happening here is that the output of the grep command is redirected into
a file. The > gnu_lines.txt tells the shell to create a new file gnu_lines.txt and
fill it with any output from stdout, instead of displaying the output as it usually does.
If the file already exists, it will be truncated &Shortened to zero length -.
Now suppose you want to append further output to this file. Using >> instead
of > will not truncate the file but append any output to it. Try this:
¨ ¥
echo "morestuff" >> gnu_lines.txt
§ ¦
The real power of pipes is when one program can read from the output of another pro-
gram. Consider the grep command which reads from stdin when given no arguments:
run grep with one argument on the command line:
¨ ¥
# grep GNU
A line without that word in it
Another line without that word in it
A line with the word GNU in it
A line with the word GNU in it
I have the idea now
ˆC
#
§ ¦
grep’s default is to read from stdin when no files are given. As you can see, it is
doing its usual work of printing out lines that have the word GNU in them. Hence lines
containing GNU are printed twice: once as you type them in, and again when grep reads
them and decides that they contain GNU.
Now try grep GNU myfile.txt | grep Linux. The first grep outputs all
lines with the word GNU in them to stdout. The | tells the shell that everything on
stdout is to be fed as stdin (as we typed above) into the next command, which is also
a grep command. The second grep command scans that data for lines with the word
Linux in them.
grep is often used this way as a filter &Something that screens data.-, and can be used multiple
times, e.g. grep L myfile.txt | grep i | grep n | grep u | grep x.
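A self-contained sketch of such a chain, with the sample lines supplied by printf rather than a file: each grep narrows the stream further, so only lines containing both words survive.

```shell
printf 'GNU tools\nGNU Linux rocks\nplain line\n' | \
    grep GNU | grep Linux
# prints: GNU Linux rocks
```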
The < character redirects the contents of a file in place of stdin. In other words,
the contents of a file replaces what would normally come from a keyboard. Try:
¨ ¥
grep GNU < gnu_lines.txt
§ ¦
8.4 A complex piping example

&A backslash \ as the last character on a line indicates that the line is to be continued. You can leave
out the \, but then you must leave out the newline as well.- The file english.hash contains
the UNIX dictionary normally used for spell checking. With a bit of filtering, you can
create a dictionary that will make solving crossword puzzles a breeze. First we use
the command strings, explained previously, to extract readable bits of text. Here we
are using its alternate mode of operation, where it reads from stdin when no files are
specified on its command line. The command tr (abbreviated from translate; see the
tr man page) then converts upper case to lower case. The grep command then filters
out lines that do not start with a letter. Finally, the sort command sorts the words in
alphabetical order. The -u option stands for unique, and specifies that there should
be no duplicate lines of text. Now try less mydict.
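The pipeline described above is not reproduced here, but it might look something like the following sketch, demonstrated on a small inline sample instead of english.hash (the strings stage is omitted because the sample is already plain text):

```shell
# Lower-case everything, keep only lines starting with a letter,
# then sort and remove duplicates:
printf 'Zebra\napple\nApple\n123go\nbanana\n' | \
    tr 'A-Z' 'a-z' | \
    grep '^[a-z]' | \
    sort -u > mydict
cat mydict
# prints:
# apple
# banana
# zebra
```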
8.5 Redirecting streams with >&

Try the command ls nofile.txt > A. ls should give an error message if the file
doesn’t exist. The error message is, however, displayed, and not written into the file A.
This is because ls has written its error message to stderr while > has only redirected
stdout. The way to get both stdout and stderr to go to the same file is to use
a redirection operator. As far as the shell is concerned, stdout is called 1 and stderr is
called 2, and commands can be appended with a redirection like 2>&1 to dictate that
stderr is to be mixed into the output of stdout. The actual words stderr and stdout are
only used in C programming. Try the following:
¨ ¥
touch existing_file
rm -f non-existing_file
ls existing_file non-existing_file
§ ¦
ls will output two lines: a line containing a listing for the file existing file and
a line containing an error message to explain that the file non-existing file does
not exist. The error message would have been written to stderr or file descriptor number
2, and the remaining line would have been written to stdout or file descriptor number
1. Next we try
¨ ¥
ls existing_file non-existing_file 2>A
cat A
§ ¦
Now A contains the error message, while the remaining output came to the screen.
Now try,
¨ ¥
ls existing_file non-existing_file 1>A
cat A
§ ¦
The notation 1>A is the same as >A because the shell assumes that you are referring to
file descriptor 1 when you don’t specify any. Now A contains the stdout output, while
the error message has been redirected to the screen. Now try,
¨ ¥
ls existing_file non-existing_file 1>A 2>&1
cat A
§ ¦
Now A contains both the error message and the normal output. The >& is called a
redirection operator. x>&y tells the shell to write pipe x into wherever pipe y currently
points. Redirections are processed from left to right on the command line. Hence the
above command means to redirect stdout to the file A, and then to mix stderr into
stdout, which by then points at A.
¨ ¥
ls existing_file non-existing_file 2>A 1>&2
cat A
§ ¦
We notice that this has the same effect, except that here we are doing the reverse: redi-
recting stderr into the file A, and then redirecting stdout into stderr, which by then
points at A. To see what happens if we redirect in the other order, we can try,
¨ ¥
ls existing_file non-existing_file 2>&1 1>A
cat A
§ ¦
which first mixes stderr into stdout (at that point, the screen) and only then redirects
stdout to the file A. This will therefore not mix stderr and stdout inside A: the error
message still goes to the screen, and only the normal output lands in A.
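The order dependence can be verified with a small experiment. The function both here is just a stand-in for any command that writes to both pipes:

```shell
# A stand-in command that writes one line to each pipe:
both ()
{
    echo out          # written to stdout (descriptor 1)
    echo err 1>&2     # written to stderr (descriptor 2)
}

both 1>A 2>&1   # stdout to A first, then stderr follows it
cat A           # shows both "out" and "err"

both 2>&1 1>A   # stderr to the screen first, then stdout to A
cat A           # shows only "out"; "err" went to the screen
```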
8.6 Using sed to edit streams
ed used to be the standard text editor for UNIX. It is cryptic to use, but is compact
and programmable. sed stands for stream editor, and is the only incarnation of ed
that is commonly used today. sed allows editing of files non-interactively. In the
same way that grep can search for words and filter lines of text, sed can do search-
and-replace operations and insert and delete lines in text files. sed is one of those
programs with no man page to speak of. Do info sed to see sed’s comprehensive info
pages with examples. The most common use of sed is to replace words in a stream
with alternative words. sed reads from stdin and writes to stdout. Like grep, it is
line-buffered, which means that it reads one line in at a time and then writes that line
out again after performing whatever editing operations were specified. Replacements
are typically done with:
¨ ¥
cat <file> | sed -e ’s/<search-regexp>/<replace-text>/<option>’ \
> <resultfile>
§ ¦
where search-regexp is a regular expression, replace-text is the text you would like to
replace each found occurrence with, and option is either nothing or g, which means to
replace every occurrence on the same line (by default sed replaces only the first occur-
rence of the regular expression on each line). (There are other options; see the sed
info page.) For
demonstration, type
¨ ¥
sed -e ’s/e/E/g’
§ ¦
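and then type a few lines of text: each line is echoed back with every e replaced by E; press ˆD to quit. The same effect can be seen non-interactively by piping a line in:

```shell
echo 'these are letters' | sed -e 's/e/E/g'
# prints: thEsE arE lEttErs
```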
8.7 Regular expression sub-expressions

This section explains how to do the apparently complex task of moving text around
within lines. Consider, for example, the output of ls: say you want to automati-
cally strip out only the size column. sed can do this sort of editing using the special
\( \) notation to group parts of the regular expression together. Consider the follow-
ing example:
¨ ¥
sed -e ’s/\(\<[ˆ ]*\>\)\([ ]*\)\(\<[ˆ ]*\>\)/\3\2\1/g’
§ ¦
Here sed is searching for the expression \<.*\>[ ]*\<.*\>. From the chapter on
regular expressions, we can see that it matches a whole word, an arbitrary amount of
whitespace, and then another whole word. The \( \) groups these three so that they
can be referred to in replace-text. Each part of the regular expression inside \( \) is
called a sub-expression of the regular expression. Each sub-expression is numbered,
namely \1, \2, etc. Hence \1 in replace-text is the first \<[ˆ ]*\>, \2 is [ ]*, and
finally, \3 is the second \<[ˆ ]*\>. Now test to see what happens when you run this:
¨ ¥
sed -e ’s/\(\<[ˆ ]*\>\)\([ ]*\)\(\<[ˆ ]*\>\)/\3\2\1/g’
GNU Linux is cool
Linux GNU cool is
§ ¦
To return to our ls example (note that this is just an example; to count file sizes you
should rather use the du command), suppose we would like to sum the byte sizes of
all the files in a directory:
¨ ¥
expr 0 ‘ls -l | grep ’ˆ-’ | \
sed ’s/ˆ\([ˆ ]*[ ]*\)\\{4,4\\}\([0-9]*\).*$/ + \2/’‘
§ ¦
We know that ls -l output lines start with - for ordinary files. So we use grep to
strip lines not starting with -. If we do an ls -l, we see the output is divided into
four columns of stuff we are not interested in, and then a number indicating the size
of the file. A column (or field) can be described by the regular expression [ˆ ]*[ ]*,
i.e. a length of text with no whitespace, followed by a length of whitespace. There are
four of these, so we bracket it with \( \), and then use the \{ \} notation to indicate
that we want exactly 4. After that comes our number [0-9]*, and then any trailing
characters, which we are not interested in, .*$. Notice here that we have neglected
to use the \< \> notation to indicate whole words. This is because sed tries to match
the maximum number of characters legally allowed, which in the situation we have
here has exactly the same effect.
If you haven’t yet figured it out, we are trying to get that column of bytes sizes
into the format like,
¨ ¥
+ 438
+ 1525
+ 76
+ 92146
§ ¦
. . . so that expr can understand it. Hence we replace each line with sub-expression
\2 and a leading + sign. Backquotes give the output of this to expr, which sums them
studiously, ignoring any newline characters as though the summation were typed in on
a single line. There is one minor problem here: the first line contains a + with nothing
before it, which would cause expr to complain. To get around this, we simply begin
the expression with a 0, as in the command above.
8.8 Inserting and deleting lines
sed can perform a few operations that make it easy to write scripts that edit configu-
ration files for you. For instance,
¨ ¥
sed -e ’7a\
an extra line.\
another one.\
one more.’
§ ¦
appends the three given lines after line 7 of the input. Ranges can also be deleted: a
command of the form sed -e ’/Dear Henry/,/Love Jane/d’ deletes all the lines
starting from a line matching the regular expression Dear Henry up until a line
matching Love Jane (or to the end of the file if no such line exists).
The same addressing applies just as well to insertions:
¨ ¥
sed -e ’/Love Jane/i\
Love Carol\
Love Beth’
§ ¦
and finally, the negation symbol, !, is used to match all lines not specified, for instance
¨ ¥
sed -e ’7,11!D’
§ ¦
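These operations can be sketched non-interactively on a small numbered sample (here with the plain d delete command, and the input fed in with printf):

```shell
# Keep only lines 2 through 3 by deleting every line NOT in the range:
printf 'one\ntwo\nthree\nfour\nfive\n' | sed -e '2,3!d'
# prints:
# two
# three

# Append a line after line 1 with the a command:
printf 'one\ntwo\n' | sed -e '1a\
an extra line.'
# prints:
# one
# an extra line.
# two
```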
Chapter 9
Processes and environment variables
9.1 Introduction
On UNIX, when you run a program (like any of the shell commands you have been
using), the actual computer instructions are read from a file on disk in one of the
bin/ directories and placed in RAM. The program then gets executed in memory and
becomes a process. A process is some command/program/shell script that is being run
(or executed) in memory. When the process is finished running, it is removed from
memory. There are usually about 50 processes running simultaneously
on a system with one person logged in. The CPU hops between each of them to give a
share of its execution time &Time given to carry out the instructions of a particular program. Note this
is in contrast to Windows or DOS where the program itself has to allow the others a share of the CPU: under
UNIX, the process has no say in the matter.- Each process is given a process number called
the PID (Process ID). Besides the memory actually occupied by the process, the process
itself seizes additional memory for its operations.
In the same way that a file is owned by a particular user and group, a process also
has an owner, usually the person who ran the program. Whenever a process tries
to access a file, its ownership is compared with that of the file to decide whether the
access is permissible. Because all devices are files, the only way a process can do
anything is through a file, and hence file permission restrictions are the only kind of
restrictions there need ever be on UNIX. This is how UNIX security works.
The center of this operation is called the UNIX kernel. The kernel is what actually does
the hardware access, execution, allocation of Process IDs, sharing of CPU time and
ownership management.
9.2 Tutorial
Login on a terminal and type the command ps. You should get some output like:
¨ ¥
PID TTY STAT TIME COMMAND
5995 2 S 0:00 /bin/login -- myname
5999 2 S 0:00 -bash
6030 2 R 0:00 ps
§ ¦
ps with no options shows three processes to be running. These are the only three pro-
cesses visible to you as a user, although there are other system processes not belonging
to you. The first process was the program that logged you in by displaying the login
prompt and requesting a password. It then ran a second process called bash, the
Bourne Again Shell &The Bourne shell was the original UNIX shell-, where you have been typing
commands. Finally you ran ps; hence it must have found itself when it checked for
what processes were running, but it then exited immediately afterward.
The shell has many facilities for controlling and executing processes — this is called
job control. Create a small script called proc.sh:
¨ ¥
#!/bin/sh
echo "proc.sh: is running"
sleep 1000
§ ¦
Make the script executable with chmod 0755 proc.sh, and then run it with
./proc.sh. The shell will block, waiting for the process to exit. Now hit ˆZ &ˆ means
to hold down the Ctrl key and press the Z key.-. This stops the process. Now do a ps again. You
will see your script listed. However, it is not presently running, because it is stopped.
Type bg (standing for background). The script will now be un-stopped and will run
in the background. You can run other processes in the meantime. Type fg and the
script returns to the foreground. You can then type ˆC to interrupt the process.
Now modify the script so that it beeps every two seconds:
¨ ¥
#!/bin/sh
echo "proc.sh: is running"
while true ; do
echo -e ’\a’
sleep 2
done
§ ¦
Now perform the ˆZ, bg, fg and ˆC operations from before. To put a process immedi-
ately into the background, you can use:
¨ ¥
./proc.sh &
§ ¦
The JOB CONTROL section of the bash man page (bash(1)) looks like this:
Job control refers to the ability to selectively stop (suspend) the execution of
processes and continue (resume) their execution at a later point. A user
typically employs this facility via an interactive interface supplied jointly
by the system’s terminal driver and bash.
The shell associates a job with each pipeline &What does this mean? It means that
each time you execute something in the background, it gets its own unique number, called
the job number.-. It keeps a table of currently executing jobs, which may be
listed with the jobs command. When bash starts a job asynchronously (in
the background), it prints a line that looks like:
[1] 25647
indicating that this job is job number 1 and that the process ID of the last
process in the pipeline associated with this job is 25647. All of the processes
in a single pipeline are members of the same job. Bash uses the job
abstraction as the basis for job control.
If the operating system on which bash is running supports job control, bash
allows you to use it. Typing the suspend character (typically ˆZ, Control-Z)
while a process is running causes that process to be stopped and returns
you to bash. Typing the delayed suspend character (typically ˆY, Control-Y)
causes the process to be stopped when it attempts to read input from the
terminal, and control to be returned to bash. You may then manipulate the
state of this job, using the bg command to continue it in the background,
the fg command to continue it in the foreground, or the kill command to
kill it. A ˆZ takes effect immediately, and has the additional side effect of
causing pending output and typeahead to be discarded.
There are a number of ways to refer to a job in the shell. The character
% introduces a job name. Job number n may be referred to as %n. A job
may also be referred to using a prefix of the name used to start it, or using
a substring that appears in its command line. For example, %ce refers to
a stopped ce job. If a prefix matches more than one job, bash reports an
error. Using %?ce, on the other hand, refers to any job containing the string
ce in its command line. If the substring matches more than one job, bash
reports an error. The symbols %% and %+ refer to the shell’s notion of the
current job, which is the last job stopped while it was in the foreground. The
previous job may be referenced using %-. In output pertaining to jobs (e.g.,
the output of the jobs command), the current job is always flagged with a
+, and the previous job with a -.
The shell learns immediately whenever a job changes state. Normally, bash
waits until it is about to print a prompt before reporting changes in a job’s
status so as to not interrupt any other output. If the -b option to the set
builtin command is set, bash reports such changes immediately. (See also
the description of notify variable under Shell Variables above.)
If you attempt to exit bash while jobs are stopped, the shell prints a
message warning you. You may then use the jobs command to inspect their
status. If you do this, or try to exit again immediately, you are not warned
again, and the stopped jobs are terminated.
The kill command actually sends a signal to the process, causing it to execute some
function. In some cases, the developers will not have bothered to handle a particular
signal, and some default behaviour happens instead.
To send a signal to a process you can name the signal on the command-line or
use its numerical equivalent:
¨ ¥
kill -SIGTERM 12345
§ ¦
or
¨ ¥
kill -15 12345
§ ¦
SIGTERM, the termination signal, is the signal that kill sends by default.
or
¨ ¥
kill -9 12345
§ ¦
This sends SIGKILL, which forcibly destroys the process. A related command,
killall, sends a signal to every process of a given name, e.g. killall -9
<process_name>. This is useful when you are sure that there is only one of a process
running, either because there is no one else logged in on the system, or because you
are not logged in as superuser.
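As a sketch, a signal can be sent to a background process whose PID the shell records in $!; on most shells, the reported exit status of a process killed by a signal is 128 plus the signal number (so 143 for SIGTERM):

```shell
sleep 60 &        # a long-running background process
PID=$!            # the shell records its PID in $!

kill -TERM $PID   # same as kill -15 $PID

wait $PID         # collect its exit status
echo $?           # prints 143 (128 + 15) on most shells
```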
SIGKILL Kill Signal. This is one of the signals that can never be caught by a process.
If a process gets this signal it has to quit immediately and will not perform any
clean-up operations (like closing files or removing temporary files). You can send
a process a SIGKILL signal if there is no other means of destroying it.
SIGPIPE Pipe died. A program was writing to a pipe, the other end of which is no
longer available.
All processes are allocated execution time by the kernel. If all processes were allocated
the same amount of time, performance would obviously get worse as the number of
processes increased. The kernel uses heuristics &Sets of rules.- to guess how much time
each process should be allocated. The kernel tries to be fair: when two users are
competing for CPU usage, they should both get the same share.
Most processes spend their time waiting for either a key press, or some network input,
or some device to send data, or some time to elapse. They hence do not consume CPU.
On the other hand, when more than one process runs flat out, it can be difficult for the
kernel to decide whether one should be given greater priority than another. What if a
process is doing some more important operation than another process? How does the
kernel tell? The answer is the UNIX feature of scheduling priority, or niceness. Scheduling
priority ranges from -20 (highest priority) to +20 (lowest). You can set a process’s
niceness with the renice
command.
¨ ¥
renice <priority> <pid>
renice <priority> -u <user>
renice <priority> -g <group>
§ ¦
A typical example is the SETI &SETI stands for Search for Extraterrestrial Intelligence. SETI is an
initiative funded by various obscure sources to scan the skies for radio signals from other civilisations. The
data that SETI gathers has to be intensively processed. SETI distributes part of that data to anyone who
wants to run a seti program in the background. This puts the idle time of millions of machines to “good”
use. There is even a SETI screen-saver that has become quite popular. Unfortunately for the colleague in my
office, he runs seti at -19 instead of +19 scheduling priority, so nothing on his machine works right. On
the other hand, I have inside information that the millions of other civilisations in this galaxy and others are
probably not using radio signals to communicate at all :-)- program. Set its priority to +19 with:
¨ ¥
renice +19 <pid>
§ ¦
Also useful are the -u and -g options, which set the priority of all the processes that a
user or group owns.
Further, there is the nice command, which starts a program under a defined niceness
relative to the current nice value of the present user. Note that nice takes a command
to run, not a PID. For example:
¨ ¥
nice -<adjustment> <command>
nice -n <adjustment> <command>
§ ¦
Both forms run <command> with its niceness increased by <adjustment>.
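This can be checked directly, assuming the GNU coreutils nice, which prints the current niceness when given no command to run:

```shell
nice              # prints the current niceness, usually 0
nice -n 10 nice   # runs nice itself at +10, so it reports 10
```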
Finally, there is the snice command, which can set, but also display, the current nice-
ness. This command doesn’t seem to work.
¨ ¥
snice -v <pid>
§ ¦
The top command sorts all processes by their CPU and memory consumption and dis-
plays the top twenty or so in a table. Use top whenever you want to see what’s
hogging your system. top -q -d 2 is useful for scheduling the top command itself
at a high priority, so that it is sure to refresh its listing without lag. top -n 1 -b >
top.txt is useful for listing all processes, and top -n 1 -b -p <pid> prints info on
one process.
top has some useful interactive responses to key presses:
f Shows a list of displayed fields that you can alter interactively. By default the only
fields shown are USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME
COMMAND which is usually what you are most interested in. (The field meanings
are given below.)
r renices a process.
k kills a process.
The top man page describes the field meanings. Some of these are confusing and
assume knowledge of the internals of C programs. The main question people ask is:
How much memory is a process using? This is given by the RSS field, which stands
for Resident Set Size. RSS means the amount of RAM that a process consumes alone.
The following command gives the total of the SIZE field for all processes running on
my system (which had 65536 kilobytes of RAM at the time); analogous commands
give the RSS and SHARE totals.
¨ ¥
echo ‘echo ’0 ’ ; top -q -n 1 -b | sed -e ’1,/PID *USER *PRI/D’ | \
awk ’{print "+" $5}’ | sed -e ’s/M/\\*1024/’‘ | bc
68016
§ ¦
The SIZE represents the total memory usage of a process. RSS is the same, but
excludes memory not needing actual RAM (this would be memory swapped to the
swap partition). SHARE is the amount shared between processes.
uptime This line displays the time the system has been up, and the three
load averages for the system. The load averages are the average
number of processes ready to run during the last 1, 5 and 15 minutes. This
line is just like the output of uptime(1). The uptime display may be
toggled by the interactive l command.
processes The total number of processes running at the time of the last
update. This is also broken down into the number of tasks which are
running, sleeping, stopped, or undead. The processes and states
display may be toggled by the t interactive command.
CPU states Shows the percentage of CPU time in user mode, system
mode, niced tasks, and idle. (Niced tasks are only those whose nice
value is negative.) Time spent in niced tasks will also be counted
in system and user time, so the total will be more than 100%. The
processes and states display may be toggled by the t interactive
command.
Mem Statistics on memory usage, including total available memory, free
memory, used memory, shared memory, and memory used for buffers.
The display of memory information may be toggled by the m
interactive command.
Swap Statistics on swap space, including total swap space, available swap
space, and used swap space. This and Mem are just like the output of
free(1).
PID The process ID of each task.
PPID The parent process ID of each task.
UID The user ID of the task’s owner.
USER The user name of the task’s owner.
PRI The priority of the task.
NI The nice value of the task. Negative nice values are lower priority.
(Actually higher; the preceding is quoted directly from the man page
and seems to be a typo.)
SIZE The size of the task’s code plus data plus stack space, in kilobytes, is
shown here.
TSIZE The code size of the task. This gives strange values for kernel
processes and is broken for ELF processes.
DSIZE Data + Stack size. This is broken for ELF processes.
TRS Text resident size.
SWAP Size of the swapped out part of the task.
D Size of pages marked dirty.
LIB Size of library pages in use. This does not work for ELF processes.
RSS The total amount of physical memory used by the task, in kilobytes,
is shown here. For ELF processes used library pages are counted here,
for a.out processes not.
SHARE The amount of shared memory used by the task is shown in this
column.
STAT The state of the task is shown here. The state is either S for sleep-
ing, D for uninterruptible sleep, R for running, Z for zombies, or T for
stopped or traced. These states are modified by a trailing < for a pro-
cess with negative nice value, N for a process with positive nice value,
W for a swapped out process (this does not work correctly for kernel
processes).
WCHAN Depending on the availability of either /boot/psdatabase or the ker-
nel link map /boot/System.map, this shows the address or the name
of the kernel function the task is currently sleeping in.
TIME Total CPU time the task has used since it started. If cumulative mode
is on, this also includes the CPU time used by the process’s children
which have died. You can set cumulative mode with the S command
line option or toggle it with the interactive command S. The header
line will then be changed to CTIME.
%CPU The task’s share of the CPU time since the last screen update,
expressed as a percentage of total CPU time per processor.
%MEM The task’s share of the physical memory.
COMMAND The task’s command name, which will be truncated if it is too
long to be displayed on one line. Tasks in memory will have a full
command line, but swapped-out tasks will only have the name of the
program in parentheses (for example, ”(getty)”).
Each process that runs does so with the knowledge of several var=value text pairs. All
this means is that a process can look up the value of some variable that it may have
inherited from its parent process. The complete list of these text pairs is called the
environment of the process, and each var is called an environment variable. Each process
has its own environment, which is copied from the parent process’s environment.
After you have logged in and have a shell prompt, the process you are using
(the shell itself) is just like any other process with an environment with environment
variables. To get a complete list of these variables, just type:
¨ ¥
set
§ ¦
This is useful to find the value of an environment variable whose name you are unsure
of:
¨ ¥
set | grep <regexp>
§ ¦
Try set | grep PATH to see the PATH environment variable discussed previously.
The purpose of an environment is just to have an alternative way of passing
parameters to a program (in addition to command-line arguments). The difference
is that an environment is inherited from one process to the next: i.e. a shell might
have certain variables set (like the PATH) and may run a file manager, which may in
turn run a word processor. The word processor inherited its environment from the
file manager, which inherited its environment from the shell.
Try
¨ ¥
X="Hi there"
echo $X
§ ¦
Now type bash. You have now run a new process which is a child of the process you were just in. Type
¨ ¥
echo $X
§ ¦
You will see that X is not set. This is because the variable was not exported as an
environment variable, and hence was not inherited. Now type
¨ ¥
exit
§ ¦
You will return to the parent shell, which still knows about X. For a child process to see the variable, X must first be exported with export X; if you then run bash again and type echo $X, you will see that the new bash now knows about X.
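The whole round trip can be sketched non-interactively, using sh -c to stand in for the child shell:

```shell
X="Hi there"

# Not exported: the child shell does not see X.
sh -c 'echo "child sees: $X"'
# prints: child sees:

# Exported: the child inherits X with its value.
export X
sh -c 'echo "child sees: $X"'
# prints: child sees: Hi there
```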
Above we are setting an arbitrary variable for our own use. bash (and many
other programs) automatically set many of their own environment variables. The bash
man page lists these (when it talks about unsetting a variable, it means using the com-
mand unset <variable>). You may not understand some of these at the moment,
but they are included here as a complete reference for later:
Shell Variables
The following variables are set by the shell:
There are also many variables that bash uses which may be set by the user. These
are:
The following variables are used by the shell. In some cases, bash assigns
a default value to a variable; these cases are noted below.
IFS The Internal Field Separator that is used for word splitting after
expansion and to split lines into words with the read builtin command. The
default value is “<space><tab><newline>”.
PATH The search path for commands. It is a colon-separated list of
directories in which the shell looks for commands (see COMMAND
EXECUTION below). The default path is system-dependent, and
is set by the administrator who installs bash. A common value is
“/usr/gnu/bin:/usr/local/bin:/usr/ucb:/bin:/usr/bin:.”.
HOME The home directory of the current user; the default argument for
the cd builtin command.
CDPATH The search path for the cd command. This is a colon-separated
list of directories in which the shell looks for destination directories
specified by the cd command. A sample value is “.:~:/usr”.
ENV If this parameter is set when bash is executing a shell script, its
value is interpreted as a filename containing commands to initialize
the shell, as in .bashrc. The value of ENV is subjected to parameter ex-
pansion, command substitution, and arithmetic expansion before be-
ing interpreted as a pathname. PATH is not used to search for the
resultant pathname.
MAIL If this parameter is set to a filename and the MAILPATH variable
is not set, bash informs the user of the arrival of mail in the specified
file.
MAILCHECK Specifies how often (in seconds) bash checks for mail. The
default is 60 seconds. When it is time to check for mail, the shell does
so before prompting. If this variable is unset, the shell disables mail
checking.
MAILPATH A colon-separated list of pathnames to be checked for mail.
The message to be printed may be specified by separating the path-
name from the message with a ‘?’. $_ stands for the name of the cur-
rent mailfile. Example:
MAILPATH=’/usr/spool/mail/bfox?"You have mail":~/shell-mail?"$_ has mail!"’
Bash supplies a default value for this variable, but the location of
the user mail files that it uses is system dependent (e.g.,
/usr/spool/mail/$USER).
MAIL_WARNING If set, and a file that bash is checking for mail has been
accessed since the last time it was checked, the message “The mail in
mailfile has been read” is printed.
PS1 The value of this parameter is expanded (see PROMPTING below)
and used as the primary prompt string. The default value is “bash\$ ”.
PS2 The value of this parameter is expanded and used as the secondary
prompt string. The default is “> ”.
PS3 The value of this parameter is used as the prompt for the select com-
mand (see SHELL GRAMMAR above).
PS4 The value of this parameter is expanded and the value is printed be-
fore each command bash displays during an execution trace. The first
character of PS4 is replicated multiple times, as necessary, to indicate
multiple levels of indirection. The default is “+ ”.
HISTSIZE The number of commands to remember in the command his-
tory (see HISTORY below). The default value is 500.
HISTFILE The name of the file in which command history is saved. (See
HISTORY below.) The default value is ~/.bash_history. If unset, the
command history is not saved when an interactive shell exits.
HISTFILESIZE The maximum number of lines contained in the history
file. When this variable is assigned a value, the history file is truncated,
if necessary, to contain no more than that number of lines. The default
value is 500.
OPTERR If set to the value 1, bash displays error messages generated by
the getopts builtin command (see SHELL BUILTIN COMMANDS
below). OPTERR is initialized to 1 each time the shell is invoked or a
shell script is executed.
PROMPT_COMMAND If set, the value is executed as a command prior
to issuing each primary prompt.
IGNOREEOF Controls the action of the shell on receipt of an EOF charac-
ter as the sole input. If set, the value is the number of consecutive EOF
characters typed as the first characters on an input line before bash
exits. If the variable exists but does not have a numeric value, or has
no value, the default value is 10. If it does not exist, EOF signifies the
end of input to the shell. This is only in effect for interactive shells.
TMOUT If set to a value greater than zero, the value is interpreted as the
number of seconds to wait for input after issuing the primary prompt.
Bash terminates after waiting for that number of seconds if input does
not arrive.
FCEDIT The default editor for the fc builtin command.
FIGNORE A colon-separated list of suffixes to ignore when performing
filename completion (see READLINE below). A filename whose suffix
matches one of the entries in FIGNORE is excluded from the list of
matched filenames. A sample value is “.o:~”.
INPUTRC The filename for the readline startup file, overriding the default
of ~/.inputrc (see READLINE below).
notify If set, bash reports terminated background jobs immediately, rather
than waiting until before printing the next primary prompt (see also
the -b option to the set builtin command).
HISTCONTROL or history_control If set to a value of ignorespace, lines which begin with a
space character are not entered on the history list. If set to a value of
ignoredups, lines matching the last history line are not entered. A value
of ignoreboth combines the two options. If unset, or if set to any other
value than those above, all lines read by the parser are saved on the
history list.
command_oriented_history If set, bash attempts to save all lines of a
multiple-line command in the same history entry. This allows easy
re-editing of multi-line commands.
glob_dot_filenames If set, bash includes filenames beginning with a ‘.’ in
the results of pathname expansion.
allow_null_glob_expansion If set, bash allows pathname patterns which
match no files (see Pathname Expansion below) to expand to a null
string, rather than themselves.
histchars The two or three characters which control history expansion and
tokenization (see HISTORY EXPANSION below). The first character
is the history expansion character, that is, the character which signals the
start of a history expansion, normally ‘!’. The second character is the
quick substitution character, which is used as shorthand for re-running
the previous command entered, substituting one string for another in
the command. The default is ‘^’. The optional third character is the
character which signifies that the remainder of the line is a comment,
when found as the first character of a word, normally ‘#’. The history
comment character causes history substitution to be skipped for the
remaining words on the line. It does not necessarily cause the shell
parser to treat the rest of the line as a comment.
nolinks If set, the shell does not follow symbolic links when executing
commands that change the current working directory. It uses the
physical directory structure instead.
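One of the variables above is easy to demonstrate: IFS controls how the read builtin splits a line into words. A minimal sketch, using an invented colon-separated line:

```shell
# Split a colon-separated line into words by setting IFS for the read builtin.
line="jack:x:511:512"
IFS=: read name passwd uid gid <<EOF
$line
EOF
echo "$name has UID $uid"
```

The `IFS=:` prefix changes the field separator only for the duration of the read command; the rest of the shell is unaffected.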
Chapter 10
Mail
Electronic Mail, or email, is the way most people first come into contact with the Internet.
Although you may have used email in a graphical environment, here we will show you
how mail was first intended to be used on a multi-user system. To a large extent, what
applies here is really what is going on in the background of any system that supports
mail.
A mail message is a block of text sent from one user to another, using some mail
command or mailer program. A mail message will usually also be accompanied
by a subject explaining what the mail is about. The idea of mail is that a message
can be sent to someone even though he may not be logged in at the time, and the mail
will be stored for him until he is around to read it. An email address is probably
familiar to you, such as: [email protected]. This means that bruce has a user
account on a computer called kangeroo.co.au. The text after the @ is always the
name of the machine. Today’s Internet does not obey this exactly, but there is always
a machine that bruce does have an account on where mail is eventually sent &That
machine is also usually a UNIX machine.-.
When mail is received for you (from another user on the system or from a user
on another system), it is appended to the file /var/spool/mail/<username>,
called the mail file or mailbox file, where <username> is your login name. You then
run some program that interprets your mail file, allowing you to browse the file as a
sequence of mail messages, and to read and reply to them.
An actual addition to your mail file might look like this:
¨ ¥
From [email protected] Mon Jun 1 21:20:21 1998
Return-Path: <[email protected]>
Received: from lava.cranzgot.co.za ([email protected] [192.168.2.254])
        by ra.cranzgot.co.za (8.8.7/8.8.7) with ESMTP id VAA11942
        for <[email protected]>; Mon, 1 Jun 1998 21:20:20 +0200
Received: from mail450.icon.co.za (mail450.icon.co.za [196.26.208.3])

hey paul
its me
how r u doing
i am well
what u been upot
hows life
hope your well
amanda
§ ¦
Each mail message begins with a From at the beginning of a line, followed by a
space. Then comes the mail header, explaining where the message was routed from to
get it to your mailbox, who sent the message, where replies should go, the subject of
the mail, and various other fields. Above, the header is longer than the mail message.
Examine the header carefully.
The header ends with the first blank line. The message itself (or body) starts right
after. The next header in the file will once again start with a From. A From at the
beginning of a line never occurs within the body; if one does, the mailbox is considered
to be corrupt.
Some mail readers store their messages in a different format. However, the above
format (called the mbox format) is the most common on UNIX.
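Because every message starts with “From ” at the beginning of a line, counting those lines counts the messages in an mbox. A small sketch on an invented two-message mailbox file (the file name sample_mbox is made up for the example):

```shell
# Build a tiny two-message mbox and count the "From " delimiter lines.
cat > sample_mbox <<'EOF'
From [email protected] Mon Jun  1 21:20:21 1998
Subject: hi

hey paul
From [email protected] Tue Jun  2 09:12:00 1998
Subject: re: hi

hi amanda
EOF
grep -c '^From ' sample_mbox
# prints 2
```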
10.1 Sending and reading mail
The simplest way to send mail is to use the mail command. Type mail
-s "hello there" <username>. mail will then wait for you to type out your
message. When you are finished, enter a . on a line by itself. The username
should be another user on your system. If no one else is on your system, then send
mail to root with mail -s "Hello there" root or mail -s "Hello there"
root@localhost (if the @ is not present, then the local machine, localhost, is implied).
You can use mail to view your mailbox. It is a primitive utility in comparison
to modern graphical mail readers, but is probably the only mail reader that can handle
arbitrarily sized mailboxes. Sometimes you may get a mailbox that is over a gigabyte
in size, and mail is the only way to delete messages from it. To view your mailbox,
type mail, then z to read your next window of messages, and z- to view the
previous window. Most commands work like command message number, e.g. delete
14 or reply 7. The message number is in the left column, with an N next to it for
new mail.
For the state of the art in terminal-based mail readers, try mutt and pine.
There are also some graphical mail readers in various stages of development. At
the time I am writing this, I have been using balsa for a few months, which was the
best mail reader I could find.
10.2 The SMTP protocol — sending mail raw to port 25
To send mail, it is actually not necessary to have a mail client at all. The mail client
just follows SMTP (the Simple Mail Transfer Protocol), which you can type in from the
keyboard.
For example, you can send mail by telneting to port 25 of a machine that has an
MTA (Mail Transfer Agent — also called the mailer daemon) running. The word daemon
is used to denote programs that run silently without user intervention. This is in fact
how so-called anonymous mail or spam mail &Spam is a term used to indicate unsolicited
email — that is, junk mail that is posted in bulk to large numbers of arbitrary email addresses.
This is considered unethical Internet practice.- is sent on the Internet. A mailer daemon
runs in most small institutions in the world, and has the simple task of receiving mail
requests and relaying them on to other mail servers. Try this for example (obviously
substituting mail.cranzgot.co.za for the name of a mail server that you normally
use):
¨ ¥
[root@cericon tex]# telnet mail.cranzgot.co.za 25
Trying 192.168.2.1...
Connected to 192.168.2.1.
Escape character is ’^]’.
220 ra.cranzgot.co.za ESMTP Sendmail 8.9.3/8.9.3; Wed, 2 Feb 2000 14:54:47 +0200
HELO cericon.cranzgot.co.za
250 ra.cranzgot.co.za Hello cericon.ctn.cranzgot.co.za [192.168.3.9], pleased to meet you
MAIL FROM:[email protected]
250 [email protected]... Sender ok
RCPT TO:[email protected]
250 [email protected]... Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
hi there
here is a short message
.
250 OAA04620 Message accepted for delivery
QUIT
221 ra.cranzgot.co.za closing connection
Connection closed by foreign host.
[root@cericon tex]#
§ ¦
The above causes the message “hi there here is a short message” to
be delivered to [email protected] (the ReCiPienT). Of course I can enter any
address that I like as the sender, and it can be difficult to determine who sent the
message.
Now, you may have tried this and got a rude error message. This might be
because the MTA is configured not to relay mail except from specific trusted machines
— say, only those machines within that organisation. In this way anonymous email is
prevented.
On the other hand, if you are connecting to the recipient user’s own mail server, it
necessarily has to receive the mail. Hence the above is a useful way to supply a bogus
FROM address and thereby send mail almost anonymously. By “almost” I mean that
the mail server would still have logged the machine from which you connected and
the time of connection.
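The dialogue above can also be scripted rather than typed. This sketch only assembles the SMTP commands into a file; piping it to port 25 of a real server (for example with nc) is left to you, and the addresses are the invented ones from the example:

```shell
# Assemble an SMTP session into a file. To actually send it, pipe the
# file to a mail server, e.g.:  nc mail.cranzgot.co.za 25 < smtp_session
cat > smtp_session <<'EOF'
HELO cericon.cranzgot.co.za
MAIL FROM:[email protected]
RCPT TO:[email protected]
DATA
hi there
here is a short message
.
QUIT
EOF
wc -l < smtp_session
```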
Chapter 11
U NIX intrinsically supports multiple users. Each user has a personal home directory
/home/<username> in which their own files are stored, hidden from other users.
So far you may have been using the machine as the root user, who is the system
administrator and has complete access to every file on the system. The home directory
of the root user is /root. Note that there is an ambiguity here: the root directory is
the top most directory, known as the / directory. The root user’s home directory is
/root and is called the home directory of root.
Every user other than root has limited access to files and directories. Always
use your machine as a normal user, and log in as root only to do system
administration. This will save you from the destructive power that the root user has.
Here we will show how to manually and automatically create new users.
Users are also divided into sets, called groups. A user may belong to several
groups, and there can be as many groups on the system as you like. Each group is
defined by the list of users that are part of that set. In addition, each user has a group
of the same name, to which only he belongs.
11.2 File ownerships
Each file on a system is owned by a particular user and also by a particular group.
When you do an ls -al, you can see the user that owns the file in the third column
and the group that owns the file in the fourth column (these will often be identical,
indicating that the file’s group is a group to which only that user belongs). To change
the ownership of the file, simply use the chown (change ownership) command as follows.
¨ ¥
chown <user>[:<group>] <filename>
§ ¦
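You can read a file's current user and group ownership straight from the third and fourth columns of ls -l. A small sketch (the file name is invented; changing ownership with chown usually requires root, so this only inspects it):

```shell
# Create a file and print its owner and group from the ls -l columns.
touch myfile
ls -l myfile | awk '{ print "owner=" $3, "group=" $4 }'
```

For a file you have just created, the owner column will show your own login name.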
The only place in the whole system where a user name is registered is the file
/etc/passwd &Exceptions to this rule are several distributed authentication schemes and the
Samba package, but you needn’t worry about these for now.-. Once a user is added to this file,
they exist on the system. If you thought that user accounts were stored in some
unreachable dark corner, then this should dispel that idea. This file is also known as
the password file to administrators. View this file with less:
¨ ¥
root:x:0:0:Paul Sheer:/root:/bin/bash
bin:x:1:1:bin:/bin:
daemon:x:2:2:daemon:/sbin:
adm:x:3:4:adm:/var/adm:
lp:x:4:7:lp:/var/spool/lpd:
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:
news:x:9:13:news:/var/spool/news:
uucp:x:10:14:uucp:/var/spool/uucp:
gopher:x:13:30:gopher:/usr/lib/gopher-data:
ftp:x:14:50:FTP User:/home/ftp:
nobody:x:99:99:Nobody:/:
alias:x:501:501::/var/qmail/alias:/bin/bash
paul:x:509:510:Paul Sheer:/home/paul:/bin/bash
jack:x:511:512:Jack Robbins:/home/jack:/bin/bash
silvia:x:511:512:Silvia Smith:/home/silvia:/bin/bash
§ ¦
Above is an extract of my own password file. Each user is stored on a separate line.
Many of these are not human login accounts, but are used by other programs.
Each line contains seven fields separated by colons. The account for jack looks
like this:
jack The user’s login name.
x The user’s encrypted password. An x here means that the encrypted password
is actually stored in the shadow password file, /etc/shadow, discussed below.
511 The user’s user identification number, UID &This is used by programs as a short alternative
to the user’s login name. In fact, internally, the login name is never used, only the UID.-.
512 The user’s group identification number, GID &Similar applies to the GID. Groups will be
discussed later.-.
Jack Robbins The user’s full name &Few programs ever make use of this field.-.
/home/jack The user’s home directory. The HOME environment variable will be set
to this when the user logs in.
/bin/bash The user’s shell, started for the user when they log in.
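Since the fields are colon separated, standard tools like cut can take a line apart. A sketch using jack's line from the example above:

```shell
# Extract login name (field 1), UID (field 3) and shell (field 7)
# from a passwd-style line.
line='jack:x:511:512:Jack Robbins:/home/jack:/bin/bash'
echo "$line" | cut -d: -f1,3,7
# prints jack:511:/bin/bash
```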
11.4 Shadow password file: /etc/shadow
The problem with traditional passwd files is that they had to be world readable &Everyone
on the system can read the file.- in order for programs to extract information about the
user, such as the user’s full name. This means that everyone can see the encrypted
password in the second field. Anyone can copy any other user’s password field and then
try billions of different passwords to see if they match. If you have a hundred users
on the system, there are bound to be several who chose passwords that match some
word in the dictionary. The so-called dictionary attack will simply try all 80,000 English
words until a match is found. If you think you are clever to add a number in front
of an easy-to-guess dictionary word, password cracking algorithms know about these
tricks as well &And about every other trick you can think of.-. To solve this problem the shadow
password file was invented. The shadow password file is used only for authentication
&Verifying that the user is the genuine owner of the account.- and is not world readable — there
is no information in the shadow password file that a common program will ever need
— no regular user has permission to see the encrypted password field. The fields are
colon separated just like the passwd file.
Here is an example line from a /etc/shadow file:
¨ ¥
jack:Q,Jpl.or6u2e7:10795:0:99999:7:-1:-1:134537220
§ ¦
Q,Jpl.or6u2e7 The user’s encrypted password, known as the hash of the pass-
word. This is the user’s 8-character password with a one-way hash function ap-
plied to it. It is simply a mathematical algorithm applied to the password that
is known to produce a unique result for each password. To demonstrate: the
(rather poor) password Loghimin hashes to :lZ1F.0VSRRucs: in the shadow
file. An almost identical password loghimin gives a completely different hash
:CavHIpD1W.cmg:. Hence trying to guess the password from the hash can only
be done by trying every possible password, and is therefore considered compu-
tationally expensive but not impossible. To check whether an entered password
matches, just apply the identical mathematical algorithm to it: if it matches, then
the password is correct. This is how the login command works. Sometimes you
will see a * in place of a hashed password. This means that the account has been
disabled.
10795 Days since January 1, 1970 that the password was last changed.
0 Days before which password may not be changed. Usually zero. This field is not
often used.
99999 Days after which password must be changed. This is also rarely used, and will
be set to 99999 by default.
7 Days before password is to expire that user is warned of pending password expira-
tion.
-1 Days after password expires that account is considered inactive and disabled. -1
is used to indicate infinity — i.e. to mean we are effectively not using this feature.
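The date fields above are counted in days since January 1, 1970. Assuming a date command with the %s format (which prints seconds since that epoch, as GNU date does), today's day number can be computed like this:

```shell
# Days since 1 Jan 1970, as used in the third field of /etc/shadow
# (86400 seconds in a day).
days=$(( $(date +%s) / 86400 ))
echo "$days"
```

Comparing this number with the third field tells you how long ago a password was changed.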
11.5 The groups command and /etc/group
On a UNIX system you may want to give a number of users the same access rights. For
instance, you may have five users that should be allowed to access some privileged file,
and another ten users that are allowed to run a certain program. You can group these
users into, for example, two groups previl and wproc and then make the relevant
files and directories owned by that group with, say,
¨ ¥
chown root:previl /home/somefile
chown root:wproc /usr/lib/wproc
§ ¦
Permissions &explained later.- will dictate the kind of access, but for the meantime, the
file/directory must at least be owned by that group.
The /etc/group file is also colon separated. A line might look like this:
¨ ¥
wproc:x:524:jack,mary,henry,arthur,sue,lester,fred,sally
§ ¦
wproc The name of the group. There should really be a user of this name as well.
x The group’s password. This field is usually set with an x and is not used.
524 The GID group ID. This must be unique in the groups file.
You can obviously study the group file to find out which groups a user belongs to
&That is, not “which users a group is comprised of,” which is easy to see at a glance.-, but when
there are a lot of groups it can be tedious to scan through the entire file. The groups
command prints out this information.
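Listing the users that make up one group just means reading the fourth field of its line. A sketch on the wproc example line from above:

```shell
# Print the member list (field 4) of a group line, one name per line.
line='wproc:x:524:jack,mary,henry,arthur,sue,lester,fred,sally'
echo "$line" | cut -d: -f4 | tr ',' '\n'
```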
11.6 Manually creating a user account
/etc/passwd entry To create an entry in this file, simply edit it and copy an existing
line &When editing configuration files, never write out a line from scratch if it has some kind of
special format. Always copy an existing entry that has proven itself to be correct, and then edit in
the appropriate changes. This will prevent you from making errors.-. Always add users from
the bottom and try to preserve the “pattern” of the file — i.e. if you see numbers
increasing, make yours fit in; if you are adding a normal user, add it after the
existing lines of normal users. Each user must have a unique UID and should
usually have a unique GID. So if you are adding a line to the end of the file, make
your new UID and GID the same as the last line but incremented by one.
/etc/shadow entry Create a new shadow password entry. At this stage you do not
know what the hash is, so just make it a *. You can set the password with the
passwd command later.
/etc/group entry Create a new group entry for the user’s group. Make sure the
number in the group entry matches that in the passwd file.
/etc/skel This directory contains a template home directory for the user. Copy
the entire directory and all its contents into the /home directory, renaming it to
the name of the user. In the case of our jack example, you should have a directory
/home/jack.
Home directory ownerships You need to now change the ownership of the home di-
rectory to match the user. The command chown -R jack:jack /home/jack
will accomplish this.
Setting the password Use passwd <username> to set the user’s password.
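The steps above can be sketched as shell commands. This sketch works on scratch copies of the files, so it is safe to run; on a real system you would edit /etc/passwd, /etc/shadow and /etc/group themselves as root, and the silvia entry shown is just the example data from earlier with UID and GID incremented by one:

```shell
# Work on scratch copies rather than the real /etc files.
printf 'paul:x:509:510:Paul Sheer:/home/paul:/bin/bash\njack:x:511:512:Jack Robbins:/home/jack:/bin/bash\n' > passwd.copy
# New entry: copy the pattern of the last line, UID/GID incremented by one.
echo 'silvia:x:512:513:Silvia Smith:/home/silvia:/bin/bash' >> passwd.copy
# Shadow entry with a * until the password is set with passwd.
echo 'silvia:*:10795:0:99999:7:-1:-1:134537220' >> shadow.copy
# Matching group entry.
echo 'silvia:x:513:' >> group.copy
# Template home directory (scratch stand-ins for /etc/skel and /home).
mkdir -p skel home
cp -r skel home/silvia
tail -1 passwd.copy
```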
11.7 Automatically: useradd and groupadd
The above process is tedious. The commands useradd, userdel and usermod
perform all these updates automatically. The man pages explain the use of
these commands in detail. Note that different flavours of UNIX have different com-
mands to do this. Some may even have graphical programs or web interfaces to assist
in creating users.
In addition, there are the commands groupadd, groupdel and groupmod,
which do the same with respect to groups.
11.8 User logins
A user most often gains access to the system through the login program. This looks
up the UID and GID from the passwd and group files, and authenticates the user.
The following is quoted from the login man page:
login is used when signing onto a system. It can also be used to switch
from one user to another at any time (most modern shells have support for
this feature built into them, however).
If an argument is not given, login prompts for the username.
If the user is not root, and if /etc/nologin exists, the contents of this file are
printed to the screen, and the login is terminated. This is typically used to
prevent logins when the system is being taken down.
If special access restrictions are specified for the user in /etc/usertty, these
must be met, or the login attempt will be denied and a syslog &System error
log program — syslog writes all system messages to the file /var/log/messages.-
message will be generated. See the section on ”Special Access Restrictions”.
If the user is root, then the login must be occurring on a tty listed in
/etc/securetty &If this file is not present, then root logins will be allowed from anywhere. It
is worth deleting this file if your machine is protected by a firewall and you would like to easily
log in from another machine on your LAN &Local Area Network — see Chapter 25.1-.
If /etc/securetty is present, then logins are only allowed from the terminals it lists.-.
Failures will be logged with the syslog facility.
After these conditions are checked, the password will be requested and
checked (if a password is required for this username). Ten attempts are
allowed before login dies, but after the first three, the response starts to get
very slow. Login failures are reported via the syslog facility. This facility is
also used to report any successful root logins.
If the file .hushlogin exists, then a ”quiet” login is performed (this disables
the checking of mail and the printing of the last login time
and message of the day). Otherwise, if /var/log/lastlog exists, the last login
time is printed (and the current login is recorded).
Random administrative things, such as setting the UID and GID of the tty
are performed. The TERM environment variable is preserved, if it exists
(other environment variables are preserved if the -p option is used). Then
the HOME, PATH, SHELL, TERM, MAIL, and LOGNAME environment
variables are set. PATH defaults to /usr/local/bin:/bin:/usr/bin:. &Note that the
. — the current directory — is listed in the PATH. This is only the default PATH however. -
for normal users, and to /sbin:/bin:/usr/sbin:/usr/bin for root. Last, if this is
not a ”quiet” login, the message of the day is printed and the file with the
user’s name in /usr/spool/mail will be checked, and a message printed if it
has non-zero length.
The user’s shell is then started. If no shell is specified for the user in
/etc/passwd, then /bin/sh is used. If there is no directory specified in
/etc/passwd, then / is used (the home directory is checked for the .hushlogin
file described above).
To temporarily switch to another user, run su, e.g. su jack.
This will prompt you for a password unless you are the root user to start off with.
It does nothing more than change the current user to have the access rights of jack.
Most environment variables will remain the same. The HOME, LOGNAME and USER
environment variables will be set to jack, but all other environment variables will be
inherited. su is therefore not the same as a normal login.
To use su to give you the equivalent of a login, do
¨ ¥
su - jack
§ ¦
This will cause all initialisation scripts that are normally run when the user logs in to be
executed &What actually happens is that the subsequent shell is started with a - in front of the zero’th
argument. This makes the shell read the user’s personal profile. The login command also does this.-.
Hence after running su with the - option, you are as though you had logged in with
the login command.
who and w give a list of users logged into the system, how much CPU they are using,
etc. who --help gives:
¨ ¥
Usage: who [OPTION]... [ FILE | ARG1 ARG2 ]
A little more information can be gathered from the info pages for this command.
The idle time indicates how long since the user has last pressed a key. Most often, one
just types who -Hiw.
w is similar. Its man page says:
w displays information about the users currently on the machine, and their
processes. The header shows, in this order, the current time, how long the
system has been running, how many users are currently logged on, and the
system load averages for the past 1, 5, and 15 minutes.
The following entries are displayed for each user: login name, the tty name,
the remote host, login time, idle time, JCPU, PCPU, and the command line
of their current process.
Finally, from a shell script, the users command is useful for just seeing who is
logged in. You can use it in a shell script, for example:
¨ ¥
for user in ‘users‘ ; do
<etc>
done
§ ¦
id prints your real and effective UID and GID. A user will normally have a UID and
a GID, but may also have an effective UID and GID. The real UID and GID are
what a process will generally think you are logged in as. The effective UID and GID
are the actual access permissions that you have when trying to read, write and execute
files. These will be discussed in more detail in later (possibly unwritten) chapters.
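The id command shows these values directly. For an ordinary, non-setuid shell the real and effective IDs are the same:

```shell
# -u prints the effective UID, -ru the real UID, -un the login name.
id -u
id -ru
id -un
```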
Chapter 12
Using Internet Services
This chapter should make you aware of the various methods of transferring files and
data over the Internet, and of remotely accessing UNIX machines.
telnet is a program for talking to a UNIX network service. It is most often used to do
a remote login. Try
¨ ¥
telnet <remote_machine>
telnet localhost
§ ¦
A more secure replacement for telnet is ssh, which can also do a remote login:
¨ ¥
ssh -l <username> <remote_machine>
§ ¦
12.2 FTP
FTP stands for File Transfer Protocol. If FTP is set up on your local machine, then other
machines can download files. Type
¨ ¥
ftp metalab.unc.edu
§ ¦
or
¨ ¥
ncftp metalab.unc.edu
§ ¦
ftp is the traditional command-line UNIX FTP client &“client” always indicates the user
program accessing some remote service.-, while ncftp is a more powerful client that will not
always be installed.
You will now be inside an FTP session. You will be asked for a login name and
a password. The site metalab.unc.edu is one that allows anonymous logins. This
means that you can type anonymous as your username, and then anything you like as
a password. You will notice that it will ask you for an email address as your password.
Any sequence of letters with a @ symbol will suffice, but you should put your actual
email address out of politeness.
The FTP session is like a reduced shell. You can type cd, ls and ls -al to
view file lists. help brings up a list of commands and you can also type help
<command> to get help on a specific command. You can download a file using the
get <filename> command, but before you do this, you must set the transfer type to
binary. The transfer type indicates whether newline characters will be translated
to DOS format. Typing ascii turns this translation on, while binary turns it off. You may
also want to enter hash, which prints a # for every 1024 bytes of download. This is useful
for watching the progress of a download. Go to a directory that has a README file in it and enter
¨ ¥
get README
§ ¦
Then enter put README to upload the file that you have just downloaded. Most FTP
sites have an /incoming directory that is flushed periodically.
FTP allows far more than just uploading of files, although the administrator has
the option to restrict access to any further features. You can create directories, change
ownerships and do almost anything you can on a local file system.
If you have several machines on a LAN, all should have FTP enabled so that
users can easily copy files between machines. Configuring the FTP server will be dealt
with later.
12.3 finger
finger is a service for telling who is logged in on a remote system. Try finger
@<hostname> to see who is logged in on <hostname>. The finger service will often
be disabled on machines for security reasons.
12.4 Sending files by email
Mail is used more and more for transferring files between machines. It is
bad practice to send mail messages over 64 kilobytes over the Internet because it tends
to excessively load mail servers. Any file larger than 64 kilobytes should be uploaded
by FTP onto some common FTP server. Most small images are smaller than this size,
hence sending a small JPEG &A common Internet image file format. These are especially compressed
and are usually under 100 kilobytes for a typical screen-sized photograph.- image is considered
acceptable.
If you have to send files by mail, this is best accomplished with uuencode. This
utility takes binary files and packs them into a format that mail servers can handle. If
you send a mail message containing arbitrary binary data, it will more than likely be
corrupted on the way, because mail agents are designed to handle only a limited range
of characters. uuencode takes a binary file and represents it in allowable characters,
albeit taking up slightly more space.
Here is a neat trick to pack up a directory and send it to someone by mail.
¨ ¥
tar -czf - <mydir> | uuencode <mydir>.tar.gz \
| mail -s "Here are some files" <user>@<machine>
§ ¦
MIME encapsulation
Most mail readers have the ability to attach files to mail messages and read these at-
tachments. The way they do this is not with uuencode but in a special format known
as MIME encapsulation. MIME is a way of representing multiple files inside a single mail
message. The way binary data is handled is similar to uuencode, but in a format
known as base64.
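To get a feel for what base64 looks like, you can experiment with the base64 utility
(part of GNU coreutils on modern systems; it may be absent on older distributions,
where mpack and munpack below are the better tools):

```shell
# Encode a short string as base64, then decode it again.
printf 'hello' | base64
# prints: aGVsbG8=
printf 'aGVsbG8=' | base64 -d
# prints: hello
```

Note how five bytes of input become eight characters of output: like uuencode,
base64 trades roughly a third more space for safe transport through mail agents.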
If needed, there are two useful command-line utilities in the same vein as
uuencode that can create and extract MIME messages. These are mpack and
munpack.
Chapter 13
L INUX resources
Very often it is not even necessary to connect to the Internet to find the information you
need. Chapter 16 contains a description of most of the documentation on a L INUX
distribution.
It is, however, essential to get the most up-to-date information where security
and hardware driver support are concerned. It is also fun and worthwhile to interact
with L INUX users from around the globe. The rapid development of free software
could mean that you miss out on important new features that could streamline IT
services. Hence reviewing web news, reading newsgroups and subscribing to mailing
lists are essential parts of a system administrator's role.
The metalab.unc.edu FTP site is considered the primary site for free software the
world over. It is mirrored in almost every country that has a significant IT infrastruc-
ture.
Our local South African mirrors are ftp.is.co.za in the directory
/linux/sunsite, and also somewhere on the site ftp.sdn.co.za.
It is advisable to browse around these FTP sites. In particular you should try to
find the locations of:
• The directory where all sources for official GNU packages are stored. This
would be a mirror of the Free Software Foundation’s FTP archives. These are
packages that were commissioned by the FSF, and not merely released under the
GPL. The FSF will distribute them in source form (.tar.gz) for inclusion into
various distributions. They will of course compile and work under any U NIX.
• The mirror of metalab. This is known as the sunsite mirror because the site
used to be called sunsite.unc.edu. It contains innumerable U NIX packages in
source and binary form, categorised in a directory tree. For instance, mail clients
have their own directory containing many mail packages. metalab is the place
where a new developer can host any new software that they have produced.
There are instructions on the FTP site explaining how to upload software and
request that it be placed into a directory.
• The kernel sources. This is a mirror of the kernel archives where Linus and other
maintainers upload new stable &Meaning that the software is well tested and free of serious
bugs.- and beta &Meaning that the software is in its development stages.- kernel versions
and kernel patches.
• The various distributions. RedHat, Debian and possibly other popular distri-
butions will be mirrored.
13.2 HTTP — web sites
Most users should already be familiar with using a web browser. You should
also become familiar with the concept of a web search. This is when you point
your web browser to a popular search engine like http://www.google.com/,
http://www.google.com/linux, http://infoseek.go.com/,
http://www.altavista.com/ or http://www.yahoo.com/ and search for a
particular key word. Searching is a bit of a black art with the billions of web pages
out there. Always consult the search engine's advanced search options to see how you
can do more complex searches than just plain word searches.
The web sites in the FAQ (excluding the list of known distributions) should all be
consulted to get an overview of some of the primary sites of interest to L INUX users.
It is especially important that you keep up to date with the latest L INUX news.
I find the Linux Weekly News http://lwn.net/ excellent for this. Also, the
famous (and infamous) SlashDot http://slashdot.org/ web site gives daily updates
about “stuff that matters” and therefore contains a lot about free software.
Fresh Meat http://freshmeat.net/ is a web site devoted to new software
releases. You will find new or updated packages uploaded every few hours or so.
Linux Planet http://www.linuxplanet.com/ seems to be a new (?) web site
that I just found while writing this. It looks like it contains lots of tutorial information
on L INUX .
Realistically though, a new L INUX web site is created every week; almost any-
thing prepended or appended to “linux” is probably a web site already.
13.3 Mailing lists
A mailing list is a special address that, when posted to, automatically sends email to a
long list of other addresses. One usually subscribes to a mailing list by sending some
specially formatted email, or by requesting a subscription from the mailing list manager.
Once you have subscribed to a list, any email you post to the list will be sent to
every other subscriber, and every other subscriber's posts to the list will be sent to you.
There are mostly three types of mailing lists: those of the majordomo type,
those of the listserv type, and those of the *-request type.
To subscribe to
the majordomo variety, send a mail message to majordomo@<machine> with no
subject and a one-line message:
¨ ¥
subscribe <mailing-list-name>
§ ¦
To subscribe to the *-request variety, send an empty email message with the
subject subscribe to <mailing-list-name>-request@<machine>.
13.4 Newsgroups
A newsgroup is a notice board that everyone in the world can see. There are tens of
thousands of newsgroups and each group is unique in the world.
The client software you will use to read a newsgroup is called a news reader. rtin
is a popular text mode reader, while netscape is graphical. pan is an excellent graph-
ical one that I use.
Newsgroups are named like Internet hosts. One you might be interested in is
comp.os.linux.announce. The comp is the broadest subject description for com-
puters, os stands for operating systems, etc. There are many other linux newsgroups
devoted to various L INUX issues.
Newsgroup servers are big hungry beasts. They form a tree-like structure on the
Internet. When you post a message to a newsgroup it takes about a day or so to
propagate to every other server in the world. Likewise you can see a list of
all the messages posted to each newsgroup by anyone anywhere.
What’s the difference between a newsgroup and a mailing list? The advantage of
a newsgroup is that you don’t have to download the messages you are not interested
in. If you are on a mailing list, you get all the mail sent to the list. With a newsgroup
you can look at the message list and retrieve only the messages you are interested in.
Why not just put the mailing list on a web page? If you did, then everyone in the
world would have to go over international links to get to the web page, loading the
server in proportion to the number of subscribers. This is exactly what SlashDot
is. However your newsgroup server is local, hence you retrieve messages over a faster
link and save Internet traffic.
13.5 RFC’s
RFC's (Request For Comments) are the formal specification documents of the
Internet. An entry in the RFC index looks like:
¨ ¥
2068 Hypertext Transfer Protocol -- HTTP/1.1. R. Fielding, J. Gettys,
J. Mogul, H. Frystyk, T. Berners-Lee. January 1997. (Format:
TXT=378114 bytes) (Status: PROPOSED STANDARD)
§ ¦
Chapter 14
Permission and Modification Times
14.1 Permissions
Every file and directory on a U NIX system, besides being owned by a user and a group,
has access flags &A switch that can either be on or off- dictating what kind of access that user
and group has to the file.
Doing an ls -ald /bin/cp /etc/passwd /tmp will give you a listing:
¨ ¥
-rwxr-xr-x 1 root root 28628 Mar 24 1999 /bin/cp
-rw-r--r-- 1 root root 1151 Jul 23 22:42 /etc/passwd
drwxrwxrwt 5 root root 4096 Sep 25 15:23 /tmp
§ ¦
In the leftmost column are these flags, which give a complete description of the
access rights to the file.
The flag furthest to the left is, so far, either - or d, indicating an ordinary file or
a directory. The remaining nine are each either a - to indicate an unset value, or one of
several possible characters. Table 14.1 gives a complete description of file system permissions.
The chmod command is used to change the permissions of a file. It is usually used like:
¨ ¥
chmod [-R] [u|g|o|a][+|-][r|w|x|s|t] <file> [<file>] ...
§ ¦
For example
¨ ¥
chmod u+x myfile
§ ¦
adds execute permission for the user of myfile, while
¨ ¥
chmod a-rx myfile
§ ¦
removes read and execute permissions for all — i.e. user, group and other.
The -R option once again means recursive, diving into subdirectories as usual.
Permission bits are often represented in their binary form, especially when pro-
gramming. It is convenient to show the rwxrwxrwx set in octal &See Section 2.1-, where
each digit conveniently represents three bits. Files on the system are usually created with
mode 0644, meaning rw-r--r--. You can set permissions explicitly with an octal
number:
¨ ¥
chmod 0755 myfile
§ ¦
This gives myfile the permissions rwxr-xr-x. For a full list of octal values for all kinds
of permissions and file types, see /usr/include/linux/stat.h.
In the table you can see s, the setuid or setgid bit. If it is used without execute per-
mission then it has no meaning and is written capitalised as an S. This bit effectively
colourises an x into an s, hence you should read an s as execute with the setuid or setgid
bit set. t is known as the sticky bit. It also has no meaning if there are no execute
permissions, and is then written as a capital T.
The leading 0 can be ignored, but is preferred in order to be explicit. It can take
on a value representing the three bits setuid (4), setgid (2) and sticky (1). Hence a value
of 5764 is 101 111 110 100 in binary and gives -rwsrw-r-T.
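You can verify the 5764 example on a scratch file of your own (the file name
permdemo is arbitrary; this assumes the GNU versions of chmod and stat):

```shell
# Create a scratch file and give it the mode from the example above.
cd /tmp
rm -f permdemo
touch permdemo
chmod 5764 permdemo
# stat -c '%A' prints the symbolic permission string:
stat -c '%A' permdemo
# prints: -rwsrw-r-T
```

The s appears because the user execute bit is set together with setuid, while the
capital T appears because the sticky bit is set without other-execute.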
umask sets the default permissions for newly created files; it is usually 022. This
means that the permissions of any new file you create (say with the touch command)
will be masked with this number. 022 hence excludes write permissions of group and of
other. A umask of 006 would exclude read and write permissions of other, but allow
read and write of group. Try
¨ ¥
umask
touch <file1>
ls -al <file1>
umask 026
touch <file2>
ls -al <file2>
§ ¦
026 is probably closer to the kind of mask we like as an ordinary user. Check your
/etc/profile file to see what umask your login defaults to, and where and why it is set.
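The masking arithmetic is easy to verify: a new file is created with mode 0666
AND-ed with the complement of the umask (0777 for directories). A quick check, with
arbitrary file names, assuming GNU stat:

```shell
cd /tmp
rm -f umaskdemo1 umaskdemo2
umask 022
touch umaskdemo1
stat -c '%a' umaskdemo1
# prints: 644   (666 with write removed for group and other)
umask 026
touch umaskdemo2
stat -c '%a' umaskdemo2
# prints: 640   (666 with group write and other read/write removed)
```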
14.2 Modification times: stat
In addition to permissions, each file has three integers associated with it that represent,
in seconds, the last time the file was accessed (read), the last time it was modified (written
to), and the last time its inode information (such as its permissions) was changed. These
are known as the atime, mtime and ctime of a file respectively.
To get a complete listing of the file’s permissions, use the stat command. Here
is the result of stat /etc:
¨ ¥
File: "/etc"
Size: 4096 Filetype: Directory
Mode: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Device: 3,1 Inode: 14057 Links: 41
Access: Sat Sep 25 04:09:08 1999(00000.15:02:23)
Modify: Fri Sep 24 20:55:14 1999(00000.22:16:17)
Change: Fri Sep 24 20:55:14 1999(00000.22:16:17)
§ ¦
The Size: quoted here is the actual amount of disk space used to store the
directory listing, and is the same as reported by ls. In this case it is probably four disk
blocks of 1024 bytes each. The size of a directory as quoted here is not the sum of the
sizes of all files contained under it.
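GNU touch can also set the mtime explicitly with its -d option, and stat -c '%Y'
prints the mtime as a plain integer (seconds since 1 January 1970), which is convenient
for scripting:

```shell
cd /tmp
# Set the modification time of a scratch file to a known instant (UTC):
touch -d '2001-05-05 12:00 UTC' timedemo
stat -c '%Y' timedemo
# prints: 989064000
```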
Chapter 15
Symbolic and Hard Links
Very often, a file is required to be in two different directories at the same time. Think
for example of a configuration file that is required by two different software packages
that look for the file in different directories. The file could simply be copied, but
having to replicate changes in more than one place would create an administrative
nightmare. Also consider a document that must be present in many directories,
but which would be easier to update at one point. The way two (or more) files can have
the same data is with links.

15.1 Soft links
Try
¨ ¥
touch myfile
ln -s myfile myfile2
ls -al
cat > myfile
a
few
lines
of
text
ˆD
cat myfile
cat myfile2
§ ¦
You will notice that the ls -al listing has the letter l on the far left next to
myfile2, while myfile has the usual -. This indicates that myfile2 is a soft link,
also known as a symbolic link.
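The paragraph below discusses a symbolic link to a directory; the commands that
create it (elided by the page break) would look something like this, here done under
/tmp for safety:

```shell
cd /tmp
rm -rf linkdemo
mkdir linkdemo && cd linkdemo
mkdir mydir
# Create a symbolic link mydir2 pointing at the directory mydir:
ln -s mydir mydir2
ls -ld mydir2
readlink mydir2
# prints: mydir
```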
The directory mydir2 is a symbolic link to mydir and appears as though it is a replica
of the original. Once again the directory mydir2 does not consume additional disk
space — a program that reads from the link is unaware that it is seeing into a different
directory.
Symbolic links can also be copied and retain their value:
¨ ¥
cp mydir2 /
ls -al /
cd /mydir2
§ ¦
You have now copied the link to the root directory. However the link points to a relative
path mydir in the same directory as the link. Since there is no mydir here, an error is
raised.
Try
¨ ¥
rm -f mydir2 /mydir2
ln -s `pwd`/mydir mydir2
ls -al
§ ¦
Now you will see mydir2 has an absolute path. You can try
¨ ¥
cp mydir2 /
ls -al /
cd /mydir2
§ ¦
Symbolic links are often used to make file systems accessible from a different
directory. For instance, you may have a large directory that has to be split over several
physical disks. For clarity, you can mount the disks as /disk1, /disk2 etc. and then
link the various sub-directories in a way that makes efficient use of the space you have.
Another example is the linking of /dev/cdrom to, say, /dev/hdc so that pro-
grams accessing the device file /dev/cdrom (see Chapter 18) actually access the cor-
rect IDE drive.
15.2 Hard links
U NIX allows the data of a file to have more than one name, in separate places in the
same file system. Such a file with more than one name for the same data is called a
hard-linked file and is very similar to a symbolic link. Try
¨ ¥
touch mydata
ln mydata mydata2
ls -al
§ ¦
The files mydata and mydata2 are indistinguishable. They share the same data, and
have a 2 in the second column of the ls -al listing. This means that they are hard linked
twice (that there are two names for this file).
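You can confirm that the two names really are one file by comparing inode numbers,
and watch the link count (the second column of ls -al, or %h with GNU stat) drop
when one name is removed:

```shell
cd /tmp
rm -f mydata mydata2
touch mydata
ln mydata mydata2
ls -i mydata mydata2        # both names show the same inode number
stat -c '%h' mydata
# prints: 2
rm mydata2
stat -c '%h' mydata
# prints: 1
```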
The reason why hard links are sometimes used in preference to symbolic links is
that some programs are not fooled by a symbolic link: if you, say, have a script that
uses cp to copy a file, it may copy the symbolic link instead of the file it points to &cp
actually has an option to override this behaviour-. A hard link, however, will always be seen as
a real file.
Hard links however cannot be made between files on different file-systems. They
also cannot be made between directories.
Chapter 16
Pre-installed Documentation
/usr/lib/X11/doc /usr/share/texmf/doc
/usr/local/doc /usr/share/vim/vim57/doc
§ ¦
This contains information on all hardware drivers except graphics. The kernel has built
in drivers for networking cards, SCSI controllers, sound cards and so on. Hence if you
need to find out if one of these is supported, this is the first place to look.
This is an enormous and comprehensive (and possibly exhaustive) reference to the TEX
typesetting language and the Metafont font generation package.
This is a complete reference to the LATEX typesetting language. (This book itself was
typeset using LATEX.)
This contains some beginner's documentation. RedHat seems to no longer ship this
with their base set of packages.
124
16. Pre-installed Documentation
This is an excellent source of layman's tutorials for setting up almost any kind of service
you can imagine. RedHat seems to no longer ship this with their base set of packages.
It is worth listing the contents here to emphasise the diversity of topics covered. These
documents are mirrored all over the Internet, hence you should have no problem finding
them with a search engine:
3Dfx-HOWTO Finnish-HOWTO Modem-HOWTO Security-HOWTO
AX25-HOWTO Firewall-HOWTO Multi-Disk-HOWTO Serial-HOWTO
Access-HOWTO French-HOWTO Multicast-HOWTO Serial-Programming-HOWTO
Alpha-HOWTO Ftape-HOWTO NET-3-HOWTO Shadow-Password-HOWTO
Assembly-HOWTO GCC-HOWTO NFS-HOWTO Slovenian-HOWTO
Bash-Prompt-HOWTO German-HOWTO NIS-HOWTO Software-Release-Practice-HOWTO
Benchmarking-HOWTO Glibc2-HOWTO Networking-Overview-HOWTO Sound-HOWTO
Beowulf-HOWTO HAM-HOWTO Optical-Disk-HOWTO Sound-Playing-HOWTO
BootPrompt-HOWTO Hardware-HOWTO Oracle-HOWTO Spanish-HOWTO
Bootdisk-HOWTO Hebrew-HOWTO PCI-HOWTO TeTeX-HOWTO
Busmouse-HOWTO INDEX.html PCMCIA-HOWTO Text-Terminal-HOWTO
CD-Writing-HOWTO INFO-SHEET PPP-HOWTO Thai-HOWTO
CDROM-HOWTO IPCHAINS-HOWTO PalmOS-HOWTO Tips-HOWTO
COPYRIGHT IPX-HOWTO Parallel-Processing-HOWTO UMSDOS-HOWTO
Chinese-HOWTO IR-HOWTO Pilot-HOWTO UPS-HOWTO
Commercial-HOWTO ISP-Hookup-HOWTO Plug-and-Play-HOWTO UUCP-HOWTO
Config-HOWTO Installation-HOWTO Polish-HOWTO Unix-Internet-Fundamentals-HOWTO
Consultants-HOWTO Intranet-Server-HOWTO Portuguese-HOWTO User-Group-HOWTO
Cyrillic-HOWTO Italian-HOWTO PostgreSQL-HOWTO VAR-HOWTO
DNS-HOWTO Java-CGI-HOWTO Printing-HOWTO VME-HOWTO
DOS-Win-to-Linux-HOWTO Kernel-HOWTO Printing-Usage-HOWTO VMS-to-Linux-HOWTO
DOS-to-Linux-HOWTO Keyboard-and-Console-HOWTO Quake-HOWTO Virtual-Services-HOWTO
DOSEMU-HOWTO KickStart-HOWTO README WWW-HOWTO
Danish-HOWTO LinuxDoc+Emacs+Ispell-HOWTO RPM-HOWTO WWW-mSQL-HOWTO
Distribution-HOWTO META-FAQ Reading-List-HOWTO XFree86-HOWTO
ELF-HOWTO MGR-HOWTO Root-RAID-HOWTO XFree86-Video-Timings-HOWTO
Emacspeak-HOWTO MILO-HOWTO SCSI-Programming-HOWTO XWindow-User-HOWTO
Esperanto-HOWTO MIPS-HOWTO SMB-HOWTO
Ethernet-HOWTO Mail-HOWTO SRM-HOWTO
These are several online books in HTML format, such as the System Administrator's
Guide (SAG), the Network Administrator's Guide (NAG), and the Linux Programmer's
Guide (LPG). RedHat seems to no longer ship this with their base set of packages.
Some packages may install documentation here so that it goes online automat-
ically if your web server is running. In older distributions this directory was
/home/httpd/html.
Apache keeps this reference material online, so that it is the default web page shown
when you install Apache for the first time. Apache is the most popular web server.
All packages installed on the system have their own individual documentation
directory. A package foo will most probably have a documentation directory
/usr/doc/foo (or /usr/share/doc/foo). This most often contains documenta-
tion released with the sources of the package, such as release information, feature
news, example code, FAQ’s that are not part of the FAQ package, etc. If you have a
particular interest in a package, you should always scan its directory in /usr/doc (or
/usr/share/doc) or, better still, download its source distribution.
These are the /usr/doc (or /usr/share/doc) directories that contained more
than a trivial amount of documentation for that package. In some cases, the package
had complete references. (For example, the complete Python references were contained
nowhere else.)
ImageMagick-5.2.2 gawk-3.0.6 jed-common-0.98.7 ncftp-3.0.1 samba-2.0.7
LPRng-3.6.24 gcc-2.96 jikes-1.12 ncurses-devel-5.1 sane-1.0.3
ORBit-0.5.3 gcc-c++-2.96 joystick-1.2.15 netpbm-9.5 sawfish-0.30.3
SDL-devel-1.1.4 gcc-chill-2.96 kaffe-1.0.6 netscape-common-4.75 sendmail
5 SVGATextMode-1.9 gcc-g77-2.96 kdelibs-devel-1.1.2 nfs-utils-0.1.9.1 sgml-tools-1.0.9
WindowMaker-0.62.1 gcc-java-2.96 kernel-doc-2.2.16 njamd-0.7.0 shadow-utils-19990827
XFree86-KOI8-R-1.0 gd-1.8.3 kernel-ibcs-2.2.16 nss_ldap-113 slang-devel-1.4.1
XFree86-doc-4.0.1 gdb-5.0 kernel-pcmcia-cs-2.2.16 ntp-4.0.99j slrn-0.9.6.2
abiword-0.7.10 gdk-pixbuf-0.8.0 krb5-devel-1.2.1 nut-0.44.0 specspo-7.0
am-utils-6.0.4s5 gedit-0.9.0 krb5-workstation-1.2.1 octave-2.0.16 squid-2.3.STABLE4
amanda-2.4.1p1 ghostscript-5.50 krb5-workstation-1.2.1 openjade-1.3 stunnel-3.8
aspell-0.32.5 gimp-1.1.25 lam-6.3.3b28 openldap-devel-1.2.11 stylesheets-1.54.13rh
audiofile-0.1.9 glade-0.5.9 libgcj-2.96 openssh-2.1.1p4 sudo-1.6.3
automake-1.4 glib-1.2.8 libglade-0.13 openssl-0.9.5a taper-6.9b
awesfx-0.4.3a glib-gtkbeta-1.3.1b libglade-devel-0.13 p2c-1.22 tcp_wrappers-7.6
bash-2.04 glibc-2.1.92 libgtop-1.0.9 pam-0.72 texinfo-4.0
Manual pages were discussed in Section 4.7. There may be other directory super struc-
tures that contain man pages — on some other U NIX systems man pages are littered
everywhere.
To convert a man page to PostScript (for printing or viewing), use, for example
(for the cp command), one of:
¨ ¥
groff -Tps -mandoc /usr/man/man1/cp.1 > cp.ps ; gv cp.ps
groff -Tps -mandoc /usr/share/man/man1/cp.1 > cp.ps ; gv cp.ps
§ ¦
Chapter 17
Overview of the U NIX Directory Layout
This chapter gives an overview of how U NIX directories are structured. It is a sim-
plistic overview and not a specification of the L INUX file-system. Chapter 35 contains
proper details of permitted directories and the kinds of files allowed within them.
Packages
L INUX systems are divided into hundreds of small packages each performing some
logical group of operations. On L INUX , many small self-contained packages inter-
operate to give greater functionality than would large self-contained pieces of software.
There is also no clear distinction between what is part of the operating system and what
is an application — everything is just a package.
Each package will unpack to many files which are placed all over the system.
Packages generally do not create major directories but unpack files to existing directo-
ries.
Note that on a newly installed system there are practically no files anywhere that
do not belong to some kind of package.
The /usr/X11R6 directory also looks similar. What is apparent here is that
all these directories contain a similar set of subdirectories. This set of subdirectories
is called a directory superstructure or superstructure &To my knowledge this is a new term not
previously used by U NIX administrators.-.
The superstructure will always contain a bin and lib subdirectory, but most
others are optional.
Each package will install under one of these superstructures, meaning that it will
unpack many files into various subdirectories of the superstructure. A RedHat package
will always install under the /usr or / superstructure, unless it is a graphical X Win-
dow System application, which installs under the /usr/X11R6 superstructure. Some
very large applications may install under a /opt/<package-name> superstructure,
and home-made packages usually install under the /usr/local/ superstructure. The
directory superstructure under which a package installs is often called the installation
prefix. Packages almost never install files across different superstructures &An excep-
tion to this are configuration files, which are mostly stored in /etc/.-.
Typically, most of the system is under /usr. This directory can be read-only,
since packages should never need to write to it — any writing is done
under /var or /tmp (/usr/var and /usr/tmp are often just symlinked to /var and
/tmp respectively). The small amount under / that is not part of another superstruc-
ture (usually about 40 megabytes) performs essential system administration functions.
These are the commands needed to bring up or repair the system in the absence of /usr.
The list of superstructure sub-directories and their descriptions is as follows:
bin Binary executables. Usually all bin directories are in the PATH environment vari-
able so that the shell will search all these directories for binaries.
sbin Superuser binary executables. These are programs for system administration only.
Only the super user will have these in their PATH.
lib Libraries. All other data needed by programs goes in here. Most packages have
their own subdirectory under lib to store data files in. Dynamically Linked
Libraries (DLL's, or .so files) &Executable program code shared by more than one program
in the bin directory, to save disk space and memory.- are stored directly in lib.
src C source files. These are sources to the kernel or locally built packages.
tmp Temporary files. A convenient place for running programs to create a file for tem-
porary use.
You can get L INUX to run on a 1.44 megabyte floppy disk if you trim all unneeded
files off an old Slackware distribution with a 2.0.3x kernel. You can compile a small
2.0.3x kernel to about 400 kilobytes (compressed). A file-system can be reduced to 2–3
megabytes of absolute essentials, and when compressed will fit into 1 megabyte. If the
total is under 1.44 megabytes, then you have your L INUX on one floppy. The file-list
might be as follows (includes all links):
/bin /etc /lib /sbin /var
/bin/sh /etc/default /lib/ld.so /sbin/e2fsck /var/adm
/bin/cat /etc/fstab /lib/libc.so.5 /sbin/fdisk /var/adm/utmp
/bin/chmod /etc/group /lib/ld-linux.so.1 /sbin/fsck /var/adm/cron
/bin/chown /etc/host.conf /lib/libcurses.so.1 /sbin/ifconfig /var/spool
/bin/cp /etc/hosts /lib/libc.so.5.3.12 /sbin/iflink /var/spool/uucp
/bin/pwd /etc/inittab /lib/libtermcap.so.2.0.8 /sbin/ifsetup /var/spool/uucp/SYSLOG
/bin/dd /etc/issue /lib/libtermcap.so.2 /sbin/init /var/spool/uucp/ERRLOG
/bin/df /etc/utmp /lib/libext2fs.so.2.3 /sbin/mke2fs /var/spool/locks
/bin/du /etc/networks /lib/libcom_err.so.2 /sbin/mkfs /var/tmp
/bin/free /etc/passwd /lib/libcom_err.so.2.0 /sbin/mkfs.minix /var/run
/bin/gunzip /etc/profile /lib/libext2fs.so.2 /sbin/mklost+found /var/run/utmp
/bin/gzip /etc/protocols /lib/libm.so.5.0.5 /sbin/mkswap
/bin/hostname /etc/rc.d /lib/libm.so.5 /sbin/mount /home/user
/bin/login /etc/rc.d/rc.0 /lib/cpp /sbin/route
/bin/ls /etc/rc.d/rc.K /sbin/shutdown /mnt
/bin/mkdir /etc/rc.d/rc.M /usr /sbin/swapoff
/bin/mv /etc/rc.d/rc.S /usr/adm /sbin/swapon /proc
/bin/ps /etc/rc.d/rc.inet1 /usr/bin /sbin/telinit
/bin/rm /etc/rc.d/rc.6 /usr/bin/less /sbin/umount /tmp
/bin/stty /etc/rc.d/rc.4 /usr/bin/more /sbin/agetty
/bin/su /etc/rc.d/rc.inet2 /usr/bin/sleep /sbin/update /dev/<various-devices>
/bin/sync /etc/resolv.conf /usr/bin/reset /sbin/reboot
/bin/zcat /etc/services /usr/bin/zless /sbin/netcfg
/bin/dircolors /etc/termcap /usr/bin/file /sbin/killall5
/bin/mount /etc/motd /usr/bin/fdformat /sbin/fsck.minix
/bin/umount /etc/magic /usr/bin/strings /sbin/halt
/bin/bash /etc/DIR_COLORS /usr/bin/zgrep /sbin/badblocks
/bin/domainname /etc/HOSTNAME /usr/bin/nc /sbin/kerneld
/bin/head /etc/mtools /usr/bin/which /sbin/fsck.ext2
/bin/kill /etc/ld.so.cache /usr/bin/grep
/bin/tar /etc/psdevtab /usr/sbin
/bin/cut /etc/mtab /usr/sbin/showmount
/bin/uname /etc/fastboot /usr/sbin/chroot
/bin/ping /usr/spool
/bin/ln /usr/tmp
/bin/ash
Note that the etc directory differs slightly from a RedHat distribution. The system
startup files in /etc/rc.d are greatly simplified under Slackware.
The /lib/modules directory has been stripped for the creation of this floppy.
/lib/modules/2.0.36 would contain dynamically loadable kernel drivers (mod-
ules). Instead, all needed drivers are compiled into the kernel for simplicity.
At some point, creating a single floppy distribution should be attempted as an
exercise. This would be most instructive to a serious system administrator. At the very
least, the reader should look through all of the commands in the bin and sbin
directories above and browse through the man pages of any of those that are
unfamiliar.
The above file-system comes from the morecram-1.3 package available at:
¨ ¥
http://rute.sourceforge.net/morecram-1.3.tar.gz
§ ¦
and can be downloaded to give you a very useful rescue and setup disk.
Chapter 18
U NIX devices
/dev/hda is not really a file at all. When you read from it, you are actually reading
directly from the first physical hard disk of your machine. /dev/hda is known as a
device file, and all of them are stored under the /dev directory.
Device files allow access to hardware. If you have a sound card installed and con-
figured, you can try:
¨ ¥
cat /dev/dsp > my_recording
§ ¦
which records from the microphone into the file my_recording. A subsequent
cat my_recording > /dev/dsp plays the sound back through your speakers (note
that this will not always work, since the recording volume or the recording speed may
not be set correctly).
If no programs are currently using your mouse, you can also try:
¨ ¥
cat /dev/mouse
§ ¦
If you now move the mouse, the mouse protocol commands will be written directly to
your screen (it will look like garbage). This is an easy way to see if your mouse is work-
ing. On occasion this test doesn’t work if some command has previously configured
the serial port in some odd way. In this case, also try:
¨ ¥
cu -s 1200 -l /dev/mouse
§ ¦
At a lower level, programs that access device files do so in two basic ways:
• They read and write to the device to send and retrieve bulk data. (Much like
less and cat above).
• They use the C ioctl (IO Control) function to configure the device. (In the case
of the sound card, this might set mono versus stereo, recording speed etc.)
Because every kind of device that one can think of can be twisted to fit these two
modes of operation (except for network cards), U NIX’s scheme has endured since its
inception and is considered the ubiquitous method of accessing hardware.
18.2 Block and character devices
Hardware devices can generally be categorised into random access devices, like disk
and tape drives, and serial devices, like mice, sound cards and terminals.
Random access devices are usually accessed in large contiguous blocks of data
that are stored persistently. They are read from in discrete units (for most disks, 1024
bytes at a time). These are known as block devices. Doing an ls -l /dev/hdb shows
that a hard disk is a block device by the b on the far left of the listing:
¨ ¥
brw-r----- 1 root disk 3, 64 Apr 27 1995 /dev/hdb
§ ¦
Serial devices on the other hand are accessed one byte at a time. Data can be read
or written only once. For example, after a byte has been read from your mouse, the
same byte cannot be read by some other program. These are called character devices
and are indicated by a c on the far left of the listing. Your /dev/dsp (Digital Signal
Processor — i.e. sound card) device looks like:
¨ ¥
crw-r--r-- 1 root sys 14, 3 Jul 18 1994 /dev/dsp
§ ¦
18.3 Major and Minor device numbers
Devices are divided into sets called major device numbers. For instance, all SCSI disks
are major number 8. Further, each individual device has a minor device number
(/dev/sda, for example, is minor device 0). The major and minor device numbers are
what identify the device to the kernel. The file name of the device is really arbitrary and
is chosen for convenience and consistency. You can see the major and minor device num-
bers (8, 0) in the ls listing for /dev/sda:
¨ ¥
brw-rw---- 1 root disk 8, 0 May 5 1998 /dev/sda
§ ¦
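GNU stat can print the major and minor numbers directly (the %t and %T format
sequences, in hexadecimal). /dev/null, for instance, is character device 1, 3 on any
L INUX system:

```shell
stat -c '%t,%T' /dev/null
# prints: 1,3
ls -l /dev/null     # the c on the far left marks it as a character device
```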
A list of common devices and their descriptions follows. The major num-
bers are shown in parentheses. The complete reference for devices is the file
/usr/src/linux/Documentation/devices.txt.
/dev/hd?? hd stands for Hard Disk, but refers here only to IDE devices — i.e. common hard disks. The first letter after the hd dictates the physical disk drive:
/dev/hda (3) First drive, or primary master.
/dev/hdb (3) Second drive, or primary slave.
/dev/hdc (22) Third drive, or secondary master.
/dev/hdd (22) Fourth drive, or secondary slave.
When accessing any of these devices, you would be reading raw from the actual
physical disk starting at the first sector of the first track, sequentially, until the
last sector of the last track.
Partitions &With all operating systems, disk drives are divided into sections called partitions. A
typical disk might have 2 to 10 partitions. Each partition acts as a whole disk on its own, giving
the effect of having more than one disk. For instance, you might have Windows installed on one
partition, and L INUX installed on another.- are named /dev/hda1, /dev/hda2 etc.
indicating the first, second etc. partition on physical drive a.
/dev/sd?? (8) sd stands for SCSI Disk, the high-end drives mostly used by servers.
   sda is the first physical disk probed, and so on. Probing goes by SCSI ID, a
   completely different system from that of IDE devices. /dev/sda1 is the first
   partition on the first drive, etc.
/dev/ttyS? (4) These are serial devices, numbered from 0 up. /dev/ttyS0
   is your first serial port (COM1 under DOS). If you have a multi-port card, these
   can go up to 32, 64 etc.
18.4 Miscellaneous devices
/dev/fd<l><m><nnnn> (2) Floppy drives. The device name encodes the drive, the
   format and the size of the format:

   l The drive number: 0 is the A: drive, 1 the B: drive.

   m The format of the disk:
      d   “double density”, “360kB” 5.25 inch
      h   “high density”, “1.2MB” 5.25 inch
      q   “quad density” 5.25 inch
      D   “double density”, “720kB” 3.5 inch
      H   “high density”, “1.44MB” 3.5 inch
      E   extra density 3.5 inch
      u   any 3.5 inch floppy. Note that u is now replacing D, H and E, thus
          leaving it up to the user to decide if the floppy has enough density
          for the format.

   nnnn The size of the format: 360, 410, 420, 720, 800, 820, 830, 880, 1040,
      1120, 1200, 1440, 1476, 1494, 1600, 1680, 1722, 1743, 1760, 1840, 1920,
      2880, 3200, 3520 or 3840. With D, H and E, 3.5 inch floppies only have
      devices for the sizes that are likely to work. For instance there is no
      /dev/fd0D1440, because double density disks won’t manage 1440kB.
      /dev/fd0H1440 and /dev/fd0H1920 are probably the ones you are most
      interested in.
/dev/par? (6) Parallel port. /dev/par0 is your first parallel port or LPT1 under DOS.
/dev/random Random number generator. Reading from this device gives pseudo-random
   numbers.
/dev/zero (1) Produces zero bytes, as many of them as you need. This is useful
   if you need to generate a block of zeros for some reason. Use dd (see below) to
   read a specific number of zeros.
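Since /dev/zero produces as many zero bytes as are read from it, dd's count and bs options bound the read. A quick sketch (output file name arbitrary):

```shell
# Read exactly four 1024-byte blocks of zeros into a file:
dd if=/dev/zero of=/tmp/zeros.bin bs=1024 count=4 2>/dev/null
wc -c < /tmp/zeros.bin    # 4096
```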
/dev/null (1) Null device. Reads nothing. Anything you write to the device is dis-
carded. This is very useful for discarding output.
/dev/sg? SCSI Generic. This is a general purpose SCSI command interface for devices
like scanners.
/dev/fb? (29) Frame buffer. This represents the kernel’s attempt at a graphics driver.
/dev/tty? (4) Virtual console. This is the terminal device for the virtual console itself
and is numbered /dev/tty1 through /dev/tty63.
/dev/tty?? (3) and /dev/pty?? (2) Other TTY devices used for emulating a terminal.
   These are called pseudo-TTYs and are identified by two lowercase letters and
   numbers, such as ttyq3. To non-developers, these are mostly of theoretical
   interest.
Recommended links
It is recommended that these links exist on all systems:
18.5 dd, tar and tricks with block devices
dd probably originally stood for disk dump. It is actually just like cat except it can
read and write in discrete blocks. It essentially reads and writes between devices while
converting the data in some way. It is generally used in one of these ways:
dd if=<in-file> of=<out-file> [bs=<block-size>] \
   [count=<number-of-blocks>] [seek=<output-offset>] \
   [skip=<input-offset>]
dd works by specifying an input file and an output file with the if= and of=
options. If the of= option is omitted, then dd writes to stdout. If the if= option is
omitted, then dd reads from stdin.
Note that dd is an unforgiving and destructive command that should be used with
caution.
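The bs, count, skip and seek options can be tried harmlessly on ordinary files before pointing dd at a real device. A sketch (file names arbitrary):

```shell
# An 8-byte input file:
printf 'abcdefgh' > /tmp/dd-in

# Copy 3 blocks of 1 byte each, skipping the first 2 input blocks:
dd if=/tmp/dd-in of=/tmp/dd-out bs=1 skip=2 count=3 2>/dev/null
cat /tmp/dd-out    # cde
```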
To create a new RedHat boot floppy, find the boot.img file on ftp.redhat.com,
and with a new floppy, do:
dd if=boot.img of=/dev/fd0
This will write the raw disk image directly to the floppy disk.
Erasing disks
If you have ever tried to repartition a L INUX disk back into a DOS/Windows disk,
you will know that DOS/Windows FDISK has bugs in it that prevent it from recreating
the partition table. A quick:
dd if=/dev/zero of=/dev/hda bs=1024 count=10240
will write zeros to the first ten megabytes of your first IDE drive. This will wipe out
the partition table as well as any file-system and give you a “brand new” disk.
gives x86 boot sector, system )k?/bIHC, FAT (12 bit) for DOS floppies.
Duplicating a disk
If you have two IDE drives of identical size, provided that you are sure that they
contain no bad sectors and that neither is mounted, you can do
dd if=/dev/hdc of=/dev/hdd
to copy the entire disk and avoid having to install an operating system from scratch.
It doesn’t matter what is on the original (Windows, L INUX or whatever); since each
sector is identically duplicated, the new system will work perfectly.
(If they are not the same size, you will have to use tar or mirrordir to replicate the
filesystem exactly.)
Floppy backups
tar can be used to back up to any device. Consider periodic backups to an ordinary
IDE drive instead of a tape. Here we back up to the secondary slave:
tar -cvzf /dev/hdd /bin /boot /dev /etc /home /lib /sbin /usr /var
Tape backups
This rewinds SCSI tape 0 and archives the /home directory onto it. You should not
try to use compression with tape drives, because they are error-prone, and a single
error could make the archive irrecoverable. The mt command stands for magnetic
tape and is used to control generic SCSI tape devices. See also mt(1).
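The sequence described can be sketched as follows; /dev/st0 is assumed to be the first SCSI tape device, and an ordinary file stands in for the tape here so that the sequence can be tried without hardware:

```shell
TAPE=/tmp/fake-tape.tar              # stand-in for /dev/st0
mkdir -p /tmp/demo/home
echo data > /tmp/demo/home/file.txt

# mt -f /dev/st0 rewind              # on a real tape: rewind first
tar -cvf "$TAPE" -C /tmp/demo home   # -c create, -v verbose; no -z compression
tar -tf "$TAPE"                      # list the archive to verify
```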
If you don’t want to see any program output, just append > /dev/null to the
command. For example, we aren’t often interested in the output of make &make will be
discussed later.-, only the error messages:
make > /dev/null

And,

make >& /dev/null
also absorbs all error messages. /dev/null finds innumerable uses in shell scripting
to suppress the output of a command or to feed a command dummy (empty) input.
/dev/null is a safe file from a security point of view and is often used where a file is
required for some feature in a configuration script that you would like disabled. For
instance, specifying a user’s shell as /dev/null inside the password file will certainly
prevent insecure use of a shell, and is an explicit way of saying that that account does
not allow shell logins.
18.6 Creating devices with mknod and /dev/MAKEDEV
Although all devices are listed in the /dev directory, you can create a device anywhere
in the file system using the mknod command:
mknod [-m <mode>] <file-name> [b|c] <major-number> <minor-number>
The letters b and c are for creating a block or character device respectively.
To demonstrate, try
mknod -m 0600 ~/my-floppy b 2 0
ls -al /dev/fd0 ~/my-floppy
Note that a program giving the error “ENOENT: No such file or directory”
normally means that the device file is missing, whereas “ENODEV: No such
device” normally means that the kernel does not have the driver configured or
loaded.
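Creating block or character nodes requires root privileges, but mknod can also create a FIFO (type p, which needs no major or minor number) as an ordinary user — a harmless way to try the command:

```shell
mknod /tmp/my-fifo p    # p creates a named pipe (FIFO)
ls -l /tmp/my-fifo      # the listing's first column starts with 'p'
rm /tmp/my-fifo
```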
Chapter 19
Partitions, file-systems,
formatting, mounting
Physical disks are divided into partitions &See footnote on page 137-. Information
about how the disk is partitioned is stored in a partition table, which is a small area of
the disk separate from the partitions themselves.
19.1 The physical disk structure

The physical drive itself usually comprises several actual disks, of which both sides
are used. The sides are labelled 0, 1, 2, 3 etc. and are also called heads, because there is
one magnetic head per side to do the actual reading and writing. Each side/head has
tracks, and each track is divided into segments called sectors. Each sector typically
holds 512 bytes. The total amount of space on the drive in bytes is therefore:

(number of sides) × (tracks per side) × (sectors per track) × 512
A single track and all the tracks of the same diameter (on all the sides) are called a
cylinder. Disks are normally talked about in terms of “cylinders and sectors” instead of
“sides, tracks and sectors”. Partitions are (usually) divided along cylinder boundaries.
Hence disks do not have arbitrarily sized partitions; rather the size of the partition is
usually a multiple of the amount of data held in a single cylinder. Partitions therefore
have a definite inner and outer diameter.
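The capacity formula is easy to check with shell arithmetic. For instance, the largest disk addressable within the 1024-cylinder limit discussed below, assuming 16 heads and 63 sectors per track, is:

```shell
# bytes = sides (heads) x cylinders x sectors per track x 512
echo $(( 16 * 1024 * 63 * 512 ))    # 528482304, i.e. about 504 megabytes
```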
[Figure: the physical disk structure — sides 0 to 5, with a cylinder, a sector and a partition marked.]
LBA Mode
The above system is quite straightforward, except for the curious limitation that
partition tables have only 10 bits in which to store the partition’s cylinder offset. This
means that no disk can have more than 1024 cylinders. This limit was overcome by
multiplying up the number of heads in software to reduce the number of cylinders
&Called LBA (Large Block Addressing) mode-, hence portraying a disk of impossible
proportions. The user, however, need never be concerned that the physical disk is
organised completely differently.
Extended partitions
The partition table has room for only four partitions. To have more partitions, one of
these four partitions can be divided into many smaller partitions, called logical par-
titions. The original four are then called primary partitions. If a primary partition is
subdivided in this way, then it is known as an extended primary or extended partition.
Typically, the first primary partition will be small (/dev/hda1, say). The second
primary partition will fill the rest of the disk as an extended partition (/dev/hda2,
say). The entries for /dev/hda3 and /dev/hda4 in the partition table will be left blank.
19.2 Partitioning a new disk
A new disk will have no partition information. Typing fdisk will start an interactive
partitioning utility. The command,

fdisk /dev/hda

starts such a session on the primary master. On a new disk, fdisk will complain that
the device contains no valid partition table, in which case you can just start adding
partitions.
If you have a SCSI disk the exact same procedure applies. The only difference is
that /dev/hd? changes to /dev/sd?.
A partition session with fdisk is now shown:
[root@cericon /root]# fdisk /dev/hda
Device contains neither a valid DOS partition table, nor Sun or SGI disklabel
Table 19.1: What directories should have their own partitions, and their partitions’
sizes
. . . of which there are clearly none. Now n lets us add a new partition:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
We wish to define the first physical partition starting at the first cylinder:
Partition number (1-4): 1
First cylinder (1-788, default 1): 1
We would like an 80 Megabyte partition. fdisk calculates the last cylinder automatically with:
Last cylinder or +size or +sizeM or +sizeK (1-788, default 788): +80M
Our next new partition is going to span the rest of the disk, and will be an extended
partition:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 2
First cylinder (12-788, default 12): 12
Last cylinder or +size or +sizeM or +sizeK (12-788, default 788): 788
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (34-788, default 34): 34
Last cylinder or +size or +sizeM or +sizeK (34-788, default 788): +200M
The default partition type is a single byte that the operating system looks at to
determine what kind of file-system is stored there. Entering l lists all known types:
Command (m for help): l
fdisk will set the type to Linux by default. We only need to explicitly set the type of
the swap partition:
Command (m for help): t
Partition number (1-9): 5
Hex code (type L to list codes): 82
Changed system type of partition 5 to 82 (Linux swap)
Now we need to set the bootable flag on the first partition, since some BIOSes will not
boot a disk without at least one bootable partition:
Command (m for help): a
Partition number (1-10): 1
Command (m for help): p
At this point, nothing has been committed to disk. We write it as follows (Note: this
step is irreversible):
Command (m for help): w
The partition table has been altered!
Even having written the partition table, fdisk may give a warning that the kernel
does not know about the new partitions. In this case you will need to reboot. For the
above partitioning, the kernel will give the following information at boot time:
Partition check:
 hda: hda1 hda2 < hda5 hda6 hda7 hda8 hda9 >
The < . . . > shows that partition hda2 is extended and is subdivided into five smaller
partitions.
19.3 Formatting devices

Disk drives are usually read in blocks of 1024 bytes (two sectors). From the point of
view of anyone accessing the device, blocks are stored consecutively — there is no need
to think about cylinders or heads — so any program can read the disk as though
it were a linear tape. Try
less /dev/hda1
less -f /dev/hda1
Now a directory structure with files of arbitrary size has to be stored in this
contiguous partition. This poses the problem of what to do with a file that gets deleted
and leaves a data “hole” in the partition, or a file that has to be split into parts because
there is no single contiguous space big enough to hold it. Files also have to be indexed
in such a way that they can be found quickly (consider that there can easily be 10,000
files on a system), and U NIX’s symbolic/hard links and device files also have to be
stored.
To cope with this complexity, operating systems have a format for storing files
called the file-system (fs). Like MSDOS’s FAT file-system or Windows’ FAT32 file-
system, L INUX has a file-system called the 2nd extended file-system, or ext2 &There
are three other file-systems that may soon become standards: SGI’s XFS, ext3fs, and reiserfs.
Their purpose is to support fast and reliable recovery in the event of a power failure. This is called
journaling, because it works by pre-writing disk writes to a separate table.-.
mke2fs
To create a file-system on a blank partition, the command mkfs or one of its variants is
used. To create a L INUX ext2 file-system on the first partition of the primary master:
mkfs -t ext2 -c /dev/hda1
or, alternatively
mke2fs -c /dev/hda1
The -c option means to check for bad blocks by reading through the entire disk first.
This is a read-only check and will cause unreadable blocks to be flagged as such and
not be used. To do a full read-write check, use the badblocks command. This will
write to and verify every bit in that partition. Although the -c option should always
be used on a new disk, doing a full read-write test is probably pedantic. For the above
partition, this would be:
badblocks -o blocks-list.txt -s -w /dev/hda1 88326
mke2fs -l blocks-list.txt /dev/hda1
New kinds of removable devices are being released all the time. Whatever the device,
the same formatting procedure is used. Most are IDE compatible, which means you
can access them through /dev/hd?.
The following examples are for a parallel port IDE disk drive, a parallel port ATAPI
CDROM drive, a parallel port ATAPI disk drive, and your “A:” floppy drive,
respectively:
¨ ¥
mke2fs -c /dev/pda1
mke2fs -c /dev/pcd0
mke2fs -c /dev/pf0
mke2fs -c /dev/fd0
§ ¦
./MAKEDEV -v fd0
superformat /dev/fd0H1440
superformat /dev/fd0H1690
superformat /dev/fd0H1920
Note that these are “long filename” floppies (VFAT), not old 13 character filename
MSDOS floppies.
Most users will only ever have used a 3.5 inch floppy as a “1.44MB” floppy. In fact
the disk media and magnetic head can write more densely than this specification
allows, fitting 24 sectors per track instead of the usual 18. This is why there is
more than one device file for the same drive. Some inferior disks will, however, give
errors when formatted that densely — superformat will show errors when this
happens.
See page 138 for how floppy devices are named, and their many respective formats.
The mkswap command formats a partition to be used as a swap device. For our disk:
mkswap -c /dev/hda5
Once it is formatted, the kernel can be signalled to use that partition as a swap
partition with
swapon /dev/hda5
Swap partitions cannot be larger than 128MB, although you can have as many of them
as you like. You can swapon many different partitions simultaneously.
19.4 Mounting devices
The question of how to access files on an arbitrary disk (without C:, D: etc. notation,
of course) is answered here.
In U NIX, there is only one root file-system that spans many disks. Different di-
rectories may actually exist on a different physical disk.
Mounting a device onto a directory is done with,

mount [-t <fstype>] [-o <options>] <device> <directory>

The -t option says what kind of file-system it is, and can often be omitted since
L INUX can auto-detect most file-systems. <fstype> can be one of adfs, affs,
autofs, coda, coherent, devpts, efs, ext2, hfs, hpfs, iso9660, minix, msdos,
ncpfs, nfs, ntfs, proc, qnx4, romfs, smbfs, sysv, ufs, umsdos, vfat, xenix or
xiafs. The most common ones are discussed below. The -o option is not usually
used. See the mount(8) man page for all possible options.
Put your distribution CDROM disk into your CDROM drive and mount it with,
mount -t iso9660 -o ro /dev/hdb /mnt/cdrom
ls /mnt/cdrom
(Your CDROM might be /dev/hdc or /dev/hdd, however — in that case you should
make a soft link /dev/cdrom pointing to the correct device.) Now cd to your
/mnt/cdrom directory. You will notice that it is no longer empty, but “contains” the
CDROM’s files. What is happening is that the kernel is redirecting all lookups in the
directory /mnt/cdrom to read from the CDROM disk. You can browse around these
files as though they were already copied onto your hard drive. This is what makes
U NIX cool.
When you are finished with the CDROM unmount it with,
umount /dev/hdb

and eject it with,

eject /dev/hdb
Instead of using mtools, you could mount the floppy disk with:
mkdir /mnt/floppy
mount -t vfat /dev/fd0 /mnt/floppy
Be sure to umount the floppy before removing it, in order that cached data is
committed to the disk. Failing to umount a floppy before ejecting it will probably
cause its file-system to become corrupted.
Mounting a Windows partition can also be done with the vfat file-system, and NT
partitions (read-only) with the ntfs file-system. FAT32 is auto-detected and
supported. For example,
mkdir /windows
mount -t vfat /dev/hda1 /windows
mkdir /nt
mount -t ntfs /dev/hda2 /nt
19.5 File-system repair: fsck

fsck stands for file system check. fsck scans the file-system, reporting and fixing
errors. Errors normally occur because the kernel halted before umounting the
file-system. In this case, it may have been in the middle of a write operation that left
the file-system in an incoherent state, usually because of a power failure. The
file-system is then said to be unclean.
It is used as follows:
fsck [-V] [-a] [-t <fstype>] <device>
although the -t option can be omitted as L INUX auto-detects the file-system. Note
that you cannot run fsck on a mounted file-system.
fsck actually just runs a program specific to that file-system. In the case of ext2,
the command e2fsck (also known as fsck.ext2) is run. See e2fsck(8) for
exhaustive details.
When doing an interactive check (without the -a option, or with the -r option
— the default), various questions may be asked of you about fixing and saving
things. It is best to save stuff if you aren’t sure; it will be placed in the lost+found
directory at the top of that particular device: in the hierarchy just below, there would
be a /lost+found, /tmp/lost+found, /var/lost+found, /usr/lost+found etc.
After doing a check on, say, /dev/hda9, check the /home/lost+found directory and
delete what you think you don’t need. These will usually be temporary files and log
files (files that change often). It’s extremely rare to lose real files because of an unclean
un-mount.
19.6 Filesystem errors on boot

Just read Section 19.5 again and run fsck on the file-system that reported the error.
19.7 Automatic mounts: fstab

Above, manual mounts are explained for new and removable disks. It is of course
necessary for file-systems to be mounted automatically at boot time. What gets
mounted and how is specified in the configuration file /etc/fstab.
It will usually look something like this for the disk we partitioned above:
/dev/hda1 / ext2 defaults 1 1
/dev/hda6 /tmp ext2 defaults 1 2
For the moment we are interested in the first six lines only. The first three fields
(columns) dictate the partition, the directory where it is to be mounted, and the file-
system type, respectively. The fourth field gives options (the -o option to mount).
The fifth field tells whether the file-system contains real files. It is used by the
dump program to decide whether the file-system should be backed up. This is not
commonly used.
The last field gives the order in which an fsck should be done on the partitions.
The / partition should come first with a 1, and all other partitions should come
directly after. Placing 2s everywhere else ensures that partitions on different disks can
be checked in parallel, which speeds things up slightly at boot time.
The floppy and cdrom entries allow you to use an abbreviated form of the
mount command. mount will just look up the corresponding directory and file-system
type from /etc/fstab. Try
mount /dev/cdrom
These entries also have the user option, which allows ordinary users to mount the
devices. The ro option once again says to mount the CDROM read-only, and the
noauto option tells mount not to mount these file-systems at boot time. (See
below.)
proc is a kernel info database that looks like a file-system. For example
/proc/cpuinfo is not any kind of file that actually exists on a disk somewhere. Try
cat /proc/cpuinfo.
Many programs use /proc to get information on the status and configuration of
your machine. More on this will be discussed in Section 42.4.
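Because /proc entries behave like ordinary files, the usual tools read them directly. For example:

```shell
grep -c '^processor' /proc/cpuinfo    # number of CPUs seen by the kernel
cat /proc/version                     # the running kernel's version string
```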
The devpts file-system is another pseudo-file-system that generates terminal
master/slave pairs for programs. This is mostly of concern to developers.
mount -t proc /proc /proc
This is an exception to the normal mount usage. Note that all common L INUX
installations require /proc to be mounted at boot time. The only time you will need
this command is during manual startup, or when doing a chroot. (See page 171.)
19.8 RAM and loopback devices

A RAM device is a block device that can be used as a disk but really points to a
physical area of RAM.
A loopback device is a block device that can be used as a disk, but really points to
an ordinary file somewhere.
If your imagination isn’t already running wild, consider creating a floppy disk
with a file-system, files and all, without actually having a floppy disk, and then
writing the results to a file that can be dd’d to a floppy at any time. You can do this
with loopback and RAM devices.
You can have a whole other L INUX system inside a 500MB file on a Windows
partition and boot into it — thus obviating having to repartition a Windows machine
just to run L INUX .
The operations are quite trivial. To create an ext2 floppy inside a 1440kB file, do:
dd if=/dev/zero of=~/file-floppy count=1440 bs=1024
losetup /dev/loop0 ~/file-floppy
mke2fs /dev/loop0
mkdir ~/mnt
mount /dev/loop0 ~/mnt
ls -al ~/mnt
When you are finished copying the files that you want into /mnt, merely
umount ~/mnt
losetup -d /dev/loop0
The image can then be written to an actual floppy at any time with:

dd if=~/file-floppy of=/dev/fd0 count=1440 bs=1024
CDROM files
Another trick is to move your CDROM to a file for high speed access. Here we use a
shortcut instead of the losetup command:
dd if=/dev/cdrom of=some_name.iso
mount -t iso9660 -o ro,loop=/dev/loop0 some_name.iso /cdrom
This is useful when you log in in single-user mode with no write access to your root
partition.
The kernel caches writes in memory for performance reasons. These flush every so
often, but you sometimes want to force a flush. This is done simply with:
sync
Chapter 20
Advanced Shell Scripting
This chapter will complete your knowledge of sh shell scripting begun in Chapter 7
and expanded on in Chapter 9. These three chapters represent almost everything you
can do with the bash shell.
The special operators && and || can be used to execute commands conditionally in
sequence. For instance:
grep '^harry:' /etc/passwd || useradd harry
The || means to only execute the second command if the first command returns an
error. In the above case, grep will return an exit code of 1 if harry is not in the
/etc/passwd file, causing useradd to be executed.
An alternate representation is
grep -v '^harry:' /etc/passwd && useradd harry
where the -v option inverts the sense of matching of grep. && has the opposite
meaning to ||: i.e., execute the second command only if the first succeeds.
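The exit-code behaviour of && and || is easy to observe with the true and false commands:

```shell
false || echo 'runs: left side failed'      # || runs the right side
false && echo 'skipped'                     # && does not
true  || echo 'skipped'                     # || does not
true  && echo 'runs: left side succeeded'   # && runs the right side
```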
Adept script writers often string together many commands to create the most succinct
representation of an operation:
grep -v '^harry:' /etc/passwd && useradd harry || \
20.2 Special parameters: $?, $* etc.
The shell treats several parameters specially. These parameters may only
be referenced; assignment to them is not allowed.
$0 Expands to the name of the shell or shell script. This is set at shell initialization.
   If bash is invoked with a file of commands, $0 is set to the name of that file. If
   bash is started with the -c option, then $0 is set to the first argument after the
   string to be executed, if one is present. Otherwise, it is set to the file name used
   to invoke bash, as given by argument zero. (Note that basename $0 is a useful
   way to get the name of the current command without the leading path.)
$_ At shell startup, set to the absolute file name of the shell or shell script being
   executed, as passed in the argument list. Subsequently, it expands to the last
   argument to the previous command, after expansion. It is also set to the full file
   name of each command executed and placed in the environment exported to
   that command. When checking mail, this parameter holds the name of the mail
   file currently being checked.
20.3 Expansion
Expansion refers to the way bash modifies the command-line before executing it.
bash performs several textual modifications to the command-line, proceeding in the
following order:
Brace expansion We have already shown how you can use, for example, the shorthand
   touch file_{one,two,three}.txt to create multiple files file_one.txt,
   file_two.txt and file_three.txt. This is known as brace expansion and
   occurs before any other kind of modification to the command-line.
Tilde expansion The special character ~ is replaced with the full path contained in
   the HOME environment variable, or the home directory of the user’s login (if
   $HOME is null). ~+ is replaced with the current working directory and ~- is
   replaced with the most recent previous working directory. The latter two are
   rarely used.
Parameter expansion This refers to expanding anything that begins with a $. Note
   that $VAR and ${VAR} do exactly the same thing, except that in the latter case
   VAR can be followed by non-“whole-word” characters that would normally
   confuse bash.
   There are several parameter expansion tricks that you can use to do string
   manipulation. Most shell programmers never bother with these, probably
   because they are not well supported by other U NIX systems.
${VAR:-default} This will result in $VAR unless VAR is unset or null, in which
case it will result in default.
${VAR:=default} Same as previous except that default is also assigned to VAR if
it is empty.
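The difference between :- and := is that the latter also assigns. A quick sketch:

```shell
unset VAR
echo "${VAR:-fallback}"    # prints "fallback"; VAR remains unset
echo "${VAR:=fallback}"    # prints "fallback" and assigns it to VAR
echo "$VAR"                # now prints "fallback"
```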
Finally The last modifications to the command-line are the splitting of the
   command-line into words according to the white-space between them. The IFS
   (Internal Field Separator) environment variable determines what characters
   delimit command-line words (usually white-space). Having divided the
   command-line into words, path names are expanded according to glob
   wildcards. Consult bash(1) for a comprehensive description of the
   pattern-matching options that most people don’t know about.
20.4 Built-in commands

Many commands invoke built-in functionality of bash or are interpreted specially by
it. These do not invoke an executable off the file-system. Some of these were
described in Chapter 7, and a few more are discussed here. For an exhaustive
description, consult bash(1).
: A single colon by itself does nothing. It is useful for a “no-operation” line such as:
if <command> ; then
	:
else
	echo "<command> was unsuccessful"
fi
source filename args ... Reads filename into the current shell environment.
   This is useful for executing a shell script where any environment variables set
   by that script must be preserved.

. filename args ... A single dot is the same as the source command.
   Some distributions alias the mv, cp and rm commands to the same commands
   with the -i (interactive) option set. This prevents files from being deleted
   without prompting, but can be irritating for the administrator. See your
   ~/.bashrc file for these settings.
exec command arg ... Begins executing command under the same process ID as the
   current script. This is most often used for shell scripts that are mere “wrapper”
   scripts for real programs. The wrapper script sets any environment variables
   and then execs the real program binary on its last line. exec should never
   return.
local var=value Assigns a value to a variable. The resulting variable will be visible
only within the current function.
pushd directory and popd These two commands are useful for jumping around
   directories. pushd can be used instead of cd, but unlike cd, the directory is
   saved onto a list of directories. At any time, entering popd will return you to
   the previous directory. This is nice for navigation, since it remembers where
   you have been, to any level.
printf format args ... This is like the C printf function. It outputs to the terminal
like echo but is useful for more complex formatting of output. See printf(3)
for details and try printf "%10.3e\n" 12 as an example.
set Prints the value of all shell variables. See also the section on set below.
times Print the accumulated user and system times for the shell and for processes
run from the shell.
ulimit Prints and sets various user resource limits like memory usage limits and
CPU limits. See bash(1) for details.
wait PID Pauses until background process with process ID of PID has exited, then
returns the exit code of the background process.
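For example:

```shell
sleep 1 &            # start a background job
BG_PID=$!            # $! holds the PID of the last background job
wait $BG_PID         # block until it finishes
echo "background job $BG_PID exited with status $?"
```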
20.5 Trapping signals — the trap command
You will often want to make your script perform certain actions in response to a signal.
A list of signals can be found on Page 79. To trap a signal, create a function and then
use the trap command to bind the function to the signal:
¨ ¥
#!/bin/sh
function on_hangup ()
{
    echo 'Hangup (SIGHUP) signal received'
}
trap on_hangup SIGHUP
while true ; do
    sleep 1
done
exit 0
§ ¦
Run the above script and then send the process the -HUP signal (kill -HUP <pid>) to test it.
A script can similarly catch its own exit, here using the EXIT pseudo-signal, in order to clean up before terminating:
¨ ¥
#!/bin/sh
function on_exit ()
{
    echo 'I should remove temp files now'
}
trap on_exit EXIT
while true ; do
    sleep 1
done
exit 0
§ ¦
Breaking the above program will cause it to print its own epitaph.
If - is given instead of a function name (for example, trap - SIGHUP), then the signal is unbound (i.e., set to its
default behavior).
20.6 Internal settings — the set command
The set command can modify certain behavioural settings of the shell. Your current
options can be displayed with echo $-. Various set commands are usually entered
at the top of a script or given as command-line options to bash. Using set +option
instead of set -option disables the option. Here are a few examples:
set -h Cache the location of commands in your PATH. If your Linux installation
changes, this can confuse the shell. This option is enabled by default.
set -n Read commands without executing them. This is useful for syntax checking.
set -u Report an error when trying to reference a variable that is unset. Usually
bash just fills in an empty string.
set -C Do not overwrite existing files when using >. >| can be used to force overwriting.
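The effect of two of these can be seen directly (a small sketch):

```shell
# set -u: referencing an unset variable becomes an error instead of "":
( set -u ; echo "$NO_SUCH_VARIABLE" ) 2>/dev/null || echo "unbound variable refused"

# set -C: > refuses to clobber an existing file, while >| forces it:
touch /tmp/precious
( set -C ; echo data > /tmp/precious ) 2>/dev/null || echo "would not overwrite"
( set -C ; echo data >| /tmp/precious ) && echo "overwritten with >|"
rm -f /tmp/precious
```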
20.7 Useful scripts and commands

Here is a collection of useful utility scripts that people are always asking for on the
mailing lists. See page 499 for several security check scripts.
20.7.1 chroot
The chroot command makes a process think that its root filesystem is not actually /.
For example, on one system I have a complete Debian installation residing under a
directory, say, /mnt/debian. I can issue the command,
¨ ¥
chroot /mnt/debian bash -i
§ ¦
to run the bash shell interactively, under the root filesystem /mnt/debian. This will
hence run the command /mnt/debian/bin/bash -i. All further commands pro-
cessed under this shell will have no knowledge of the real root directory, so I can use
my Debian installation without having to reboot. All further commands will effectively behave as though they are inside a separate UNIX machine. One caveat: you
may have to remount your /proc filesystem inside your chroot'd filesystem. See
page 158.
This is good for security, because insecure network services can change to a different
root directory; any corruption will not affect the real system.
Most rescue disks have a chroot command. After booting the disk, you can manually
mount the filesystems on your hard drive, and then issue a chroot to begin using your
machine as usual. Note that the command chroot <new-root> without arguments
invokes a shell by default.
20.7.2 if conditionals
The if test ... construct was used to control program flow in Chapter 7. bash, however, has a
built-in alias for the test command: the left square brace, [.
Using [ instead of test adds only elegance:
¨ ¥
if [ 5 -le 3 ] ; then
    echo '5 <= 3'
fi
§ ¦
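[ accepts the same operators as test: -le, -ge and -eq for integers, = and != for strings, -f and -d for files. Conditions can also be combined:

```shell
X=5
if [ "$X" -ge 3 ] && [ -d /tmp ] ; then
    echo "X is at least 3 and /tmp is a directory"
fi
```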
20.7.3 patching and diffing

You may often want to find the differences between two files, for example to see what
changes have been made to a file between versions. Or, a large batch of source code
may have been updated; it is silly to download the entire directory tree if there have
been only a few small changes, so you would want a list of alterations instead.
The diff utility dumps the lines that differ between two files with,
¨ ¥
diff -u <old-file> <new-file>
§ ¦
and a patch file against a directory tree, which can be used both to see the changes and to bring the
old directory tree up to date, can be created with,
¨ ¥
diff -u --recursive --new-file <old-dir> <new-dir> > <patch-file>.diff
§ ¦
Patch files may also end in .patch and are often gzipped. The patch file can be
applied to <old-dir> with,
¨ ¥
cd <old-dir>
patch -p1 -s < <patch-file>.diff
§ ¦
which will make it identical to <new-dir>. The -p1 option strips the leading directory name
from the paths in the patch file, which would otherwise confuse the patching procedure.
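The whole cycle can be seen on a toy example (directory names invented; assumes the diffutils and patch packages are installed):

```shell
rm -rf /tmp/old /tmp/new
mkdir -p /tmp/old /tmp/new
echo alpha > /tmp/old/file.txt
echo beta  > /tmp/new/file.txt

cd /tmp
diff -u --recursive --new-file old new > changes.diff

cd /tmp/old
patch -p1 -s < ../changes.diff
cat file.txt              # now prints beta, identical to new/file.txt
```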
20.7.4 Internet connectivity test

The acid test for an Internet connection is a successful DNS query. You can use ping
to test whether a server is up, but some networks filter ICMP messages, and ping does not
check that your DNS is working. dig sends a single UDP packet similar to ping.
Unfortunately, it takes rather long to time out, so we fudge in a kill after 2 seconds.
This script blocks until it successfully queries a remote name server. Typically,
the next few lines of the script would run fetchmail and a mail server queue
flush, or possibly uucp. Do set the name server IP to something appropriate, like that
of your local ISP, and increase the 2-second timeout if your name server typically takes
longer to respond:
¨ ¥
MY_DNS_SERVER=197.22.201.154
while true ; do
    (
    dig @$MY_DNS_SERVER netscape.com IN A &
    DIG_PID=$!
    { sleep 2 ; kill $DIG_PID ; } &
    wait $DIG_PID
    ) 2>/dev/null | grep -q '^[^;]*netscape.com' && break
done
§ ¦
20.7.5 Recursive grep (searching through all files)

Recursively searching through a directory tree can be done easily with the find and
xargs commands. You should consult both of these man pages. The following command
pipe searches through the kernel source for anything about the “pcnet” ethernet card,
printing also the line number:
¨ ¥
find /usr/src/linux -follow -type f | xargs grep -iHn pcnet
§ ¦
(You will notice that this returns rather a lot of data. However, going through it carefully can be quite instructive.)
Limiting to a certain file extension is just another common use of this pipe sequence:
¨ ¥
find /usr/src/linux -follow -type f -name ’*.[ch]’ | xargs grep -iHn pcnet
§ ¦
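One caveat: file names containing spaces will break this pipe, because xargs splits its input on whitespace. GNU find and xargs solve this with null-terminated names (the directory below is invented for the demonstration):

```shell
# Set up a file whose name contains a space:
mkdir -p '/tmp/demo src'
echo 'pcnet probe' > '/tmp/demo src/test file.c'

# -print0 and -0 pass names separated by NUL bytes instead of whitespace:
find '/tmp/demo src' -type f -name '*.c' -print0 | xargs -0 grep -iHn pcnet
```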
20.7.6 Recursive search and replace

One often wants to perform a search-and-replace throughout all the files in an entire
source tree. A typical example is changing a function call name throughout lots
of C source. The following script is a must for any /usr/local/bin directory. Notice the
way it recursively calls itself:
¨ ¥
#!/bin/sh

N=`basename $0`

if [ "$1" = "-v" ] ; then
    VERBOSE="-v"
    shift
else
    VERBOSE=""
fi

if [ "$3" = "" ] ; then
    echo "Usage: $N [-v] <search-regexp> <replace-text> <glob-expr>"
    exit 1
fi

S="$1" ; shift
R="$1" ; shift
T=/tmp/$N.$$

if echo "$1" | grep -q / ; then
    for i in "$@" ; do
        SEARCH=`echo "$S" | sed 's,/,\\\\/,g'`
        REPLACE=`echo "$R" | sed 's,/,\\\\/,g'`
        cat $i | sed "s/$SEARCH/$REPLACE/g" > $T
        D="$?"
        if [ "$D" = "0" ] ; then
            if diff -q $T $i >/dev/null ; then
                :
            else
                if [ "$VERBOSE" = "-v" ] ; then
                    echo $i
                fi
                cat $T > $i
            fi
            rm -f $T
        fi
    done
else
    find . -type f -name "$1" | xargs $0 $VERBOSE "$S" "$R"
fi
§ ¦
20.7.7 cut and awk — manipulating text file fields

The cut command is useful for slicing files into fields; try:
¨ ¥
cut -d: -f1 /etc/passwd
cat /etc/passwd | cut -d: -f1
§ ¦
The awk program is an interpreter for a complete programming language called AWK.
Its primary use today is in field stripping. It is slightly more flexible than cut:
¨ ¥
cat /etc/passwd | awk -F : ’{print $1}’
§ ¦
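awk can also rearrange or combine fields, which cut cannot do:

```shell
# Print each user's login name and home directory (fields 1 and 6):
awk -F: '{print $1 " has home directory " $6}' /etc/passwd
```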
20.7.8 Calculations with bc

Scripts can easily use bc to do calculations that expr can't handle. For example, convert hexadecimal to decimal with:
¨ ¥
echo -e ’ibase=16;FFFF’ | bc
§ ¦
and decimal to binary with
¨ ¥
echo -e ’obase=2;12345’ | bc
§ ¦

20.7.9 Conversion of graphics formats of many files
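A loop along the following lines converts every .pcx file in the current directory; this is a sketch that assumes ImageMagick's convert program is installed:

```shell
for i in *.pcx ; do
    # Derive the output name by replacing the extension:
    OUT=`echo "$i" | sed -e 's/\.pcx$/.png/'`
    echo "converting $i to $OUT"
    convert "$i" "$OUT"
done
```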
Note that the search-and-replace expansion mechanism could also be used to replace
the extensions: ${i/%.pcx/.png} produces the desired result.
Incidentally, the above nicely compresses high-resolution pcx files, possibly the output of a scanning operation, or of a LaTeX compilation into PostScript rendered with
GhostScript (i.e., gs -sDEVICE=pcx256 -sOutputFile=page%d.pcx file.ps).
20.7.11 Persistent background processes

Consider wanting to run a process, say the rxvt terminal, in the background. This can
be done simply with:
¨ ¥
rxvt &
§ ¦
However, rxvt still has its output connected to the shell and is a child process of the
shell. When a login shell exits, it may take its child processes with it. rxvt may also
die of its own accord when it tries to read from or write to a terminal that no longer exists
without the parent shell. Now try:
¨ ¥
{ rxvt >/dev/null 2>&1 </dev/null & } &
§ ¦
This is known as forking twice and redirecting the terminal to /dev/null. The shell can know
about its child processes, but not about its “grandchild” processes. We have hence
created a proper daemon process with the above command.
Now it is easy to create a daemon process that restarts itself if it happens to die.
Although such functionality is best accomplished within C (which you will get a taste
of in Chapter 22), you can make do with:
¨ ¥
{ { while true ; do rxvt ; done ; } >/dev/null 2>&1 </dev/null & } &
§ ¦
20.7.12 Processes and the ps command

The following command uses the custom format option of ps to print every conceivable attribute of a process:
¨ ¥
ps -awwwxo %cpu,%mem,alarm,args,blocked,bsdstart,bsdtime,c,caught,cmd,comm,\
command,cputime,drs,dsiz,egid,egroup,eip,esp,etime,euid,euser,f,fgid,fgroup,\
flag,flags,fname,fsgid,fsgroup,fsuid,fsuser,fuid,fuser,gid,group,ignored,\
intpri,lim,longtname,lstart,m_drs,m_trs,maj_flt,majflt,min_flt,minflt,ni,\
nice,nwchan,opri,pagein,pcpu,pending,pgid,pgrp,pid,pmem,ppid,pri,rgid,rgroup,\
rss,rssize,rsz,ruid,ruser,s,sess,session,sgi_p,sgi_rss,sgid,sgroup,sid,sig,\
sig_block,sig_catch,sig_ignore,sig_pend,sigcatch,sigignore,sigmask,stackp,\
start,start_stack,start_time,stat,state,stime,suid,suser,svgid,svgroup,svuid,\
svuser,sz,time,timeout,tmout,tname,tpgid,trs,trss,tsiz,tt,tty,tty4,tty8,ucomm,\
uid,uid_hack,uname,user,vsize,vsz,wchan
§ ¦
best piped to a file and viewed with a non-wrapping text editor. More interestingly,
the awk command can print the process ID of a command:
¨ ¥
ps awx | grep -w ’htt[p]d’ | awk ’{print $1}’
§ ¦
This prints all the processes having httpd in the command name or command line. The
same trick is useful for killing netscape as follows:
¨ ¥
kill -9 `ps awx | grep 'netsc[a]pe' | awk '{print $1}'`
§ ¦
(Note that the [a] in the regular expression prevents grep from finding itself in the
process list.)
Other useful ps’s are:
¨ ¥
ps awwxf
ps awwxl
ps awwxv
ps awwxu
ps awwxs
§ ¦
The f option is most useful for showing parent-child relationships. It stands for forest,
and shows the full process tree. For example, here I am running an X desktop with two
windows:
¨ ¥
PID TTY STAT TIME COMMAND
1 ? S 0:05 init [5]
2 ? SW 0:02 [kflushd]
3 ? SW 0:02 [kupdate]
4 ? SW 0:00 [kpiod]
5 ? SW 0:01 [kswapd]
6 ? SW< 0:00 [mdrecoveryd]
262 ? S 0:02 syslogd -m 0
272 ? S 0:00 klogd
341 ? S 0:00 xinetd -reuse -pidfile /var/run/xinetd.pid
447 ? S 0:00 crond
480 ? S 0:02 xfs -droppriv -daemon
506 tty1 S 0:00 /sbin/mingetty tty1
507 tty2 S 0:00 /sbin/mingetty tty2
508 tty3 S 0:00 /sbin/mingetty tty3
509 ? S 0:00 /usr/bin/gdm -nodaemon
514 ? S 7:04 \_ /etc/X11/X -auth /var/gdm/:0.Xauth :0
515 ? S 0:00 \_ /usr/bin/gdm -nodaemon
524 ? S 0:18 \_ /opt/icewm/bin/icewm
748 ? S 0:08 \_ rxvt -bg black -cr green -fg whi
749 pts/0 S 0:00 | \_ bash
5643 pts/0 S 0:09 | \_ mc
5645 pts/6 S 0:02 | \_ bash -rcfile .bashrc
25292 pts/6 R 0:00 | \_ ps awwxf
11780 ? S 0:16 \_ /usr/lib/netscape/netscape-commu
The u option shows the useful user format, while the others show virtual memory,
signal and long format.
20.8 Shell initialisation
Here I will briefly discuss what initialisation takes place after logging in, and how to
modify it.
The interactive shell invoked after logging in will be the shell specified in the last
field of the user's entry in the /etc/passwd file. The login program will invoke
the shell after authenticating the user, placing a - in front of the command name,
which indicates to the shell that it is a login shell, meaning that it reads and executes
several scripts to initialise the environment. In the case of bash, the files it reads are:
/etc/profile, ~/.bash_profile, ~/.bash_login and ~/.profile, in that order. In addition, an interactive shell that is not a login shell also reads ~/.bashrc.
Note that traditional sh shells read only /etc/profile and ~/.profile.
20.9 File locking
Often, one would like a process to have exclusive access to a file. By this we mean that
only one process can access the file at any one time. Consider a mail folder: if two
processes were to write to the folder simultaneously it could become corrupted. We
also sometimes want to ensure that a program can never be run twice at the same time;
this is another use for “locking”.
In the case of a mail folder, if the file is being written to, then no other process should
try to read from it or write to it, and we would like to create a write lock on the file. However,
if the file is being read from, no other process should try to write to it, and we would
like to create a read lock on the file. Write locks are sometimes called exclusive locks;
read locks are sometimes called shared locks. Often, exclusive locks are preferred
for simplicity.
Locking can be implemented by simply creating a temporary file to indicate to
other processes to wait before trying some kind of access. UNIX also has some more
sophisticated built-in functions.
There are currently four methods of file locking (the exim sources seem to indicate thorough
research in this area, so this is what I am going on):
1. “dot lock” file locking. Here, a temporary file is created with the same name as
the mail folder and the extension .lock added. So long as this file exists, no
program should try to access the folder. This is an exclusive lock only. It is easy
to write a shell script to do this kind of file locking.
2. “MBX” file locking. Similar to 1. but a temporary file is created in /tmp. This is
also an exclusive lock.
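A minimal “dot lock” in shell can lean on the noclobber option (set -C) described earlier, so that testing for and creating the lock happen in one atomic step (the file names here are illustrative):

```shell
FOLDER=/var/spool/mail/mary        # the file we want exclusive access to
LOCK="$FOLDER.lock"

# Under set -C, > fails if the file already exists, so only one process
# can succeed in creating the lock:
until ( set -C ; echo $$ > "$LOCK" ) 2>/dev/null ; do
    sleep 1
done

# ... access the folder here ...

rm -f "$LOCK"
```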
(Note how instead of `cat $LOCKFILE`, we use `< $LOCKFILE`, which is faster.)
You can include this in scripts that need to lock any kind of file as follows:
¨ ¥
# wait for a lock
until my_lockfile /etc/passwd ; do
    sleep 1
done

# The file /etc/passwd can now be accessed exclusively.

# Finally, release the lock:
rm -f /etc/passwd.lock
§ ¦
There are a couple of interesting bits in this script: note how the ln command is used
to ensure “exclusivity”. ln is one of the few UNIX operations that is atomic, meaning
that only one link of the same name can exist, and its creation excludes the possibility
that another program would think that it had successfully created the same link. One
might naively expect that the program,
¨ ¥
function my_lockfile ()
{
LOCKFILE="$1.lock"
test -e $LOCKFILE && return 1
5 touch $LOCKFILE
return 0
}
§ ¦
is sufficient for file locking. However, consider what happens if two programs running simultaneously executed line 4 at the same time. Both would think that the lock did not exist and
would proceed to line 5. Then both would successfully create the lock file, which is not what you
wanted.
The kill command is then useful for checking whether a process is running. Sending the 0
signal does nothing to the process, but the command fails if the process does not exist. This can be
used to remove the lock of a process that died before removing it itself: i.e., a stale lock.
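For example, assuming the lock file holds the PID of the process that created it, a stale lock can be detected and cleared like this (the path is invented):

```shell
LOCKFILE=/var/spool/mail/mary.lock
if [ -f "$LOCKFILE" ] ; then
    PID=`cat "$LOCKFILE"`
    if kill -0 "$PID" 2>/dev/null ; then
        echo "lock is held by running process $PID"
    else
        echo "removing stale lock left by dead process $PID"
        rm -f "$LOCKFILE"
    fi
fi
```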
The above script does not work if your file system is mounted over NFS (networked file
system — see Chapter 28). This is obvious because it relies on the PID of the process,
which could clash across different machines. Not so obvious is that the ln function
does not work exactly right over NFS — you need to stat the file and actually check
that the link count has increased to 2.
There is a program that comes with the procmail package called lockfile, and another that comes with the mutt email reader called mutt_dotlock (perhaps not distributed). These do similar file locking, but do not store the PID in the lock file. Hence
it is not possible to detect a stale lock file. For example, to search your mailbox, you can
do:
¨ ¥
lockfile /var/spool/mail/mary.lock
grep freddy /var/spool/mail/mary
rm -f /var/spool/mail/mary.lock
§ ¦
which will ensure that you are searching a clean mailbox even if /var is a remote NFS
share.
File locking is a headache for the developer. The problem with UNIX is that whereas
we are intuitively thinking about locking a file, what we really mean is locking a file
name within a directory. File locking per se should be used only on perpetual files, such
as database files. For mailbox and passwd files we need directory locking (my own
term), meaning the exclusive access of one process to a particular directory entry. In
my opinion, lack of such a feature is a serious deficiency in UNIX, but because it would
require kernel, NFS, and (possibly) C library extensions, it will probably not come into
being any time soon.
This is certainly outside of the scope of this text, except to say that you should consult
the source code of reputed packages rather than invent your own locking scheme.
Chapter 21

System Services and lpd
This chapter covers a wide range of concepts about the way UNIX services function.
Every function of UNIX is provided by one or another package. For instance, mail
is often handled by the sendmail or another package, and web service by the apache package.
Here we will examine how to obtain, install and configure a package, using lpd
as an example. You can then apply this to any other package, and later chapters will
assume that you know these concepts. This will also suffice as an explanation of how
to set up and manage printing.
The command lprm removes pending jobs from a print queue while lpq reports jobs
in progress.
The service that facilitates this all is called lpd. The lpr user program makes a net-
work connection to the lpd background process, sending it the print job. lpd then
queues, filters and feeds the job until it appears in the print tray.
21.2 Downloading and installing
The following should answer the questions of “Where do I get <xxx> service/package
from?” and “How do I install it?”. Full coverage of package management will come in
Section 24.2. This will show you briefly how to use package managers with respect
to a real system service.
Let us say we know nothing of the service except that it has something to do with a
file /usr/sbin/lpd. First we use our package manager to find where the file comes
from (Debian commands are shown in parentheses):
¨ ¥
rpm -qf /usr/sbin/lpd
( dpkg -S /usr/sbin/lpd )
§ ¦
This will return lpr-0.nn-n (for RedHat 6.2), LPRng-n.n.nn-n (on RedHat 7.0), or lpr
(on Debian). On RedHat you may have to try this on a different machine, because rpm
does not know about packages that are not installed. Alternatively, if we would like to
see whether a package is installed whose name contains the letters lpr:
¨ ¥
rpm -qa | grep -i lpr
( dpkg -l '*lpr*' )
§ ¦
If the package is not present, the package file will be on your CDROM and is easy to
install with (RedHat 7.0 and Debian in parentheses):
¨ ¥
rpm -i lpr-0.50-4.i386.rpm
( rpm -i LPRng-3.6.24-2 )
( dpkg -i lpr_0.48-1.deb )
§ ¦
The list of files in the package can then be printed with rpm -ql lpr ( dpkg -L lpr ), which on Debian ends with:
¨ ¥
/usr/bin/lpq /usr/share/doc/lpr/README.Debian
/usr/bin/lpr /usr/share/doc/lpr/copyright
/usr/bin/lprm /usr/share/doc/lpr/examples/printcap
/usr/bin/lptest /usr/share/doc/lpr/changelog.gz
/usr/share/man/man1/lpr.1.gz /usr/share/doc/lpr/changelog.Debian.gz
/usr/share/man/man1/lptest.1.gz /var/spool/lpd/lp
/usr/share/man/man1/lpq.1.gz /var/spool/lpd/remote
§ ¦
21.3 LPRng vs. legacy lpr-0.nn
(The word legacy with regard to software means outdated, superseded, obsolete or just
old.)
RedHat 7.0 has now switched to using LPRng rather than the legacy lpr that
Debian and other distributions use. LPRng is a more modern and comprehensive
package, and supports the same /etc/printcap file and identical binaries as did the
legacy lpr on RedHat 6.2. The only differences will be in the control files created in
your spool directories, and a different access control mechanism (discussed below).
Note that LPRng has strict permissions requirements on spool directories and is not
trivial to install from source.
21.4 Package elements

A package's many files can be loosely grouped into the following elements:
Documentation files
Documentation should be your first and foremost interest. man pages will not always
be the only documentation provided. Above we see that lpr does not install very
much into the /usr/share/doc directory. Other packages reveal more: rpm -ql
apache reveals a huge user manual (in /home/httpd/html/manual/), while rpm
-ql wu-ftpd shows lots inside /usr/doc/wu-ftpd-2.5.0.
Every package will probably have a team that maintains it, and that team will have a web page.
In the case of lpd, however, the code is very old, and the various CD vendors do maintenance on it themselves. A better example is the LPRng package. Go to the LPRng web
page <http://www.astart.com/lprng/LPRng.html> with your web browser.
There you can see the authors, mailing lists and points of download. If a particular
package is of much interest to you, then you should get familiar with these resources.
Good web pages will also have additional documentation like troubleshooting guides
and FAQs (Frequently Asked Questions). Some may even have archives of their mailing lists. Note that some web pages are geared more toward CD vendors who are
trying to create their own distribution, and will not have packages for download that
beginner users can easily install.
User programs
These will be in one or other bin directory. In this case we can see lpq, lpr, lprm and
lptest, as well as their associated man pages.
Daemon and administrator programs
These will be in one or other sbin directory. In this case we can see lpc, lpd, lpf and
pac, as well as their associated man pages. The only daemon (background) program is
really the lpd program itself, which is the core of the whole package.
Configuration files
The file /etc/printcap controls lpd. Most system services will have a file in /etc.
printcap is a plain text file that lpd reads on startup. Configuring any service pri-
marily involves editing its configuration file. There are several graphical configuration
tools available that avoid this inconvenience (linuxconf, and printtool which is
especially for lpd) but these actually just silently produce the same configuration file.
Because printing is so integral to the system, printcap is not actually provided
by the lpr package. Trying rpm -qf /etc/printcap gives setup-2.3.4-1,
while dpkg -S /etc/printcap shows it as not being owned (i.e., it is part of the base
system).
Service initialisation scripts
The files in /etc/rc.d/init.d/ (or /etc/init.d/) are the startup and shutdown
scripts to run lpd on boot and shutdown. You can start lpd yourself on the command
line with,
¨ ¥
/usr/sbin/lpd
§ ¦
(or /etc/init.d/lpd).
To make sure that lpd runs on startup, you can check that it has a symlink under the
appropriate runlevel. This is explained by running,
¨ ¥
ls -al `find /etc -name '*lpd*'`
§ ¦
showing,
¨ ¥
-rw-r--r-- 1 root root 17335 Sep 25 2000 /etc/lpd.conf
-rw-r--r-- 1 root root 10620 Sep 25 2000 /etc/lpd.perms
-rwxr-xr-x 1 root root 2277 Sep 25 2000 /etc/rc.d/init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 21 14:03 /etc/rc.d/rc0.d/K60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 21 14:03 /etc/rc.d/rc1.d/K60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 21 14:03 /etc/rc.d/rc2.d/S60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 24 01:13 /etc/rc.d/rc3.d/S60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 21 14:03 /etc/rc.d/rc4.d/S60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 28 23:13 /etc/rc.d/rc5.d/S60lpd -> ../init.d/lpd
lrwxrwxrwx 1 root root 13 Mar 21 14:03 /etc/rc.d/rc6.d/K60lpd -> ../init.d/lpd
§ ¦
The “3” in rc3.d is the one we are interested in. Having S60lpd symlinked to lpd
under rc3.d means that lpd will be started when the system enters runlevel 3, which
is the system’s usual operation.
Note that under RedHat you can run the command setup which has a menu option
System Services. This will allow you to manage what services come alive on boot,
thus creating the above symlinks automatically. For Debian check the man page for
the update-rc.d command.
More details on bootup are in Chapter 32.
Spool files
Systems services like lpd, innd, sendmail and uucp create intermediate files in the
course of processing each request. These are called spool files and are stored in one or
other /var/spool/ directory, usually to be processed then deleted in sequence.
lpd has a spool directory /var/spool/lpd, which may have been created on instal-
lation. You can create spool directories for the two printers in the example below, with
¨ ¥
mkdir -p /var/spool/lpd/lp /var/spool/lpd/lp0
§ ¦
Log files
UNIX has a strict policy of not reporting error messages to the user interface when
there might be no user around to read those messages. While error messages of interactive commands are sent to the terminal screen, error or information messages produced by non-interactive commands are “logged” to files in the directory /var/log/.
A log file is a plain text file that has one-line status messages continually appended to it
by a daemon process. The usual directory for log files is /var/log. The
main log files are /var/log/messages and possibly /var/log/syslog. These contain
kernel messages and messages from a few primary services. Where a service would
produce large log files (think web access with thousands of hits per hour) the service
would use its own log file. sendmail for example uses /var/log/maillog. lpd
does not have a log file of its own (one of its failings).
View the system log file with the follow option to tail:
¨ ¥
tail -f /var/log/messages
§ ¦
Log files are rotated daily or weekly by the logrotate package. Its configuration file
is /etc/logrotate.conf. For each package that happens to produce a log file, there
is an additional configuration file under /etc/logrotate.d/. It is also easy to write
your own by copying one from a standard service. Rotation means that the log file is
renamed with a .1 extension and then truncated to zero length. The service is notified
by the logrotate program, usually with a SIGHUP. Your /var/log/ may contain a
number of old log files named .2, .3 etc. The point of log file rotation is to prevent log
files growing indefinitely.
Environment Variables
Most user commands of services make use of some environment variables. These can
be defined in your shell startup scripts as usual. For lpr, if no printer is specified on
the command line, the PRINTER environment variable determines the print
queue. An export PRINTER=lp is all you need.
21.5 The printcap file in detail
The printcap (printer capabilities) file is similar to (and based on) the termcap (terminal capabilities) file. Configuring a printer means adding or removing text in this file.
printcap contains a list of one-line entries, one for each printer. Lines can be broken
by a \ before the newline. Here is an example of a printcap file for two printers.
¨ ¥
lp:\
:sd=/var/spool/lpd/lp:\
:mx#0:\
:sh:\
:lp=/dev/lp0:\
:if=/var/spool/lpd/lp/filter:
lp0:\
:sd=/var/spool/lpd/lp0:\
:mx#0:\
:sh:\
:rm=edison:\
:rp=lp3:\
:if=/bin/cat:
§ ¦
Printers are named by the first field: in this case lp is the first printer and lp0 the
second printer. Each printer usually refers to a different physical device with its own
queue. The lp printer should always be listed first and is the default print queue used
when no other is specified. Here lp refers to a local printer on the device /dev/lp0
(first parallel port). lp0 refers to a remote print queue lp3 on a machine edison.
The printcap file has a comprehensive man page. However, the following fields, explained here, are most of what you will ever need:
sh Suppress headers. The header is a few informational lines printed before or after
the print job. This should always be off.
if Input filter. This is an executable script into which printer data is piped. The output
of this script is fed directly to the printing device, or remote machine. This filter
will hence translate from the applications output into the printer’s native code.
rm Remote machine. If the printer queue is non-local, this is the machine name.
rp Remote printer queue name. The remote machine will have its own printcap file
with possibly several printers defined. This specifies which.
21.6 Postscript and the print filter
On UNIX the standard format for all printing is the PostScript file. PostScript .ps
files are graphics files representing arbitrary scalable text, lines and images. PostScript
is actually a programming language specifically designed to draw things on a page;
hence, .ps files are really PostScript programs. The last line in any PostScript program
will always be showpage, indicating that, having completed all drawing operations,
the page can now be displayed. Hence it is easy to see the number of pages inside
a PostScript file by grepping for the string showpage.
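For example, with a trivial hand-written two-page PostScript file (invented for the demonstration):

```shell
cat > /tmp/two-pages.ps <<'EOF'
%!PS
/Times-Roman findfont 12 scalefont setfont
72 720 moveto (page one) show
showpage
72 720 moveto (page two) show
showpage
EOF

grep -c showpage /tmp/two-pages.ps     # prints 2
```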
The procedure for printing on UNIX is to convert whatever you would like to
print into PostScript. PostScript files can be viewed using a PostScript “emulator”,
like the gv (ghostview) program. A program called gs (ghostscript) is the standard
utility for converting the PostScript into a format suitable for your printer. The idea
behind PostScript is that it is a language that can easily be built into any printer. The
so-called “PostScript printer” is one that directly interprets a PostScript file. However,
these printers are relatively expensive, and most printers understand only the lesser
PCL (printer control language) dialect or some other format.
In short, any of the hundreds of different formats of graphics and text has a utility that will convert it into PostScript, whereafter gs will convert it for any of the hundreds of different kinds of printers. (There are actually many printers not supported by gs at the
time of this writing, mainly because manufacturers refuse to release the specifications of their printer
communication protocols.) The print filter is the work horse of this whole operation.
A graphical application typically prints by piping PostScript into the stdin of lpr.
All applications without their own printer drivers will do the same. This means that we can generally rely on the fact
that the print filter will always receive PostScript. gs, on the other hand, can convert
PostScript for any printer, so all that remains is to determine its command-line options.
Note that filter programs should not be used with remote print queues; remote printer
queues can send their PostScript as-is with :if=/bin/cat: (as in the example
printcap file above). This way, the machine connected to the device need be the
only one configured for it.
The filter program we are going to use for the local print queue will be a shell script
/var/spool/lpd/lp/filter. Create the filter with
¨ ¥
touch /var/spool/lpd/lp/filter
chmod a+x /var/spool/lpd/lp/filter
§ ¦
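The filter itself can then be as small as a one-line gs invocation. The following sketch uses the options explained below; -sDEVICE=ljet4 is an assumption for a LaserJet-compatible printer and must be replaced with whatever device your gs supports:

```shell
#!/bin/bash
# /var/spool/lpd/lp/filter -- pipe the incoming PostScript job through
# gs, emitting the printer's native format on stdout.
cat | gs -sDEVICE=ljet4 -sOutputFile=- -sPAPERSIZE=a4 -r600x600 -q -
exit 0
```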
The -sDEVICE option describes the printer, in this example a Hewlett-Packard LaserJet 1100. Many printers have very similar or compatible formats; hence there are far
fewer DEVICEs than different makes of printers. To get a full list of supported devices,
use gs -h and also consult the file /usr/doc/ghostscript-?.??/devices.txt
(or /usr/share/doc/ghostscript-?.??/Devices.htm, or
/usr/share/doc/gs/devices.txt.gz).
The -sOutputFile=- option sets gs to write to stdout (as required for a filter). The
-sPAPERSIZE can be set to one of 11x17, a3, a4, a5, b3, b4, b5, halfletter,
ledger, legal, letter, note and others listed in the man page. You can also use
-g<width>x<height> to set the exact page size in pixels. -r600x600 sets the resolution, in this case 600 dpi (dots per inch). -q sets quiet mode, suppressing
any informational messages that would otherwise corrupt the PostScript output, and
finally - means to read from stdin and not from a file.
Our printer configuration is now complete. What remains is to start lpd and test print.
This can be done on the command line with the enscript package. enscript is a
program to convert ordinary ascii text into nicely formatted PostScript pages. The man
page for enscript shows an enormous number of options, but we can simply try:
¨ ¥
echo hello | enscript -p - | lpr
§ ¦
21.7 Access control

Note that you should be very careful about running lpd on any machine that is exposed to the Internet. lpd has had numerous reported security vulnerabilities and should really only
be used within a trusted LAN.
To control which remote machines may use your printer, lpd first looks in the
file /etc/hosts.equiv. This is a simple list of all machines allowed to print to your
printers. My own file looks like:
¨ ¥
192.168.3.8
192.168.3.9
192.168.3.10
192.168.3.11
§ ¦
The file /etc/hosts.lpd does the same, but doesn’t give those machines administrative control over the print queues. Note that other services like sshd and rshd (or
in.rshd) also check the hosts.equiv file and consider any machine listed to be
equivalent. This means that such machines are completely trusted, and rshd will not require
user logins between the machines to be authenticated. This is hence a security concern.
LPRng on RedHat 7.0 has a different access control facility. It can arbitrarily limit access
in a variety of ways depending on the remote user and the action (such as who is
allowed to manipulate queues). The file /etc/lpd.perms contains the configuration.
The file format is simple, although LPRng’s capabilities are rather involved. To make
a long story short, the equivalent of the hosts.equiv above becomes, in lpd.perms:
¨ ¥
ACCEPT SERVICE=* REMOTEIP=192.168.3.8
ACCEPT SERVICE=* REMOTEIP=192.168.3.9
ACCEPT SERVICE=* REMOTEIP=192.168.3.10
ACCEPT SERVICE=* REMOTEIP=192.168.3.11
DEFAULT REJECT
§ ¦
Large organisations with many untrusted users should look more closely at the
LPRng-HOWTO in /usr/share/doc/LPRng-n.n.nn. It will explain how to limit access in more complicated ways.
21.8 Troubleshooting
1 Check if your printer is plugged in, and working. All printers have a way of
printing a test page. Get your printer manual and try it.
5 Try echo hello > /dev/lp0 to check if the port is operating. The printer
should do something indicating that data has at least been received. Chapter 42
explains how to get your parallel port kernel module installed.
6 Use the lpc program to query the lpd daemon. Try help, then status lp and
so on.
7 Check that there is enough space in your /var and /tmp devices for any intermediate files needed by the print filter. For a large print job, this can be hundreds
of megabytes. lpd may not give any kind of error for a print filter failure: the
print job may just disappear into nowhere. If you are using legacy lpr, then
complain to your distribution vendor about your print filter not properly logging to a file.
8 For legacy lpr, stop lpd and remove all of lpd’s runtime files in
/var/spool/lpd and from any of its subdirectories. (New LPRng will not
require this.) These are .seq, lock, status, lpd.lock, and any leftover
spool files that failed to disappear with lprm (these files are recognisable by
long file names with a hostname and random key embedded in the file name).
Then restart lpd.
9 For remote queues, check that you can do forward and reverse lookups of both
machines’ hostnames and IP addresses, from both machines. If not, you
may get Host name for your address (ipaddr) unknown error messages
when trying an lpq. Test with the command host <ip-address> and also
host <machine-name> on both machines. If any of these do not work, add
entries for both machines in /etc/hosts from the example on Page 264. Note
that the host command may be ignorant of the file /etc/hosts and may still
fail.
21.9 Useful programs
printtool is a graphical printer setup program that is useful for very quickly setting
up lpd. It will immediately generate a printcap file and magic filter without you
having to know anything about lpd configuration.
apsfilter stands for any to postscript filter. The setup described above requires that
everything be converted to PostScript before printing, but a filter could conceivably
use the file command to determine the type of data coming in and then invoke a
program to convert it to PostScript before piping it through gs. This would enable
JPEG, GIF, ascii text, DVI files or even gzipped HTML to be printed directly, since
PostScript converters have been written for each of these. apsfilter is such a filter;
filters of this kind are generally called magic filters &This is because the file command uses magic numbers.
See Page 32-.
I personally find this feature a gimmick rather than a genuine utility, since most
of the time you want to lay out the graphical object on a page before printing, which
requires previewing it, and hence converting it to PostScript manually. For most situations, the straight PostScript filter above will work adequately, provided users know
to use enscript instead of lpr when printing plain ascii text.
mpage is a very useful utility for saving trees. It takes PostScript input and
resizes it so that two, four or eight pages fit on one. Change your print filter to:
¨ ¥
#!/bin/bash
cat | mpage -4 | gs -sDEVICE=ljet4 -sOutputFile=- -sPAPERSIZE=a4 -r600x600 -q -
exit 0
§ ¦
21.10 Printing to things besides printers

¨ ¥
:mx#0:\
:sh:\
:lp=/dev/null:\
:if=/usr/local/bin/my_filter.sh:
§ ¦
I will show the specific example of redirecting print jobs to a fax machine in Chapter
33.
Chapter 22
Trivial introduction to C
The C programming language was invented for the purposes of writing an operating
system that could be recompiled (ported) to different hardware platforms (different
CPU’s). It is hence also the first choice for writing any kind of application that has to
communicate efficiently with the operating system.
Many people who don’t program in C very well think of C as just one arbitrary language out of many. This point should be made at once: C is the fundamental basis of all
computing in the world today. UNIX, Microsoft Windows, office suites, web browsers
and device drivers are all written in C. 99% of your time spent at a computer is probably spent inside a C application &C++ is also quite popular. It is, however, not as fundamental to
computing, although it is more suitable in many situations.-.
There is also no replacement for C. Since it fulfils its purpose without much flaw,
there will never be a need to replace it. Other languages may fulfil other purposes, but C
fulfils its purpose most adequately. For instance, all future operating systems will probably be written in C for a long time to come.
It is for these reasons that your knowledge of UNIX will never be complete until
you can program in C.
22.1 C fundamentals
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    printf ("Hello World!\n");
    return 3;
}
§ ¦
Save this program in a file hello.c. We will now compile the program &Compiling is the
process of turning C code into assembler instructions. Assembler instructions are the program code that your
80x86/Sparc/RS6000 CPU understands directly. The resulting binary executable is fast because it is executed
natively by your processor — it is the very chip that you see on your motherboard that fetches hello byte
for byte from memory and executes each instruction. This is what is meant by million instructions per second
(MIPS). The Megahertz of the machine quoted by hardware vendors is very roughly the number of MIPS.
Interpreted languages (like shell scripts) are much slower because the code itself is written in something not
understandable to the CPU. The /bin/bash program has to interpret the shell program. /bin/bash itself is
written in C, but the overhead of interpretation makes scripting languages many orders of magnitude slower
than compiled languages. Shell scripts therefore do not need to be compiled.-. Run the command
¨ ¥
gcc -Wall -o hello hello.c
§ ¦
The -o hello option tells gcc &GNU C Compiler. cc on other UNIX systems.- to produce
the binary file hello instead of the default binary file name a.out &Called a.out for
historical reasons.-. The -Wall option means to report all warnings during the compilation. This is not strictly necessary but is most helpful for correcting possible errors in
your programs.
Then run the program with
¨ ¥
./hello
§ ¦
Previously you should have familiarised yourself with bash functions (see Section 7.7). In C all code is inside a function. The first function to be called (by the
operating system) is the main function.
Type echo $? to see the return code of the program. You will see it is 3, indicating the return value of the main function.
Other things to note are the " on either side of the string to be printed. Quotes
are required around string literals. Inside a string literal, the \n escape sequence indicates a
newline character. ascii(7) shows some other escape sequences. You can also see a
proliferation of ; everywhere in a C program. Every statement in C is terminated by a ;,
unlike in shell scripts where a ; is optional.
Now try:
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    printf ("%d %d\n", 1 + 2, 10);
    return 0;
}
§ ¦
printf may be thought of as the command to send output to the terminal. It is also
what is known as a standard C library function. In other words, it is specified that a C
implementation should always have the printf function and that it should behave in
a certain way.
The %d indicates that a decimal should go in at that point in the text. The number
to be substituted will be the first argument to the printf function after the string literal
— i.e. the 1 + 2. The next %d is substituted with the second argument — i.e. the 10.
The %d is known as a format specifier. It essentially converts an integer number into a
decimal representation. See printf(3) for more details.
With bash you could use a variable anywhere anytime, and the variable would just
be blank if it was never used before. In C you have to tell the compiler what variables
you are going to need before each block of code.
This is done with a variable declaration:
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x;
    x = 10;
    printf ("x is %d\n", x);
    return 0;
}
§ ¦
The int x is a variable declaration. It tells the program to reserve space for one
integer variable that it will later refer to as x. int is the type of the variable.
x = 10 assigns the value 10 to the variable. There are types for each kind
of number you would like to work with, and format specifiers to convert them for
printing:
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    float x;
    double y;
    x = 3.14159;
    y = 2.71828;
    printf ("%f, %f\n", x, y);
    return 0;
}
§ ¦
You will notice that %f is used for both floats and doubles. This is because
floats are always converted to doubles during an operation like this. Also try
replacing %f with %e to print in exponential notation — i.e. fewer significant digits.
22.1.3 Functions
Here we have a non-main function called by the main function. The function is
first declared with
¨ ¥
void mutiply_and_print (int x, int y)
{
    printf ("%d * %d = %d\n", x, y, x * y);
}
§ ¦
This declaration states the return value of the function (void for no return value);
the function name (mutiply_and_print); and then the arguments that are going to be
passed to the function. The numbers passed to the function are given their own names,
x and y, and are converted to the types of x and y before being passed to the function
— in this case, int and int. The actual C code of which the function is comprised
goes between curly braces { and }.
In other words, the above function is the same as:
¨ ¥
void mutiply_and_print ()
{
int x;
int y;
5 x = <first-number-passed>
y = <second-number-passed>
printf ("%d * %d = %d\n", x, y, x * y);
}
§ ¦
x = 10;
10 if (x == 10) {
printf ("x is exactly 10\n");
x++;
} else if (x == 20) {
printf ("x is equal to 20\n");
15 } else {
printf ("No, x is not equal to 10 or 20\n");
}
if (x > 10) {
20 printf ("Yes, x is more than 10\n");
}
while (x > 0) {
switch (x) {
case 9:
printf ("x is nine\n");
35 break;
case 10:
printf ("x is ten\n");
break;
case 11:
40 printf ("x is eleven\n");
break;
default:
printf ("x is huh?\n");
break;
45 }
return 0;
}
§ ¦
It is easy to see the format that these take, although they are vastly different from shell
scripts. C code works in statement blocks between curly braces, in the same way that
shell scripts have do’s and done’s.
Note that with most programming languages when we want to add 1 to a vari-
able we have to write, say x = x + 1. In C the abbreviation x++ is used, meaning to
increment a variable by 1.
The for loop takes three statements between ( . . . ): a statement to
start things off, a comparison, and a statement to be executed every time after the statement block. The statement block after the for is executed for as long as the comparison
holds true.
The switch statement is like case in shell scripts. switch considers the argument inside its ( . . . ) and decides which case line to jump to. In this case it will
obviously be printf ("x is ten\n"); because x was 10 when the previous for
loop exited. The break token means we are done with the switch statement and
that execution should continue from Line 46.
Note that a string has to be null-terminated. This means that the last character must be a
zero. The code y[10] = 0 sets the eleventh item in the array to zero. This also means
that strings need to be one char longer than you would think.
(Note that the first item in the array is y[0], not y[1], as in some other programming languages.)
In the above example, the line char y[11] reserved 11 bytes for the string. Now
what if you want a string of 100000 bytes? C allows you to allocate memory for your
100000 bytes, which means requesting memory from the kernel. Any non-trivial program
will allocate memory for itself; there is no other way of getting large blocks of
memory for your program to use. Try:
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    char *y;
    y = malloc (100000);
    printf ("%ld\n", y);
    free (y);
    return 0;
}
§ ¦
The declaration char *y would be new to you. It means to declare a variable (a
number) called y that points to a memory location. The * (asterisk) in this context means
pointer. Now if you have a machine with perhaps 256 megabytes of RAM + swap, then
y can hold a value with about this range. The numerical value of y is also printed with
printf ("%ld\n", y);, but is of no interest to the programmer.
When finished using memory it should be given back to the operating system.
This is done with free. Programs that don’t free all the memory they allocate are
said to leak memory.
int f;
int g;
a = sizeof (char);
b = sizeof (short);
15 c = sizeof (int);
d = sizeof (long);
e = sizeof (float);
f = sizeof (double);
g = sizeof (long double);
20 printf ("%d, %d, %d, %d, %d, %d, %d\n", a, b, c, d, e, f, g);
return 0;
}
§ ¦
Here you can see the number of bytes required by all of these types. Now we can easily
allocate arrays of things other than char.
¨ ¥
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int *x;
    x = malloc (10 * sizeof (int));
    x[0] = 99999;
    printf ("%d\n", x[0]);
    free (x);
    return 0;
}
§ ¦
On many machines an int is four bytes (32 bits), but you should never assume this.
Always use the sizeof keyword to allocate memory.
C programs probably do more string manipulation than anything else. Here is a program that divides a sentence up into words:
¨ ¥
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
int i;
int length_of_sentence;
10 char p[256];
char *q;
length_of_word = 0;
Here we introduce three more standard C library functions. strcpy stands for
string copy. It copies memory from one place to another. Line 13 of this program
copies text into the character array p, which is called the target of the copy.
strlen stands for string length. It determines the length of a string, which is
just a count of the number of characters up to the null character.
We need to loop over the length of the sentence. The variable i indicates the
current position in the sentence.
Line 20 says that if we find a character 32 (denoted by ’ ’) we know we have
reached a word boundary. We also know that the end of the sentence is a word boundary even though there may not be a space there. The token || means OR. At this point
we can allocate memory for the current word, and copy the word into that memory.
The strncpy function is useful for this. It copies a string, but only up to a limit of
length_of_word characters (the last argument). Like strcpy, the first argument is
the target, and the second argument is the place to copy from.
To calculate the position of the start of the last word, we use p + i -
length_of_word. This means that we are adding i to the memory location p and then
going back length_of_word counts, thus pointing strncpy to the exact position.
Finally, we null terminate the string on Line 23. We can then print q, free the
used memory, and begin with the next word.
Under most programming languages, file operations involve three steps: opening a
file, reading from or writing to the file, and then closing the file. The function fopen is
commonly used to tell the operating system that you are ready to begin working with
a file. The following program opens a file and spits it out on the terminal:
¨ ¥
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
A new type is presented here: FILE *. It is a file operations variable that has to be
initialised with fopen before we can use it. The fopen function takes two arguments:
the first is the name of the file and the second is a string explaining how we want to open
the file — in this case "r" means reading from the start of the file. Other options are
"w" for writing and several more described in fopen(3).
The function fgetc gets a character from the file. It retrieves consecutive bytes
from the file until it reaches the end of the file, where it returns a -1. The break
statement indicates to immediately terminate the for loop, whereupon execution will
continue from Line 17. break statements can appear inside while loops as well.
You will notice that the for loop is empty. This is allowable C code and means
to loop forever.
Some other file functions are fread, fwrite, fputc, fprintf and fseek. See
fwrite(3), fputc(3), fprintf(3) and fseek(3).
Up until now, you have probably been wondering what the (int argc, char *argv[])
is for. These are the command-line arguments passed to the program by the shell.
argc is the number of command-line arguments and argv is an array of strings, one for
each argument. Printing them out is easy:
¨ ¥
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main (int argc, char *argv[])
{
    int i;
    for (i = 0; i < argc; i++) {
        printf ("argument %d is %s\n", i, argv[i]);
    }
    return 0;
}
§ ¦
Here we put all of this together in a program that reads in lots of files and dumps them as
words. Some new things in the following program are: != is the inverse of ==, testing
for not-equal-to; realloc reallocates memory, resizing an old block of memory so that
any bytes of the old block are preserved; and \n and \t mean the newline character, 10, and the
tab character, 9, respectively.
¨ ¥
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
c = 0;
amount_allocated = 256;
q = malloc (amount_allocated);
25 if (q == 0) {
30 while (c != -1) {
if (length_of_word >= amount_allocated) {
amount_allocated = amount_allocated * 2;
q = realloc (q, amount_allocated);
if (q == 0) {
35 perror ("realloc failed");
abort ();
}
}
40 c = fgetc (f);
q[length_of_word] = c;
if (c == -1 || c == ’ ’ || c == ’\n’ || c == ’\t’) {
if (length_of_word > 0) {
45 q[length_of_word] = 0;
printf ("%s\n", q);
}
amount_allocated = 256;
q = realloc (q, amount_allocated);
50 if (q == 0) {
perror ("realloc failed");
abort ();
}
length_of_word = 0;
55 } else {
length_of_word = length_of_word + 1;
}
}
60 fclose (f);
}
if (argc < 2) {
printf ("Usage:\n\twordsplit <filename> ...\n");
exit (1);
70 }
This program is more complicated than you might immediately expect. Reading
in a file where we know that a word will never exceed 30 characters is simple. But what
if we have a file that contains some words that are 100000 characters long? GNU
programs are expected to behave correctly under these circumstances.
We have hence created a program that can work efficiently with a 100 Gigabyte
file just as easily as with a 100 byte file. This is part of the art of C programming.
In fact, it is really a truly excellent listing for the simple reason that, firstly, it is easy to
understand, and, secondly, it is an efficient algorithm (albeit not optimal). Readability in
C is your first priority — it is imperative that what you do is obvious to anyone reading the
code.
At the start of each program will be one or more #include statements. These tell the
compiler to read in another C program. Now “raw” C does not have a whole lot in the
way of protection against errors: for example, the strcpy function could just as well
be used with one, three or four arguments, and the C program would still compile.
It would, however, wreak havoc with the internal memory and cause the program to
crash. These other .h C programs are called header files. They contain templates for how
functions are meant to be called. Every function you might like to use is contained in
one or another header file. The templates are called function prototypes.
A function prototype is written the same as the function itself, but without the
code. A function prototype for word_dump would simply be:
¨ ¥
void word_dump (char *filename);
§ ¦
22.1.11 C comments
and all non-obvious code should be commented. It is a good rule that a program that
needs lots of comments to explain it is badly written. Also, never comment the obvious,
and explain why you do things rather than what you are doing. It is advisable not to
make pretty graphics between each function; hence, rather use:
¨ ¥
/* returns -1 on error, takes a positive integer */
int sqr (int x)
{
<...>
§ ¦
than
¨ ¥
/***************************----SQR----******************************
* x = argument to make the square of *
* return value = *
* -1 (on error) *
5 * square of x (on success) *
********************************************************************/
int sqr (int x)
{
<...>
§ ¦
which is liable to give people nausea. Under C++, the additional comment // is allowed; it causes everything between the // and the end of the line to be ignored. It is accepted
by gcc, but should not be used unless you really are programming in C++. In addition, programmers often “comment out” lines by placing an #if 0 . . . #endif around
them, which really does exactly the same thing as a comment (see Section 22.1.12), but
allows you to comment out comments as well, e.g.:
¨ ¥
int x;
x = 10;
#if 0
printf ("debug: x is %d\n", x); /* print debug information */
5 #endif
y = x + 10;
<...>
§ ¦
§ ¦
in our example program #defines the text START_BUFFER_SIZE to be the text 256.
Thereafter, wherever in the C program we have a START_BUFFER_SIZE, the text 256
will be seen by the compiler, and we can use START_BUFFER_SIZE instead. This is a
much cleaner way of programming because if, say, we would like to change the 256 to
some other value, we only need to change it in one place. START_BUFFER_SIZE is also
more meaningful than a number, making the program more readable.
Whenever you have a literal constant like 256, you should replace it with a macro
defined near the top of your program.
You can also check for the existence of macros with the #ifdef and #ifndef
directives. # directives really form a programming language all of their own:
¨ ¥
/* Set START_BUFFER_SIZE to fine tune performance before compiling: */
#define START_BUFFER_SIZE 256
/* #define START_BUFFER_SIZE 128 */
/* #define START_BUFFER_SIZE 1024 */
5 /* #define START_BUFFER_SIZE 16384 */
#ifndef START_BUFFER_SIZE
#error This code did not define START_BUFFER_SIZE. Please edit
#endif
10
#if START_BUFFER_SIZE <= 0
#error Wooow! START_BUFFER_SIZE must be greater than zero
#endif
22.2 C Libraries
We made reference to the Standard C Library. The C language on its own does almost
nothing; everything useful is an external function. External functions are grouped into
libraries. The Standard C Library is the file /lib/libc.so.6. To list all the C library
functions, do:
¨ ¥
nm /lib/libc.so.6
nm /lib/libc.so.6 | grep ' T ' | cut -f3 -d' ' | grep -v '^_' | sort -u | less
§ ¦
Many of these have man pages; however, some will have no documentation and require
you to read the comments inside the header files. It is better not to use functions unless
you are sure that they are standard functions in the sense that they are common to other
systems.
To create your own library is simple. Let’s say we have two files containing
functions that we would like to build into a library, simple_math_sqrt.c,
¨ ¥
#include <stdlib.h>
#include <stdio.h>
abort ();
}
if (y < 0)
15 return 0;
result = 1;
while (y > 0) {
result = result * x;
y = y - 1;
20 }
return result;
}
§ ¦
We would like to call the library simple_math. It is good practice to name all the
functions in the library simple_math_??????. The function abs_error is not going to
be used outside of the file simple_math_sqrt.c and hence has the keyword static
in front of it, meaning that it is a local function.
We can compile the code with:
¨ ¥
gcc -Wall -c simple_math_sqrt.c
gcc -Wall -c simple_math_pow.c
§ ¦
The -c option means to compile only. The code is not turned into an executable. The
generated files are simple_math_sqrt.o and simple_math_pow.o. These are called
object files.
We now need to archive these files into a library. We do this with the ar command (a
predecessor to tar):
¨ ¥
ar rc libsimple_math.a simple_math_sqrt.o simple_math_pow.o
ranlib libsimple_math.a
§ ¦
and run:
¨ ¥
gcc -Wall -c mytest.c
gcc -Wall -o mytest mytest.o -L. -lsimple_math
§ ¦
The first command compiles the file mytest.c into mytest.o, while the second command performs what is called linking the program, which assimilates mytest.o and the libraries into a
single executable. The option -L. means to look in the current directory for any libraries
(usually only /lib and /usr/lib are searched). The option -lsimple_math means
to assimilate the library libsimple_math.a (lib and .a are added automatically).
This operation is called static &Nothing to do with the “static” keyword.- linking because
it happens before the program is run, and includes all object files into the executable.
As an aside, note that it is often the case that many static libraries are linked into
the same program. Here order is important: the library with the fewest dependencies
should come last, or you will get so-called symbol referencing errors.
We can also create a header file simple_math.h for using the library.
¨ ¥
/* calculates the integer square root, aborts on error */
int simple_math_isqrt (int x);
§ ¦
This will get rid of the implicit declaration of function warning messages.
Usually #include <simple_math.h> would be used, but here this is a header
file in the current directory — our own header file — and this is where we use
"simple_math.h" instead of <simple_math.h>.
22.3 C projects — Makefiles

Now what if you make a small change to one of the files (as you are likely to do very
often when developing)? You could script the process of compiling and linking, but
the script would rebuild everything, and not just the changed file. What we really need
is a utility that recompiles only those object files whose sources have changed: make is such
a utility.
make is a program that looks inside a Makefile in the current directory then
does a lot of compiling and linking. Makefiles contain lists of rules and dependencies
describing how to build a program.
meaning simply that the files libsimple_math.a and mytest.o must exist and be up to
date before mytest. mytest: is called a make target. Beneath this line, we also need
to state how to build mytest:
¨ ¥
gcc -Wall -o $@ mytest.o -L. -lsimple_math
§ ¦
The $@ means the name of the target itself which is just substituted with mytest. Note
that the space before the gcc is a tab character and not 8 space characters.
The next dependency is that libsimple_math.a depends on simple_math_sqrt.o and
simple_math_pow.o. Once again we have a dependency, along with a shell script to
build the target. The full Makefile rule is:
¨ ¥
libsimple_math.a: simple_math_sqrt.o simple_math_pow.o
rm -f $@
ar rc $@ simple_math_sqrt.o simple_math_pow.o
ranlib $@
§ ¦
Note again that the left margin consists of a single tab character and not spaces.
which means that any .o files needed can be built from a .c file of a similar name using
the command gcc -Wall -c -o $*.o $<. $*.o means the name of the object file
and $< means the name of the file that $*.o depends on, one at a time.
Makefiles can in fact have their rules put in any order, so it’s best to state the most
obvious rules first for readability.
The all: target is the rule that make tries to satisfy when make is run with no
command-line arguments. This just means that libsimple_math.a and mytest are
the last two files to be built, i.e. the top-level dependencies.
Makefiles also have their own form of environment variables, like shell scripts. You
can see that we have used the text simple_math in three of our rules. It makes sense
to define a macro for this so that we can easily change to a different library name.
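The macro definitions that belong at the top of such a Makefile might look as follows (these names are inferred from the rules that follow and are not verbatim from the original; note that the command line under each target begins with a tab):

```makefile
# Macros used by the rules below:
OBJS    = simple_math_sqrt.o simple_math_pow.o
LIBNAME = simple_math
CFLAGS  = -Wall

all:	lib$(LIBNAME).a mytest

mytest:	lib$(LIBNAME).a mytest.o
	gcc $(CFLAGS) -o $@ mytest.o -L. -l$(LIBNAME)
```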
lib$(LIBNAME).a: $(OBJS)
15 rm -f $@
ar rc $@ $(OBJS)
ranlib $@
.c.o:
20 gcc $(CFLAGS) -c -o $*.o $<
clean:
rm -f *.o *.a mytest
§ ¦
217
22.3. C projects — Makefiles 22. Trivial introduction to C
You can see we have added an additional disconnected target, clean:. Targets can be
run explicitly on the command-line like:
¨ ¥
make clean
§ ¦
218
Chapter 23
Shared libraries
DLL stands for Dynamically Loadable Library. This chapter follows directly from our
construction of static (.a) libraries in Chapter 22. Creating DLLs is not as relevant
as installing them. Here I will show you both, so that you have a good technical
overview of how DLLs work on UNIX. You can then promptly forget everything except ldconfig and LD_LIBRARY_PATH, discussed below.
The .a library file is good for creating functions that many programs can include. This is called code reuse. But note how the .a file is linked into (included in)
the executable mytest above. mytest is enlarged by the size of libsimple_math.a.
Where there are hundreds of programs that use the same .a file, that code is effectively
duplicated all over the file-system. Such inefficiency was deemed unacceptable since
long before LINUX, so library files were invented that only link with the program
when it runs — a process known as dynamic linking. Instead of .a files, similar .so
(shared object) files live in /lib/ and /usr/lib/ and are automatically linked to a
program when it runs.
CFLAGS = -Wall

lib$(LIBNAME).so: $(OBJS)
	gcc -shared $(CFLAGS) $(OBJS) -lc -Wl,-soname -Wl,$(SOVERSION) \
		-o $(SONAME) && \
		ln -sf $(SONAME) $(SOVERSION) && \
		ln -sf $(SONAME) lib$(LIBNAME).so

.c.o:
	gcc -fPIC -DPIC $(CFLAGS) -c -o $*.o $<
clean:
rm -f *.o *.a *.so mytest
§ ¦
The -shared option to gcc builds our shared library. The -Wl options are passed by
gcc to the linker and set the version number of the library that linking programs will
load at run time. -fPIC -DPIC means generate position-independent code, i.e. code
suitable for dynamic linking. After running make we have,
¨ ¥
lrwxrwxrwx 1 root root 23 Sep 17 22:02 libsimple_math.so -> libsimple_math.so.1.0.0
lrwxrwxrwx 1 root root 23 Sep 17 22:02 libsimple_math.so.1.0 -> libsimple_math.so.1.0.0
-rwxr-xr-x 1 root root 6046 Sep 17 22:02 libsimple_math.so.1.0.0
-rwxr-xr-x 1 root root 13677 Sep 17 22:02 mytest
§ ¦
You may observe that our three .so files are similar to the many in /lib/ and
/usr/lib/. This complicated system of linking and symlinking is part of the pro-
cess of library versioning. Although generating a DLL is outside the scope of most system
admins' tasks, library versioning is important to understand:
DLLs have a problem. Consider a DLL that is outdated or buggy: simply copy-
ing a new DLL over the old one will affect all the applications that use it. If these
applications rely on certain behaviour of the DLL code, then they will probably crash
with the fresh DLL. UNIX has elegantly solved this problem by allowing multiple ver-
sions of DLLs to be present simultaneously. The programs themselves have their re-
quired version number built into them. Try,
¨ ¥
ldd mytest
§ ¦
which will show the DLL files that mytest is scheduled to link with:
¨ ¥
libsimple_math.so.1.0 => ./libsimple_math.so.1.0 (0x40018000)
libc.so.6 => /lib/libc.so.6 (0x40022000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
§ ¦
If you try to run ./mytest you will be greeted with an error while loading
shared libraries message. This is because the dynamic linker does not search
the current directory for .so files. To run your program, you will have to install your
library:
¨ ¥
mkdir -p /usr/local/lib
install -m 0755 libsimple_math.so libsimple_math.so.1.0 \
libsimple_math.so.1.0.0 /usr/local/lib
§ ¦
Chapter 24

Source and Binary Packages
Here you will, first and foremost, learn to build packages from source, following on
from your knowledge of Makefiles in Chapter 22. Most packages, however, also come
as .rpm (RedHat) or .deb (Debian) files, which are discussed further below.
Almost all packages originally come as C sources, tar'ed and available from one of
the many public ftp sites, like metalab.unc.edu. Thoughtful developers will
have made their packages GNU standards compliant. This means that untarring
the package will reveal the following files inside the top-level directory:
INSTALL This is a standard document beginning with the line "These are
generic installation instructions." Since all GNU packages are
installed the same way, this file should always be the same.
README Any essential information. This is usually an explanation of what the package
does, promotional material, and anything special that needs to be done to install it.
ChangeLog An especially formatted list containing a history of all changes ever made
to the package, by whom, and on what date. Used to track work on the package.
Being GNU standards compliant should also mean that the package will install us-
ing only the three following commands:
¨ ¥
./configure
make
make install
§ ¦
It also usually means that packages will compile on any UNIX system. Hence this sec-
tion should be a good guide to getting LINUX software to work on non-LINUX ma-
chines.
An example will now go through these steps. Begin by downloading cooledit
from metalab.unc.edu in the directory /pub/Linux/apps/editors/X/cooledit,
using ftp. Make a directory /opt/src where you are going to build such custom
packages. Now
¨ ¥
cd /opt/src
tar -xvzf cooledit-3.17.2.tar.gz
cd cooledit-3.17.2
§ ¦
You will notice that most sources have the name package-major.minor.patch.tar.gz.
The major version of the package is changed when the developers make a substantial
feature update or when they introduce incompatibilities to previous versions. The
minor version is usually updated when small features are added. The patch number
(also known as the patch level) is updated whenever a new release is made, and usually
indicates bug fixes.
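The major.minor.patch convention can be picked apart with ordinary shell parameter expansion. A small sketch using the cooledit tarball name from above:

```shell
# pull major/minor/patch out of a source tarball name
name=cooledit-3.17.2.tar.gz
ver=${name#*-}; ver=${ver%.tar.gz}   # strip the package name and the suffix
major=${ver%%.*}
patch=${ver##*.}
minor=${ver#"$major".}; minor=${minor%."$patch"}
echo "major=$major minor=$minor patch=$patch"
```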
At this point you can apply any patches you may have as per Section 20.7.3.
You can now ./configure the package. The ./configure script is generated
by autoconf — a package used by developers to create C source that will compile
on any type of UNIX system. The autoconf package also contains the GNU Coding
Standards to which all software should comply. (autoconf is the remarkable work of David
MacKenzie. I often hear the myth that UNIX systems have so diverged that they are no longer compatible.
The fact that sophisticated software like cooledit (and countless others) compiles on almost any UNIX
machine should dispel this nonsense. There is also hype surrounding developers "porting" commercial
software from other UNIX systems to LINUX. If they had written their software in the least bit properly to
begin with, there would be no porting to be done. In short, all LINUX software runs on all UNIXs. The only
exceptions are a few packages that use some custom features of the LINUX kernel.)
¨ ¥
./configure --prefix=/opt/cooledit
§ ¦
Here, --prefix indicates the top-level directory under which the package will be
installed. (See Section 17.). Always also try:
¨ ¥
./configure --help
§ ¦
-fomit-frame-pointer Permits the compiler to use one extra register that would
normally be used for debugging. Use this option only when you are absolutely
sure you have no interest in analysing any running problems with the package.
-s Strips the object code. This reduces the size of the object code by eliminating any
debugging data.
-pipe Don't use temporary files. Rather, use pipes to feed the code through the dif-
ferent stages of compilation. This usually speeds up compilation.
Compile the package. This can take anything up to several hours depending on
the amount of code and your CPU power. (cooledit will compile in under 10 minutes on any
entry-level machine at the time of writing.)
¨ ¥
make
§ ¦
if you decide that you would rather compile with debug support after all.
Install the package with,
¨ ¥
make install
§ ¦
You can use this to pack up the completed build for tarring onto a different system.
You should, however, never try to run a package from a different directory to the one
it was --prefix'd to install into, since most packages compile in the location of the data
they install.
Using a source package is often the best way to install when you want the pack-
age to work the way the developers intended. You will also tend to find more docu-
mentation, where vendors have neglected to include certain files.
Throughout this section we will place Debian examples inside parentheses, ( . . . ). Since
these are examples from actual systems, they will not always correspond.
Package versioning
The package numbering for RedHat and Debian packages is often as follows (al-
though this is far from a rule):
¨ ¥
<package-name>-<source-version>-<package-version>.<hardware-platform>.rpm
( <package-name>_<source-version>-<package-version>.deb )
§ ¦
For example:
¨ ¥
bash-1.14.7-22.i386.rpm
( bash_2.03-6.deb )
§ ¦
is the Bourne Again Shell you are using, major version 1, minor version 14, patch
7, package version 22, compiled for an Intel 386 processor. Sometimes the Debian
package will have the architecture appended to the version number, in the above case
perhaps bash_2.03-6_i386.deb.
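The fields of such a package file name can be broken down with plain shell parameter expansion. A sketch using the bash example above:

```shell
# break an rpm file name into its fields
f=bash-1.14.7-22.i386.rpm
base=${f%.rpm}
arch=${base##*.};   base=${base%."$arch"}     # hardware platform
pkgver=${base##*-}; base=${base%-"$pkgver"}   # package version
srcver=${base##*-}; name=${base%-"$srcver"}   # source version, then package name
echo "name=$name source=$srcver package=$pkgver platform=$arch"
```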
To install a package, run the following command on the .rpm or .deb file:
¨ ¥
rpm -i mirrordir-0.10.48-1.i386.rpm
( dpkg -i mirrordir_0.10.48-2.deb )
§ ¦
Upgrading can be done with, for example (Debian automatically chooses an upgrade
if the package is already present):
¨ ¥
rpm -U mirrordir-0.10.49-1.i386.rpm
( dpkg -i mirrordir_0.10.49-1.deb )
§ ¦
With Debian, a package removal does not remove configuration files, allowing you
to reinstall later:
¨ ¥
dpkg -r mirrordir
§ ¦
If you need to reinstall a package (perhaps because of a file being corrupted), use
¨ ¥
rpm -i --force python-1.6-2.i386.rpm
§ ¦
Dependencies
Packages often require other packages to already be installed in order to work. The
package database keeps track of these dependencies. Often you will get an error:
failed dependencies: (or dependency problems for Debian) message when
trying to install. This means that other packages must be installed first. The same
might happen when trying to remove packages. If two packages mutually require
each other, you must place them both on the command-line at once when installing.
Sometimes a package requires something that is not essential, or is already provided
by an equivalent package. For example, a program may require sendmail to be in-
stalled where exim is an adequate substitute. In such cases the option --nodeps skips
dependency checking.
¨ ¥
rpm -i --nodeps <rpm-file>
( dpkg -i --ignore-depends=<required-package> <deb-file> )
§ ¦
Note that Debian is far more fastidious about its dependencies; override them only
when you are sure what is going on underneath.
Package queries
.rpm and .deb packages are more than a way of archiving files; otherwise we could
just use .tar files. Each package has its file list stored in a database that can be queried.
The following are some of the more useful queries that can be done. Note that these
are queries on already installed packages only:
To get a list of all packages (query all, list),
¨ ¥
rpm -qa
( dpkg -l ’*’ )
§ ¦
Try,
¨ ¥
rpm -qa | grep util
( dpkg -l ’*util*’ )
§ ¦
To query for the existence of a package, say textutils (query, list status),
¨ ¥
rpm -q textutils
( dpkg -l textutils )
§ ¦
To list what other packages require this one (with Debian we can check by attempting
a remove with the --no-act option to merely test),
¨ ¥
rpm -q --whatrequires <package>
( dpkg --purge --no-act <package> )
§ ¦
To get the file list contained in a package (once again, not for package files but for
packages already installed),
¨ ¥
rpm -ql <package>
( dpkg -L <package> )
§ ¦
Package file lists are especially useful for finding what commands a package provides,
as well as what documentation. Users are often frustrated by a package that they
"don't know what to do with". Listing the files owned by the package is where to start.
To find out what package a file belongs to,
¨ ¥
rpm -qf <filename>
( dpkg -S <filename> )
§ ¦
For example, rpm -qf /etc/rc.d/init.d/httpd gives apache-mod_ssl-1.3.12.2.6.6-1
on my system, and rpm -ql fileutils-4.0w-3 | grep bin gives a list of all
other commands from fileutils. A trick to find all the sibling files of a command in
your PATH is:
¨ ¥
rpm -ql `rpm -qf \`which --skip-alias <command> \``
( dpkg -L `dpkg -S \`which <command> \` | cut -f1 -d:` )
§ ¦
Package verification
You sometimes might want to query if a package’s files have been modified since in-
stallation (possibly by a hacker or an incompetent system administrator). To verify all
However there is not yet a way of saying that the package installed is the real package
(see Section 44.3). To check this, you need to get your actual .deb or .rpm file and
verify it with:
¨ ¥
rpm -Vp openssh-2.1.1p4-1.i386.rpm
( debsums openssh_2.1.1p4-1_i386.deb )
§ ¦
Finally, even if you have the package file, how can you be absolutely sure that it is the
package that the original packager created, and not some trojan substitution? This can
be done with the md5sum command:
¨ ¥
md5sum openssh-2.1.1p4-1.i386.rpm
( md5sum openssh_2.1.1p4-1_i386.deb )
§ ¦
md5sum uses the MD5 mathematical algorithm to calculate a numeric hash value based
on the file contents, in this case: 8e8d8e95db7fde99c09e1398e4dd3468. This is
identical to password hashing described on page 97. There is no feasible computational
method of forging a package to give the same MD5 hash; hence packagers will often
publish their md5sum results on their web page, and you can check these against your
own as a security measure.
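A minimal sketch of such a check. The file and its expected hash here are stand-ins generated for the example, not a real published value:

```shell
# compare a file's md5sum against a published value (file and hash are stand-ins)
printf 'hello\n' > pkg.rpm.fake
sum=$(md5sum pkg.rpm.fake | cut -d' ' -f1)
expected=b1946ac92492d2347c6235b4d2611184   # the hash published for this content
if [ "$sum" = "$expected" ]; then echo OK; else echo MISMATCH; fi
rm -f pkg.rpm.fake
```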
Special queries
To query a package file that has not been installed, use, for example:
¨ ¥
rpm -qp --qf ’[%{VERSION}\n]’ <rpm-file>
( dpkg -f <deb-file> Version )
§ ¦
Here, VERSION is a query tag applicable to .rpm files. A list of other tags that can be
queried is:

BUILDHOST      OBSOLETES       RPMTAG_PREUN
BUILDTIME      OS              RPMVERSION
CHANGELOG      PACKAGER        SERIAL
CHANGELOGTEXT  PROVIDES        SIZE
CHANGELOGTIME  RELEASE         SOURCERPM
COPYRIGHT      REQUIREFLAGS    SUMMARY
DESCRIPTION    REQUIRENAME     VENDOR
DISTRIBUTION   REQUIREVERSION  VERIFYSCRIPT
GROUP          RPMTAG_POSTIN   VERSION
LICENSE        RPMTAG_POSTUN
NAME           RPMTAG_PREIN
This will create a directory <out-directory> and place the files in it. You can
also dump the package as a tar file with:
¨ ¥
dpkg --fsys-tarfile <deb-file>
§ ¦
Only a taste of Debian package management was provided above. Debian has
two higher-level tools: APT (Advanced Package Tool — comprising the commands
apt-cache, apt-cdrom, apt-config and apt-get); and dselect, which is an in-
teractive text-based package selector. When you first install Debian, I suppose the
first thing you are supposed to do is run dselect (there are other graphical front-
ends — do a search on Freshmeat <http://freshmeat.net/>) and then install and
configure all the things you skipped over during installation. Between these tools you
can do some sophisticated time-saving things, like recursively resolving package depen-
dencies through automatic downloads — i.e. just mention the package and APT will
find it and what it depends on, then download and install everything for you. See
apt(8), sources.list(5) and apt.conf(5) for more info.
There are also numerous interactive graphical applications for managing RPM pack-
ages. Most are purely cosmetic.
Experience will clearly demonstrate the superiority of Debian packages over most
others. You will also notice that where RedHat-like distributions have chosen a selec-
tion of packages that they thought you would find useful, Debian has hundreds of
volunteer maintainers selecting what they find useful. Almost every Free UNIX pack-
age on the Internet has been included in Debian.
Both RedHat and Debian binary packages begin life as source files from which
their binary versions are compiled. Source RedHat packages will end in .src.rpm,
while Debian packages will always appear under the source tree in the distribu-
tion. The RPM-HOWTO details the building of RedHat source packages, and Debian's
dpkg-dev and packaging-manual packages contain a complete reference to the
Debian package standard and packaging methodologies (try dpkg -L dpkg-dev
and dpkg -L packaging-manual).
Actually building packages will not be covered in this edition.
Chapter 25
Introduction to IP
IP stands for Internet Protocol. It is the method by which data is transmitted over
the Internet. At a hardware level, network cards are capable of transmitting packets
(also called datagrams) of data between one another. A packet contains a small block of,
say, 1 kilobyte of data. (This is in contrast to serial lines, which transmit continuously.) All In-
ternet communication occurs via transmission of packets, which travel intact between
machines on either side of the world.
Each packet contains a header of 24 bytes or more preceding the data. Hence,
slightly more than the said 1 kilobyte of data would be found on the wire. When a
packet is transmitted, the header contains the address of the destination machine.
Each machine is hence given a unique IP address — a 32-bit number. There are no
machines on the Internet that do not have an IP address.
The header bytes are shown in Table 25.1.
Version will for the meantime be 4, although IP Next Generation (version 6) is
in the process of development. IHL is the length of the header divided by 4. TOS
(Type of Service) is a somewhat esoteric field for tuning performance and will not be
explained. The Length field is the length in bytes of the entire packet, inclusive of the
header. The Source and Destination are the IP addresses from and to which the packet
is coming/going.
The above description constitutes the view of the Internet that a machine has.
However, physically, the Internet consists of many small high speed networks (like a
company or a university) called Local Area Networks, or LANs. These are all connected
to each other via lower speed long distance links. On a LAN, the raw medium of
Bytes Description
0 bits 0–3: Version, bits 4–7: Internet Header Length (IHL)
1 Type of service (TOS)
2–3 Length
4–5 Identification
6–7 bits 0-3: Flags, bits 4-15: Offset
8 Time to live (TTL)
9 Type
10–11 Checksum
12–15 Source IP address
16–19 Destination IP address
20–IHL*4-1 Options + padding to round up to four bytes
Data begins at IHL*4 and ends at Length-1
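Since Version and IHL share the first header byte, decoding them is just a matter of splitting that byte into nibbles. A small sketch using the common value 0x45:

```shell
# split the first IP header byte into Version (high nibble) and IHL (low nibble)
byte=$(( 0x45 ))          # the usual first byte: IPv4, no options
version=$(( byte >> 4 ))
ihl=$(( byte & 0x0f ))
hdrlen=$(( ihl * 4 ))     # IHL counts 4-byte words
echo "version=$version header_bytes=$hdrlen"
```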
transmission is not a packet but an Ethernet frame. Frames are analogous to packets
(having both a header and a data portion) but are sized to be efficient with particular
hardware. IP packets are encapsulated within frames, where the IP packet fits within
the Data part of the frame. A frame may, however, be too small to hold an entire IP
packet, in which case the IP packet is split into several smaller packets. This group
of smaller IP packets is then given an identifying number, and each smaller packet
will then have the Identification field set with that number and the Offset field set
to indicate its position within the actual packet. On the other side, the destination
machine will reconstruct a packet from all the smaller sub-packets that have the same
Identification field.
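The offset arithmetic can be sketched as follows. The 576-byte MTU and 20-byte header are illustrative values; note that real IP fragmentation counts offsets in 8-byte units, so each fragment's data length is rounded down to a multiple of 8:

```shell
# sketch of fragmentation offsets for a 1024-byte payload and a 576-byte MTU
total=1024; mtu=576; hdr=20
max=$(( (mtu - hdr) / 8 * 8 ))   # fragment data must be a multiple of 8 bytes
off=0
while [ "$off" -lt "$total" ]; do
  len=$max
  if [ $(( off + len )) -gt "$total" ]; then len=$(( total - off )); fi
  echo "fragment: offset=$off length=$len"
  off=$(( off + len ))
done
```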
The convention for writing IP addresses in human-readable form is dotted deci-
mal notation, like 152.2.254.81, where each number is a byte and is hence in the
range of 0 to 255. Hence the entire address space is in the range of 0.0.0.0 to
255.255.255.255. Now, to further organise the assignment of addresses, each 32-
bit address is divided into two parts, a network and a host part of the address.
(Diagram: the 32 address bits, numbered 0 through 31, divided into a network part followed by a host part.)
The network part of the address designates the LAN and the host part the par-
ticular machine on the LAN. Now, because it was unknown at the time of specification
whether there would one day be more LANs or more machines on a LAN, three dif-
ferent classes of address were created. Class A addresses begin with the first bit of the
network part set to 0 (hence a Class A address always has the first dotted decimal num-
ber less than 128). The next 7 bits give the identity of the LAN and the remaining 24
bits give the identity of an actual machine on that LAN. A Class B address begins with
a 1 and then a 0 (first decimal number is 128 through 191). The next 14 bits give the
LAN and the remaining 16 bits give the machine — most universities, like the address
above, have Class B addresses. Finally, Class C addresses start with a 1 1 0 (first decimal
number is 192 through 223), and the next 21 bits and then the next 8 bits are the LAN
and machine respectively. Small companies tend to use Class C addresses.
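The class rules above amount to a comparison on the first dotted decimal number. A small sketch:

```shell
# classify an IP address by its first dotted decimal number
classify() {
  first=${1%%.*}
  if   [ "$first" -lt 128 ]; then echo A
  elif [ "$first" -lt 192 ]; then echo B
  elif [ "$first" -lt 224 ]; then echo C
  else echo "D/E (multicast/reserved)"
  fi
}
cls=$(classify 152.2.254.81)   # the university address used in the text
echo "$cls"
```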
In practice, there are few organisations that require Class A addresses. A univer-
sity or large company might use a Class B address, but then it would have its own fur-
ther subdivisions, like using the third dotted decimal as a department (bits 16 through
23) and the last dotted decimal (bits 24 through 31) as the machine within that depart-
ment. In this way the LAN becomes a micro Internet in itself. Here the LAN is called a
network and the various departments are each called a subnet.
There are also some IP addresses that have special purposes that are never used
on the open Internet. 192.168.0.0–192.168.255.255 are private addresses per-
haps used inside a local LAN that does not communicate directly with the Internet.
127.0.0.0–127.255.255.255 are used for communication with the localhost — i.e.
the machine itself. Usually 127.0.0.1 is an IP address pointing to the machine itself.
Then 172.16.0.0–172.31.255.255 are additional private addresses for very large
internal networks, and 10.0.0.0–10.255.255.255 are for even larger ones.
Consider again the example of a University with a Class B address. It might have
an IP address range of the 137.158.0.0–137.158.255.255. It has decided that
the astronomy department should get 512 of its own IP addresses 137.158.26.0–
137.158.27.255. We say that astronomy has a network address of 137.158.26.0.
The machines there all have a network mask of 255.255.254.0. A particular machine
in astronomy may have an IP address of 137.158.27.158. This terminology will be
used later.
                    Dotted IP          Binary
Netmask             255.255.254. 0     1111 1111  1111 1111  1111 1110  0000 0000
Network address     137.158. 26. 0     1000 1001  1001 1110  0001 1010  0000 0000
IP address          137.158. 27.158    1000 1001  1001 1110  0001 1011  1001 1110
Host part             0.  0.  1.158    0000 0000  0000 0000  0000 0001  1001 1110
Here we will define the term LAN as a network of computers that are all more-or-less
connected directly together by Ethernet cables (this is common for the small business
with up to about 50 machines). Each machine has an Ethernet card which is referred
to as eth0 when configuring the network from the command-line. If there is more
than one card on a single machine, then these are named eth0, eth1, eth2 etc., and
each is called a network interface (or just interface) of the machine. LANs work as
follows: network cards transmit a frame to the LAN, and other network cards read that
frame from the LAN. If any one network card transmits a frame, then all other network
cards can see that frame. If a card starts to transmit a frame while another card is in
the process of transmitting a frame, then a clash is said to have occurred, and the card
waits a random amount of time and then tries again. Each network card has a physical
address of 48 bits called the hardware address (inserted at the time of its
manufacture, and having nothing to do with IP addresses). Each frame has a destination address
in its header that tells what network card it is destined for, so that network cards ignore
frames that are not addressed to them.
Now, since frame transmission is governed by the network cards, the destination
hardware address must be determined from the destination IP address before sending
a packet to a particular machine. The way this is done is through a protocol called
the Address Resolution Protocol (ARP). A machine will transmit a special packet that
asks 'What hardware address is this IP address?'. The guilty machine then responds,
and the transmitting machine stores the result for future reference. Of course, if you
suddenly switch network cards, then other machines on the LAN will have the wrong
information, so ARP has timeouts and re-requests built into the protocol. Try typing the
command arp to get a list of hardware address to IP mappings.
Most distributions have a generic way to configure your interfaces. Here we will show
the raw method.
We first have to create a lo interface. This is called the loopback device (and has nothing
to do with loopback block devices: /dev/loop? files). This is an imaginary device that
is used to communicate with the machine itself; if, for instance, you are telneting to
the local machine, you are actually connecting via the loopback device. The ifconfig
(interface-configure) command is used to do anything with interfaces. First run,
¨ ¥
/sbin/ifconfig lo down
/sbin/ifconfig eth0 down
§ ¦
The broadcast address is a special address that all machines respond to. It is usually
the last address of the particular network.
Now do
¨ ¥
/sbin/ifconfig
§ ¦
which shows various interesting bits, like the 48 bit hardware address of the network
card (00:00:E8:3B:2D:A2).
The interfaces are now active; however, there is nothing telling the kernel what packets
should go to what interface, even though we might expect such behaviour to happen
on its own. With UNIX, you must explicitly tell the kernel to send particular packets to
particular interfaces.
Any packet arriving through any interface is pooled by the kernel. The kernel then
looks at each packet's destination address and decides, based on the destination, where
it should be sent. It doesn't matter where the packet came from; once the kernel has it,
it's what its destination address says that matters. It's up to the rest of the network to
ensure that packets do not arrive at the wrong interfaces in the first place.
We know that any packet having the network address 127.???.???.??? must go to
the loopback device (this is more or less a convention). The command,
¨ ¥
/sbin/route add -net 127.0.0.0 netmask 255.0.0.0 lo
§ ¦
Running /sbin/route -n (-n causes route to not print IP addresses as hostnames) then gives the output
¨ ¥
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
§ ¦
The routing table now routes 127. and 192.168.3. packets. Now we need
a route for the remaining possible IP addresses. UNIX can have a route that says to
send packets with particular destination IP addresses to another machine on the LAN,
from where they might be forwarded elsewhere. This is sometimes called the gateway
machine. The command is:
¨ ¥
/sbin/route add -net <network-address> netmask <netmask> gw \
<gateway-ip-address> <interface>
§ ¦
This is the most general form of the command, but it's often easier to just type:
¨ ¥
/sbin/route add default gw <gateway-ip-address> <interface>
§ ¦
when we want to add a route that applies to all packets. The default signifies all
packets; it is the same as
¨ ¥
/sbin/route add -net 0.0.0.0 netmask 0.0.0.0 gw <gateway-ip-address> \
<interface>
§ ¦
but since routes are ordered according to netmask, more specific routes are used in
preference to less specific ones.
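The preference for more specific routes can be sketched as a toy longest-netmask match over the two routes configured above (the loopback route and the default route):

```shell
# toy longest-netmask route selection for destination 127.0.0.1
ip2int() {
  IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
  echo $(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
}
dest=$(ip2int 127.0.0.1)
best=none; bestmask=-1
while read -r net mask iface; do
  n=$(ip2int "$net"); m=$(ip2int "$mask")
  # a route matches when (destination AND netmask) equals its network address;
  # among matches, the largest (most specific) netmask wins
  if [ $(( dest & m )) -eq "$n" ] && [ "$m" -gt "$bestmask" ]; then
    bestmask=$m; best=$iface
  fi
done <<EOF
0.0.0.0 0.0.0.0 eth0
127.0.0.0 255.0.0.0 lo
EOF
echo "$best"
```

The default route (netmask 0.0.0.0) matches everything, but the /8 loopback route is more specific, so the packet goes to lo.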
Although these 7 commands will get your network working, you should not
do such manual configuration. The next section explains how to configure your
startup scripts.
Most distributions will have a modular and extensible system of startup scripts that
initiate networking.
You can see that these two files are equivalent to the example configuration done
above. There are an enormous number of options that these two files can take for
the various protocols besides TCP/IP, but this is the most common configuration.
The file /etc/sysconfig/network-scripts/ifcfg-lo for the loopback device
will be configured automatically at installation; you should never need to edit it.
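For comparison, a typical /etc/sysconfig/network-scripts/ifcfg-eth0 might look like the following. All values here are illustrative; check your own distribution's files for the exact options it supports:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
IPADDR=192.168.3.9
NETMASK=255.255.255.0
NETWORK=192.168.3.0
BROADCAST=192.168.3.255
ONBOOT=yes
```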
To stop and start networking (i.e. bring up and down the interfaces and routing),
type (alternative commands in parentheses):
¨ ¥
/etc/init.d/network stop
( /etc/rc.d/init.d/network stop )
/etc/init.d/network start
( /etc/rc.d/init.d/network start )
§ ¦
NETWORK=192.168.4.0
BROADCAST=192.168.4.255
ONBOOT=yes
§ ¦
and then set FORWARD_IPV4=true (above) to enable packet forwarding between your
two interfaces.
To stop and start networking (i.e. bring up and down the interfaces and routing), type
¨ ¥
/etc/init.d/networking stop
/etc/init.d/networking start
§ ¦
netmask 255.255.255.0
gateway 192.168.3.254
iface eth1 inet static
address 192.168.4.1
netmask 255.255.255.0
§ ¦
Consider two distant LANs that need to communicate. Two dedicated machines, one
on each LAN, are linked via some alternative method (in this case a permanent serial
line).
This arrangement can be summarised by five machines X, A, B, C and D. Machines X,
A and B form LAN 1 on subnet 192.168.1.0/26. Machines C and D form LAN 2
on subnet 192.168.1.128/26. Note how we use the "/26" to indicate that only the
first 26 bits are network address bits, while the remaining 6 bits are host address bits.
This means that we can have at most 2^6 = 64 IP addresses on each of LAN 1 and 2.
Our dedicated serial link comes between machines B and C.
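The address count follows directly from the number of host bits. As a quick check:

```shell
# host addresses in a /26: 2 to the power of (32 - 26)
prefix=26
count=$(( 1 << (32 - prefix) ))
echo "$count"
```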
Machine X has IP address 192.168.1.1. This machine is the gateway to the
Internet. The Ethernet port of machine B is simply configured with an IP address
of 192.168.1.2 with a default gateway of 192.168.1.1. Note that the broadcast
address is 192.168.1.63 (the last 6 bits set to 1).
The Ethernet port of machine C is configured with an IP address of 192.168.1.129.
No default gateway should be set until the serial line is configured.
We will make the network between B and C subnet 192.168.1.192/26. It is
effectively a LAN on its own, even though only two machines can ever be connected.
Machines B and C will have IP addresses 192.168.1.252 and 192.168.1.253 re-
spectively on their facing interfaces.
This is a real life example with an unreliable serial link. To keep the link up requires
pppd and a shell script to restart it if it dies. The pppd program will be covered in
Note that if the link were an Ethernet link instead (on a second Ethernet card), and/or
a genuine LAN between machines B and C (with subnet 192.168.1.252/26), then
the same script would be just:
¨ ¥
/sbin/ifconfig eth1 192.168.1.252 broadcast 192.168.1.255 netmask \
255.255.255.192
§ ¦
in which case all “ppp0” would change to “eth1” in the scripts that follow.
Routeing on machine B is achieved with the following script provided the link is up.
This script must be executed whenever pppd has negotiated the connection, and can
hence be placed in the file /etc/ppp/ip-up, which pppd will execute automatically
as soon as the ppp0 interface is available:
¨ ¥
/sbin/route del default
/sbin/route add -net 192.168.1.192 netmask 255.255.255.192 dev ppp0
/sbin/route add -net 192.168.1.128 netmask 255.255.255.192 gw 192.168.1.253
/sbin/route add default gw 192.168.1.1
5
echo 1 > /proc/sys/net/ipv4/ip_forward
§ ¦
Our full routeing table and interface list for machine B then looks like: (RedHat 6 likes to
add explicit routes to each device. These may not be necessary on your system.)
¨ ¥
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.2 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
192.168.1.253 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
5 192.168.1.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
192.168.1.192 0.0.0.0 255.255.255.192 U 0 0 0 ppp0
192.168.1.128 192.168.1.253 255.255.255.192 UG 0 0 0 ppp0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
10
eth0 Link encap:Ethernet HWaddr 00:A0:24:75:3B:69
inet addr:192.168.1.2 Bcast:192.168.1.63 Mask:255.255.255.192
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
ppp0 Link encap:Point-to-Point Protocol
inet addr:192.168.1.252 P-t-P:192.168.1.253 Mask:255.255.255.255
§ ¦
25. Introduction to IP 25.9. Interface aliasing — many IP’s on one physical card
Machine D can be configured like any ordinary machine on a LAN. It just sets its
default gateway to 192.168.1.129. Machine A however has to know to send packets
destined for subnet 192.168.1.128/26 through machine B. Its routeing table has an
extra entry for the 192.168.1.128/26 LAN. The full routeing table for machine A
is:
¨ ¥
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
192.168.1.128 192.168.1.2 255.255.255.192 UG 0 0 0 eth0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
§ ¦
The above configuration allowed machines A and D to properly send packets to each
other and out through the Internet. One caveat: ping was sometimes not able to work
even though telnet did. This may be a peculiarity of the kernel version we were
using **shrug**.
25.10. Diagnostic utilities 25. Introduction to IP
in addition to your regular eth0 device. Here, the same interface can communicate with
three LANs having networks 192.168.4.0, 192.168.5.0 and 192.168.6.0. Don't
forget to add routes to these networks as above.
25.10.1 ping
The ping command is the most common network utility. IP packets come in three
types on the Internet, indicated in the Type field of the IP header: UDP, TCP and
ICMP. (The former two will be discussed later, and represent the two basic methods of
communication between two programs running on different machines.) ICMP, however,
stands for Internet Control Message Protocol; ICMP packets are diagnostic packets that are
responded to in a special way. Try:
¨ ¥
ping metalab.unc.edu
§ ¦
or some other well known host. You will get output like:
¨ ¥
PING metalab.unc.edu (152.19.254.81) from 192.168.3.9 : 56(84) bytes of data.
64 bytes from 152.19.254.81: icmp_seq=0 ttl=238 time=1059.1 ms
64 bytes from 152.19.254.81: icmp_seq=1 ttl=238 time=764.9 ms
64 bytes from 152.19.254.81: icmp_seq=2 ttl=238 time=858.8 ms
64 bytes from 152.19.254.81: icmp_seq=3 ttl=238 time=1179.9 ms
64 bytes from 152.19.254.81: icmp_seq=4 ttl=238 time=986.6 ms
64 bytes from 152.19.254.81: icmp_seq=5 ttl=238 time=1274.3 ms
64 bytes from 152.19.254.81: icmp_seq=6 ttl=238 time=930.7 ms
§ ¦
25.10.2 traceroute
traceroute is a rather fascinating utility to identify where a packet has been. It uses
UDP packets or, with the -I option, ICMP packets to detect the routing path. On my
machine,
¨ ¥
traceroute metalab.unc.edu
§ ¦
gives,
¨ ¥
traceroute to metalab.unc.edu (152.19.254.81), 30 hops max, 38 byte packets
1 192.168.3.254 (192.168.3.254) 1.197 ms 1.085 ms 1.050 ms
2 192.168.254.5 (192.168.254.5) 45.165 ms 45.314 ms 45.164 ms
3 cranzgate (192.168.2.254) 48.205 ms 48.170 ms 48.074 ms
4 cranzposix (160.124.182.254) 46.117 ms 46.064 ms 45.999 ms
5 cismpjhb.posix.co.za (160.124.255.193) 451.886 ms 71.549 ms 173.321 ms
6 cisap1.posix.co.za (160.124.112.1) 274.834 ms 147.251 ms 400.654 ms
7 saix.posix.co.za (160.124.255.6) 187.402 ms 325.030 ms 628.576 ms
8 ndf-core1.gt.saix.net (196.25.253.1) 252.558 ms 186.256 ms 255.805 ms
9 ny-core.saix.net (196.25.0.238) 497.273 ms 454.531 ms 639.795 ms
10 bordercore6-serial5-0-0-26.WestOrange.cw.net (166.48.144.105) 595.755 ms 595.174 ms *
11 corerouter1.WestOrange.cw.net (204.70.9.138) 490.845 ms 698.483 ms 1029.369 ms
12 core6.Washington.cw.net (204.70.4.113) 580.971 ms 893.481 ms 730.608 ms
13 204.70.10.182 (204.70.10.182) 644.070 ms 726.363 ms 639.942 ms
14 mae-brdr-01.inet.qwest.net (205.171.4.201) 767.783 ms * *
15 * * *
16 * wdc-core-03.inet.qwest.net (205.171.24.69) 779.546 ms 898.371 ms
17 atl-core-02.inet.qwest.net (205.171.5.243) 894.553 ms 689.472 ms *
18 atl-edge-05.inet.qwest.net (205.171.21.54) 735.810 ms 784.461 ms 789.592 ms
19 * * *
20 * * unc-gw.ncren.net (128.109.190.2) 889.257 ms
21 unc-gw.ncren.net (128.109.190.2) 646.569 ms 780.000 ms *
22 * helios.oit.unc.edu (152.2.22.3) 600.558 ms 839.135 ms
§ ¦
25.10.3 tcpdump
tcpdump watches a particular interface for all the traffic that passes it — i.e. all the
traffic of all the machines connected to the same hub. A network card usually grabs
only the frames destined for it, but tcpdump puts the card into promiscuous mode,
meaning that it retrieves all frames regardless of their destination hardware address.
Try
¨ ¥
tcpdump -n -N -f -i eth0
§ ¦
tcpdump is also discussed in Section 41.5. Deciphering the output of tcpdump is left
for now as an exercise for the reader. More on the tcp part of tcpdump in Chapter 26.
Chapter 26
TCP and UDP
To implement a reliable stream having only data packets at our disposal is tricky.
You can send single packets and then wait for the remote machine to indicate that it
has received them, but this is inefficient (packets can take a long time to get to and from
their destination) — you really want to be able to send as many packets as possible
at once, and then have some means of negotiating with the remote machine when to
resend packets that were not received. What TCP does is to send data packets one way,
and then acknowledge packets the other way, saying how much of the stream has been
properly received.
We hence say that TCP is implemented on top of IP. This is why Internet communi-
cation is sometimes called TCP/IP.
TCP communication has three stages: negotiation, transfer and detachment. &This is
all my own terminology. This is also somewhat of a schematic representation.-
26.1. The TCP header 26. TCP and UDP
Negotiation: The client application (say, a web browser) first initiates the connec-
tion using the C connect() function (connect(2)). This causes the kernel to send a
SYN (SYNchronisation) packet to the remote TCP server (in this case a web server).
The web server responds with a SYN-ACK packet (ACKnowledge), and finally the
client responds with an ACK packet of its own. This packet negotiation is invisible to
the programmer.
Transfer: The programmer will use the send() (send(2)) and recv() (recv(2))
C function calls to send and receive an actual stream of bytes. The stream of bytes is
broken into packets, and the packets are sent individually to the remote application. In
the case of the web server, the first bytes sent would be the line GET /index.html
HTTP/1.0<CR><NL><CR><NL>. On the remote side, reply packets (also called
ACK packets) are sent back as the data arrives, indicating whether parts of the stream went
missing and require retransmission. Communication is full-duplex — meaning that
there are streams in both directions — both data and acknowledgement packets are going
both ways simultaneously.
Detachment: The programmer will use the C function call close() (close(2))
and/or shutdown() (shutdown(2)) to terminate the connection. A FIN packet will
be sent and TCP communication will cease.
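These three stages can be sketched with a pair of sockets on the loopback interface. This is an illustrative sketch, not from the original text; the calls used are the same connect(2), send(2), recv(2) and close(2) mentioned above, as exposed by Python's socket module, with the kernel performing the actual packet negotiation:

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()      # kernel has completed SYN / SYN-ACK / ACK
    data = conn.recv(1024)           # Transfer: read from the byte stream
    conn.sendall(data.upper())       # send a reply stream the other way
    conn.close()                     # Detachment: FIN is sent

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the kernel pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # Negotiation: connect(2) sends SYN
client.sendall(b"hello")                 # Transfer: send(2)
reply = client.recv(1024)                # Transfer: recv(2)
client.close()                           # Detachment: close(2) sends FIN
print(reply)                             # b'HELLO'
```

Neither side ever sees a packet as such; the stream abstraction is maintained entirely by the kernel.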
TCP packets are obviously encapsulated within IP packets. The TCP packet is inside the
Data begins at. . . part of the IP packet. A TCP packet has a header part and a data
part. The data part may sometimes be empty (such as in the Negotiation stage).
Bytes Description
0 bits 0–3: Version, bits 4–7: Internet Header Length (IHL)
1 Type of service (TOS)
2–3 Length
4–5 Identification
6–7 bits 0-3: Flags, bits 4-15: Offset
8 Time to live (TTL)
9 Type
10–11 Checksum
12–15 Source IP address
16–19 Destination IP address
20–IHL*4-1 Options + padding to round up to four bytes
0–1 Source Port
2–3 Destination Port
4–7 Sequence Number
8–11 Acknowledgement Number
12 bits 0–3: number of bytes of additional TCP Options / 4
13 Control
14–15 Window
16–17 Checksum
18–19 Urgent Pointer
20–20+Options*4 Options + padding to round up to four bytes
TCP Data begins at IHL*4+20+Options*4 and ends at Length-1
Sequence Number is the offset within the stream to which this particular packet of
data belongs. The Acknowledgement Number is the point in the stream up to which all
data has been received. Control holds various other flag bits. Window is the maximum
amount of data the receiver is prepared to accept. Checksum verifies data integrity,
and Urgent Pointer is for interrupting data. Data needed by extensions to the protocol
is appended after the header as options.
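As a sketch (not from the original text), the fixed 20-byte portion of this layout can be packed and unpacked with Python's struct module in network byte order. The port and sequence values below are borrowed from the tcpdump session later in this chapter and are otherwise arbitrary:

```python
import struct

# ports (2+2), seq (4), ack (4), offset byte, control byte,
# window (2), checksum (2), urgent pointer (2) = 20 bytes
TCP_HDR = struct.Struct("!HHIIBBHHH")

hdr = TCP_HDR.pack(
    4064,        # source port
    80,          # destination port (www)
    2463192134,  # sequence number
    0,           # acknowledgement number
    5 << 4,      # byte 12, upper nibble: header length of 5 words (20 bytes)
    0x02,        # control bits: SYN
    32120,       # window
    0,           # checksum (left zero in this sketch)
    0,           # urgent pointer
)
src, dst, seq, ack, off, flags, win, cksum, urg = TCP_HDR.unpack(hdr)
print(src, dst, seq, (off >> 4) * 4)   # 4064 80 2463192134 20
```

A real stack would compute the checksum over the header, data and an IP pseudo-header; that detail is omitted here.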
26.2. A sample TCP session 26. TCP and UDP
It's easy to see TCP working by using telnet. You are probably familiar with using
telnet to log in to remote systems, but telnet is actually a generic program to con-
nect to any TCP socket. Here we will try to connect to cnn.com's web page.
We first need to get an IP address for cnn.com:
¨ ¥
[root@cericon]# host cnn.com
cnn.com has address 207.25.71.20
§ ¦
In a separate window we can run tcpdump (the exact command is shown at the end of
this section) with a filter that says to list all packets having source (src) or destination
(dst) addresses of either us or CNN.
Then we use the HTTP protocol to grab the page. Type in the HTTP command
GET / HTTP/1.0 and then press Enter twice (as required by the HTTP protocol).
The first and last few lines of the session are shown below:
¨ ¥
[root@cericon root]# telnet 207.25.71.20 80
Trying 207.25.71.20...
Connected to 207.25.71.20.
Escape character is ’ˆ]’.
GET / HTTP/1.0
HTTP/1.0 200 OK
Server: Netscape-Enterprise/2.01
Date: Tue, 18 Apr 2000 10:55:14 GMT
Set-cookie: CNNid=cf19472c-23286-956055314-2; expires=Wednesday, 30-Dec-2037 16:00:00 GMT; path=/; domain=.cnn.com
Last-modified: Tue, 18 Apr 2000 10:55:14 GMT
Content-type: text/html
<HTML>
<HEAD>
<TITLE>CNN.com</TITLE>
<META http-equiv="REFRESH" content="1800">
<!--CSSDATA:956055234-->
<SCRIPT src="/virtual/2000/code/main.js" language="javascript"></SCRIPT>
<LINK rel="stylesheet" href="/virtual/2000/style/main.css" type="text/css">
<SCRIPT language="javascript" type="text/javascript">
<!--//
if ((navigator.platform==’MacPPC’)&&(navigator.ap
..............
..............
</BODY>
</HTML>
Connection closed by foreign host.
§ ¦
The above produces the front page of CNN's web site in raw HTML. This is easy to paste
into a file and view off-line.
In the other window, tcpdump is showing us what packets are being exchanged.
tcpdump nicely shows us hostnames instead of IP addresses and the letters www instead
of the port number 80. The local “random” port in this case was 4064:
¨ ¥
[root@cericon]# tcpdump \
’( src 192.168.3.9 and dst 207.25.71.20 ) or ( src 207.25.71.20 and dst 192.168.3.9 )’
Kernel filter, protocol ALL, datagram packet socket
tcpdump: listening on all devices
5 12:52:35.467121 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
S 2463192134:2463192134(0) win 32120 <mss 1460,sackOK,timestamp 154031689 0,nop,wscale 0> (DF)
12:52:35.964703 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
S 4182178234:4182178234(0) ack 2463192135 win 10136 <nop,nop,timestamp 1075172823 154031689,nop,wscale 0,mss 1460>
12:52:35.964791 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
10 . 1:1(0) ack 1 win 32120 <nop,nop,timestamp 154031739 1075172823> (DF)
12:52:46.413043 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
P 1:17(16) ack 1 win 32120 <nop,nop,timestamp 154032784 1075172823> (DF)
12:52:46.908156 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
. 1:1(0) ack 17 win 10136 <nop,nop,timestamp 1075173916 154032784>
15 12:52:49.259870 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
P 17:19(2) ack 1 win 32120 <nop,nop,timestamp 154033068 1075173916> (DF)
12:52:49.886846 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
P 1:278(277) ack 19 win 10136 <nop,nop,timestamp 1075174200 154033068>
12:52:49.887039 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
20 . 19:19(0) ack 278 win 31856 <nop,nop,timestamp 154033131 1075174200> (DF)
12:52:50.053628 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
. 278:1176(898) ack 19 win 10136 <nop,nop,timestamp 1075174202 154033068>
12:52:50.160740 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
P 1176:1972(796) ack 19 win 10136 <nop,nop,timestamp 1075174202 154033068>
25 12:52:50.220067 eth0 > cericon.cranzgot.co.za.4064 > www1.cnn.com.www:
. 19:19(0) ack 1972 win 31856 <nop,nop,timestamp 154033165 1075174202> (DF)
12:52:50.824143 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
. 1972:3420(1448) ack 19 win 10136 <nop,nop,timestamp 1075174262 154033131>
12:52:51.021465 eth0 < www1.cnn.com.www > cericon.cranzgot.co.za.4064:
30 . 3420:4868(1448) ack 19 win 10136 <nop,nop,timestamp 1075174295 154033165>
..............
..............
The above requires some explanation: Lines 5, 7 and 9 are the Negotiation
stage. tcpdump uses the format <Sequence Number>:<Sequence Number +
data length>(<data length>) on each line to show the context of the packet
within the stream. The Sequence Number however is chosen randomly at the outset,
hence tcpdump prints the relative sequence number after the first two packets to make
it clearer what the actual position is within the stream. Line 11 is where I pressed Enter
the first time, and Line 15 was Enter with an empty line. The ack 19's indicate up
to where CNN's web server has received incoming data — in this case we only ever
typed in 19 bytes, hence the web server sets this value in every one of its outgoing
packets, while our own outgoing packets are mostly empty of data.
More information about the tcpdump output can be had from tcpdump(8) under the
section TCP Packets.
26.3 User Datagram Protocol (UDP)
Sometimes you want to have direct control of packets for efficiency reasons, or because
you don't really mind if packets get lost. Two examples are nameserver communica-
tions, where single-packet transmissions are desired, and voice transmissions, where re-
ducing lag time is more important than data integrity. Another is NFS (Network File
System), which uses UDP to implement exclusively high-bandwidth data transfer.
With UDP the programmer sends and receives individual packets, again encapsulated
within IP. Ports are used in the same way as with TCP, but these are merely identifiers
and there is no concept of a stream. The full UDP/IP header is very simple:
Bytes Description
0 bits 0–3: Version, bits 4–7: Internet Header Length (IHL)
1 Type of service (TOS)
2–3 Length
4–5 Identification
6–7 bits 0-3: Flags, bits 4-15: Offset
8 Time to live (TTL)
9 Type
10–11 Checksum
12–15 Source IP address
16–19 Destination IP address
20–IHL*4-1 Options + padding to round up to four bytes
0–1 Source Port
2–3 Destination Port
4–5 Length
6–7 Checksum
UDP Data begins at IHL*4+8 and ends at Length-1
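The contrast with TCP can be sketched in a few lines (not from the original text): there is no connect negotiation and no stream, just individual datagrams sent to a port:

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # kernel picks a free port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# no connect(): each sendto() is one self-contained datagram
sender.sendto(b"single packet", receiver.getsockname())

data, addr = receiver.recvfrom(1024)     # one whole datagram arrives (or not)
print(data)                              # b'single packet'
```

On the loopback interface delivery is effectively reliable; over a real network the application itself must cope with lost, duplicated or reordered datagrams.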
26.4 /etc/services file
There are various standard port numbers used exclusively for particular types of ser-
vices. Port 80 is always web, as shown above. Port numbers 1 through 1023 are reserved for
such standard services, each of which is given a convenient name.
All services are defined for both TCP and UDP, even though there is, for example,
no such thing as UDP FTP access.
Port numbers below 1024 are used exclusively for root uid programs such as mail,
DNS, and web services. Programs of ordinary users are not allowed to bind to ports
below 1024. The place where these ports are defined is the /etc/services file. The
/etc/services file is mostly for descriptive purposes — programs can look up port
names and numbers — /etc/services has nothing to do with the availability of a
service.
An extract of the /etc/services file is
¨ ¥
tcpmux 1/tcp # TCP port service multiplexer
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
daytime 13/tcp
daytime 13/udp
netstat 15/tcp
qotd 17/tcp quote
msp 18/tcp # message send protocol
msp 18/udp # message send protocol
chargen 19/tcp ttytst source
chargen 19/udp ttytst source
ftp-data 20/tcp
ftp 21/tcp
fsp 21/udp fspd
ssh 22/tcp # SSH Remote Login Protocol
ssh 22/udp # SSH Remote Login Protocol
telnet 23/tcp
smtp 25/tcp mail
time 37/tcp timserver
time 37/udp timserver
rlp 39/udp resource # resource location
nameserver 42/tcp name # IEN 116
whois 43/tcp nicname
re-mail-ck 50/tcp # Remote Mail Checking Protocol
re-mail-ck 50/udp # Remote Mail Checking Protocol
domain 53/tcp nameserver # name-domain server
domain 53/udp nameserver
mtp 57/tcp # deprecated
bootps 67/tcp # BOOTP server
bootps 67/udp
bootpc 68/tcp # BOOTP client
bootpc 68/udp
tftp 69/udp
gopher 70/tcp # Internet Gopher
gopher 70/udp
rje 77/tcp netrjs
finger 79/tcp
www 80/tcp http # WorldWideWeb HTTP
www 80/udp # HyperText Transfer Protocol
§ ¦
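The file's format is simple enough to parse by hand, although C programs normally go through the getservbyname(3) library call instead of reading it directly. As a self-contained sketch (not from the original text), a few lines of the extract above are embedded here and parsed:

```python
# A few lines in /etc/services format:
# name  port/protocol  [aliases...]  [# comment]
extract = """\
ftp             21/tcp
ssh             22/tcp          # SSH Remote Login Protocol
telnet          23/tcp
smtp            25/tcp          mail
www             80/tcp          http    # WorldWideWeb HTTP
"""

services = {}
for line in extract.splitlines():
    line = line.split("#")[0].strip()        # drop comments and blank lines
    if not line:
        continue
    name, portproto, *aliases = line.split()
    port, proto = portproto.split("/")
    services[(name, proto)] = int(port)

print(services[("www", "tcp")])   # 80
```

Python's socket.getservbyname("www", "tcp") would return the same answer on a system with a standard /etc/services installed.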
26.5 Encrypted/forwarded TCP
The TCP stream can easily be reconstructed by anyone listening on a wire who happens
to see your network traffic, and hence TCP is known as an inherently insecure service. We would
like to encrypt our data so that anything captured between the client and server will
appear garbled. Such an encrypted stream should have several properties:
1. It should ensure that the connecting client really is connecting to the server in
question. In other words it should authenticate the server to ensure that the
server is not a “trojan”.
2. It should prevent any information being gained by a snooper. This means that
any traffic read should appear cryptographically garbled.
3. It should be impossible for a listener to modify the traffic without detection.
The above is relatively easily accomplished using either of at least two packages. Take the
example where we would like to use POP3 to retrieve mail from a remote machine.
First, POP3 can be verified to be working by logging in on the POP3 server and then,
from within that shell, running a telnet to port 110 (i.e. the POP3 service):
¨ ¥
telnet localhost 110
Connected to localhost.localdomain.
Escape character is ’ˆ]’.
+OK POP3 localhost.localdomain v7.64 server ready
QUIT
+OK Sayonara
Connection closed by foreign host.
§ ¦
For our first example, we use the OpenSSH package. We can initialise and run the
sshd Secure Shell Daemon if it has not been initialised before. This would be run on
the POP3 server:
¨ ¥
ssh-keygen -b 1024 -f /etc/ssh/ssh_host_key -q -N ’’
ssh-keygen -d -f /etc/ssh/ssh_host_dsa_key -q -N ’’
sshd
§ ¦
[Diagram: the client's telnet localhost 12345 connects to the local ssh, which forwards
over an encrypted channel to sshd (port 22) on the POP3 server, which in turn
connects to ipop3d (port 110).]
To create an encrypted channel we use the ssh client login program in a special way.
We would like it to listen on a particular TCP port and then encrypt and forward all
traffic to the remote TCP port 110 on the server. This is known as (encrypted) port for-
warding. On the client machine we choose an arbitrary unused port to listen on, in this
case 12345:
¨ ¥
ssh -C -c arcfour -N -n -2 -L 12345:<pop3-server.doma.in>:110 \
<pop3-server.doma.in> -l <user> -v
§ ¦
where <user> is the name of a shell account on the POP3 server. Finally, also on the
client machine, we run:
¨ ¥
telnet localhost 12345
Connected to localhost.localdomain.
Here we get results identical to those above because, as far as the server is concerned, the
POP3 connection comes from a client on the server machine itself, unaware of the
fact that it has originated from sshd, which in turn is forwarding from a remote ssh
client. In addition, the -C option compresses all data (useful for low-speed connec-
tions). Also note that you should generally never use any encryption besides arcfour
and SSH Protocol 2 (option -2).
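The plumbing of port forwarding, minus the encryption that ssh adds, can be sketched as follows. This is a hypothetical single-connection illustration, not from the original text; a real forwarder would loop on accept() and, crucially, encrypt the relayed stream:

```python
import socket
import threading

def pump(src, dst):
    # copy bytes one way until EOF, then pass the EOF downstream
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def forward_once(listener, target_addr):
    # accept one client and relay its traffic both ways to target_addr
    client, _ = listener.accept()
    upstream = socket.create_connection(target_addr)
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)

# a trivial stand-in for the POP3 daemon on the "server" machine
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

def serve():
    conn, _ = server.accept()
    conn.sendall(b"+OK ready\r\n")   # a one-line POP3-style greeting
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# the forwarder's listening socket stands in for port 12345
fwd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fwd.bind(("127.0.0.1", 0))
fwd.listen(1)
threading.Thread(target=forward_once,
                 args=(fwd, server.getsockname()), daemon=True).start()

# a client connecting to the forwarder reaches the "server" transparently
client = socket.create_connection(fwd.getsockname())
banner = client.recv(1024)
client.close()
print(banner)   # b'+OK ready\r\n'
```

ssh -L does exactly this relaying, but carries the bytes between the two machines inside its encrypted channel rather than over a plain socket.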
The second method uses the forward program of the mirrordir package, which
was written by myself. It has a unique encryption protocol that does much of what
OpenSSH can, although the protocol has not been validated by the community at large.
On the server machine you can just type secure-mcserv. On the client, run:
¨ ¥
forward <user>@<pop3-server.doma.in> <pop3-server.doma.in>:110 \
12345 --secure -z -K 1024
§ ¦
Chapter 27
DNS and Name Resolution
We know that each computer on the Internet has its own IP address. Although this is
sufficient to identify a computer for purposes of transmitting packets, it is not partic-
ularly accommodating to people. Also, if a computer were to be relocated we would
like to still identify it by the same name.
Hence each computer is given a descriptive textual name. The basic textual name
of a machine is called the unqualified-hostname &This is my own terminology.- and is usu-
ally less than eight characters and contains only lowercase letters and numbers (and
especially no dots). Groups of computers have a domainname. The full name of a machine
is unqualified-hostname.domainname and is called the fully qualified hostname &Standard
terminology.- or the qualified-hostname &My terminology.- For example, my computer
is cericon. The domainname of my company is cranzgot.co.za, and hence the
qualified-hostname of my computer is cericon.cranzgot.co.za, although the IP
address might be 160.124.182.1.
Often the word domain is synonymous with domainname, and the word hostname
on its own can mean either the qualified or unqualified hostname.
This system of naming computers is called the Domain Name System (DNS).
Domains always end in a standard set of things. Here is a complete list of things that
the last part of a domain can be:
.com domain.
.edu A US university.
.net An Internet service provider. In fact, any bandwidth reseller, IT company or any
company at all might have a .net domain.
Besides the above, the domain could end in a two letter country code.
The complete list of country codes is given in Table 27.1. The .us domain
is rarely used, since in the US .com, .edu, .org, .mil, .gov, .int, or .net are
mostly used.
Within each country, a domain may have things before it for better description.
Each country may implement a different structure. Some examples are:
.co.za A South African company. (za = Zuid Afrika, for the old Dutch postal codes.)
Note that a South African company might choose a .com domain or a .co.za
domain. In our case we use cranzgot.co.za. The same applies everywhere, so there
is no hard and fast rule to locate an organisation from its domain.
27.2 Resolving DNS names to IP addresses
In practice, a user will type a hostname (say, www.cranzgot.co.za) into some appli-
cation like a web browser. The application then has to find the IP address associated
with that name in order to send packets to it. This section describes the query structure
used on the Internet so that everyone can find out anyone else's IP address.
Another obvious way to do this is to have one huge computer on the Internet
somewhere whose IP address is known by everyone. This computer would be respon-
sible for servicing requests for IP numbers, and the said application running on your
local machine would just query this big machine. Of course, with there being billions of
machines out there, this would obviously create far too much network traffic. &Actually, a
Microsoft LAN kind of works this way — i.e. not very well.-
Each country also has a name server, and in turn each organisation has a name
server. Each name server only has information about machines in its own domain,
as well as information about other name servers. The root name servers only have
information on the IP addresses of the name servers of .com, .edu, .za etc. The
.za name server only has information on the IP addresses of the name servers of
.org.za, .ac.za, .co.za etc. The .co.za name server only has information on the
name servers of all South African companies, like .cranzgot.co.za, .icon.co.za,
.mweb.co.za, etc. The .cranzgot.co.za, name server only has info on the ma-
chines at Cranzgot Systems, like www.cranzgot.co.za.
Your own machine will have a name server defined in its configuration files that
is geographically close to it. The responsibility of this name server will be to directly
answer any queries about its own domain that it has information about, and also to
answer any other queries by querying as many other name servers on the Internet as
is necessary.
Now our application is presented with www.cranzgot.co.za. The following
sequence of lookups takes place to resolve this name into an IP address. This procedure
is called hostname resolution and the algorithm that performs this operation is called the
resolver.
1. The application will check certain special databases on the local machine. If it
can get an answer from these, it proceeds no further.
2. The application will look up a geographically close name server from the local
machine's configuration file. Let's say this machine is called ns.
4. ns will check whether that name has recently been looked up. If it has, there is no
need to ask further, since the result would be stored in a local cache.
5. ns will see if the domain is local, i.e. if it is a computer that it has direct infor-
mation about. In this case, this would only be true if ns were Cranzgot's very
own name server.
6. ns will strip out the TLD (Top Level Domain) .za. It will query a root name
server, asking what name server is responsible for .za. The answer will be
ucthpx.uct.ac.za, with IP address 137.158.128.1.
7. ns will strip out the next highest domain co.za. It will query 137.158.128.1,
asking what name server is responsible for co.za. The answer will be
secdns1.posix.co.za, with IP address 160.124.112.10.
9. ns will query 196.28.133.1, asking for the IP address of www.cranzgot.co.za.
The answer will be 160.124.182.1.
11. ns will store each of these results in a local cache, each with an expiry date, to
avoid having to look them up a second time.
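All of this machinery hides behind a single C library call, gethostbyname(3) (discussed in the next section). As a quick sketch (not from the original text), Python exposes the same resolver; the name localhost is normally answered from the local /etc/hosts database in step 1, without any DNS query at all:

```python
import socket

# the same resolution procedure described above, behind one call
addr = socket.gethostbyname("localhost")
print(addr)   # normally 127.0.0.1, straight from /etc/hosts
```

Resolving a real Internet hostname with the same call would walk the nameserver hierarchy described above, via the local machine's configured nameserver.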
27.3 Configuring your local machine
We made reference to “configuration files” above. These are actually the files
/etc/host.conf, /etc/hosts, and /etc/resolv.conf. These three files, and only
these, specify how applications are going to look up IP numbers. They have nothing
to do with the configuration files of the nameserver daemon itself, even though a
nameserver daemon might be running on the local machine.
When an application needs to look up a hostname, it goes through the following
procedure. &What is actually happening is that the application is making a C library call to the function
gethostbyname(), hence all these configuration files really belong to the C library packages glibc or
libc. However, this is a detail you need not be concerned about.- The following are the equivalent of
steps 1, 2 and 3 above, with the details of the configuration files filled in. The configu-
ration files that follow are taken as they might be on my own personal machine:
1. The application will check the file /etc/host.conf. This file will usually have
a line order hosts,bind in it, specifying that it should first (hosts) check the
local database file /etc/hosts, and then (bind) query the name server speci-
fied in /etc/resolv.conf. The file /etc/hosts contains a text readable list
of IP addresses and names. An example is given below. If it can get an answer
directly from /etc/hosts, it proceeds no further.
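The /etc/hosts example referred to is not reproduced in this extract; a reconstruction might look like the following. The address for cericon matches the one used elsewhere in this chapter, while the aragorn and ra entries are hypothetical:

```
127.0.0.1       localhost.localdomain   localhost
192.168.3.9     cericon.cranzgot.co.za  cericon
192.168.3.10    aragorn.cranzgot.co.za  aragorn
192.168.3.11    ra.cranzgot.co.za       ra
```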
3. The application will query that name server with the hostname. If the hostname
is unqualified, then it appends the domainname of the local machine to it be-
fore trying the query. If the keyword search <domain1> <domain2> ...
<domainN> appears in the configuration file, then it tries a query with each of
<domain1>, <domain2> etc. appended in turn until the query successfully re-
turns an IP. This just saves you having to type in the full hostname for computers
within your own organisation.
4. The nameserver will proceed with the hierarchical queries described from step 4
onward.
The hosts aragorn, cericon and ra are the hosts I am most interested in, and
hence they are listed here. cericon is my local machine and must be listed. You can list
any hosts that you want fast lookups for, or hosts that might need to be known in spite
of nameservers being down.
The /etc/host.conf might look like this. All of the lines are optional:
¨ ¥
order hosts, bind, nis
trim some.domain
spoofalert
nospoof
multi on
reorder
§ ¦
order The order in which lookups are done. Don’t try fiddling with this value. It
never seems to have any effect. You should leave it as order hosts,bind
(or order hosts,bind,nis if you are using NIS — discussed in later (possi-
bly unwritten) chapters) &Search for the NIS-HOWTO on the web.-. Once again, bind
means to then go and check the /etc/resolv.conf which holds the name-
server query options.
trim Strip the domain some.domain from the end of a hostname before trying a
lookup. You will probably never require this feature.
spoofalert Try reverse lookups on a hostname after looking up the IP (i.e. do a
query to find the name from the IP.) If this query does not return the correct result,
it could mean that someone is trying to make it look like they are a machine
that they aren’t. This is a hacker’s trick called spoofing. This option warns you of such
attempts in your log file /var/log/messages.
nospoof Disallows results that fail this spoof test.
multi on Return more than one result if there are aliases. Actually, a host can have
several IP numbers and an IP number can have several hostnames. Consider
a computer that might want more than one name (ftp.cranzgot.co.za and
www.cranzgot.co.za are the same machine.) Or a machine that has several
networking cards and an IP address for each. This should always be turned
on. multi off is the alternative. Most applications use only the first value
returned.
reorder If more than one IP is returned by a lookup, sort the list according to the IP
that has the most convenient network route.
Despite this array of options, an /etc/host.conf file almost always looks simply
like:
¨ ¥
order hosts, bind
multi on
§ ¦
nameserver Specifies a nameserver to query. No more than three may be listed. The
point of having more than one is to safeguard against a nameserver being down
— the next in the list will then just be queried.
search If given a hostname with fewer than ndots dots (i.e. 1 in this case), add each
of the domains in turn to the hostname, trying a lookup with each. This allows
you to type in an unqualified hostname and have the application work out what
organisation it belongs to from the search list. You can have up to six domains,
but then queries would be time-consuming.
sortlist If more than one host is returned, sort them according to the following
network/masks.
Despite this array of options, an /etc/resolv.conf file almost always looks simply
like:
¨ ¥
nameserver 192.168.2.254
search cranzgot.co.za
§ ¦
27.4 Reverse lookups
A reverse lookup was mentioned under nospoof above. This is the determination of the
hostname from the IP address. The course of queries is similar to forward lookups,
using part of the IP address to find out what machines are responsible for what ranges
of IP address.
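A reverse lookup can be sketched with the corresponding C library call, gethostbyaddr(3). For 127.0.0.1 the answer normally comes straight from /etc/hosts rather than from a DNS query (a sketch, not from the original text):

```python
import socket

# reverse lookup: IP address -> (hostname, aliases, addresses)
name, aliases, addrs = socket.gethostbyaddr("127.0.0.1")
print(name)   # normally "localhost" (or a localhost alias)
```

For a public IP address, the same call would query the in-addr.arpa DNS hierarchy via the configured nameserver.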
A forward lookup is an ordinary lookup of the IP address from the hostname.
It has been emphasised that name servers only hold information for their own do-
mains. Any other information they may have about another domain is cached, tempo-
rary data that has an expiry date attached to it.
The domain that a name server has information about is said to be the domain
that a name server is authoritative for. Alternatively we say: “a name server is
authoritative for the domain”. For instance, the server ns2.cranzgot.co.za is au-
thoritative for the domain cranzgot.co.za. Hence lookups from anywhere on the
Internet having the domain cranzgot.co.za are ultimately the responsibility of
ns2.cranzgot.co.za, and originate (albeit via a long series of caches) from the host
ns2.cranzgot.co.za.
for an example of a host with lots of IP addresses. Keep typing host over and over.
Notice that the order of the hosts keeps changing randomly. This is to distribute load
amidst the many cnn.com servers.
Now pick one of the IP addresses and type
¨ ¥
host <ip-address>
§ ¦
The ping command has nothing directly to do with DNS, but is a quick way of getting
an IP address, and checking if a host is responding at the same time. It is often used as
the acid test for network and DNS connectivity. See Section 25.10.1.
Now enter:
¨ ¥
whois [email protected]
§ ¦
(Note that the original BSD whois worked like whois -h <host> <user>.) You will
get a response like:
¨ ¥
[rs.internic.net]
Domain names in the .com, .net, and .org domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.
>>> Last update of whois database: Thu, 20 Jan 00 01:39:07 EST <<<
The Registry database contains ONLY .COM, .NET, .ORG, .EDU domains and
Registrars.
§ ¦
you will get a > prompt where you can type commands. If you type in a hostname,
nslookup will return its IP address(es), and vice versa. Also, typing
¨ ¥
help
§ ¦
any time will return a complete list of commands. By default, nslookup uses the first
nameserver listed in /etc/resolv.conf for all its queries. However, the command
¨ ¥
server <nameserver>
§ ¦
This tells nslookup to return the second type of information that a DNS can deliver:
the authoritative name server for a domain, or the NS record of the domain. You can enter
any domain here. For instance, enter
¨ ¥
set type=NS
cnn.com
§ ¦
It will return
¨ ¥
Non-authoritative answer:
cnn.com nameserver = NS-02B.ANS.NET
cnn.com nameserver = NS-02A.ANS.NET
cnn.com nameserver = NS-01B.ANS.NET
cnn.com nameserver = NS-01A.ANS.NET
This tells us that there are four name servers authoritative for the domain cnn.com (one
plus three backups). It also tells us that it did not get this answer from an authoritative
source, but via a cached one. Finally, it tells us which name servers are authoritative for
this very information.
to get the so-called MX record for that domain. The MX record names the server responsible
for handling mail destined to that domain. MX records also have a priority (usually
10 or 20). This tells any mail server to try the priority-20 server should the priority-10 one
fail, and so on. There are usually only one or two MX records. Mail is actually the only
Internet service handled by DNS in this way. (For instance, there is no such thing as a
NEWSX record for news, nor a WX record for web pages, whatever kind of information
we may like such records to hold.)
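The priority rule can be sketched as follows (an illustration only; real mail servers implement this internally):

```shell
pick_mx() {
  # read "priority host" pairs on stdin and print the hosts in the
  # order a mail server should try them: lowest priority value first
  sort -n | awk '{ print $2 }'
}
```

For instance, feeding it `20 mail2.example.com` and `10 mail1.example.com` (hypothetical hosts) yields mail1 first.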
Also try
¨ ¥
set type=PTR
<ip-address>
set type=A
<hostname>
set type=CNAME
<hostname>
§ ¦
27.8 The dig command
dig stands for domain information groper. It sends single requests to a DNS server for
testing or scripting purposes (it is very similar to nslookup, but non-interactive).
It is usually used as follows:
¨ ¥
dig @<server> <domain> <query-type>
§ ¦
where <server> is the machine running the DNS daemon to query, <domain> is the
domain of interest, and <query-type> is one of A, ANY, MX, NS, SOA, HINFO or AXFR;
the non-obvious ones are explained in the man page.
The AXFR query is especially useful. For instance,
¨ ¥
dig @dns.dial-up.net icon.co.za AXFR
§ ¦
Chapter 28
NFS
This chapter covers NFS, the file-sharing capability of UNIX, showing you how to
set up directories shareable to other UNIX machines.
NFS stands for Network File System. As soon as one thinks of high-speed Ethernet,
the logical possibility of sharing a file-system across a network comes to mind. DOS,
OS/2 and Windows have their own file-sharing schemes (IPX, SMB etc.), and NFS is
the UNIX equivalent.
Consider your hard drive with its 10000 or so files. Ethernet is fast enough that
one should be able to entirely use the hard drive of another machine, transferring
needed data as required as network packets; or make a directory tree visible to several
computers. Doing this efficiently is a complex task. NFS is a standard, a protocol,
and (on Linux) a software suite that accomplishes this task in an efficient manner.
It is really easy to configure as well. Unlike some other sharing protocols, NFS merely
shares files and does not facilitate printing or messaging.
28.1 Software
Depending on your distribution, these can be located in any of the bin or sbin di-
rectories. These are all daemon processes that should be started in the order given
here.
portmap (also sometimes called rpc.portmap) This maps service names to ports.
Client and server processes may request a TCP port number based on a service
name, and portmap handles these requests. It is basically a network version of
your /etc/services file.
rpc.mountd (also sometimes called mountd) This handles the initial incoming request from a client to mount a file-system, and checks that the request is allowable.
rpc.nfsd (also sometimes called nfsd) This is the core — the file-server program
itself.
rpc.lockd (also sometimes called lockd) This handles shared locks between different machines on the same file over the network.
The acronym RPC stands for Remote Procedure Call. RPC was developed along
with NFS by Sun Microsystems. It is an efficient way for a program to call a function
on another machine, and can be used by any service that wishes to have efficient
distributed processing. These days it’s not really used for much except NFS, having been
superseded by technologies like CORBA &The “Object-Oriented” version of RPC-. You can,
however, still write distributed applications using Linux’s RPC implementation.
28.2 Configuration example
Sharing a directory to a remote machine requires that forward and reverse DNS
lookups be working for the server machine as well as for all client machines. This is
covered in Chapter 27 and Chapter 40. If you are just testing NFS, and you are sharing
directories to your local machine (what we will do now), you may find NFS to still
work without a proper DNS setup. You should at least have proper entries in your
/etc/hosts file for your local machine (see Section 27.3).
The first step is deciding on the directory you would like to share. A useful trick
is to share your CDROM to your whole LAN. This is perfectly safe considering that
CDs are read-only. Create an /etc/exports file with the following in it:
¨ ¥
/mnt/cdrom 192.168.1.0/24(ro) localhost(ro)
§ ¦
You can immediately see that the format of the /etc/exports file is simply a line
for each shareable directory. Next to each directory name goes a list of hosts that are
allowed to connect. In this case, those allowed access are all IP addresses having the
upper 24 bits matching 192.168.1, as well as the localhost.
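As a further illustration of the format, a read-write export of /home to a single trusted machine might look like this (the hostname is hypothetical):

```
/mnt/cdrom      192.168.1.0/24(ro) localhost(ro)
/home           workstation1.cranzgot.co.za(rw)
```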
You should then mount your CDROM as usual with
¨ ¥
mkdir -p /mnt/cdrom
mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
§ ¦
¨ ¥
portmap
rpc.mountd
rpc.nfsd
rpc.lockd
§ ¦
Whenever making any changes to your /etc/exports file you should also do a:
¨ ¥
exportfs -r
§ ¦
which causes a re-reading of the /etc/exports file. Entering the exportfs com-
mand with no options should then show:
¨ ¥
/mnt/cdrom 192.168.1.0/24
/mnt/cdrom localhost.localdomain
§ ¦
You can see that the mount command sees the remote machine’s directory as a “device”
of sorts, although the type is nfs instead of ext2, vfat or iso9660. The remote
hostname is followed by a colon, followed by the directory on that remote machine
relative to the root directory. This is unlike many other kinds of services that name all
files relative to some “top-level” directory (e.g. FTP and web servers). The acid test
now is to ls the /mnt/nfs directory to verify that its contents are indeed the same
as /mnt/cdrom. Now if our server is called, say, cdromserver, we can run the same
command on all client machines:
¨ ¥
mkdir /mnt/nfs
mount -t nfs cdromserver:/mnt/cdrom /mnt/nfs
§ ¦
If anything went wrong, you might like to search your process list for all processes with
rpc, mount, nfs or portmap in them. Completely stopping NFS means clearing
all of these processes if you really want to start from scratch. It is also useful to keep a
¨ ¥
tail -f /var/log/messages
§ ¦
running in a separate console to watch for any error (or success) messages (actually
true of any configuration you are doing). Note that it is not always obvious that NFS
Most distributions will not require you to manually start and stop the daemon processes above. Like most services, RedHat’s NFS implementation can be invoked simply
with:
¨ ¥
/etc/init.d/nfs start
/etc/init.d/nfslock start
§ ¦
One further option, no_root_squash, disables NFS’s special treatment of root-owned
files. This is useful if you are finding certain files strangely inaccessible. This
is really only for systems (like diskless workstations) that need full root access to a
file-system. An example is:
¨ ¥
/ *.very.trusted.net(rw,no_root_squash)
§ ¦
The man page for /etc/exports, exports(5) contains an exhaustive list of options.
28.4 Security
NFS requires a number of services to be running that have no use anywhere else. Many
naive administrators create directory exports with impunity, thus exposing those machines
to opportunistic hackers. An NFS server should be well hidden behind a firewall,
and any server exposed to the Internet should never run the portmap or
RPC services.
There are actually two versions of the NFS implementation for Linux. Although
this is a technical caveat, it is worth understanding that the NFS server was originally
implemented by an ordinary daemon process before the Linux kernel itself supported
NFS. Debian supports both implementations in two packages, nfs-server
and nfs-kernel-server, although the configuration should be identical. Depending
on the versions of these implementations and the performance you require, one or
the other may be better. You are advised to at least check the status of the kernel NFS
implementation on the kernel web pages. Of course, NFS as a client must necessarily
be supported by the kernel as a regular file-system type in order to be able to mount
anything.
Chapter 29
Services Running Under inetd
There are some hundred-odd services that a standard Linux server supports. For all
of these to be running simultaneously would be a strain. Hence a daemon process exists
that watches for incoming TCP connections and then starts the relevant executable,
saving that executable from having to run all the time. This is used only for sparsely
used services, i.e. not web, mail or DNS.
The daemon that does this is traditionally called inetd, the subject of this chapter.
(Section 36.1 contains an example of writing your own network service in shell
script to run under inetd.)
The package that inetd comes in depends on the taste of your distribution. Further,
under RedHat, version 7.0 switched to xinetd, a welcome move that departs
radically from the traditional UNIX inetd. xinetd will be discussed below. The
important inetd files are: the configuration file /etc/inetd.conf, the executable
/usr/sbin/inetd, the inetd and inetd.conf man pages, and the startup script
/etc/init.d/inet (or /etc/rc.d/init.d/inetd or /etc/init.d/inetd).
Another important file is /etc/services discussed previously in Section 26.4.
29.2 Invoking services using /etc/inetd.conf
Most services can be started by one of three methods: first as a standalone
(resource-hungry, as discussed) daemon, second under inetd, or third as a “TCP
wrapper” moderated inetd service. However, some services will run using only one
method. Here we will give an example showing all three methods. You will need to
have an ftp package installed for this example (either wuftpd on RedHat or ftpd on
Debian).
The -D option indicates that the service should start in daemon mode (or standalone mode). This represents
the first way of running an Internet service.
Next we can let inetd run the service for us. Edit your /etc/inetd.conf file and
add/edit the line (alternatives in round braces):
¨ ¥
ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd
( ftp stream tcp nowait root /usr/sbin/in.wuftpd in.wuftpd )
§ ¦
ftp The name of the service. Looking in the /etc/services file, we can see that
this is TCP port 21.
stream tcp Socket type and protocol. In this case a TCP stream socket, and hardly
ever anything else.
nowait Do not wait for the process to exit before listening for a further incoming
connection. Compare to wait and respawn in Chapter 32.
root The initial user ID under which the service must run.
in.ftpd The command-line. In this case, just the program name and no options.
Next we can let inetd run the service for us under the tcpd wrapper command. This
is almost the same as before, but with a slight change in the /etc/inetd.conf entry:
¨ ¥
ftp stream tcp nowait root /usr/sbin/tcpd /usr/sbin/in.ftpd
( ftp stream tcp nowait root /usr/sbin/tcpd /usr/sbin/in.wuftpd )
§ ¦
Then restart the inetd service as before. What this does is allow tcpd to invoke
in.ftpd (or in.wuftpd) on inetd’s behalf. The tcpd command does various
checks on the incoming connection to decide whether it should be trusted. tcpd
checks what host the connection originates from and compares it against entries in
the file /etc/hosts.allow and /etc/hosts.deny. It can refuse connections from
selected hosts, thus giving you finer access control to services. This is the third method
of running an Internet service.
Consider the above /etc/inetd.conf entry with the following line in your
/etc/hosts.allow file:
¨ ¥
in.ftpd: LOCAL, .my.domain
( in.wuftpd: LOCAL, .my.domain )
§ ¦
This will deny connections from all machines with hostnames not ending in
.my.domain but allow connections from the local &The same machine on which
inetd is running- machine. It is useful at this point to try making an ftp connection
from different machines to test access control. A complete explanation of
the /etc/hosts.allow and /etc/hosts.deny file format can be obtained from
hosts_access(5). Another example is (/etc/hosts.deny):
¨ ¥
ALL: .snake.oil.com, 146.168.160.0/255.255.240.0
§ ¦
which would deny access for ALL services to all machines inside the 146.168.160.0
(first 20 bits) network, as well as all machines under the snake.oil.com domain.
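The net/mask matching amounts to a bitwise AND of the client address against the mask; a shell sketch of the same test (an illustration, not tcpd's actual code):

```shell
in_network() {
  # usage: in_network IP NETWORK NETMASK
  # succeeds (exit 0) when (IP & NETMASK) == NETWORK on every octet,
  # which is how a net/mask pair in /etc/hosts.deny is interpreted
  oldIFS=$IFS; IFS=.
  set -- $1 $2 $3   # split the three dotted quads into 12 words
  IFS=$oldIFS
  [ $(($1 & $9)) -eq "$5" ] &&
  [ $(($2 & ${10})) -eq "$6" ] &&
  [ $(($3 & ${11})) -eq "$7" ] &&
  [ $(($4 & ${12})) -eq "$8" ]
}
```

For example, 146.168.165.17 falls inside 146.168.160.0/255.255.240.0, while 146.168.176.1 does not.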
Distribution conventions
Note that the above methods cannot be used simultaneously — if a service is
already running via one method, trying to start it via another will fail, possibly
with a “port in use” error message. Your distribution will have already decided
whether to make the service an inetd entry or a standalone daemon. In the former
case a line in /etc/inetd.conf will be present, while in the latter case a script
/etc/init.d/<service> (or /etc/rc.d/init.d/<service>) will be present to
start or stop the daemon. Typically there will be no /etc/init.d/ftpd script,
while there will be /etc/init.d/httpd and /etc/init.d/named scripts. Note
that there will always be an /etc/init.d/inet script.
A typical /etc/inetd.conf file (without the comment lines) looks something like:
¨ ¥
ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
shell stream tcp nowait root /usr/sbin/tcpd in.rshd
login stream tcp nowait root /usr/sbin/tcpd in.rlogind
talk dgram udp wait nobody.tty /usr/sbin/tcpd in.talkd
ntalk dgram udp wait nobody.tty /usr/sbin/tcpd in.ntalkd
pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
imap stream tcp nowait root /usr/sbin/tcpd imapd
uucp stream tcp nowait uucp /usr/sbin/tcpd /usr/sbin/uucico -l
29.4 The xinetd alternative
Instead of the usual inetd + tcpd combination, RedHat switched to the xinetd
package as of version 7.0. I hope that this becomes a standard. The xinetd
package combines the features of tcpd and inetd into one neat package. The
xinetd package consists of a top-level config file, /etc/xinetd.conf; an executable
/usr/sbin/xinetd; and then a config file for each service under the directory
/etc/xinetd.d/. This allows a package like ftpd control over its own configuration
through its own separate file.
This dictates, respectively, that xinetd: limit the number of simultaneous connections
of each service to 60; log to the syslog facility, using syslog’s authpriv channel;
log the HOST and process ID for each successful connection; and finally also log the
HOST (and also RECORD info about the connection attempt) for each failed connection.
In other words, /etc/xinetd.conf really says nothing interesting at all.
The last line says to look in /etc/xinetd.d/ for more service specific files. Our FTP
service would have a file /etc/xinetd.d/wu-ftpd containing:
¨ ¥
service ftp
{
socket_type = stream
server = /usr/sbin/in.ftpd
server_args = -l -a
wait = no
user = root
log_on_success += DURATION USERID
log_on_failure += USERID
nice = 10
}
§ ¦
This is similar to our /etc/inetd.conf line above, albeit more verbose. Respectively,
this file dictates to: listen with a stream TCP socket; run the executable
/usr/sbin/in.ftpd on a successful incoming connection; pass the arguments
-l -a on the command-line to in.ftpd (see ftpd(8)); never wait for in.ftpd to
exit before accepting the next incoming connection; run in.ftpd as user root; additionally
log the DURATION and USERID of successful connections; additionally log the
USERID of failed connections; and finally be nice to the CPU by running in.ftpd
at a priority of 10.
Limiting access
The security options of xinetd allow much flexibility. Most important is the
only_from option, to limit the remote hosts allowed to use a service. The most extreme
use is to add only_from = 127.0.0.1 to the top-level config file:
¨ ¥
defaults
{
only_from = 127.0.0.1 mymachine.local.domain
.
.
.
§ ¦
which allows no remote machines to use any xinetd service at all. Alternatively, you
can add an only_from line to any of the files in /etc/xinetd.d/ to restrict access on
a per-service basis.
only_from can also take IP address ranges of the form nnn.nnn.nnn.nnn/bits, as
well as domain names. For example,
¨ ¥
only_from = 127.0.0.1 192.168.128.0/17 .somewhere.friendly.com
§ ¦
which in the last case allows access from all machines with host names ending in
.somewhere.friendly.com.
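The /bits notation is shorthand for a netmask with that many leading one-bits. A shell sketch of the conversion (so 192.168.128.0/17 corresponds to the mask 255.255.128.0):

```shell
mask_from_bits() {
  # convert a /bits prefix length to a dotted netmask,
  # e.g. 17 -> 255.255.128.0
  m=$(( (0xffffffff << (32 - $1)) & 0xffffffff ))
  echo "$(( (m >> 24) & 255 )).$(( (m >> 16) & 255 )).$(( (m >> 8) & 255 )).$(( m & 255 ))"
}
```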
Finally, there is the no_access option, which works identically to only_from but dictates
hosts and IP ranges from which connections are not allowed:
¨ ¥
no_access = .snake.oil.net
§ ¦
29.5 Security
Chapter 30
exim and sendmail
This chapter will effectively explain how to get Linux up and running as a mail
server.
exim and sendmail are MTAs (mail transfer agents). An MTA is just a daemon
process that listens on port 25 for incoming mail connections, spools &See page 187 about
spooling in general.- that mail in a queue (for exim, the /var/spool/exim/input/ directory;
for sendmail, the /var/spool/mqueue/ directory), then re-sends that mail
to some other MTA or delivers it locally to some user’s mailbox. In other words, the
MTA is the very package that handles all mail spooling, routing and delivery. We saw
in Section 10.2 how to do a manual connection to an MTA. In that example sendmail
version 8.9.3 was the MTA running on machine mail.cranzgot.co.za.
sendmail is the original and popular UNIX MTA. It is probably necessary to
learn how to use it because so many organisations standardise on it. However, because
exim is so easy to configure, it is worthwhile replacing sendmail wherever you see
it — there are at least three MTAs that are preferable to sendmail. I will explain the
minimum of what you need to know about sendmail later on, and explain exim in
detail.
The exim home page <http://www.exim.org/> will give you a full rundown. Here
I will just say that exim is the simplest to configure. Moreover, its configuration file
works the way you imagine mail should work. It’s really easy to customise the exim
configuration to do some really weird things. The whole package fits together cleanly,
logically, and intuitively. This is in contrast to sendmail’s sendmail.cf file, which
30.2 exim package contents
You can get exim as a .rpm or .deb file. After installation, the file
/usr/share/doc/exim-?.??/doc/spec.txt &or /usr/doc/- contains the complete
exim documentation. There is also an HTML version on their web page, while
the man page contains only command-line usage. exim is a drop-in replacement for
sendmail, meaning that for every critical sendmail command, there is an exim command
of the same name that takes the same options, so that needy scripts won’t know
the difference. These are:
¨ ¥
/etc/aliases
/usr/bin/mailq
/usr/bin/newaliases
/usr/bin/rmail
5 /usr/lib/sendmail
/usr/sbin/sendmail
§ ¦
Finally, there is the exim binary itself, /usr/sbin/exim, and the configuration file
/etc/exim/config, /etc/exim.conf or /etc/exim/exim.conf, depending
on your Linux distribution. Then there are the usual start/stop scripts,
/etc/init.d/exim &or /etc/rc.d/init.d/exim-.
30.3 exim configuration file
¨ ¥
errors_address = postmaster
freeze_tell_mailmaster = yes
queue_list_requires_admin = false
prod_requires_admin = false
trusted_users = psheer
local_domains = localhost : ${primary_hostname}
never_users = root
# relay_domains = my.equivalent.domains : more.equivalent.domains
host_accept_relay = localhost : *.cranzgot.co.za : 192.168.0.0/16
exim_user = mail
exim_group = mail
end

###################### TRANSPORTS CONFIGURATION ######################
remote_smtp:
driver = smtp
hosts = 192.168.2.1
hosts_override
local_delivery:
driver = appendfile
file = /var/spool/mail/${local_part}
delivery_date_add
envelope_to_add
return_path_add
group = mail
mode_fail_narrower =
mode = 0660
end
§ ¦
The exim config file is divided into six logical sections separated by the end keyword.
The top or MAIN section contains global settings. The settings have the following mean-
ings:
log_subject Tell exim to log the subject in the mail log file. For example, T="I
LOVE YOU" will be added to the log file.
errors_address The mail address where errors are to be sent. It doesn’t matter what
you put here, because all mail will get rewritten to [email protected], as we
will see later.
freeze_tell_mailmaster Tell errors_address about frozen messages. Frozen
messages are messages that could not be delivered for some reason (like a permissions
problem) and are flagged to sit idly in the mail queue, not to be processed
any further. Note that frozen messages usually mean something is wrong
with your system or mail configuration.
local_domains Each mail message received gets processed in one of two ways: either
a local or a remote delivery. A local delivery is one to a user on the local machine,
while a remote delivery is to somewhere else on the Internet. local_domains
distinguishes between these two. According to the config line above, a message
destined to psheer@localhost or [email protected] is
local, while a message to [email protected] is remote. Note that the
list is colon-delimited.
never_users Never become this user. Just for security.
host_accept_relay, relay_domains Discussed below.
exim_user The user exim should run as.
exim_group The group exim should run as.
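The local/remote decision can be sketched as a membership test on the colon-delimited list (ignoring the optional whitespace exim allows around the colons):

```shell
is_local() {
  # classify a delivery the way local_domains does: succeed when
  # DOMAIN appears in the colon-delimited LIST
  domain=$1; list=$2
  case ":$list:" in
    *":$domain:"*) return 0 ;;
    *)             return 1 ;;
  esac
}
```

So with the list localhost:cranzgot.co.za, a message to [email protected] would be classed local, and one to [email protected] remote.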
to the domain does not originate from us, nor is destined for us, yet must be allowed
only if the destination address matches the domains for which we are a backup. We put such
domains under relay_domains.
Transports
The transport section comes immediately after the main configuration options. It defines
various methods of delivering mail that we are going to refer to later in the configuration
file. Our manual telnetting to port 25 was transporting a mail message
by SMTP. Appending a mail message to the end of a mail folder is also a transport
method. These are represented by the remote_smtp: and local_delivery: labels
respectively.
For remote_smtp there are sub-options, explained as follows:
driver The actual method of delivery. driver = always specifies the kind of transport,
director, or router.
hosts_override and hosts Using these two options together in this way overrides
any list of hosts that may have been looked up with DNS MX queries. By
“hosts” we mean machines that we might like to make an SMTP delivery to,
which, though established from the recipient’s email address, are not going
to be used. Instead we send all mail to 192.168.2.1, which is my company’s
internal mail server.
For local_delivery the sub-options are:
driver The actual method of delivery, as before.
file The file to append the mail message to. ${local_part} is replaced with everything
before the @ character of the recipient’s address.
delivery_date_add, envelope_to_add and return_path_add Various things to
add to the header.
(It is obvious at this stage what these two transports are going to be used for. As far
as MTAs are concerned, the only two things that ever happen to an email message are
that it either (a) gets sent via SMTP to another host, or (b) gets appended to a file.)
Directors
If a message arrives and it is listed in local_domains, exim will attempt a local delivery.
This means working through the list of directors until it finds one that does not fail.
The only director listed here is the one labelled localuser:, with local_delivery
as its transport. So quite simply, email addresses listed under local_domains get appended
to the user’s mailbox file — not very complicated.
A director directs mail to a mailbox.
Routers
If a message arrives and it is not listed in local_domains, exim will attempt a remote
delivery. Similarly, this means working through the list of routers until it finds one that
does not fail.
There are two routers listed here. The first is for common email addresses. It uses the
lookuphost driver, which does a DNS MX query on the domain part of the email
address (i.e. everything after the @). The MX records found are then passed to the
remote_smtp transport (and, in our case, then ignored). The lookuphost driver will
fail if the domain part of the email address is a bracketed, literal IP address.
The second router uses the ipliteral driver. This sends mail directly to
an IP address in the case of bracketed, literal email addresses, for example
root@[111.1.1.1].
A router routes mail to another host.
30.4 Full blown mailserver
An actual mail server config file contains very little extra. This one is the example
config file that comes by default with exim-3.16:
¨ ¥
#################### MAIN CONFIGURATION SETTINGS #####################
# primary_hostname =
# qualify_domain =
# qualify_recipient =
# local_domains =
never_users = root
# host_accept_relay = localhost
# host_accept_relay = my.friends.host : 131.111.0.0/16
# relay_domains = my.equivalent.domains : more.equivalent.domains
host_lookup = 0.0.0.0/0
# receiver_unqualified_hosts =
# sender_unqualified_hosts =
rbl_domains = rbl.maps.vix.com
no_rbl_reject_recipients
rbl_warn_header
# rbl_domains = rbl.maps.vix.com:dul.maps.vix.com:relays.orbs.org
# percent_hack_domains = *
end
###################### TRANSPORTS CONFIGURATION ######################
remote_smtp:
driver = smtp
# procmail transport goes here <---
local_delivery:
driver = appendfile
file = /var/spool/mail/${local_part}
delivery_date_add
envelope_to_add
return_path_add
group = mail
mode = 0660
address_pipe:
driver = pipe
return_output
address_file:
driver = appendfile
delivery_date_add
envelope_to_add
return_path_add
address_reply:
driver = autoreply
end
###################### DIRECTORS CONFIGURATION #######################
# routers because of a "self=local" setting (not used in this configuration).
system_aliases:
driver = aliasfile
file = /etc/aliases
search_type = lsearch
user = mail
group = mail
file_transport = address_file
pipe_transport = address_pipe
userforward:
driver = forwardfile
file = .forward
no_verify
no_expn
check_ancestor
# filter
file_transport = address_file
pipe_transport = address_pipe
reply_transport = address_reply
# procmail director goes here <---
localuser:
driver = localuser
transport = local_delivery
end
###################### ROUTERS CONFIGURATION #########################
# widen_domains = "sales.mycompany.com:mycompany.com"
lookuphost:
driver = lookuphost
transport = remote_smtp
# widen_domains =
literal:
driver = ipliteral
transport = remote_smtp
end
###################### RETRY CONFIGURATION ###########################
* * F,2h,15m; G,16h,1h,1.5; F,4d,8h
end
######################################################################
§ ¦
30.5 Shell commands for exim administration
As with other daemons, you can stop exim, start exim, and cause exim to reread its
configuration file with:
¨ ¥
/etc/init.d/exim stop
/etc/init.d/exim start
/etc/init.d/exim reload
§ ¦
You should always do a reload for any config file changes to take effect. The
startup script actually just runs exim -bd -q30m, which tells exim to start as a
standalone daemon, listening for connections on port 25, and then do a runq (explained
below) every 30 minutes.
To cause exim &and many other MTAs for that matter- to loop through the queue of pending
messages and consider each one for delivery, do
¨ ¥
runq
§ ¦
which is the same as exim -q. To list mail that is queued for delivery, use
¨ ¥
mailq
§ ¦
which is the same as exim -bp. To forcibly attempt delivery on any mail in the queue,
use
¨ ¥
exim -qf
§ ¦
and then to forcibly retry even frozen messages in the queue, use
¨ ¥
exim -qff
§ ¦
The man page exim(8) contains exhaustive treatment of command-line options. These
above are most of what you will use however.
30.6 The queue
¨ ¥
cericon:/# ls -l /var/spool/exim/input/
total 16
-rw-------    1 root     root           25 Jan  6 11:43 14Epss-0008DY-00-D
-rw-------    1 root     root          550 Jan  6 11:43 14Epss-0008DY-00-H
-rw-------    1 root     root           25 Jan  6 11:43 14Ept8-0008Dg-00-D
-rw-------    1 root     root          530 Jan  6 11:43 14Ept8-0008Dg-00-H
§ ¦
This shows that there are two messages queued for delivery. The files ending in -H
are headers, while those ending in -D are message bodies. The spec.txt document
will show you how to interpret the contents of the header files.
Don’t be afraid to manually rm files from this directory, but always delete them in pairs
(i.e. remove both the header and the body file), and make sure exim is not running
when you do.
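A small sketch of deleting a queued message safely, always removing the -H and -D files as a pair (the spool directory and message ID are parameters):

```shell
remove_queued() {
  # delete a queued message's header (-H) and body (-D) files together;
  # DIR is the spool input directory, ID the message ID
  # (e.g. 14Epss-0008DY-00)
  dir=$1; id=$2
  rm -f "$dir/$id-H" "$dir/$id-D"
}
```

For example, `remove_queued /var/spool/exim/input 14Epss-0008DY-00` would remove the first queued message shown above.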
30.7 /etc/aliases for equivalent addresses
Often we would like certain local addresses to actually deliver to other addresses. For
instance, we would like all mail destined to user MAILER-DAEMON to actually go to
user postmaster; or perhaps some user has two accounts but would like to read mail
from only one of them.
The /etc/aliases file performs this mapping. This file has become somewhat
of an institution; however, you can see that in exim’s case, aliasing is completely arbitrary:
you can specify a lookup on any file under the system_aliases: director,
provided that file is colon-delimited.
A default /etc/aliases file could contain as much as the following. You should
check that the postmaster account does exist on your system, and test
whether you can read, send, and receive mail as user postmaster:
¨ ¥
# This is a combination of what I found in the Debian
# and RedHat distributions.

MAILER-DAEMON: postmaster
abuse: postmaster
anonymous: postmaster
backup: postmaster
backup-reports: postmaster
bin: postmaster
daemon: postmaster
decode: postmaster
dns: postmaster
dns-admin: postmaster
dumper: postmaster
fetchmail-daemon: postmaster
games: postmaster
gnats: postmaster
ingres: postmaster
info: postmaster
irc: postmaster
list: postmaster
listmaster: postmaster
lp: postmaster
mail: postmaster
mailer-daemon: postmaster
majordom: postmaster
man: postmaster
manager: postmaster
msql: postmaster
news: postmaster
nobody: postmaster
operator: postmaster
postgres: postmaster
proxy: postmaster
root: postmaster
sync: postmaster
support: postmaster
sys: postmaster
system: postmaster
toor: postmaster
uucp: postmaster
warnings: postmaster
web-master: postmaster
www-data: postmaster

# some users who want their mail redirected
arny: [email protected]
larry: [email protected]
§ ¦
You may remove a lot of these, since they assume services to be running that may not
be installed — games, ingres, etc. Aliases do two things: first, they anticipate what
address a person is likely to use when they need to contact the administrator; second, they
catch any mail sent by system daemons: for example, the email address of the DNS
administrator is dictated by the DNS config files, as explained on page 429.
Note that an alias in the /etc/aliases file does not have to correspond to an account on
the system — larry and arny do not have to have entries in the /etc/passwd
file.
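Because the file is plain colon-delimited text, an alias can be looked up by hand with standard tools. A small sketch against a miniature copy (not your real /etc/aliases):

```shell
# A miniature aliases file for illustration, with entries copied
# from the example above:
cat > /tmp/aliases-demo <<'EOF'
MAILER-DAEMON: postmaster
root: postmaster
arny: [email protected]
EOF

# Print the target of a given alias (first match wins):
lookup() {
    awk -F': *' -v a="$1" '$1 == a { print $2; exit }' /tmp/aliases-demo
}
lookup arny   # prints [email protected]
```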
30.8 Realtime Blocking List — combating spam

(See also the footnote on page 93.) Spam refers to unsolicited (not looked for or requested;
unsought) bulk mail sent to users, usually for promotional purposes; that is, mail sent
automatically to many people with whom the sender has no relationship, and where
the recipient did nothing to prompt the mail, all on the off chance that the recipient
may be interested in the subject matter. Alternatively, spam can be thought of as any
mail sent to email addresses that were obtained without their owners’
consent. More practically, anyone who has had an email account for very long will
have gotten messages like Subject: Fast way to earn big $$$!, which clutter
my mailbox. The longer you have an email address, the more of these messages
you will get, and the more irritated you will get.
Sending spam is easy. Work your way around the Internet until you find a mail
server that allows relaying, then send it 10,000 email addresses and a message
advertising something blatantly illegal. Now you are a genuine worthy-of-
being-arrested spammer. Unfortunately for the unsuspecting administrator of that
machine, provided you have even a little clue about what you’re doing, he will
probably never be able to track you down. There are several other tricks employed to
get the most out of your $100-for-1000000-genuine-email-addresses.
Note that spam is not merely email you are not interested in. People often confuse
mail with other types of communication, like telephone calls: if you get a telephone
call, you have to pick up the phone then and there — the call is an invasion of
your privacy. The beauty of email is that you never need to have your privacy invaded.
You can simply delete the mail. If you are irritated by the presumption of the hidden
sender, then that’s your problem. The point at which email becomes intrusive is purely
a question of volume, much like airwave advertisements; but with email you can
always filter mail from addresses that become bothersome. Spam, however, is email sent
opportunistically to random addresses, which cumulatively consumes a lot of resources.
Because it comes from a different place each time, you cannot protect yourself against
it with a simple mail filter.
Further, typical spam mails will begin with a spammer’s subject like Create Wealth
From Home Now!! and then have the audacity to append the footer:
This is not a SPAM. You are receiving this because you are on a list of email
addresses that I have bought. And you have opted to receive informa-
tion about business opportunities. If you did not opt in to receive infor-
mation on business opportunities then please accept our apology. To be
REMOVED from this list simply reply with REMOVE as the subject. And
you will NEVER receive another email from me.
Need we say that you should be wary of replying with REMOVE, since it clearly
tells the sender that your email address is valid.
You can start at least by adding the following lines to your MAIN configuration section:
¨ ¥
headers_check_syntax
headers_sender_verify
sender_verify
receiver_verify
§ ¦
The option headers_check_syntax causes exim to check all headers of incoming
mail messages for correct syntax, failing them otherwise. The next three options check
that one of the Sender:, Reply-To: or From: headers, as well as the addresses in
the SMTP MAIL and RCPT commands, are genuine email addresses.
The reasoning here is that spammers will often use malformed headers to trick the
MTA into sending things it ordinarily wouldn’t. I am not sure exactly how this applies
in exim’s case, but these options serve the good measure of rejecting email messages at the
point where the SMTP exchange is being initiated.
To find out a lot more about spamming, banning hosts, reporting spam, and email
usage in general, see MAPS (Mail Abuse Prevention System LLC)
<http://maps.vix.com/>, as well as the Open Relay Behaviour-modification System
<http://www.orbs.org/>.
Realtime Blocking Lists (RBLs) are a not-so-new idea that has been incorporated
into exim as a feature. It works as follows. The spammer has to use a
host that allows relays. The IP address of that relay host is clear to the MTA at the time
of connection, so the MTA can check it against a database of publicly available
banned IP addresses of relay hosts. For exim, this means the list under rbl_domains:
if one of the rbl_domains lists has this IP address blacklisted, then exim denies it also.
You can enable this with
¨ ¥
rbl_domains = rbl.maps.vix.com : dul.maps.vix.com
# rbl_domains = rbl.maps.vix.com : dul.maps.vix.com : relays.orbs.org
rbl_reject_recipients
recipients_reject_except = [email protected]
§ ¦
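Under the hood, an RBL check is just a DNS lookup: the connecting host's IP octets are reversed and prefixed to the list's domain, and a listed host resolves to an address record. A sketch of how that query name is formed (127.0.0.2 is the conventional test entry for such lists):

```shell
# Build the DNS name an RBL lookup would query:
rbl_name() {
    echo "$1" | awk -F. -v d="$2" '{ printf "%s.%s.%s.%s.%s\n", $4, $3, $2, $1, d }'
}
rbl_name 127.0.0.2 rbl.maps.vix.com   # prints 2.0.0.127.rbl.maps.vix.com

# A real check would then resolve that name, e.g.
#   host 2.0.0.127.rbl.maps.vix.com
# and refuse the connection if an address record comes back.
```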
Mail administrators and email users are expected to be aware of the following:
• Spam is evil.
• Even as a user, you should follow up on spam by checking where it came from and
complaining to the administrators responsible.
• Many mail administrators are not aware there is an issue. Remind them.
30.9 Sendmail
Like most stock MTAs shipped with LINUX distributions, the sendmail package
will work by default as a mailer without any configuration. However, as always,
you will have to add a list of relay hosts. This is done in the file /etc/mail/access
for sendmail-8.10 and above. To relay from yourself and, say, the hosts on network
192.168.0.0/16, as well as, say, the hosts of the domain trusted.com, you must have at
least:
¨ ¥
localhost.localdomain RELAY
localhost RELAY
127.0.0.1 RELAY
192.168 RELAY
trusted.com RELAY
§ ¦
which is exactly what the host_accept_relay option does in the case of exim.
The domains for which you are acting as a backup mail server must be listed in the
file /etc/mail/relay-domains, each on a line by itself. This is analogous to the
relay_domains option of exim.
Then, of course, the domains for which sendmail is going to receive mail must also be
specified. This is analogous to the local_domains option of exim. These are listed in
the file /etc/mail/local-host-names, each on a line by itself.
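As a sketch, hypothetical contents for these two files (the domain names are taken from the examples in this chapter; the files are written to /tmp here rather than /etc/mail/):

```shell
mkdir -p /tmp/mail-demo

# Domains we act as backup mail server for (cf. exim's relay_domains):
cat > /tmp/mail-demo/relay-domains <<'EOF'
trusted.com
cranzgot.co.za
EOF

# Domains we receive mail for (cf. exim's local_domains):
cat > /tmp/mail-demo/local-host-names <<'EOF'
cranzgot.co.za
machine1.cranzgot.co.za
EOF
```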
The same /etc/aliases file is used by exim and sendmail.
Having configured anything under /etc/mail/, you should now run a make in this
directory. This rebuilds lookup tables for these files. You also have to run the
command newaliases whenever you modify the /etc/aliases file. In both cases,
you must restart sendmail.
sendmail has received a large number of security alerts in its time. It is imperative
that you install the latest version. Note also that older versions of sendmail had
configurations that allowed relaying by default — another reason to upgrade.
A useful resource for finding out more tricks with sendmail is the Sendmail FAQ
<http://www.sendmail.org/faq/>.
Chapter 31

lilo, initrd and Booting
lilo stands for linux loader. LILO: is the prompt you first see after boot up, where
you can usually choose the OS you would like to boot and give certain boot
options to the kernel. This chapter explains how to configure lilo and kernel boot
options, and how to get otherwise non-booting systems to boot.
which is not that interesting, except to know that the technical and user documentation
is there if hard-core details are needed.
31.1 Usage
When you first start your LINUX system, the LILO: prompt appears, at which you
can enter boot options. Pressing the Tab key gives you a list of things to type.
The purpose of this is to allow the booting of different LINUX installations on the
same machine, or of different operating systems stored in different partitions on the same
disk. Later, you can view the file /proc/cmdline to see what boot options
(including default boot options) were used.
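For example, on a running system:

```shell
# Show the boot options the kernel was given; the exact contents
# depend on your lilo.conf and anything typed at the LILO: prompt.
cat /proc/cmdline
```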
31.2 Theory
To boot a UNIX kernel requires it to be loaded into memory off disk and executed.
The execution of the kernel causes it to uncompress itself and then run. (The word
boot itself comes from the idea that a computer cannot begin executing without program code, and
program code cannot get into memory without other program code — like trying to lift yourself up by your
bootstraps — and hence the name.) The first thing the kernel does after it runs is initialise
various hardware devices. It then mounts the root file system off a previously defined
partition. Once the root file system is mounted, the kernel executes /sbin/init to
begin the UNIX operating system. This is how all UNIX systems begin life.
PCs begin life with a small program in the ROM BIOS that loads the very first sector
of the disk into memory, called the boot sector of the Master Boot Record (MBR). This
piece of code is up to 512 bytes long and is expected to start the operating system.
In the case of LINUX, the boot sector loads the file /boot/map, which contains a
list of the precise locations of the disk sectors that the LINUX kernel (usually the file
/boot/vmlinuz) spans. It loads each of these sectors, thus reconstructing the kernel
image in memory. Then it jumps to the kernel to execute it.
You may ask how it is possible to load a file off a file system when the file system
is not mounted. Further, the boot sector holds a small and simple program, and
certainly does not have support for the many possible types of file system in which the kernel
image may reside. Actually, lilo doesn’t have to support a file system to access
a file, so long as it has a list of the sectors that the file spans, and is prepared to use
the BIOS interrupts (nothing to do with “interrupting” or hardware interrupts; this refers to BIOS
functions that are available to programs, which the LINUX kernel itself actually never uses). If the file
is never modified, that sector list will never change; this is how the /boot/map and
/boot/vmlinuz files are loaded.
Booting partitions
In addition to the MBR, each primary partition has a boot sector that can boot the
operating system in that partition. MSDOS (Windows) partitions have this, and hence
lilo can optionally load and execute these partition boot sectors to start a Windows
installation in another partition.
Limitations
There are several limitations that BIOSs have inherited due to lack of foresight of their
designers. First, some BIOSs do not support more than one IDE drive (at least according to
the lilo documentation). I myself have not come across this as a problem.
The second limitation is the most important to note. As explained, lilo uses BIOS
functions to access the IDE drive, but the BIOS of a PC is limited to accessing the first
1024 cylinders of the disk. Hence, whatever LILO reads must reside within the first
1024 cylinders (the first 500 megabytes of disk space). Here is the list of things whose
sectors are required to be within this space:
1. /boot/vmlinuz.
2. The various lilo files /boot/*.b.
3. Any non-LINUX partition boot sector you would like to boot.
However, a LINUX root partition can reside anywhere, because the boot sector
program never reads this partition except for the abovementioned files. A scenario where
the /boot/ directory is a small partition below the 500 megabyte boundary, and the /
partition is above the 500 megabyte boundary, is quite common. See page 147.
Note that newer “LBA” BIOSs support more than the first 512 megabytes — even up
to 8 gigabytes. I personally do not count on this.
31.3 lilo.conf and the lilo command

To “do a lilo” requires you to run the lilo command as root with a correct
/etc/lilo.conf file. The lilo.conf file will doubtless have been set up by your
distribution (check yours). A typical lilo.conf file that allows booting of a Windows
partition and two LINUX partitions is as follows:
¨ ¥
boot=/dev/hda
prompt
timeout = 50
compact
vga = extended
lock
password = jAN]")Wo
restricted
append = ether=9,0x300,0xd0000,0xd4000,eth0
image = /boot/vmlinuz-2.2.17
  label = linux
  root = /dev/hda5
  read-only
image = /boot/vmlinuz-2.0.38
  label = linux-old
  root = /dev/hda6
  read-only
other = /dev/hda2
  label = win
  table = /dev/hda
§ ¦
Running lilo installs into the MBR a boot loader that understands
where to get the /boot/map file, which in turn understands where to get the
/boot/vmlinuz-2.2.17 file. It gives some output like:
¨ ¥
Added linux *
Added linux-old
Added win
§ ¦
It also backs up your existing MBR, if this has not previously been done, into a file
/boot/boot.0300 (where 0300 refers to the device’s major and minor numbers).
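The 0300 is simply the major and minor device numbers (3 and 0 for /dev/hda) printed as two hexadecimal bytes. A sketch of the naming rule:

```shell
# /dev/hda is major 3, minor 0 -> boot.0300;
# /dev/sda is major 8, minor 0 -> boot.0800.
mbr_backup_name() {
    printf 'boot.%02x%02x\n' "$1" "$2"
}
mbr_backup_name 3 0   # prints boot.0300
mbr_backup_name 8 0   # prints boot.0800
```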
Let’s go through the options:
boot This gives the device to boot. It will almost always be /dev/hda or /dev/sda.
prompt This tells the loader to give you a prompt, asking which OS you would like to
boot.
timeout How many tenths of a second to display the prompt (after which the first
image is booted).
compact Strings adjacent sector reads together. This makes the kernel load much faster.
vga We would like 80 × 50 text mode. Your startup scripts may reset this to 80 × 25 —
search /etc/rc.d recursively for any file containing “textmode”.
lock Always default to booting the last OS booted (a very useful feature that is seldom
used).
append This is a kernel boot option. Kernel boot options are central to lilo and kernel
modules and are discussed in Section 42.5. They are mostly not needed in
simple installations.
other Some other operating system to boot: in this case a Windows partition.
password Means that someone will have to type in a password to boot the machine.
restricted Means that a password only need be entered if parameters are entered
at the LILO: prompt.
Additional other = partitions may follow, and many image = kernel images are
allowed.
The above lilo.conf file assumed a partition scheme as follows:
31.4 Creating boot floppy disks

This is easy. We require a floppy disk containing the kernel image, set to boot and then
immediately mount a particular partition as root. Such a disk is useful for a system where
LILO is broken or absent. To boot the first ext2 partition of the above setup, insert a
new floppy into a working LINUX system and overwrite it with the following:
¨ ¥
dd if=/boot/vmlinuz-2.2.17 of=/dev/fd0
rdev /dev/fd0 /dev/hda5
§ ¦
Then simply boot from the floppy. The above merely requires a second LINUX installation.
Without even this, you will have to download the RAWRITE.EXE utility and a raw boot
disk image, and create the floppy from a DOS prompt.
31.5 SCSI installation complications and initrd

Some of the following may be difficult to understand without knowledge of kernel modules,
explained in Chapter 42. You may like to come back to it later.
Consider a system with zero IDE disks and one SCSI disk containing a LINUX
installation. There are BIOS interrupts to read the SCSI disk, just as there were for
the IDE, so LILO can happily access a kernel image somewhere inside the SCSI partition.
However, the kernel is going to be lost without a kernel module that
understands the particular SCSI driver. (See Chapter 42. The kernel doesn’t support
every piece of hardware out there all by itself. It is actually divided into a main part,
being the kernel image discussed in this chapter, and hundreds of modules — loadable parts that sit
in /lib/modules/ — that support the many types of SCSI, network, sound, etc. peripheral devices.)
So though the kernel can load and execute, it
won’t be able to mount its root file system without loading a SCSI module first. But the
module itself resides in the root file system in /lib/modules/. This is a tricky situation
to solve, and it is done in one of two ways: either (a) using a kernel with pre-enabled
SCSI support, or (b) using what is known as an initrd preliminary root file-system
image.
The first method is the one I recommend. It’s a straightforward (though time-consuming)
procedure to create a kernel that has support for your SCSI card built in
(and not in a separate module). Built-in SCSI and network drivers will also autodetect
cards most of the time, allowing immediate access to the device — they will work
without being given any options (discussed in Chapter 42) and, most importantly, without
your having to read up on how to configure them. This is known as compiled-in support for
a hardware driver (as opposed to module support for the driver). The resulting kernel
image will be larger by an amount equal to the size of the module. Chapter 42 discusses
such kernel compiles.
The second method is faster but trickier. LINUX supports what is
known as an initrd image (initial ram disk image). This is a small, 1.5 megabyte
or so, file system that is loaded by LILO and mounted by the kernel instead of the
real file system. The kernel mounts this file system as a RAM disk, executes the file
/linuxrc, and only then mounts the real file system.
Start by creating a small file system. Make a directory /initrd and copy the
following files into it:
¨ ¥
drwxr-xr-x 7 root root 1024 Sep 14 20:12 initrd/
drwxr-xr-x 2 root root 1024 Sep 14 20:12 initrd/bin/
-rwxr-xr-x 1 root root 436328 Sep 14 20:12 initrd/bin/insmod
The file initrd/linuxrc should contain a script to load all the modules needed for
the kernel to access the SCSI partition — in this case, just the aic7xxx module (insmod
can take options such as the IRQ and IO port for the device; see Chapter 42):
¨ ¥
#!/bin/sash

aliasall
§ ¦
Now double-check all your permissions and create a file-system image, similar to what
was done in Section 19.8:
¨ ¥
dd if=/dev/zero of=~/file-initrd count=1500 bs=1024
losetup /dev/loop0 ~/file-initrd
mke2fs /dev/loop0
mkdir ~/mnt
mount /dev/loop0 ~/mnt
cp -a initrd/* ~/mnt/
umount ~/mnt
losetup -d /dev/loop0
§ ¦
Your lilo.conf file can be changed slightly to force use of an initrd file system.
Simply add the initrd option, for example:
¨ ¥
boot=/dev/sda
prompt
timeout = 50
compact
vga = extended
linear
image = /boot/vmlinuz-2.2.17
  initrd = /boot/initrd-2.2.17
  label = linux
  root = /dev/sda1
  read-only
§ ¦
Notice the use of the linear option. This is a BIOS trick that you can read about
in lilo(5). It is often necessary, but it can make SCSI disks non-portable across different
BIOSs (meaning that you will have to rerun lilo if you move the disk to a different
computer).
Using mkinitrd
Now that you have learned the manual method of creating an initrd image, you can
read the mkinitrd man page. It creates an image in a single command.
Chapter 32

init, ?getty and UNIX run-levels
This chapter explains how LINUX (and a UNIX system in general) initialises itself.
It follows on from the kernel boot explained in Section 31.2. We also go into some
advanced uses for mgetty, like receiving faxes.
After the kernel has been unpacked into memory, it begins to execute, initialising
hardware. The last thing it does is mount the root file system, which will necessarily contain
a program /sbin/init, which the kernel executes. init is one of the only programs
the kernel ever executes explicitly; the onus is then on init to bring the UNIX system
up. init always has process ID 1.
For the purposes of init, the (rather arbitrary) concept of a UNIX run-level was
invented. The run-level is the current mode of operation of the machine, numbered run-level 0
through run-level 9. When the UNIX system is at a particular run-level, it means that a
certain selection of services is running. In this way the machine could be a mail server
or an X Window workstation depending on what run-level it is in.
The traditionally defined run-levels are:

0 - Halt.
1 - Single-user mode.
2 - Multiuser, without NFS.
3 - Full multiuser mode.
4 - Unused.
5 - X11.
6 - Reboot.
7 - Undefined.
8 - Undefined.
9 - Undefined.
The idea here is that init begins at a particular run-level, which can then be manually
changed to any other by the superuser. init uses a list of scripts for each run-level
to start or stop each of the many services pertaining to that run-level.
These scripts are /etc/rc?.d/KNNservice or /etc/rc?.d/SNNservice (on some
systems /etc/rc.d/rc?.d/. . . ), where the NN, K and S prefixes dictate the order of
execution (since the files are executed in alphabetical order).
These scripts all take the options start and stop on the command line, to begin or
terminate the service.
For example, when init enters, say, run-level 5 from run-level 3, it executes the
particular scripts from /etc/rc3.d/ and /etc/rc5.d/ to bring the appropriate
services up or down. This may involve, say, executing
¨ ¥
/etc/rc3.d/S20exim stop
§ ¦
and others.
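The alphabetical ordering can be demonstrated with a throwaway directory — a sketch, not the real /etc/rc3.d/:

```shell
# K (kill) scripts sort before S (start) scripts, and the two-digit
# NN field orders scripts within each group:
d=/tmp/rc3.d-demo
mkdir -p "$d"
touch "$d/S20exim" "$d/S10network" "$d/K30nfs"

# init effectively runs them in glob (alphabetical) order:
for script in "$d"/*; do
    echo "would run: $script"
done
```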
32.2 /etc/inittab
init has one config file: /etc/inittab. A minimal inittab file might consist of
the following:
¨ ¥
id:3:initdefault:

si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6

ud::once:/sbin/update

x:5:respawn:/usr/bin/X11/xdm -nodaemon
§ ¦
The lines consist of colon-separated fields; their full meanings are given in
inittab(5).
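Each line has the form id:runlevels:action:process, as defined in inittab(5). Taking the last line of the example apart (the annotations are mine):

```shell
# x:5:respawn:/usr/bin/X11/xdm -nodaemon
#
#   id        "x"        - a short unique label for the line
#   runlevels "5"        - the run-levels this line applies to
#   action    "respawn"  - restart the process whenever it dies
#                          (others include wait, once, sysinit,
#                          initdefault)
#   process              - the command to run
line='x:5:respawn:/usr/bin/X11/xdm -nodaemon'
echo "$line" | awk -F: '{ print "id=" $1, "levels=" $2, "action=" $3 }'
# prints: id=x levels=5 action=respawn
```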
If you modify the inittab file, init will probably not notice until you issue it a
SIGHUP. This is the same as typing
¨ ¥
telinit q
§ ¦
You get these errors when an inittab line makes no sense (such errors are common
and very irritating when doing console work, hence this explicit mention): for instance,
a getty running on a non-functioning serial port. Simply comment out or delete the
offending line and then
¨ ¥
telinit q
§ ¦
Switching run-levels manually is rarely done. The most common
way of shutting down the machine is to use:
¨ ¥
shutdown -h now
§ ¦
You can type
¨ ¥
linux single
§ ¦
to enter single-user mode when booting your machine. You can change to single-user mode
on a running system with:
¨ ¥
telinit S
§ ¦
32.4 getty invocation

getty opens a tty port, prompts for a login name, and invokes the /bin/login
command. It is normally invoked by init(8).
Note that getty, agetty, fgetty and mingetty are just different implementations
of getty.
The most noticeable effect of init running at all is that it spawns a login to each of the
LINUX virtual consoles. It is the getty (or perhaps mingetty on RedHat) command,
as specified in the inittab lines above, that displays this login prompt. Once the login
name is entered, getty invokes the /bin/login program, which then prompts the user
for their password.
The login program (discussed in Section 11.8) then executes a shell. When the
shell dies (as a result of the user exiting the session), getty is simply respawned.
At this point you should have a complete picture of the entire bootup process:
1. The first sector is loaded into RAM and executed — the LILO: prompt appears.
32.6 Incoming faxes and modem logins

The original purpose of getty was to manage character terminals on mainframe
computers. mgetty is a more comprehensive getty that deals with proper serial devices.
A typical inittab entry is
¨ ¥
S4:2345:respawn:/sbin/mgetty -r -s 19200 ttyS4 DT19200
§ ¦
Running mgetty (see mgetty(8)) is a common and trivial way to get a dial-in login to a
LINUX machine. Your inittab entry is just
¨ ¥
S0:2345:respawn:/sbin/mgetty -n 3 -s 115200 ttyS0 57600
§ ¦
where -n 3 says to answer the phone after the third ring. Nothing more is needed than
to plug your modem into a telephone line. You can then use dip -t, as done in Section
41.1.1, to dial this machine from another LINUX box. Here is an example session:
¨ ¥
# dip -t
DIP: Dialup IP Protocol Driver version 3.3.7o-uri (8 Feb 96)
Written by Fred N. van Kempen, MicroWalt Corporation.
remote.dialup.private login:
§ ¦
Note that this is purely a login session, having nothing to do with PPP dialup.
mgetty receives faxes by default, provided your modem supports faxing (if your modem
says it supports it, and this still does not work, you will have to spend a lot of time reading through your
modem’s AT command-set manual, as well as the mgetty info documentation), and provided it
has not been explicitly disabled with the -D option. An appropriate inittab line is
¨ ¥
S0:2345:respawn:/sbin/mgetty -x 4 -n 3 -s 57600 -I ’27 21 7654321’ ttyS0 57600
§ ¦
The options mean, respectively: set the debug level to 4; answer after 3 rings; set the
port speed to 57600; and set the fax ID number to 27 21 7654321. Alternatively,
you can use the line
¨ ¥
S0:2345:respawn:/sbin/mgetty ttyS0 57600
§ ¦
with
¨ ¥
rings 3
speed 57600
fax-id 27 21 7654321
§ ¦
in the mgetty configuration file /etc/mgetty+sendfax/mgetty.config, which gives
the same effect.
There is also
¨ ¥
/etc/mgetty+sendfax/new_fax
§ ¦
which is a script that mgetty secretly runs when new faxes arrive. It can be
used to convert faxes into something readable by typical office programs (like .gif
graphics files; I recommend .png over .gif any day, however). The following example
/etc/mgetty+sendfax/new_fax script puts incoming faxes into /home/fax/ as
.gif files that all users can access (modified from the mgetty contribs). Note how it uses
the CPU-intensive convert program from the ImageMagick package:
¨ ¥
#!/bin/sh

# you must have pbm tools and they must be in your PATH
PATH=/usr/bin:/bin:/usr/X11R6/bin:/usr/local/bin

HUP="$1"
SENDER="$2"
PAGES="$3"

shift 3

P=1

exit 0
§ ¦
Chapter 33
Sending Faxes
This chapter discusses the sendfax program, with reference to the specific example
of setting up an artificial printer that automatically uses a modem to send its print
jobs to remote fax machines.
33.1 Fax through printing

Here, fax_filter.sh is a script that sends the print job through the fax modem after
requesting the telephone number through gdialog (gdialog is part of the gnome-utils
package). An appropriate /etc/printcap entry is:
¨ ¥
fax:\
    :sd=/var/spool/lpd/fax:\
    :mx#0:\
    :sh:\
    :lp=/dev/null:\
    :if=/var/spool/lpd/fax/fax_filter.sh:
§ ¦
The file fax_filter.sh itself could contain a script like the following, for a modem
on /dev/ttyS0 (note: rotate the /var/log/fax log file — see page 188):
¨ ¥
#!/bin/sh

exec 1>>/var/log/fax
exec 2>>/var/log/fax

echo
echo
echo $@

export DISPLAY=localhost:0.0
export HOME=/home/lp

function error()
{
    gdialog --title "Send Fax" --msgbox "$1" 10 75 || \
        echo 'Huh? no gdialog on this machine'
    cd /
    rm -Rf /tmp/$$fax || \
        gdialog \
            --title "Send Fax" \
            --msgbox "rm -Rf /tmp/$$fax failed" \
            10 75
    exit 1
}

if /usr/bin/gdialog \
    --title "Send Fax" \
    --inputbox "Enter the phone number to fax:" \
    10 75 "" 2>TEL ; then
    :
else
    echo "gdialog failed `< TEL`"
    rm -Rf /tmp/$$fax
    exit 0
fi

TEL=`< TEL`
test -z "$TEL" && error 'no telephone number given'

ls -al /var/lock/
/usr/sbin/sendfax -x 5 -n -l ttyS0 $TEL fax.ps.g3 || \
    error "sendfax failed"

rm -Rf /tmp/$$fax

exit 0
§ ¦
This is not enough, however. Above, sendfax requires access to the /dev/ttyS0
device, as well as to the /var/lock/ directory (to create a modem lock file). It cannot do
this as the lp user, under which the above filter runs. (This may not be true on some systems,
like Debian, where there are special groups specifically for dialout; in that case such permissions
will already have been sorted out and you can just add the lp user to the dialout group.) On RedHat,
the command ls -ald /var/lock /dev/ttyS0 reveals that only uucp is allowed
to access modems. We can get around this by creating a sticky (see Section 14.1) binary
that runs as the uucp user. Do this by compiling the C program,
¨ ¥
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
5
int main (int argc, char **argv)
{
char **a;
char *p;
10 int i;
/* exit on failure */
35 exit (1);
}
§ ¦
in the same directory as the sendfax executable, then replacing sendfax with
sendfax_wrapper in the filter script. You can see that sendfax_wrapper just
executes sendfax after changing the group ID to the effective group ID, as obtained from
the getegid function on line 13. The effective group ID is uucp because of the sticky
group bit (i.e. g+s) in the chmod command, and hence sendfax runs under the uucp
group with full access to the modem device.
The more enlightened will realise that the above wrapper should not be necessary.
However, it is a good exercise in security awareness to implement it, and then to try to
figure out whether it is vulnerable to exploitation.
Chapter 34

uucp and uux
uucp is a command to copy a file from one UNIX system to another. uux executes a
command on another UNIX system, even if that command is receiving data through
stdin on the local system. This is extremely useful for automating many kinds of
distributed functions, like mail and news.
The uucp and uux commands both come as part of the uucp package, whose name
stands for Unix-to-Unix copy. uucp may sound ridiculous considering the availability of
modern commands like rcp (remote copy), rsh (remote shell), or even FTP transfers (which
accomplish the same thing), but uucp has features that these do not, making it an
essential, albeit antiquated, utility. For instance, uucp never executes jobs immediately:
it will, for example, queue a file copy for later processing, and then dial the remote
machine during the night to complete the operation.
uucp predates the Internet: it was originally used to implement a mail system using
only modems and telephone lines. It hence has sophisticated protocols for ensuring
that your file/command really does get there, with the maximum possible fault toler-
ance and the minimum of retransmission. This is why it should always be used for
automated tasks wherever there are unreliable (i.e. modem) connections. The uucp
version that comes with most LINUX distributions is called Taylor UUCP after its
author.
which runs rmail on the remote system cericon, feeding some text to the rmail
program. Note how you should quote the ! character to prevent it from being in-
terpreted by the shell. These commands will almost always fail with permission
denied by remote. The error will come in a mail message to the user that ran the
command.
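Since an unquoted ! can trigger history expansion in interactive shells, it is worth knowing the equivalent ways of protecting it. A minimal sketch, using an illustrative address string:

```shell
# Three equivalent ways of protecting the ! character from the shell.
# In interactive shells, an unquoted ! can trigger history expansion.
a='server1!rmail'        # single quotes
b="server1"'!'"rmail"    # quote only the !
c=server1\!rmail         # backslash escape
```

All three variables hold the literal string server1!rmail.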
34.2 Configuration
being careful also to limit which hosts can connect by using the techniques discussed
in Chapter 29. Similarly for xinetd, create a file /etc/xinetd.d/uucp containing,
¨ ¥
service uucp
{
only_from = 127.0.0.1 192.168.0.0/16
socket_type = stream
5 wait = no
user = uucp
server = /usr/lib/uucp/uucico
server_args = -l
disable = no
10 }
§ ¦
uucp configuration files are stored in /etc/uucp. I will now show how you
can configure a client machine, machine1.cranzgot.co.za, to send mail via
server1.cranzgot.co.za, where server1.cranzgot.co.za is running the
uucico service above.
uucp has an antiquated authentication mechanism that uses its own list of users
and passwords, completely distinct from ordinary UNIX accounts. We must first add
a common “user” and password to both machines for authentication purposes. For
machine1.cranzgot.co.za we can add to the file /etc/uucp/call the lines,
¨ ¥
server1 machine1login pAsSwOrD123
§ ¦
which tells uucp to use the login machine1login whenever trying to speak
to server1, while on server1.cranzgot.co.za we can add to the file
/etc/uucp/passwd the lines,
¨ ¥
machine1login pAsSwOrD123
§ ¦
Note that the uucp name server1 was chosen for the machine
server1.cranzgot.co.za for convenience. uucp names, however, have nothing
to do with domain names.
Next, we need to tell uucp about the intentions of machine1. Any machine that you
might connect to or from must be listed in the /etc/uucp/sys file. Our entry looks
like,
¨ ¥
system machine1
call-login *
call-password *
commands rmail
5 protocol t
§ ¦
and you can have as many entries as you like. The only things server1 has to know
about machine1 are the user and password and the preferred protocol. The *’s mean
to look up the user and password in the /etc/uucp/passwd file, while protocol t
means to use a simple non-error correcting protocol (as appropriate for use over TCP).
The commands option takes a space-separated list of commands that may be executed
— commands not in this list may not be executed, for security reasons.
The /etc/uucp/sys file on machine1 will contain:
¨ ¥
system server1
call-login *
call-password *
time any
5 port TCP
address 192.168.3.2
protocol t
§ ¦
Here time any specifies at what times of the day uucp may make calls to server1. The
default is time Never (see the uucp documentation under Time Strings for more info). The
option port TCP means that we are using a “modem” named TCP to execute the dialout.
All modems are defined in the file /etc/uucp/ports. We can add our modem entry
as follows,
¨ ¥
port TCP
type tcp
§ ¦
Note that /var/spool/uucppublic/ is the only directory you are allowed access to
by default. You should probably keep it this way for security.
uucico
Although we have queued a job for processing, nothing will transfer until the program
uucico is run (which stands for Unix-to-Unix copy in copy out). The idea is that both
server1 and machine1 may have queued a number of jobs; then when uucico is
running on both machines, and talking to each other, all jobs on both machines are
processed in turn, regardless of which machine initiated the connection.
Usually uucico is run from a crond script every hour. Here we can
tail -f /var/log/uucp/Log while running uucico manually as follows:
¨ ¥
uucico --debug 3 --force --system server1
§ ¦
The higher the debug level, the more verbose output you will see in the Log file. This
will --forceably dial the --system server1 regardless of when it last dialed (usu-
ally there are constraints on calling soon after a failed call: --force overrides this).
If your mail server on server1 is configured correctly, it should now have queued the
message on the remote side.
34.3 Modem dial
If you are really going to use uucp the old-fashioned way, you can use mgetty to
answer uucp calls on server1 by adding the following to your /etc/inittab file:
¨ ¥
S0:2345:respawn:/sbin/mgetty -s 57600 ttyS0
§ ¦
¨ ¥
port ACU
type modem
device /dev/ttyS0
dialer hayes
5 speed 57600
§ ¦
ACU is antiquated terminology and stands for Automatic Calling Unit (i.e. a modem). We
have to specify the usual types of things for serial ports, like the device (/dev/ttyS0
for a modem on COM1) and speed of the serial line. We also must specify a means to
initialise the modem — the dialer hayes option. A file /etc/uucp/dial should
then contain an entry for our type of modem matching “hayes” as follows:
¨ ¥
dialer hayes
chat "" ATZ\r\d\c OK\r \dATL0E1Q0\r\d\c OK\r ATDT\D CONNECT
chat-fail RING
chat-fail NO\sCARRIER
5 chat-fail ERROR
chat-fail NO\sDIALTONE
chat-fail BUSY
chat-fail NO\sANSWER
chat-fail VOICE
10 complete \d\d+++\d\dATM0H\r\c
abort \d\d+++\d\dATM0H\r\c
§ ¦
Hayes is a generic modem command set, hence the above will work for most modems.
More about modems and dialing will be covered with pppd in Chapter 41.
With the modem properly specified, we can change our entry in the sys file to
¨ ¥
system server1
call-login *
call-password *
time any
5 port ACU
phone 555-6789
protocol g
§ ¦
34.4 tty/UUCP Lock files
You will have noticed by now that several services make use of serial devices, and many
of them can use the same device at different times. This creates a possible conflict
when two services wish to use the same device at the same time. For instance, what if
someone wants to send a fax while another person is dialing in?
The solution is the UUCP lock file. This is a file created in /var/lock/ of the form
LCK..device. For instance, when doing a sendfax through a modem connected on
/dev/ttyS0, a file /var/lock/LCK..ttyS0 appears — all mgetty programs obey
the UUCP lock file convention. The contents of this file are the process ID
of the program using the serial device, so it is easy to check whether the lock file is bogus. A
lock file whose process has died is called a stale lock file and can be removed manually.
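The stale-lock test is easy to script. The sketch below is an illustration only: the lock-file path and the ASCII PID format are assumptions (HDB-style lock files store the PID as space-padded ASCII), and a robust version would also have to guard against race conditions:

```shell
# Report whether a UUCP-style lock file is absent, stale, or in use.
check_lock () {
    lock="$1"
    [ -e "$lock" ] || { echo "free"; return 0; }
    pid=$(tr -dc '0-9' < "$lock")        # strip padding, keep the PID digits
    if kill -0 "$pid" 2> /dev/null; then # does that process still exist?
        echo "in use by PID $pid"
    else
        echo "stale"
    fi
}
```

For example, check_lock /var/lock/LCK..ttyS0 would print one of the three states. Note that kill -0 also fails for a live process owned by another user; a careful version would distinguish that case.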
34.5 Debugging
uucp implementations will rarely run smoothly the first time. There are a variety
of verbose debugging options. uucico takes the --debug option to specify
the level of debug output. You should examine the files /var/log/uucp/Log,
/var/log/uucp/Debug, and /var/log/uucp/Stats to get an idea of
what is going on in the background. Also important is the spool directory
/var/spool/uucp/. You can specify the debugging level with --debug level, where
level is in the range 0 through 11. You can also use --debug chat to see only modem
communication details. A full list of other options follows (credits to the uucp
documentation):
--debug uucp-proto Output debugging messages for the UUCP session protocol.
--debug proto Output debugging messages for the individual link protocols.
--debug port Output debugging messages for actions on the communication port.
--debug config Output debugging messages while reading the configuration files.
--debug spooldir Output debugging messages for actions in the spool directory.
34.6 Using uux with exim
On machine1 we would like exim to spool all mail through uucp. This requires
using a pipe transport. exim merely sends mail through stdin of the uux command
and then forgets about it. uux is then responsible for executing rmail on server1.
The complete exim.conf file is simply:
¨ ¥
#################### MAIN CONFIGURATION SETTINGS #####################
log_subject
errors_address = admin
local_domains = localhost : ${primary_hostname} : machine1 : \
5 machine1.cranzgot.co.za
host_accept_relay = 127.0.0.1 : localhost : ${primary_hostname} : \
machine1 : machine1.cranzgot.co.za
never_users = root
exim_user = mail
10 exim_group = mail
end
###################### TRANSPORTS CONFIGURATION ######################
uucp:
driver = pipe
15 user = nobody
command = "/usr/bin/uux - --nouucico ${host}!rmail \
${local_part}@${domain}"
return_fail_output = true
local_delivery:
20 driver = appendfile
file = /var/spool/mail/${local_part}
delivery_date_add
envelope_to_add
return_path_add
25 group = mail
mode_fail_narrower =
mode = 0660
end
###################### DIRECTORS CONFIGURATION #######################
30 localuser:
driver = localuser
transport = local_delivery
end
###################### ROUTERS CONFIGURATION #########################
35 touucp:
driver = domainlist
route_list = "* server1"
transport = uucp
end
40 ###################### RETRY CONFIGURATION ###########################
* * F,2m,1m
end
§ ¦
On machine server1, however, exim must be running as a full-blown mail server to
properly route the mail elsewhere. Of course, on server1, rmail is the sender; hence
it appears to exim that the mail is coming from the local machine. This means that no
extra configuration is required to support mail coming from a uux command.
Note that further domains can be added to your route list so that your dialouts
occur directly to the recipient’s machine. For instance:
¨ ¥
route_list = "machine2.cranzgot.co.za machine2 ; \
machine2 machine2 ; \
machine3.cranzgot.co.za machine3 ; \
machine3 machine3 ; \
5 * server1"
§ ¦
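exim tries each pattern in the route list in turn and uses the first match, so the trailing * entry acts as a default route. The first-match rule can be sketched as follows (illustrative only; exim’s real domain matching is considerably richer):

```shell
# Resolve a destination domain against a route list of
# "pattern target" pairs separated by ';'. First match wins.
route () {
    dest=$1; shift
    printf '%s\n' "$*" | tr ';' '\n' | while read -r pat target; do
        case "$dest" in
            $pat) echo "$target"; break ;;
        esac
    done
}
```

For example, route machine3.cranzgot.co.za "machine2 machine2 ; machine3.cranzgot.co.za machine3 ; * server1" prints machine3.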
You can then add further entries to your /etc/uucp/sys file as follows:
¨ ¥
system machine2
call-login *
call-password *
time any
5 port ACU
phone 555-6789
protocol g
system machine3
10 call-login *
call-password *
time any
port ACU
phone 554-3210
15 protocol g
§ ¦
The exim.conf file on server1 must also have a router to get mail back to
machine1. This will look like:
¨ ¥
###################### ROUTERS CONFIGURATION #########################
touucp:
driver = domainlist
route_list = "machine2.cranzgot.co.za machine2 ; \
5 machine2 machine2 ; \
machine3.cranzgot.co.za machine3 ; \
machine3 machine3"
transport = uucp
lookuphost:
10 driver = lookuphost
transport = remote_smtp
end
§ ¦
This sends all mail matching our dialin hosts through the uucp transport, while all
other mail (destined for the Internet) falls through to the lookuphost router.
34.7 Scheduling dialouts
Above we ran uucico only manually. uucico does not operate as a daemon process
on its own and must be invoked by crond. All systems that use uucp have an
/etc/crontab entry or a script under /etc/cron.hourly.
A typical /etc/crontab for machine1 might contain:
¨ ¥
45 * * * * uucp /usr/lib/uucp/uucico --master
40 8,13,18 * * * root /usr/bin/uux -r server1!
§ ¦
The option --master tells uucico to loop through all pending jobs and call any
machines for which jobs are queued. It does this every hour. The second line queues a
null command three times daily for the machine server1. This forces uucico to dial
out to server1 at least three times a day, whether or not there is real work to be done.
The point of this is to pick up any jobs coming the other way. This process is known as
creating a poll file.
Clearly you can use uucp over a TCP link initiated by pppd. If a dial link is running in
demand mode, a uucp call will trigger a dialout, and make a straight TCP connection
through to the remote host. A common situation is where a number of satellite systems
are dialing an ISP that has no uucp facility of its own. Here a separate uucp server with
no modems of its own will sit with a permanent Internet connection and listen on TCP
for uucp transfers.
Chapter 35
The LINUX Filesystem Standard
This chapter is a reproduction of the Filesystem Hierarchy Standard, translated into LaTeX
with some minor formatting changes. An original can be obtained from the FHS home
page <http://www.pathname.com/fhs/>.
If you have ever asked the questions “where in my file-system does file xxx go?”
or “what is directory yyy for?”, then this document should be consulted. It can be con-
sidered to provide the final word on such matters. Although this is mostly a reference
for people creating new LINUX distributions, all administrators can benefit from an
understanding of the rulings and explanations provided here.
This standard consists of a set of requirements and guidelines for file and directory placement
under UNIX-like operating systems. The guidelines are intended to support interoperability
of applications, system administration tools, development tools, and scripts as well as greater
uniformity of documentation for these systems.
April 12, 2000
All trademarks and copyrights are owned by their owners, unless specifically noted otherwise.
Use of a term in this document should not be regarded as affecting the validity of any trademark
or service mark.
Permission is granted to make and distribute verbatim copies of this standard provided the
copyright and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this standard under the con-
ditions for verbatim copying, provided also that the title page is labeled as modified including a
reference to the original standard, provided that information on retrieving the original standard
is included, and provided that the entire resulting derived work is distributed under the terms
of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this standard into another language,
under the above conditions for modified versions, except that this permission notice may be
stated in a translation approved by the copyright holder.
35.1 Introduction
Comments on this standard are welcome from interested parties. Suggestions for changes
should be in the form of a proposed change of text, together with appropriate supporting com-
ments.
The guidelines in this standard are subject to modification. Use of information contained in this
document is at your own risk.
1. Introduction
2. The Filesystem: a statement of some guiding principles.
3. The Root Directory.
4. The /usr Hierarchy.
5. The /var Hierarchy.
6. Operating System Specific Annex.
Within each section, the subdirectories are arranged in ASCII order (uppercase letters first, then
in alphabetical order) for easy reference.
35.1.3 Conventions
A constant-width font is used for displaying the names of files and directories.
Components of filenames that vary are represented by a description of the contents enclosed in
“<” and “>” characters, <thus>. Electronic mail addresses are also enclosed in “<” and “>”
but are shown in the usual typeface.
Optional components of filenames are enclosed in “[” and “]” characters and may be combined
with the “<” and “>” convention. For example, if a filename is allowed to occur either with or
without an extension, it might be represented by <filename>[.<extension>].
The process of developing a standard filesystem hierarchy began in August 1993 with an effort
to restructure the file and directory structure of Linux. The FSSTND, a filesystem hierarchy
standard specific to the Linux operating system, was released on February 14, 1994. Subsequent
revisions were released on October 9, 1994 and March 28, 1995.
In early 1995, the goal of developing a more comprehensive version of FSSTND to address not
only Linux, but other UNIX-like systems was adopted with the help of members of the BSD
development community. As a result, a concerted effort was made to focus on issues that were
general to UNIX-like systems. In recognition of this widening of scope, the name of the standard
was changed to Filesystem Hierarchy Standard or FHS for short.
Volunteers who have contributed extensively to this standard are listed at the end of this docu-
ment. This standard represents a consensus view of those and other contributors.
35.1.5 Scope
This document specifies a standard filesystem hierarchy for FHS filesystems by specifying the
location of files and directories, and the contents of some system files.
This standard has been designed to be used by system integrators, package developers, and
system administrators in the construction and maintenance of FHS compliant filesystems. It
is primarily intended to be a reference and is not a tutorial on how to manage a conforming
filesystem hierarchy.
The FHS grew out of earlier work on FSSTND, a filesystem organization standard for the Linux
operating system. It builds on FSSTND to address interoperability issues not just in the Linux
community but in a wider arena including 4.4BSD-based operating systems. It incorporates
lessons learned in the BSD world and elsewhere about multi-architecture support and the de-
mands of heterogeneous networking.
Although this standard is more comprehensive than previous attempts at filesystem hierarchy
standardization, periodic updates may become necessary as requirements change in relation to
emerging technology. It is also possible that better solutions to the problems addressed here will
be discovered so that our solutions will no longer be the best possible solutions. Supplementary
drafts may be released in addition to periodic updates to this document. However, a specific
goal is backwards compatibility from one release of this document to the next.
Comments related to this standard are welcome. Any comments or suggestions for changes
should be directed to the FHS editor (Daniel Quinlan <[email protected]>), or if you
prefer, the FHS mailing list. Typographical or grammatical comments should be directed to the
FHS editor.
Before sending mail to the mailing list it is requested that you first contact the FHS editor in order
to avoid excessive re-discussion of old topics. Improper messages will not be well-received on
the mailing list.
Questions about how to interpret items in this document may occasionally arise. If you have
need for a clarification, please contact the FHS editor. Since this standard represents a consensus
of many participants, it is important to make certain that any interpretation also represents their
collective opinion. For this reason it may not be possible to provide an immediate response
unless the inquiry has been the subject of previous discussion.
Here are some of the guidelines that have been used in the development of this standard:
The intended audience of this standard includes, but is not limited to the following groups of
people:
• System Developers
• System Integrators and Distributors
• Application Developers
• Documentation Writers
• System Administrators and other interested parties (for information purposes)
This section defines the meanings of the terms “compliant” and “compatible” with respect to
this standard, as well as “partial” compliance and “partial” compatibility.
An implementation is fully compliant with this standard if every requirement in this standard
is met. Every file or directory which is part of the implementation must be physically located
as specified in this document. If the contents of a file are described here the actual contents
must correspond to the description. The implementation must also attempt to find any files or
directories, even those external to itself, primarily or exclusively in the location specified in this
standard.
An implementation is fully compatible with this standard if every file or directory which it con-
tains can be found by looking in the location specified here and will be found with the contents
as specified here, even if that is not the primary or physical location of the file or directory in
question. The implementation must, when it attempts to find any files or directories which are
not part of it, do so in the location specified in this standard, though it may also attempt to find
it in other (non-standard) locations.
To qualify as partially FHS compliant or partially FHS compatible an implementation must pro-
vide a list of all places at which it and the FHS document differ in addition to a brief explanation
of the reasoning for this difference. This list shall be provided with the implementation in ques-
tion, and also reported and made available to the FHS mailing list or the FHS editor.
The terms “must”, “should”, “contains”, “is” and so forth should be read as requirements for
compliance or compatibility.
Note that an implementation does not need to contain all the files and directories specified in
this standard to be compliant or compatible. Only the files and directories an implementation
actually contains need to be located appropriately. For example, if a particular filesystem is not
supported by a distribution, the tools for that filesystem need not be included, even though they
may be explicitly listed in this standard.
Furthermore, certain portions of this document are optional. In this case this will be stated
explicitly, or indicated with the use of one or more of “may”, “recommend”, or “suggest”. Items
marked as optional have no bearing on the compliance or conformance of an implementation;
they are suggestions meant to encourage common practice, but may be located anywhere at the
implementor’s choice.
• A hierarchical structure
• Consistent treatment of file data
• Protection of file data
This standard assumes that the operating system underlying an FHS-compliant file system supports
the same basic security features found in most UNIX filesystems. Note that this standard
does not attempt to agree in every possible respect with any particular UNIX system’s implementation.
However, many aspects of this standard are based on ideas found in UNIX and other
UNIX-like systems.
It is possible to define two independent categories of files: shareable vs. unshareable and vari-
able vs. static.
Shareable data is that which can be shared between several different hosts; unshareable is that
which must be specific to a particular host. For example, user home directories are shareable
data, but device lock files are not.
Static data includes binaries, libraries, documentation, and anything that does not change with-
out system administrator intervention; variable data is anything else that does change without
system administrator intervention.
For ease of backup, administration, and file-sharing on heterogenous networks of systems with
different architectures and operating systems, it is desirable that there be a simple and easily
understandable mapping from directories (especially directories considered as potential mount
points) to the type of data they contain.
Throughout this document, and in any well-planned filesystem, an understanding of this basic
principle will help organize the structure and lend it additional consistency.
The distinction between shareable and unshareable data is needed for several reasons:
• In a networked environment (i.e., more than one host at a site), there is a good deal of data
that can be shared between different hosts to save space and ease the task of maintenance.
• In a networked environment, certain files contain information specific to a single host.
Therefore these filesystems cannot be shared (without taking special measures).
• Historical implementations of UNIX-like filesystems interspersed shareable and unshareable
data in the same hierarchy, making it difficult to share large portions of the filesystem.
• A /usr partition (or components of /usr) mounted (read-only) through the network
(using NFS).
• A /usr partition (or components of /usr) mounted from read-only media. A CD-ROM
is one copy of many identical ones distributed to other users by the postal mail system
and other methods. It can thus be regarded as a read-only filesystem shared with other
FHS-compliant systems by some kind of “network”.
The “static” versus “variable” distinction affects the filesystem in two major ways:
• Since / contains both variable and static data, it needs to be mounted read-write.
• Since the traditional /usr contains both variable and static data, and since we may want to
mount it read-only (see above), it is necessary to provide a method to have /usr mounted
read-only. This is done through the creation of a /var hierarchy that is mounted read-
write (or is a part of another read-write partition, such as /), taking over much of the
/usr partition’s traditional functionality.
Here is a summarizing chart. This chart is only an example for a common FHS-compliant system;
other chart layouts are possible within FHS-compliance.
            shareable          unshareable
static      /usr               /etc
            /opt               /boot
variable    /var/mail          /var/run
            /var/spool/news    /var/lock
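The chart can equally be read as a lookup table. A sketch classifying the example directories (the mapping is the example above, not a normative rule):

```shell
# Classify a few well-known directories per the example chart above.
fhs_class () {
    case "$1" in
        /usr | /opt)                  echo "static shareable" ;;
        /etc | /boot)                 echo "static unshareable" ;;
        /var/mail | /var/spool/news)  echo "variable shareable" ;;
        /var/run | /var/lock)         echo "variable unshareable" ;;
        *)                            echo "unclassified" ;;
    esac
}
```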
• To boot a system, enough must be present on the root partition to mount other filesystems.
This includes utilities, configuration, boot loader information, and other essential start-
up data. /usr, /opt, and /var are designed such that they may be located on other
partitions or filesystems.
• To enable recovery and/or repair of a system, those utilities needed by an experienced
maintainer to diagnose and reconstruct a damaged system should be present on the root
filesystem.
• To restore a system, those utilities needed to restore from system backups (on floppy, tape,
etc.) should be present on the root filesystem.
The primary concern used to balance these considerations, which favor placing many things
on the root filesystem, is the goal of keeping root as small as reasonably possible. For several
reasons, it is desirable to keep the root filesystem small:
• Disk errors that corrupt data on the root filesystem are a greater problem than errors on
any other partition. A small root filesystem is less prone to corruption as the result of a
system crash.
Software should never create or require special files or subdirectories in the root directory. Other
locations in the FHS hierarchy provide more than enough flexibility for any package.
BEGIN RATIONALE
There are several reasons why introducing a new subdirectory of the root filesystem
is prohibited:
END RATIONALE
Each directory listed above is specified in detail in separate subsections below. /usr and /var
each have a complete section in this document due to the complexity of those directories.
The operating system kernel image should be located in either / or /boot. Additional informa-
tion on kernel placement can be found in the section regarding /boot, below.
35.3.1 /bin : Essential user command binaries (for use by all users)
/bin contains commands that may be used by both the system administrator and by users, but
which are required when no other filesystems are mounted (e.g. in single user mode). It may
also contain commands which are used indirectly by scripts.
Command binaries that are not essential enough to place into /bin should be placed in
/usr/bin, instead. Items that are required only by non-root users (the X Window System,
chsh, etc.) are generally not essential enough to be placed into the root partition.
This directory contains everything required for the boot process except configuration files and
the map installer. Thus /boot stores data that is used before the kernel begins executing user-
mode programs. This may include saved master boot sectors, sector map files, and other data
that is not directly edited by hand. Programs necessary to arrange for the boot loader to be able
to boot a file should be placed in /sbin. Configuration files for boot loaders should be placed
in /etc.
Note: On some i386 machines, it may be necessary for /boot to be located on a separate partition located
completely below cylinder 1024 of the boot device due to hardware constraints.
Certain MIPS systems require a /boot partition that is a mounted MS-DOS filesystem or whatever
other filesystem type is accessible for the firmware. This may result in restrictions with respect to usable
filenames for /boot (only for affected systems).
If it is possible that devices in /dev will need to be manually created, /dev shall contain
a command named MAKEDEV, which can create devices as needed. It may also contain a
MAKEDEV.local for any local devices.
If required, MAKEDEV should have provisions for creating any device that may be found on the
system, not just those that a particular implementation installs.
/etc contains configuration files and directories that are specific to the current system.
The following section is intended partly to illuminate the description of the contents of /etc
with a number of examples; it is definitely not an exhaustive list.
• Networking files:
Notes:
The setup of command scripts invoked at boot time may resemble System V or BSD models. Further
specification in this area may be added to a future version of this standard.
Systems that use the shadow password suite will have additional configuration files in /etc
(/etc/shadow and others) and programs in /usr/sbin (useradd, usermod, and others).
/etc/X11 is the recommended location for all X11 host-specific configuration. This directory
is necessary to allow local control if /usr is mounted read only. Files that should be in this
directory include Xconfig (and/or XF86Config) and Xmodmap.
Subdirectories of /etc/X11 may include those for xdm and for any other programs (some win-
dow managers, for example) that need them. We recommend that window managers with only
one configuration file which is a default .*wmrc file should name it system.*wmrc (unless
there is a widely-accepted alternative name) and not use a subdirectory. Any window manager
subdirectories should be identically named to the actual window manager binary.
/etc/X11/xdm holds the configuration files for xdm. These are most of the files normally found
in /usr/lib/X11/xdm. Some local variable data for xdm is stored in /var/lib/xdm.
Host-specific configuration files for add-on application software packages shall be installed
within the directory /etc/opt/<package>, where <package> is the name of the subtree
in /opt where the static data from that package is stored. No structure is imposed on the inter-
nal arrangement of /etc/opt/<package>.
If a configuration file must reside in a different location in order for the package or system to
function properly, it may be placed in a location other than /etc/opt/<package>.
BEGIN RATIONALE
Refer to the rationale for /opt.
END RATIONALE
/home is a fairly standard concept, but it is clearly a site-specific filesystem. The setup will differ
from host to host. This section describes only a suggested placement for user home directories;
nevertheless we recommend that all FHS-compliant distributions use this as the default location
for home directories.
On small systems, each user’s directory is typically one of the many subdirectories of /home
such as /home/smith, /home/torvalds, /home/operator, etc.
On large systems (especially when the /home directories are shared amongst many hosts using
NFS) it is useful to subdivide user home directories. Subdivision may be accomplished by using
subdirectories such as /home/staff, /home/guests, /home/students, etc.
Different people prefer to place user accounts in a variety of places. Therefore, no program
should rely on this location. If you want to find out a user’s home directory, you should use the
getpwent(3) library function rather than relying on /etc/passwd because user information
may be stored remotely using systems such as NIS.
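The same NSS-aware lookup is available from the shell via getent(1), a sketch assuming a glibc system where that utility is present:

```shell
# Look up the current user's home directory via the passwd database.
# getent consults NSS, so accounts served by NIS or LDAP are found too,
# which a naive grep of /etc/passwd would miss.
user=$(id -un)
home_dir=$(getent passwd "$user" | cut -d: -f6)
echo "$home_dir"
```

The sixth colon-separated field of a passwd entry is the home directory, which is why cut selects field 6.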
The /lib directory contains those shared library images needed to boot the system and run the
commands in the root filesystem.
Shared libraries that are only necessary for binaries in /usr (such as any X Window binaries)
do not belong in /lib. Only the shared libraries required to run binaries in /bin and /sbin
should be here. The library libm.so.* may also be placed in /usr/lib if it is not required by
anything in /bin or /sbin.
This directory is provided so that the system administrator may temporarily mount filesystems
as needed. The content of this directory is a local issue and should not affect the manner in
which any program is run.
We recommend against the use of this directory by installation programs, and suggest that a
suitable temporary directory not in use by the system should be used instead.
A package to be installed in /opt shall locate its static files in a separate /opt/<package>
directory tree, where <package> is a name that describes the software package.
Package files that are variable (change in normal operation) should be installed in /var/opt.
See the section on /var/opt for more information.
Host-specific configuration files should be installed in /etc/opt. See the section on /etc for
more information.
No other package files should exist outside the /opt, /var/opt, and /etc/opt hierarchies
except for those package files that must reside in specific locations within the filesystem tree in
order to function properly. For example, device lock files must be placed in /var/lock and
devices must be located in /dev.
Distributions may install software in /opt, but should not modify or delete software installed
by the local system administrator without the assent of the local system administrator.
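As a sketch, the parallel trees used by a hypothetical add-on package "frob" (the package name and files are illustrative, not part of the standard) can be previewed under a staging root so the live filesystem is untouched:

```shell
# Stage the recommended split for a hypothetical package "frob":
# static files under /opt, host-specific configuration under /etc/opt,
# variable data under /var/opt. STAGE defaults to ./stage for safety.
stage=${STAGE:-./stage}
mkdir -p "$stage/opt/frob/bin"      # static binaries and data
mkdir -p "$stage/etc/opt/frob"      # host-specific configuration
mkdir -p "$stage/var/opt/frob"      # data that changes in normal operation
```

An installer would create the same three trees rooted at / instead of the staging directory.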
BEGIN RATIONALE
The use of /opt for add-on software is a well-established practice in the UNIX com-
munity. The System V Application Binary Interface [AT&T 1990], based on the
System V Interface Definition (Third Edition), provides for an /opt structure very
similar to the one defined here.
The Intel Binary Compatibility Standard v. 2 (iBCS2) also provides a similar struc-
ture for /opt.
Generally, all data required to support a package on a system should be
present within /opt/<package>, including files intended to be copied into
/etc/opt/<package> and /var/opt/<package> as well as reserved direc-
tories in /opt.
The minor restrictions on distributions using /opt are necessary because conflicts
are possible between distribution-installed and locally-installed software, especially
in the case of fixed pathnames found in some binary software.
END RATIONALE
/ is traditionally the home directory of the root account on UNIX systems. /root is used on
many Linux systems and on some UNIX systems (in order to reduce clutter in the / directory).
The root account’s home directory may be determined by developer or local preference. Obvious
possibilities include /, /root, and /home/root.
If the home directory of the root account is not stored on the root partition, the system must
ensure that it defaults to / if it cannot be located.
Note: we recommend against using the root account for mundane things such as mail and news, and that
it be used solely for system administration. For this reason, we recommend that subdirectories such as
Mail and News not appear in the root account’s home directory, and that mail for administration roles
such as root, postmaster and webmaster be forwarded to an appropriate user.
Utilities used for system administration (and other root-only commands) are stored in /sbin,
/usr/sbin, and /usr/local/sbin. /sbin typically contains binaries essential for booting
the system in addition to the binaries in /bin. Anything executed after /usr is known to be
mounted (when there are no problems) should be placed into /usr/sbin. Local-only system
administration binaries should be placed into /usr/local/sbin.
Deciding what things go into "sbin" directories is simple: If a normal (not a system admin-
istrator) user will ever run it directly, then it should be placed in one of the "bin" directories.
Ordinary users should not have to place any of the sbin directories in their path.
Note: For example, files such as chfn which users only occasionally use should still be placed in
/usr/bin. ping, although absolutely necessary for root (network recovery and diagnosis), is often
used by users and should live in /bin for that reason.
We recommend that users have read and execute permission for everything in /sbin except,
perhaps, certain setuid and setgid programs. The division between /bin and /sbin was not
created for security reasons or to prevent users from seeing the operating system, but to pro-
vide a good partition between binaries that everyone uses and ones that are primarily used for
administration tasks. There is no inherent security advantage in making /sbin off-limits for
users.
• Shutdown commands:
* = one or more of ext, ext2, minix, msdos, xia and perhaps others
• Networking commands:
{ ifconfig, route }
The /tmp directory shall be made available for programs that require temporary files.
Although data stored in /tmp may be deleted in a site-specific manner, it is recommended that
files and directories located in /tmp be deleted whenever the system is booted.
Programs shall not assume that any files or directories in /tmp are preserved between invoca-
tions of the program.
BEGIN RATIONALE
IEEE standard P1003.2 (POSIX, part 2) makes requirements that are similar to the
above section.
FHS added the recommendation that /tmp be cleaned at boot time on the basis
of historical precedent and common practice, but did not make it a requirement
because system administration is not within the scope of this standard.
END RATIONALE
No large software packages should use a direct subdirectory under the /usr hierarchy. An
exception is made for the X Window System because of considerable precedent and widely-
accepted practice. This section of the standard specifies the location for most such packages.
The following symbolic links to directories may be present. This possibility is based on the need
to preserve compatibility with older systems until all implementations can be assumed to use
the /var hierarchy.
Once a system no longer requires any one of the above symbolic links, the link may be removed,
if desired.
This hierarchy is reserved for the X Window System, version 11 release 6, and related files.
To simplify matters and make XFree86 more compatible with the X Window System on other
systems, the following symbolic links should be present:
In general, software should not be installed or managed via the above symbolic links. They are
intended for utilization by users only. The difficulty is related to the release version of the X
Window System — in transitional periods, it is impossible to know what release of X11 is in use.
Because shell script interpreters (invoked with #!<path> on the first line of a shell script) can-
not rely on a path, it is advantageous to standardize their locations. The Bourne shell and C-shell
interpreters are already fixed in /bin, but Perl, Python, and Tcl are often found in many differ-
ent places. /usr/bin/perl, /usr/bin/python, and /usr/bin/tcl should reference the
perl, python, and tcl shell interpreters, respectively. They may be symlinks to the physical
location of the shell interpreters.
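The interpreter named on a #! line is resolved by absolute path, which is why a standardized location matters: a script written against /usr/bin/perl on one host fails on a host that installed Perl elsewhere. A minimal sketch using the Bourne shell, whose /bin/sh location is already fixed:

```shell
# Create and run a script whose #! line names a fixed interpreter path.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/sh
echo hello
EOF
chmod +x "$script"
"$script"            # the kernel execs /bin/sh to interpret the script
```

The same mechanism is what makes /usr/bin/perl, /usr/bin/python, and /usr/bin/tcl worth standardizing, even if they are only symlinks.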
This is where all of the system’s general-use include files for the C and C++ programming lan-
guages should be placed.
/usr/lib includes object files, libraries, and internal binaries that are not intended to be exe-
cuted directly by users or shell scripts.
Applications may use a single subdirectory under /usr/lib. If an application uses a subdi-
rectory, all architecture-dependent data exclusively used by the application should be placed
within that subdirectory. For example, the perl5 subdirectory for Perl 5 modules and libraries.
Some executable commands such as makewhatis and sendmail have also been traditionally
placed in /usr/lib. makewhatis is an internal binary and should be placed in a binary di-
rectory; users access only catman. Newer sendmail binaries are now placed by default in
/usr/sbin; a symbolic link should remain from /usr/lib. Additionally, systems using a
sendmail-compatible mail transport agent should provide /usr/sbin/sendmail as a sym-
bolic link to the appropriate executable.
A symbolic link /usr/lib/X11 pointing to the lib/X11 directory of the default X distribution
is required if X is installed.
Note: No host-specific data for the X Window System should be stored in /usr/lib/X11. Host-specific
configuration files such as Xconfig or XF86Config should be stored in /etc/X11. This should
include configuration data such as system.twmrc even if it is only made a symbolic link to a more
global configuration file (probably in /usr/X11R6/lib/X11).
The /usr/local hierarchy is for use by the system administrator when installing software
locally. It needs to be safe from being overwritten when the system software is updated. It may
be used for programs and data that are shareable amongst a group of hosts, but not found in
/usr.
This directory should always be empty after first installing a FHS-compliant system. No excep-
tions to this rule should be made other than the listed directory stubs.
Locally installed software should be placed within /usr/local rather than /usr unless it is
being installed to replace or upgrade software in /usr.
Note that software placed in / or /usr may be overwritten by system upgrades (though we
recommend that distributions do not overwrite data in /etc under these circumstances). For
this reason, local software should not be placed outside of /usr/local without good reason.
This directory contains any non-essential binaries used exclusively by the system administrator.
System administration programs that are required for system repair, system recovery, mounting
/usr, or other essential functions should be placed in /sbin instead.
These server programs are used when entering the System V states known as "run level 2"
(multi-user state) and "run level 3" (networked state), or the BSD state known as "multi-user
mode". At this point the system is making services available to users (e.g., printer support) and
to other hosts (e.g., NFS exports).
The /usr/share hierarchy is for all read-only architecture independent data files. Much of
this data originally lived in /usr (man, doc) or /usr/lib (dict, terminfo, zoneinfo). This
hierarchy is intended to be shareable among all architecture platforms of a given OS; thus, for
example, a site with i386, Alpha, and PPC platforms might maintain a single /usr/share di-
rectory that is centrally-mounted. Note, however, that /usr/share is generally not intended
to be shared by different OSes or by different releases of the same OS.
Any program or package which contains or requires data that doesn’t need to be modified
should store that data in /usr/share (or /usr/local/share, if installed locally). It is rec-
ommended that a subdirectory be used in /usr/share for this purpose.
Note that Linux currently uses DBM-format database files. While these are not architecture-
independent, they are allowed in /usr/share in anticipation of a switch to the architecture-
independent DB 2.0 format.
Game data stored in /usr/share/games should be purely static data. Any modifiable files,
such as score files, game play logs, and so forth, should be placed in /var/games.
Traditionally this directory contains only the English words file, which is used by look(1)
and various spelling programs. words may use either American or British spelling.
Sites that require both may link words to /usr/share/dict/american-english or
/usr/share/dict/british-english.
Word lists for other languages may be added using the English name for that language, e.g.,
/usr/share/dict/french, /usr/share/dict/danish, etc. These should, if possible, use
an ISO 8859 character set which is appropriate for the language in question; if possible the Latin1
(ISO 8859-1) character set should be used (this is often not possible).
Other word lists, such as the web2 ”dictionary” should be included here, if present.
BEGIN RATIONALE
The reason that only word lists are located here is that they are the only files com-
mon to all spell checkers.
END RATIONALE
This section details the organization for manual pages throughout the system, including
/usr/share/man. Also refer to the section on /var/cache/man.
Provisions must be made in the structure of /usr/share/man to support manual pages which
are written in different (or multiple) languages. These provisions must take into account the
storage and reference of these manual pages. Relevant factors include language (including
geographical-based differences), and character code set.
<language>[_<territory>][.<character-set>][,<version>]
The <language> field shall be taken from ISO 639 (a code for the representation of names of
languages). It shall be two characters wide and specified with lowercase letters only.
The <territory> field shall be the two-letter code of ISO 3166 (a specification of representa-
tions of countries), if possible. (Most people are familiar with the two-letter codes used for the
country codes in email addresses.) It shall be two characters wide and specified with uppercase
letters only. (A major exception to this rule is the United Kingdom, which is ‘GB’ in ISO 3166, but
‘UK’ for most email addresses.)
The <character-set> field should represent the standard describing the character set. If the
<character-set> field is just a numeric specification, the number represents the number of
the international standard describing the character set. It is recommended that this be a nu-
meric representation if possible (ISO standards, especially), not include additional punctuation
symbols, and that any letters be in lowercase.
Systems which use a unique language and code set for all manual pages may omit the
<locale> substring and store all manual pages in <mandir>. For example, systems
which only have English manual pages coded with ASCII, may store manual pages (the
man<section> directories) directly in /usr/share/man. (That is the traditional circum-
stance and arrangement, in fact.)
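The composition rule above can be sketched as a small shell function (the function name is ours, and the underscore separating language from territory follows the conventional locale syntax):

```shell
# Build a man-page directory name from the locale fields:
# <language>[_<territory>][.<character-set>] under /usr/share/man.
man_locale_dir() {
    lang=$1 terr=$2 cset=$3
    dir=$lang
    [ -n "$terr" ] && dir="${dir}_${terr}"
    [ -n "$cset" ] && dir="${dir}.${cset}"
    printf '/usr/share/man/%s\n' "$dir"
}

man_locale_dir de DE 88591   # a German/Germany/ISO 8859-1 hierarchy
man_locale_dir en            # plain English hierarchy
```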
Countries for which there is a well-accepted standard character code set may omit the
<character-set> field, but it is strongly recommended that it be included, especially for
countries with several competing standards.
Various examples:
/usr/share/man/en (English, ASCII)
/usr/share/man/en_GB (English, United Kingdom, ASCII)
/usr/share/man/de_DE.88591 (German, Germany, ISO 8859-1)
/usr/share/man/ja_JP.ujis (Japanese, Japan, UJIS/EUC-J)
Similarly, provision must be made for manual pages which are architecture-dependent, such
as documentation on device-drivers or low-level system administration commands. These
should be placed under an <arch> directory in the appropriate man<section> direc-
tory; for example, a man page for the i386 ctrlaltdel(8) command might be placed in
/usr/share/man/<locale>/man8/i386/ctrlaltdel.8.
Manual pages for commands and data under /usr/local are stored in /usr/local/man.
Manual pages for X11R6 are stored in /usr/X11R6/man. It follows that all manual page hier-
archies in the system should have the same structure as /usr/share/man. Empty directories
may be omitted from a manual page hierarchy. For example, if /usr/local/man has no man-
ual pages in section 4 (Devices), then /usr/local/man/man4 may be omitted.
The cat page sections (cat<section>) containing formatted manual page entries are also
found within subdirectories of <mandir>/<locale>, but are not required nor should they
be distributed in lieu of nroff source manual pages.
The numbered sections "1" through "8" are traditionally defined. In general, the file names of
manual pages located within a particular section end with .<section>.
In addition, some large sets of application-specific manual pages have an additional suffix
appended to the manual page filename. For example, all manual pages for the MH mail handling
system should have mh appended to the filename. All X Window System manual pages should
have an x appended to the filename.
This directory contains miscellaneous architecture-independent files which don’t require a sep-
arate subdirectory under /usr/share. It is a required directory under /usr/share.
Other (application-specific) files may appear here, but a distributor may place them in
/usr/lib at their discretion. Some such files include:
/var contains variable data files. This includes spool directories and files, administrative and
logging data, and transient and temporary files.
Some portions of /var are not shareable between different systems, for instance /var/log,
/var/lock, and /var/run. Other portions may be shared, notably /var/mail,
/var/cache/man, /var/cache/fonts, and /var/spool/news.
/var is specified here in order to make it possible to mount /usr read-only. Everything that
once went into /usr that is written to during system operation (as opposed to installation and
software maintenance) must be in /var.
If /var cannot be made a separate partition, it is often preferable to move /var out of the root
partition and into the /usr partition. (This is sometimes done to reduce the size of the root
partition or when space runs low in the root partition.) However, /var should not be linked to
/usr because this makes separation of /usr and /var more difficult and is likely to create a
naming conflict. Instead, link /var to /usr/var.
Applications should generally not add directories to the top level of /var. Such directories
should only be added if they have some system-wide implication, and in consultation with the
FHS mailing list.
The cache, lock, log, run, spool, lib, and tmp directories must be included and used in
all distributions; the account, crash, games, mail, and yp directories must be included and
used if the corresponding applications or features are provided in the distribution.
Several directories are ‘reserved’ in the sense that they should not be used arbitrarily by some
new application, since they would conflict with historical and/or local practice. They are:
/var/backups
/var/cron
/var/lib
/var/local
/var/msgs
/var/preserve
This directory holds the current active process accounting log and the composite process usage
data (as used in some UNIX-like systems by lastcomm and sa).
/var/cache is intended for cached data from applications. Such data is locally generated as
a result of time-consuming I/O or calculation. The application must be able to regenerate or
restore the data. Unlike /var/spool, the cached files can be deleted without data loss. The
data should remain valid between invocations of the application and rebooting the system.
Files located under /var/cache may be expired in an application specific manner, by the sys-
tem administrator, or both. The application should always be able to recover from manual dele-
tion of these files (generally because of a disk space shortage). No other requirements are made
on the data format of the cache directories.
BEGIN RATIONALE
The existence of a separate directory for cached data allows system administrators
to set different disk and backup policies from other directories in /var.
END RATIONALE
Note: this standard does not currently incorporate the TeX Directory Structure (a document that
describes the layout of TeX files and directories), but it may be useful reading. It is located at
ftp://ctan.tug.org/tex/.
Other dynamically created fonts may also be placed in this tree, under appropriately-named
subdirectories of /var/cache/fonts.
This directory provides a standard location for sites that provide a read-only /usr partition, but
wish to allow caching of locally-formatted man pages. Sites that mount /usr as writable (e.g.,
single-user installations) may choose not to use /var/cache/man and may write formatted
man pages into the cat<section> directories in /usr/share/man directly. We recommend
that most sites use one of the following options instead:
The structure of /var/cache/man needs to reflect both the fact of multiple man page hierar-
chies and the possibility of multiple language support.
Man pages written to /var/cache/man may eventually be transferred to the appropriate pre-
formatted directories in the source man hierarchy or expired; likewise formatted man pages in
the source man hierarchy may be expired if they are not accessed for a period of time.
If preformatted manual pages come with a system on read-only media (a CD-ROM, for instance),
they shall be installed in the source man hierarchy (e.g. /usr/share/man/cat<section>).
/var/cache/man is reserved as a writable cache for formatted manual pages.
BEGIN RATIONALE
Release 1.2 of the standard specified /var/catman for this hierarchy. The path
has been moved under /var/cache to better reflect the dynamic nature of the
formatted man pages. The directory name has been changed to man to allow for
enhancing the hierarchy to include post-processed formats other than "cat", such
as PostScript, HTML, or DVI.
END RATIONALE
This directory holds system crash dumps. As of the date of this release of the standard, system
crash dumps were not supported under Linux.
Any variable data relating to games in /usr should be placed here. /var/games should hold
the variable data previously found in /usr; static data, such as help text, level descriptions, and
so on, should remain elsewhere, such as /usr/share/games.
BEGIN RATIONALE
/var/games has been given a hierarchy of its own, rather than leaving it merged
in with the old /var/lib as in release 1.2. The separation allows local control
of backup strategies, permissions, and disk usage, as well as allowing inter-host
sharing and reducing clutter in /var/lib. Additionally, /var/games is the path
traditionally used by BSD.
END RATIONALE
This hierarchy holds state information pertaining to an application or the system. State informa-
tion is data that programs modify while they run, and that pertains to one specific host. Users
should never need to modify files in /var/lib to configure a package’s operation.
State information is generally used to preserve the condition of an application (or a group of
inter-related applications) between invocations and between different instances of the same ap-
plication. State information should generally remain valid after a reboot, should not be logging
output, and should not be spooled data.
/var/lib/<name> is the location that should be used for all distribution packaging support.
Different distributions may use different names, of course.
An important difference between this version of this standard and previous ones is that applica-
tions are now required to use a subdirectory of /var/lib.
These directories contain saved files generated by any unexpected termination of an editor (e.g.,
elvis, jove, nvi).
Other editors may not require a directory for crash-recovery files, but may require a well-
defined place to store other information while the editor is running. This information should
be stored in a subdirectory under /var/lib (for example, GNU Emacs would place lock files
in /var/lib/emacs/lock).
Future editors may require additional state information beyond crash-recovery files and lock
files — this information should also be placed under /var/lib/<editor>.
BEGIN RATIONALE
Previous Linux releases, as well as all commercial vendors, use /var/preserve
for vi or its clones. However, each editor uses its own format for these crash-
recovery files, so a separate directory is needed for each editor.
Editor-specific lock files are usually quite different from the device or resource lock
files that are stored in /var/lock and, hence, are stored under /var/lib.
END RATIONALE
This directory contains variable data not placed in a subdirectory in /var/lib. An attempt
should be made to use relatively unique names in this directory to avoid namespace conflicts.
Note that this hierarchy should contain files stored in /var/db in current BSD releases. These
include locate.database and mountdtab, and the kernel symbol database(s).
Device lock files, such as the serial device lock files that were originally found in either
/usr/spool/locks or /usr/spool/uucp, must now be stored in /var/lock. The naming
convention which must be used is LCK.. followed by the base name of the device file. For
example, to lock /dev/cua0 the file LCK..cua0 would be created.
The format used for device lock files must be the HDB UUCP lock file format. The HDB format is
to store the process identifier (PID) as a ten byte ASCII decimal number, with a trailing newline.
For example, if process 1230 holds a lock file, it would contain the eleven characters: space,
space, space, space, space, space, one, two, three, zero, and newline.
Then, anything wishing to use /dev/cua0 can read the lock file and act accordingly (all locks
in /var/lock should be world-readable).
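A sketch of creating such a lock from the shell (the LOCKDIR override is ours, so the function can be exercised outside /var/lock):

```shell
# Create an HDB-format lock file for a serial device: the PID as a
# ten-character, space-padded ASCII decimal followed by a newline,
# making the file eleven bytes long.
lock_device() {
    printf '%10d\n' "$$" > "${LOCKDIR:-/var/lock}/LCK..$1"
}
```

After lock_device cua0, any other program can read the lock file, check whether that PID is still alive, and act accordingly.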
This directory contains miscellaneous log files. Most logs should be written to this directory or
an appropriate subdirectory.
The mail spool must be accessible through /var/mail and the mail spool files must take the
form <username>. /var/mail may be a symbolic link to another directory.
User mailbox files in this location should be stored in the standard UNIX mailbox format.
BEGIN RATIONALE
The logical location for this directory was changed from /var/spool/mail in or-
der to bring FHS in-line with nearly every UNIX implementation. This change is
important for inter-operability, since a single /var/mail is often shared between
multiple hosts and multiple UNIX implementations (despite NFS locking issues).
It is important to note that there is no requirement to physically move the mail
spool to this location. However, programs and header files should be changed to
use /var/mail.
END RATIONALE
Variable data of the packages in /opt shall be installed in /var/opt/<package>, where
<package> is the name of the subtree in /opt where the static data from that package is
stored, except where superseded by another file in /etc. No structure is imposed on the
internal arrangement of /var/opt/<package>.
BEGIN RATIONALE
Refer to the rationale for /opt.
END RATIONALE
This directory contains system information data describing the system since it was booted. Files
under this directory should be cleared (removed or truncated as appropriate) at the beginning
of the boot process. Programs may have a subdirectory of /var/run; this is encouraged for
programs that use more than one run-time file.
Note: programs that run as non-root users may be unable to create files under /var/run and therefore
need a subdirectory owned by the appropriate user.
Process identifier (PID) files, which were originally placed in /etc, should be placed in
/var/run. The naming convention for PID files is <program-name>.pid. For example,
the crond PID file is named /var/run/crond.pid.
The internal format of PID files remains unchanged. The file should consist of the process iden-
tifier in ASCII-encoded decimal, followed by a newline character. For example, if crond was
process number 25, /var/run/crond.pid would contain three characters: two, five, and new-
line.
Programs that read PID files should be somewhat flexible in what they accept; i.e., they should
ignore extra whitespace, leading zeroes, absence of the trailing newline, or additional lines in the
PID file. Programs that create PID files should use the simple specification located in the above
paragraph.
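A lenient reader following that advice might look like this sketch (the function name is illustrative):

```shell
# Read a PID file tolerantly: ignore surrounding whitespace, leading
# zeroes, a missing trailing newline, and any extra lines; print the
# PID from the first line as a plain decimal number.
read_pidfile() {
    awk 'NR == 1 { print $1 + 0; exit }' "$1"
}
```

A writer, by contrast, should emit the strict form, e.g. printf '%d\n' "$pid" into the PID file.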
The utmp file, which stores information about who is currently using the system, is located in
this directory.
Programs that maintain transient UNIX-domain sockets should place them in this directory.
/var/spool contains data which is awaiting some kind of later processing. Data in
/var/spool represents work to be done in the future (by a program, user, or administrator);
often data is deleted after it has been processed.
UUCP lock files must be placed in /var/lock. See the above section on /var/lock.
The lock file for lpd, lpd.lock, should be placed in /var/spool/lpd. It is suggested that
the lock file for each printer be placed in the spool directory for that specific printer and named
lock.
This directory holds the rwhod information for other systems on the local net.
BEGIN RATIONALE
Some BSD releases use /var/rwho for this data; given its historical location in
/var/spool on other systems and its approximate fit to the definition of ‘spooled’
data, this location was deemed more appropriate.
END RATIONALE
The /var/tmp directory is made available for programs that require temporary files or direc-
tories that are preserved between system reboots. Therefore, data stored in /var/tmp is more
persistent than data in /tmp.
Files and directories located in /var/tmp must not be deleted when the system is booted. Al-
though data stored in /var/tmp is typically deleted in a site-specific manner, it is recommended
that deletions occur at a less frequent interval than /tmp.
Variable data for the Network Information Service (NIS), formerly known as the Sun Yellow
Pages (YP), shall be placed in this directory.
BEGIN RATIONALE
/var/yp is the standard directory for NIS (YP) data and is almost exclusively used
in NIS documentation and systems.
NIS should not be confused with Sun NIS+, which uses a different directory,
/var/nis.
END RATIONALE
This section is for additional requirements and recommendations that only apply to a specific
operating system. The material in this section should never conflict with the base standard.
35.6.1 Linux
/ : Root directory
On Linux systems, if the kernel is located in /, we recommend using the names vmlinux or
vmlinuz, which have been used in recent Linux kernel source packages.
All devices and special files in /dev should adhere to the Linux Allocated Devices docu-
ment, which is available with the Linux kernel source. It is maintained by H. Peter Anvin
<[email protected]>.
Symbolic links in /dev should not be distributed with Linux systems except as provided in the
Linux Allocated Devices document.
BEGIN RATIONALE
The requirement not to make symlinks promiscuously is made because local setups
will often differ from that on the distributor’s development machine. Also, if a dis-
tribution install script configures the symbolic links at install time, these symlinks
will often not get updated if local changes are made in hardware. When used re-
sponsibly at a local level, however, they can be put to good use.
END RATIONALE
The proc filesystem is the de-facto standard Linux method for handling process and system
information, rather than /dev/kmem and other similar methods. We strongly encourage this for
the storage and retrieval of process information as well as other kernel and memory information.
Static ln (sln) and static sync (ssync) are useful when things go wrong. The primary
use of sln (to repair incorrect symlinks in /lib after a poorly orchestrated upgrade) is no
longer a major concern now that the ldconfig program (usually located in /usr/sbin)
exists and can act as a guiding hand in upgrading the dynamic libraries. Static sync is
useful in some emergency situations. Note that these need not be statically linked versions
of the standard ln and sync, but may be.
The ldconfig binary is optional for /sbin since a site may choose to run ldconfig at
boot time, rather than only when upgrading the shared libraries. (It’s not clear whether
or not it is advantageous to run ldconfig on each boot.) Even so, some people like to have ldconfig around for the following (all too common) situation:
1. I’ve just removed /lib/<file>.
2. I can't find out the name of the library because ls is dynamically linked, I'm using a shell that doesn't have ls built-in, and I don't know about using "echo *" as a replacement.
3. I have a static sln, but I don’t know what to call the link.
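The "echo *" trick mentioned above works because globbing is performed by the shell itself, so no external program is needed:

```shell
#!/bin/sh
# With no working ls, the shell's own glob expansion can list a directory:
cd "$(mktemp -d)"              # stand-in for /lib in the emergency scenario
touch libc.so.6 libm.so.6      # pretend these are the surviving libraries
echo *                         # prints: libc.so.6 libm.so.6
```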
• Miscellaneous:
{ ctrlaltdel, kbdrate }
So as to cope with the fact that some keyboards come up with such a high repeat rate as
to be unusable, kbdrate may be installed in /sbin on some systems.
Since the default action in the kernel for the Ctrl-Alt-Del key combination is an instant
hard reboot, it is generally advisable to disable the behavior before mounting the root
filesystem in read-write mode. Some init suites are able to disable Ctrl-Alt-Del, but
others may require the ctrlaltdel program, which may be installed in /sbin on those
systems.
These symbolic links are required if a C or C++ compiler is installed and only for systems not
based on glibc.
For systems based on glibc, there are no specific guidelines for this directory. For systems based
on Linux libc revisions prior to glibc, the following guidelines and rationale apply:
The only source code that should be placed in a specific location is the Linux kernel source code.
It is located in /usr/src/linux.
If a C or C++ compiler is installed, but the complete Linux kernel source code is not installed,
then the include files from the kernel source code shall be located in these directories:
/usr/src/linux/include/asm-<arch>
/usr/src/linux/include/linux
BEGIN RATIONALE
It is important that the kernel include files be located in /usr/src/linux and not
in /usr/include so there are no problems when system administrators upgrade
their kernel version for the first time.
END RATIONALE
This directory contains the variable data for the cron and at programs.
35.7 Appendix
The FHS mailing list is located at <[email protected]>. To subscribe to the list, send mail to <[email protected]> with body "ADD fhs-discuss".
Thanks to Network Operations at the University of California at San Diego who allowed us to
use their excellent mailing list server.
As noted in the introduction, please do not send mail to the mailing list without first contacting
the FHS editor or a listed contributor.
35.7.2 Acknowledgments
The developers of the FHS wish to thank the developers, system administrators, and users
whose input was essential to this standard. We wish to thank each of the contributors who
helped to write, compile, and compose this standard.
The FHS Group also wishes to thank those Linux developers who supported the FSSTND, the
predecessor to this standard. If they hadn’t demonstrated that the FSSTND was beneficial, the
FHS could never have evolved.
35.7.3 Contributors
Brandon S. Allbery <[email protected]>
Keith Bostic <[email protected]>
Drew Eckhardt <[email protected]>
Rik Faith <[email protected]>
Stephen Harris <[email protected]>
Ian Jackson <[email protected]>
John A. Martin <[email protected]>
Ian McCloghrie <[email protected]>
Chris Metcalf <[email protected]>
Ian Murdock <[email protected]>
David C. Niemi <[email protected]>
Daniel Quinlan <[email protected]>
Eric S. Raymond <[email protected]>
Mike Sangrey <[email protected]>
David H. Silber <[email protected]>
Theodore Ts’o <[email protected]>
Stephen Tweedie <[email protected]>
Fred N. van Kempen <[email protected]>
Bernd Warken <[email protected]>
Chapter 36
httpd — Apache Web Server
Here we will show how to set up a web server running virtual domains and dynamic CGI web pages. HTML is not covered, and you are expected to have some understanding of what HTML is, or at least where to find documentation about it.
36.1 Web server basics

In Section 26.2 we showed a simple HTTP session using the telnet command. A web server is really nothing more than a program that reads files off disk in response to GET /filename.html HTTP/1.0 requests coming in on a particular TCP port. Here we will show a simple web server written in shell script [not by me — the author did not put his name in the source, so if you are out there, please drop me an email]. You will need to add the line
¨ ¥
www stream tcp nowait nobody /usr/local/sbin/sh-httpd
§ ¦
to your /etc/inetd.conf file. If you are running xinetd, then you will need to add
a file containing,
¨ ¥
service www
{
socket_type = stream
wait = no
user = nobody
server = /usr/local/sbin/sh-httpd
}
§ ¦
to your /etc/xinetd.d/ directory. Then you must stop any already running web
servers and restart inetd (or xinetd).
You will also have to create a log file (/usr/local/var/log/sh-httpd.log) and
at least one web page (/usr/local/var/sh-www/index.html) for your server to
serve. It can contain, say:
¨ ¥
<HTML>
<HEAD>
<TITLE>My First Document</TITLE>
</HEAD>
<BODY bgcolor=#CCCCCC text="#000000">
This is my first document<P>
Please visit
<A HREF="http://rute.sourceforge.net/">
The Rute Home Page
</A>
for more info.</P>
</BODY>
</HTML>
§ ¦
Note that the server runs as nobody, so the log file must be writable by the nobody user, while the index.html file must be readable. Also note the use of the getpeername command, which can be changed to PEER="" if you do not have the netpipes package installed [I am not completely sure whether other commands used here are unavailable on other UNIX systems].
¨ ¥
#!/bin/sh
VERSION=0.1
NAME="ShellHTTPD"
DEFCONTENT="text/html"
DOCROOT=/usr/local/var/sh-www
DEFINDEX=index.html
LOGFILE=/usr/local/var/log/sh-httpd.log
log() {
local REMOTE_HOST=$1
local REFERRER=$2
local CODE=$3
local SIZE=$4
# append one line per request to the log file
echo "${REMOTE_HOST} ${REFERRER} - [`date`] \"${REQUEST}\" ${CODE} ${SIZE}" >> ${LOGFILE}
}
print_header() {
echo -e "HTTP/1.0 200 OK\r"
echo -e "Server: ${NAME}/${VERSION}\r"
echo -e "Date: `date`\r"
}
print_error() {
echo -e "HTTP/1.0 $1 $2\r"
echo -e "Content-type: $DEFCONTENT\r"
echo -e "Connection: close\r"
echo -e "Date: `date`\r"
echo -e "\r"
echo -e "$2\r"
exit 1
}
guess_content_type() {
local FILE=$1
local CONTENT
case ${FILE##*.} in
html) CONTENT=$DEFCONTENT ;;
gz) CONTENT=application/x-gzip ;;
*) CONTENT=application/octet-stream ;;
esac
echo -e "Content-type: ${CONTENT}\r"
}
do_get() {
local DIR
local NURL
local LEN
if [ ! -d $DOCROOT ]; then
log ${PEER} - 404 0
print_error 404 "No such file or directory"
fi
if [ -z "${URL##*/}" ]; then
URL=${URL}${DEFINDEX}
fi
DIR="`dirname $URL`"
if [ ! -d ${DOCROOT}/${DIR} ]; then
log ${PEER} - 404 0
print_error 404 "Directory not found"
else
cd ${DOCROOT}/${DIR}
NURL="`pwd`/`basename ${URL}`"
URL=${NURL}
fi
if [ ! -f ${URL} ]; then
log ${PEER} - 404 0
print_error 404 "Document not found"
fi
print_header
guess_content_type ${URL}
# send the blank line, the file's size, and then the file itself
LEN="`ls -l ${URL} | tr -s ' ' | cut -d ' ' -f 5`"
echo -e "Content-length: ${LEN}\r\n\r"
log ${PEER} - 200 ${LEN}
cat ${URL}
}
read_request() {
local DIRT
local COMMAND

read REQUEST
read DIRT

# split "GET /path HTTP/1.0" into its command and URL parts
COMMAND="`echo ${REQUEST} | cut -d ' ' -f 1`"
URL="`echo ${REQUEST} | cut -d ' ' -f 2`"

case $COMMAND in
HEAD)
print_error 501 "Not implemented (yet)"
;;
GET)
do_get
;;
*)
print_error 501 "Not Implemented"
;;
esac
}
#
# It was supposed to be clean - without any non-standard utilities
# but I want some logging of where the connections come from, so
# I use just this one utility to get the peer address
#
# This is from the netpipes package
PEER="`getpeername | cut -d ' ' -f 1`"

read_request

exit 0
§ ¦
Now telnet localhost 80 as in Section 26.2. If that works, and your log file is being properly appended, you can try connecting to http://localhost/ with a web browser like Netscape.
Notice also that the getsockname command (which tells you which of your own IP addresses the remote client connected to) could allow the script to serve pages from a different directory for each IP address. This is virtual domains in a nutshell [Groovy baby, I'm in a giant nutshell.... how do I get out?].
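As a sketch (the IP addresses and directories here are invented for illustration; in sh-httpd the local address would come from the netpipes getsockname utility, analogously to PEER above), the script could pick its document root like this:

```shell
#!/bin/sh
# Map the local IP the client connected to onto a per-domain document root.
# All addresses and paths below are hypothetical.
choose_docroot() {
    case "$1" in
        196.123.45.1) echo /usr/local/var/sh-www/domain1 ;;
        196.123.45.2) echo /usr/local/var/sh-www/domain2 ;;
        *)            echo /usr/local/var/sh-www ;;        # fallback
    esac
}

choose_docroot 196.123.45.2    # prints: /usr/local/var/sh-www/domain2
```

Inside the server one would then set DOCROOT from choose_docroot instead of hard-coding it.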
36.2 Installing and configuring Apache
Because all distributions package Apache in a different way, here I will assume Apache to have been installed from its source tree rather than from a .deb or .rpm package. You can refer to Section 24.1 on how to install Apache from its source .tar.gz file like any other GNU package. (You can even install it under Win95/NT and OS/2.) The source tree is of course available from The Apache Home Page <http://www.apache.org>. Here I will assume you have installed it with --prefix=/opt/apache/. In the process, Apache will dump a huge reference manual into /opt/apache/htdocs/manual/.
Apache has two legacy configuration files, access.conf and srm.conf. These files are now deprecated and should be left empty. A single configuration file /opt/apache/conf/httpd.conf may contain at minimum:
¨ ¥
ServerType standalone
ServerRoot "/opt/apache"
PidFile /opt/apache/logs/httpd.pid
ScoreBoardFile /opt/apache/logs/httpd.scoreboard
Port 80
User nobody
Group nobody
HostnameLookups Off
ServerAdmin [email protected]
UseCanonicalName On
ServerSignature On
DefaultType text/plain
ErrorLog /opt/apache/logs/error_log
LogLevel warn
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /opt/apache/logs/access_log common
DocumentRoot "/opt/apache/htdocs"
DirectoryIndex index.html
AccessFileName .htaccess
<Directory />
Options FollowSymLinks
AllowOverride None
Order Deny,Allow
Deny from All
</Directory>
<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>
<Directory "/opt/apache/htdocs">
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
Allow from all
</Directory>
<Directory "/opt/apache/htdocs/home/*/www">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
UserDir /opt/apache/htdocs/home/*/www
§ ¦
With the config file ready, you can move the index.html file above to /opt/apache/htdocs/. You will notice the complete Apache manual and a demo page already installed there, which you can move to another directory for the time being. Now run
¨ ¥
/opt/apache/bin/httpd -X
§ ¦
(The -X option runs httpd as a single process in the foreground, which is useful for testing.) Now let us go through the configuration directives used above:

ServerType As discussed in Section 29.2, some services can run standalone or from inetd (or xinetd). This directive can be exactly standalone or inetd. If inetd is chosen, then you will need to add an appropriate line into your inetd configuration, although a web server should almost certainly choose standalone mode.
ServerRoot This is the directory superstructure [see page 131] under which Apache is installed. It will always be the same as the value passed to --prefix=.
PidFile Many system services store their process ID in a file for shutdown and monitoring purposes. On most distributions the file is /var/run/httpd.pid; here it is /opt/apache/logs/httpd.pid.
ScoreBoardFile Used for communication between Apache parent and child processes on some non-UNIX systems.
Port TCP port to listen on for standalone servers.
User, Group This is important for security. It forces httpd to run with the privileges of the nobody user. If the web server is ever hacked, the attacker will not be able to gain more than the privileges of the nobody user.
HostnameLookups If you would like to force a reverse DNS lookup on every connecting host, set this directive to on. If you want every reverse lookup to be verified by a forward lookup, set this to double. This is for logging purposes only, since access control does a reverse and then a forward lookup anyway where needed. It should certainly be off if you want to reduce latency.
ServerAdmin Error messages include this email address.
UseCanonicalName If Apache has to return a URL for any reason, it will normally try to return the full name of the server. Setting this to off uses the exact hostname sent by the client.
ServerSignature Causes addition of the server name to HTML error messages.
DefaultType All files returned to the client have a type field saying how they should be displayed. Should Apache be unable to deduce the type, files are assumed to be of MIME type text/plain.
ErrorLog Where errors get logged, usually /var/log/httpd/error_log.
LogLevel How much info to log.
LogFormat Defines a new log format. Here we define a log format and call it common. Multiple LogFormat lines are allowed. Lots of interesting information can actually be logged: see /opt/apache/htdocs/manual/mod/mod_log_config.html for a full description.
CustomLog The log file and its (previously defined) format.
DocumentRoot This is the top level directory that client connections will see. The
string /opt/apache/htdocs/ will be prepended to any file lookup, and
hence a URL http://localhost/manual/index.html.en will return the
file /opt/apache/htdocs/manual/index.html.en.
DirectoryIndex This gives the default file to try to serve for URLs that contain only a directory name. If a file index.html does not exist under that directory, an index of the directory is sent to the client. Other common configurations use index.htm or default.html.
AccessFileName Before serving a file to a client, Apache reads additional directives from a file .htaccess in the same directory as the requested file. If a parent directory contains a .htaccess instead, that one will take priority. The .htaccess file contains directives that limit access to the directory, as discussed below.
The above is merely the general configuration of Apache. To actually serve pages, you need to define directories, each with a particular purpose, containing particular HTML or graphic files. The Apache configuration file is very much like an HTML
document. Sections are started with <section parameter> and ended with </section>.
The most common directive of this sort is <Directory /directory> which does such
directory definition. Before defining any directories, we need to limit access to the root
directory. This is critical for security:
¨ ¥
<Directory />
Options FollowSymLinks
Deny from All
Order Deny,Allow
AllowOverride None
</Directory>
§ ¦
This tells Apache about the root directory, giving clients very restrictive access to it.
The directives are [some of this is shamelessly plagiarised from the Apache manual]:
Options
The Options directive controls which server features are available in a partic-
ular directory. There is also the syntax +option or -option to include the options of
the parent directory. For example, Options +FollowSymLinks -Indexes.
FollowSymLinks The server will follow any symbolic links beneath the direc-
tory. Be careful about what symbolic links you have beneath directories
with FollowSymLinks. You can for example give everyone access to the
root directory by having a link ../../../ under htdocs — not what you
want.
ExecCGI Execution of CGI scripts is permitted.
Includes Server-side includes are permitted (more on this later).
IncludesNOEXEC Server-side includes are permitted, but the #exec command
and #include of CGI scripts are disabled.
Indexes If a client asks for a directory by name, and there is no index.html file (or whatever DirectoryIndex file you specified) present, then a listing of the contents of that directory is created and returned. For security you may want to turn this option off.
MultiViews Content negotiated MultiViews are allowed (more on this later).
SymLinksIfOwnerMatch The server will only follow symbolic links for which
the target file or directory is owned by the same user id as the link (more on
this later).
All All options except for MultiViews. This is the default setting.
Deny This specifies what hosts are not allowed to connect. You can specify a hostname
or IP address for example as:
¨ ¥
Deny from 10.1.2.3
Deny from 192.168.5.0/24
Deny from cranzgot.co.za
§ ¦
which will deny access to 10.1.2.3, all hosts beginning with 192.168.5., and all hosts ending in .cranzgot.co.za, including the host cranzgot.co.za.
Allow This specifies what hosts are allowed to connect, using the same syntax as
Deny.
Order If order is Deny,Allow then the Deny directives are checked first, and any
client which does not match a Deny directive or does match an Allow directive
will be allowed access to the server.
If order is Allow,Deny then the Allow directives are checked first; any client which does not match an Allow directive or does match a Deny directive will be denied access to the server.
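The two orderings can be summarized as a small decision rule. The following toy function models the rule just described (an illustration only, not Apache's actual implementation):

```shell
#!/bin/sh
# Toy model of the Order logic described above.
# usage: decide <order> <matches-deny: y|n> <matches-allow: y|n>
decide() {
    case "$1" in
        Deny,Allow)   # denied only when it matches Deny and not Allow
            if [ "$2" = y ] && [ "$3" = n ]
            then echo denied; else echo allowed; fi ;;
        Allow,Deny)   # allowed only when it matches Allow and not Deny
            if [ "$3" = y ] && [ "$2" = n ]
            then echo allowed; else echo denied; fi ;;
    esac
}

decide Deny,Allow y y    # matches both lists: prints "allowed"
decide Allow,Deny y y    # matches both lists: prints "denied"
```

In other words, under Deny,Allow the Allow directives have the last word, and under Allow,Deny the Deny directives do.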
AllowOverride In addition to the directives specified here, additional directives
will be read from the file specified by AccessFileName, usually called
.htaccess. This file would usually exist alongside your .html files; or oth-
erwise in a parent directory. If the file exists, its contents are read into the
current <Directory . . . > directive. AllowOverride says what directives
the .htaccess file is allowed to squash. The complete list can be found in
/opt/apache/htdocs/manual/mod/core.html
You can see above that we give very restrictive Options to the root directory, as well
as very restrictive access. The only server feature we allow is FollowSymLinks, then
we Deny any access, and then we remove the possibility that a .htaccess file could
override our restrictions.
The <Files . . . > directive sets restrictions on all files matching a particular regular
expression. As a security measure, we use it to prevent access to all .htaccess files
as follows:
¨ ¥
<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>
§ ¦
We are now finally ready to add actual web page directories. These take a less restrictive set of access controls:
¨ ¥
<Directory "/opt/apache/htdocs">
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
Allow from all
</Directory>
§ ¦
Now our users may require that Apache knows about their private web page directories, /www/ under each home directory. This is easy to add with the special UserDir directive:
¨ ¥
<Directory "/opt/apache/htdocs/home/*/www">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
UserDir /opt/apache/htdocs/home/*/www
§ ¦
is a reasonable compromise.
36.2.4 Aliasing
Sometimes HTML documents will want to refer to a file or graphic using a simple prefix, rather than a long directory name. Other times you want two different references to source the same file. The Alias directive creates virtual links between directories. For example, adding the following line means that a URL /icons/bomb.gif will serve the file /opt/apache/icons/bomb.gif:
¨ ¥
Alias /icons/ "/opt/apache/icons/"
§ ¦
You will find the directory lists generated by the above configuration rather bland. The
directive:
¨ ¥
IndexOptions FancyIndexing
§ ¦
causes nice descriptive icons to be printed to the left of the filename. Which icons match which file types is a tricky issue. You can start with:
¨ ¥
AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType (TXT,/icons/text.gif) text/*
AddIconByType (IMG,/icons/image2.gif) image/*
AddIconByType (SND,/icons/sound2.gif) audio/*
AddIconByType (VID,/icons/movie.gif) video/*
AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon /icons/a.gif .ps .eps
AddIcon /icons/layout.gif .html .shtml .htm
§ ¦
This requires the Alias directive above to be present. The default Apache configura-
tion contains a far more extensive map of file types.
Now if a client requests a file index.html whereas only a file index.html.gz exists, Apache will decompress it on the fly. Note that you must have the MultiViews option enabled.
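For the negotiated .gz file to be labeled correctly, the extension must also be mapped to an encoding. Typical Apache configurations carry lines such as the following (check your default httpd.conf for the authoritative list):

```
AddEncoding x-compress .Z
AddEncoding x-gzip .gz .tgz
```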
The LanguagePriority directive indicates the preferred language if the browser did
not specify any.
Now some files may contain a .koi8-r extension, indicating a Russian character-set encoding for the file. Many languages have such custom character sets. Russian files will be named webpage.html.ru.koi8-r. Apache must tell the web browser about the encoding type based on the extension. Directives for Japanese, Russian, and UTF-8 [UTF-8 is a Unicode character-set encoding useful for any language] are as follows:
¨ ¥
AddCharset ISO-2022-JP .jis
AddCharset KOI8-R .koi8-r
AddCharset UTF-8 .utf8
§ ¦
Once again, the default Apache configuration contains a far more extensive map of
languages and character sets.
Apache actually has a built-in programming language that interprets .shtml files as scripts. The output of such a script is returned to the client. Most of a typical .shtml file will be ordinary HTML, served unmodified. However, lines like:
¨ ¥
<!--#echo var="DATE_LOCAL" -->
§ ¦
will be interpreted, and their output included into the HTML — hence the name server-side includes. Server-side includes are ideal for HTML pages that contain mostly static
HTML with small bits of dynamic content. To demonstrate, add the following to your
httpd.conf:
¨ ¥
AddType text/html .shtml
AddHandler server-parsed .shtml
<Directory "/opt/apache/htdocs/ssi">
Options Includes
AllowOverride None
Order allow,deny
Allow from all
</Directory>
§ ¦
and then create a file footer.html containing anything you like. It is obvious how useful this is for creating many documents with the same banner, using a #include statement. If you are wondering what other variables you can print besides DATE_LOCAL, try the following:
¨ ¥
<HTML>
<PRE>
<!--#printenv -->
</PRE>
</HTML>
§ ¦
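A page that pulls in a shared footer.html file would use the #include element, for example (file names here are illustrative; the page would be saved with a .shtml extension under a directory where Includes is enabled, such as /opt/apache/htdocs/ssi/ above):

```
<HTML>
<BODY>
Page content here.
<!--#include virtual="footer.html" -->
</BODY>
</HTML>
```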
(I have actually never managed to figure out why CGI — Common Gateway Interface — is called that.) CGI is where a URL points to a script. What comes up in your browser is the output of the script (were it to be executed) instead of the contents of the script itself. To try this, create a file /opt/apache/htdocs/test.cgi:
¨ ¥
#!/bin/sh
echo 'Content-type: text/html'
echo
echo '<HTML>'
echo '<PRE>'
set
echo '</PRE>'
echo '</HTML>'
§ ¦
Make this script executable with chmod a+x test.cgi and test the output by running it on the command line. Add the line,
¨ ¥
AddHandler cgi-script .cgi
§ ¦
to your httpd.conf file. Next, modify your Options for the directory /opt/apache/htdocs to include ExecCGI like,
¨ ¥
<Directory "/opt/apache/htdocs">
Options Indexes FollowSymLinks MultiViews ExecCGI
AllowOverride All
Order allow,deny
Allow from all
</Directory>
§ ¦
This will show ordinary bash environment variables as well as more interesting variables like QUERY_STRING. Change your script to,
¨ ¥
#!/bin/sh
echo 'Content-type: text/html'
echo
echo '<HTML>'
echo '<PRE>'
# dump the table list of the template1 database
psql -d template1 -c '\d' 2>&1
echo '</PRE>'
echo '</HTML>'
§ ¦
This will dump the table list of the template1 database if it exists. Apache will have to run as a user that can access this database, which means changing User nobody to User postgres [note that you should really limit who can connect to the postgres database for security — see Section 38.4].
To create a functional form, use the HTML <FORM> tag as follows. A file /opt/apache/htdocs/test/form.html could contain:
¨ ¥
<HTML>
<FORM name="myform" action="test.cgi" method="get">
<TABLE>
<TR>
<TD colspan="2" align="center">
Please enter your personal details:
</TD>
</TR>
<TR>
<TD>Name:</TD><TD><INPUT type="text" name="name"></TD>
</TR>
<TR>
<TD>Email:</TD><TD><INPUT type="text" name="email"></TD>
</TR>
<TR>
<TD>Tel:</TD><TD><INPUT type="text" name="tel"></TD>
</TR>
<TR>
<TD colspan="2" align="center">
<INPUT type="submit" value="Submit">
</TD>
</TR>
</TABLE>
</FORM>
</HTML>
§ ¦
Note how this form calls our existing test.cgi script. Here is a script that adds the
entered data to a postgres SQL table:
¨ ¥
#!/bin/sh
echo 'Content-type: text/html'
echo

opts=`echo "$QUERY_STRING" | \
    sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q`

exit 0
§ ¦
Note how the first lines of the script remove all unwanted characters from QUERY_STRING. This is imperative for security, because shell scripts can easily execute commands should characters like $ and ` be present in a string.
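You can verify the effect of the sed filter on the command line. Here the query string carries an attempted command injection, and everything dangerous is stripped:

```shell
#!/bin/sh
# The same sed expression as in the CGI script: strip everything outside a
# safe character set, then turn '&' separators into spaces.
sanitize() {
    echo "$1" | sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q
}

sanitize 'name=Bob&cmd=`rm -rf /`'    # prints: name=Bob cmd=rm -rf /
```

The backquotes (and any $ signs) are deleted before the string can do harm, and -e q discards everything after the first line.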
The POST method sends the query text through stdin of the CGI script. Hence you
need to also change your opts= line to
¨ ¥
opts=`cat | \
    sed -e 's/[^A-Za-z0-9 %&+,.\/:=@_~-]//g' -e 's/&/ /g' -e q`
§ ¦
Running Apache as a privileged user has security implications. Another way to get
this script to execute as user postgres is to create a setuid binary. To do this, create a
file test.cgi by compiling the following C program.
¨ ¥
#include <unistd.h>

/* Become the owner of this binary (postgres, via the setuid bit) and run
   the real CGI script.  The script path is assumed; adjust to suit. */
int main (void)
{
    setreuid (geteuid (), geteuid ());
    execl ("/opt/apache/htdocs/test.sh", "test.sh", (char *) 0);
    return 1;
}
§ ¦
Then run chown postgres:www test.cgi and chmod a-w,o-rx,u+s test.cgi (or chmod 4550 test.cgi). Recreate your shell script as test.sh and go to the URL again. Apache runs test.cgi, which becomes user postgres and then executes the script as the postgres user. Even with Apache as User nobody your script will still work. Note how your setuid program is insecure: although it takes no arguments and performs only a single function, it accepts environment variables (or input from stdin) that could influence its functionality. If a login user could execute the script, they could send data via these variables that could cause the script to behave in an unforeseen way. An alternative is:
¨ ¥
#include <unistd.h>

/* As before, but execle() passes an explicitly empty environment to the
   script.  The script path is assumed; adjust to suit. */
int main (void)
{
    char *env[] = { (char *) 0 };
    setreuid (geteuid (), geteuid ());
    execle ("/opt/apache/htdocs/test.sh", "test.sh", (char *) 0, env);
    return 1;
}
§ ¦
This nullifies the environment before starting the CGI, thus forcing you to use the POST
method only. Because the only information that can be passed to the script is a sin-
gle line of text (via the -e q option to sed), and because that line of text is carefully
stripped of unwanted characters, we can be much more certain of security.
CGI execution is extremely slow if Apache has to invoke a shell script for each request. Apache has a number of facilities for built-in interpreters that parse script files with high efficiency. A well-known programming language developed specifically for the web is PHP. PHP can be downloaded as source from The PHP Home Page <http://www.php.net> and contains the usual GNU installation instructions.
Apache has the facility for adding functionality at run time using what it calls DSO (Dynamic Shared Object) files. This feature is for distribution vendors who want to ship split installs of Apache that enable users to install only the parts of Apache they like. This is conceptually the same as what we saw in Section 23.2: to give your program some extra feature provided by some library, you can either statically link the library to your program or compile the library as a shared .so file to be linked at run time. The difference here is that the library files are (usually) called mod_<name>.so and are stored in /opt/apache/libexec/. They are also only loaded if a LoadModule <name>_module line appears in httpd.conf. To enable DSO support, rebuild and reinstall Apache starting with:
¨ ¥
./configure --prefix=/opt/apache --enable-module=so
§ ¦
Any source package that creates an Apache module can now use the Apache utility /opt/apache/bin/apxs to query the current Apache installation, so you should make sure apxs is in your PATH. You can now follow the instructions for installing PHP, possibly beginning with ./configure --prefix=/opt/php --with-apxs=/opt/apache/bin/apxs --with-pgsql=/usr. (This assumes that you want to enable support for the postgres SQL database and have postgres previously installed as a package under /usr.) Finally, check that a file libphp4.so eventually ends up in /opt/apache/libexec/.
Your httpd.conf then needs to know about PHP scripts. Add the following lines,
¨ ¥
LoadModule php4_module /opt/apache/libexec/libphp4.so
AddModule mod_php4.c
AddType application/x-httpd-php .php
§ ¦
Virtual hosting is where a single web server serves the web pages of more than one
domain. Although the web browser seems to be connecting to a web site that is an
isolated entity, that web site may in fact be hosted alongside many others on the same
machine.
This is rather trivial to configure. Let us say that we have three domains
www.domain1.com, www.domain2.com and www.domain3.com. We would
like domains www.domain1.com and www.domain2.com to share IP address
196.123.45.1, while www.domain3.com has its own IP address of 196.123.45.2.
The sharing of a single IP address is called name-based virtual hosting, while the use of a
different IP address for each domain is called IP-based virtual hosting.
If our machine has one IP address, 196.123.45.1, we may need to configure a separate IP address on the same network card as follows (see Section 25.9):
¨ ¥
ifconfig eth0:1 196.123.45.2 netmask 255.255.255.0 up
§ ¦
Our httpd.conf can then contain the following sections (the NameVirtualHost line is needed for the two name-based domains sharing 196.123.45.1):
¨ ¥
NameVirtualHost 196.123.45.1

<VirtualHost 196.123.45.1>
ServerName www.domain1.com
DocumentRoot /opt/apache/htdocs/www.domain1.com/
</VirtualHost>

<VirtualHost 196.123.45.1>
ServerName www.domain2.com
DocumentRoot /opt/apache/htdocs/www.domain2.com/
</VirtualHost>
<VirtualHost 196.123.45.2>
ServerName www.domain3.com
DocumentRoot /opt/apache/htdocs/www.domain3.com/
</VirtualHost>
§ ¦
All that remains is to configure a correct DNS zone for each domain so that lookups of
www.domain1.com and www.domain2.com return 196.123.45.1 while lookups
of www.domain3.com return 196.123.45.2.
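In BIND zone-file terms the relevant records would look something like this (a sketch only; TTLs and the SOA, NS, and remaining records of each zone are omitted):

```
; in the zone for domain1.com:
www    IN  A  196.123.45.1
; in the zone for domain2.com:
www    IN  A  196.123.45.1
; in the zone for domain3.com:
www    IN  A  196.123.45.2
```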
You can then add index.html files to each directory.
Chapter 37
crond and atd
crond and atd are two very simple and important services that everyone should be familiar with. crond does the job of running commands periodically (daily, weekly), while atd's main feature is to run a command once at some future time.
These two services are so basic that we are not going to detail their package contents
and invocation.
37.1 /etc/crontab configuration file

The /etc/crontab file dictates a list of periodic jobs to be run — like updating the locate and whatis databases, rotating logs, and possibly performing backup tasks. If there is anything that needs to be done periodically, you can schedule that job in this file. /etc/crontab is read by crond on startup. crond will already be running on all but the most broken of UNIX systems.
/etc/crontab consists of single-line definitions of the time of the day/week/month at which a particular command should be run. Each line has the form

<time> <user> <executable>
<time> is a time pattern that the current time must match for the command to be executed. <user> tells under what user the command is to be executed. <executable> is the command to be run.
The time pattern gives the minute, hour, day of the month, month, and weekday against which the current time is compared. The comparison is done at the start of every single minute. If crond gets a match, it will execute the command. A simple time pattern is as follows:
¨ ¥
50 13 2 9 6 root /usr/bin/play /etc/theetone.wav
§ ¦
which will play the given WAV file on Sat Sep 2 13:50:00 every year, while
¨ ¥
50 13 2 * * root /usr/bin/play /etc/theetone.wav
§ ¦
will play at 13:50:00 on the second day of every month, while
¨ ¥
*/10 * * * 6 root /usr/bin/play /etc/theetone.wav
§ ¦
will play every 10 minutes the whole of Saturday. The / is a special notation meaning
“in steps of”.
In the above examples, the play command is executed as root.
The following is an actual /etc/crontab file:
¨ ¥
# Environment variables first
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# Time specs
30 20 * * * root /etc/cron-alarm.sh
35 19 * * * root /etc/cron-alarm.sh
58 18 * * * root /etc/cron-alarm.sh
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
Note that the # character is used for comments as usual. crond also allows you to
specify environment variables under which commands are to be run.
My own time additions, the first three entries, are there to remind me of the last three
Metro trains of the day.
The remaining entries are vendor supplied. The run-parts command is a simple script
that runs all the executables listed under /etc/cron.hourly, /etc/cron.daily, and so on.
Hence, if you have a script that needs to be run every day, but not at a specific
time, you needn’t edit your crontab file; rather, just place the script with the others in
/etc/cron.<interval>.
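For example, a daily job might be installed like this (myjob and its contents are hypothetical; copying into /etc/cron.daily/ requires root, so it is only shown in a comment):

```shell
# Create a small executable script; run-parts executes everything
# executable found in /etc/cron.daily/ once a day.
cat > myjob <<'EOF'
#!/bin/sh
# hypothetical daily task: note the date in a log file
date >> /var/tmp/myjob.log
EOF
chmod 755 myjob
# then, as root:
#   cp myjob /etc/cron.daily/
ls -l myjob
```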
My own /etc/cron.daily/ directory contains:
¨ ¥
total 14
drwxr-xr-x 2 root root 1024 Sep 2 13:22 .
drwxr-xr-x 59 root root 6144 Aug 31 13:11 ..
-rwxr-xr-x 1 root root 140 Aug 13 16:16 backup
-rwxr-xr-x 1 root root 51 Jun 16 1999 logrotate
-rwxr-xr-x 1 root root 390 Sep 14 1999 makewhatis.cron
-rwxr-xr-x 1 root root 459 Mar 25 1999 radiusd.cron.daily
-rwxr-xr-x 1 root root 99 Jul 23 23:48 slocate.cron
-rwxr-xr-x 1 root root 103 Sep 25 1999 tetex.cron
-rwxr-xr-x 1 root root 104 Aug 30 1999 tmpwatch
§ ¦
It is advisable to go through each of these now to see what your system is doing to
itself behind your back.
37.2 The at command
at will execute a command at some future time, and only once. I suppose it is essential
to know about, although I never used it myself until writing this chapter. at is the front
end to the atd daemon which, like crond, will almost certainly be running.
Try our WAV file example, remembering to press Ctrl-D to get the <EOT> (End
Of Text):
¨ ¥
[root@cericon /etc]# at 14:19
at> /usr/bin/play /etc/theetone.wav
at> <EOT>
warning: commands will be executed using /bin/sh
job 3 at 2000-09-02 14:19
§ ¦
Here a is the queue name, 3 is the job number, and 2000-09-02 14:19 is the sched-
uled time of execution. While play is executing, atq will give:
¨ ¥
3 2000-09-02 14:19 =
§ ¦
Chapter 38
postgres SQL server
This chapter will show you how to set up an SQL server for free.
38.1 SQL
Typically, the database tables sit in files managed by an SQL server daemon process.
The SQL server listens on a TCP socket for incoming requests from client
machines and services those requests.
SQL has become a de facto industry standard. However, the protocols (over TCP/IP)
via which SQL requests are sent differ from implementation to implementation.
SQL servers are a major branch of server software. Management of database tables
is actually a complicated affair. A good SQL server will properly streamline multiple
simultaneous requests that may access and modify rows in the same table. Doing this
efficiently, along with the many types of complex searches and cross-referencing, while
also ensuring data integrity, is a complex task.
38.2 postgres
postgres (PostgreSQL) is a free SQL server released under the BSD license.
postgres supports an extended subset of SQL92 (the definitive SQL standard) — it
does a lot of very nifty things that no other database can (it seems). About the only
commercial equivalent worth buying over postgres is a certain very expensive
industry leader. postgres runs on every flavour of UNIX and also on Windows NT.
The postgres documentation proudly states its capabilities.
postgres is also fairly dry. Most people ask why it doesn’t have a graphical
front-end. Considering that it runs on so many different platforms, it makes sense for
it to be purely a back-end engine. A graphical interface is a different kind of software
project, one that would probably support more than one type of database server at the
back, and possibly run under only one kind of graphical interface.
The postgres package consists of the following files:
Each of these has a man page, which you should skim to get an inkling of its purpose.
Further man pages will provide references to actual SQL commands. Try man l
select (explained further on):
¨ ¥
SELECT(l) SELECT(l)
NAME
SELECT - Retrieve rows from a table or view.
SYNOPSIS
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
expression [ AS name ] [, ...]
[ INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table ]
[ FROM table [ alias ] [, ...] ]
[ WHERE condition ]
[ GROUP BY column [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION [ ALL ] | INTERSECT | EXCEPT } select ]
[ ORDER BY column [ ASC | DESC | USING operator ] [, ...] ]
[ FOR UPDATE [ OF class_name [, ...] ] ]
LIMIT { count | ALL } [ { OFFSET | , } start ]
§ ¦
Most important is the enormous amount of HTML documentation that comes with
postgres. Point your web browser to /usr/doc/postgresql-?.?.?, then dive
into the admin, user, programmer, tutorial and postgres directories.
Finally, there are the start and stop scripts in /etc/rc.d/init.d/ and the direc-
tory in which the database tables themselves are stored: /var/lib/pgsql/.
38.4 Installing and initialising postgres
postgres can be gotten prepackaged for your favourite distribution. Simply install
the package and then follow these instructions.
Stop the postgres server if it is running; the init.d script may be called postgres
or postgresql:
¨ ¥
/etc/rc.d/init.d/postgresql stop
/etc/rc.d/init.d/postgres stop
§ ¦
Edit the init.d script to support TCP requests. There will be a line like the following,
to which you can add the -i option. Mine looks like:
¨ ¥
su -l postgres -c "/usr/bin/pg_ctl -D $PGDATA \
-p /usr/bin/postmaster -o ’-i -o -e’ start >/dev/null 2>&1"
§ ¦
which also (via the -o -e option) forces European date formats (28/4/1984 in-
stead of 4/28/1984). Note that hosts will not be able to connect unless you edit
your /var/lib/pgsql/data/pg_hba.conf file (/etc/postgresql/pg_hba.conf
on Debian) and add lines like,
¨ ¥
host mydatabase 192.168.4.7 255.255.255.255 trust
§ ¦
In either case, you should check this file to ensure that only trusted hosts can connect
to your database, or remove the -i option altogether if you are only connecting from
the local machine. To a limited extent, you can also restrict which users may connect
within this file.
Note that it would be nice if the UNIX domain socket that postgres listens
on (i.e. /tmp/.s.PGSQL.5432) had permissions 0770 instead of 0777. This
way you could limit connections to only those users belonging to the postgres
group. You can set this by searching for the C chmod command within
src/backend/libpq/pqcomm.c inside the postgres-7.0 sources. Later versions
may have added a feature to set the permissions on this socket.
To run postgres you need a user of that name. If you do not already have one then
enter,
¨ ¥
/usr/sbin/useradd postgres
§ ¦
The postgres init.d script initialises a template database on first run, so you may
have to start it twice.
Now you can create your own database. The following creates a database finance as
well as a postgres user finance. It does these creations as user postgres
(this is what the -U option is for). You should run these commands as user root, or as
user postgres without the -U postgres.
¨ ¥
/usr/sbin/useradd finance
createuser -U postgres --adduser --createdb finance
createdb -U finance finance
§ ¦
38.5 Querying your database with psql
Now that the database exists, you can begin running SQL queries:
¨ ¥
# psql -U finance
Welcome to psql, the PostgreSQL interactive terminal.
§ ¦
The above are postgres’s internal tables. Some are actual tables, while some are views
of tables (a view is a selective representation of an actual table).
38.6 An introduction to SQL
The following are 99% of the commands you are ever going to use. (Note that all SQL
commands require a semicolon at the end — you won’t be the first person to ask why
nothing happens when you press Enter without the semicolon.)
Creating tables
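A statement creating the table described below might look like this (people is the table name used in the listings that follow):

```sql
CREATE TABLE people ( name text, gender bool, address text );
```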
The created table will title the columns name, gender, and address. Columns are
typed: only the kind of data that was specified at the time of creation can go in that
column. In the case of gender, it can only be true or false for the boolean type, which
we will associate with the male and female genders. (There is probably no reason to
use the boolean type here: an integer or text field can often be far more descriptive and
flexible.) In the case of name and address, these can hold anything, since they are of
the text type, the most ubiquitous type of all.
Note: in the postgres documentation, a “column” is called an “attribute” for
historical reasons.
You are warned to choose types according to the kind of searches you are going to do,
not according to the data they hold. Here are most of the useful types you would like
to use, as well as their SQL92 equivalents. The types in bold are to be used in preference
to other similar types, for greater range or precision:
The full list of types available to postgres is:
Listing a table
The SELECT statement is the most widely used statement in SQL. It returns data from
tables and can do searches:
¨ ¥
finance=# SELECT * FROM PEOPLE;
name | gender | address
------+--------+---------
(0 rows)
§ ¦
Adding a column
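A column is added with ALTER TABLE; for example, the phone column used in a search further on could have been added with:

```sql
ALTER TABLE people ADD COLUMN phone text;
```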
Deleting/dropping a column
You cannot drop columns in postgres; you have to create a new table from the old
table without the column. How to do this will become obvious further on.
Deleting/dropping a table
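An entire table, with all its data, is removed with DROP TABLE (scratch is a hypothetical table name):

```sql
DROP TABLE scratch;
```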
Inserting rows
The value returned by an INSERT includes the oid (Object ID) of the new row.
postgres is an Object Relational database. This term gets thrown around a lot, but
really means that every table has a hidden column, called the oid column, that stores a
unique identity number for each row. The identity number is unique across the entire
database. Because it uniquely identifies rows across all tables, you could call the rows
“objects”. The oid feature is most useful to programmers.
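An insertion matching the people table might look like this (the address value here is made up):

```sql
INSERT INTO people (name, gender, address)
    VALUES ('Paul Sheer', true, 'Johannesburg');
```

postgres replies with something like the INSERT 20324 1 shown further down: the oid of the new row, then the number of rows inserted.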
Locating rows
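Since every row has an oid, a single row can be located by it; for example, using an oid quoted in an INSERT reply:

```sql
SELECT * FROM people WHERE oid = 20324;
```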
Here we create a new table and fill two of its columns from columns in our original
table:
¨ ¥
finance=# CREATE TABLE sitings (person text, place text, siting text);
CREATE
finance=# INSERT INTO sitings (person, place) SELECT name, address FROM people;
INSERT 20324 1
§ ¦
Deleting rows
¨ ¥
finance=# DELETE FROM people WHERE name = ’Paul Sheer’;
DELETE 1
§ ¦
Searches
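A wildcard search of the kind described below uses the LIKE operator; for example:

```sql
SELECT * FROM people WHERE name LIKE '%Paul%';
```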
The first % is a wildcard that matches any length of text before the Paul, while the
final % matches any text after — it is the usual way of searching within a field, instead of
trying for an exact match.
The possibilities are endless:
¨ ¥
SELECT * FROM people WHERE gender = true AND phone = ’8765432’;
§ ¦
The command,
¨ ¥
COPY people TO ’/tmp/people.txt’;
§ ¦
dumps the table people as text to /tmp/people.txt, while
¨ ¥
COPY people FROM ’/tmp/people.txt’;
§ ¦
inserts into the table people the rows from /tmp/people.txt. It assumes one line
per row, with a tab character between cells.
Note: unprintable characters are escaped with a backslash \ in both output and the
interpretation of input.
Hence it is simple to get data from another database. You just have to work out how to
dump it as text.
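As a sketch, one people row in that text format is a single line of tab-separated cells (the values are hypothetical; a boolean column dumps as t or f):

```shell
# One line per row, a tab between cells, unprintables escaped with '\'.
printf 'Paul Sheer\tt\tJohannesburg\n' > people.txt
cat people.txt
```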
When you have some very complicated set of tables in front of you, you are likely to
want to merge, select, search, and cross-reference them in innumerable ways to get the
information you want out of them.
Being able to efficiently query the database in this way is the true power of SQL, but
this is about as far as I am going to go here. The postgres documentation cited above
contains details on everything you can do.
Chapter 39
smbd — Samba NT Server
39.1 Introduction
A lot of emphasis has been placed on peaceful coexistence between Unix and Win-
dows. The Usenix Association has even created an annual conference (LISA/NT–July
14-17, 1999) around this theme. Unfortunately, the two systems come from very differ-
ent cultures and they have difficulty getting along without mediation. . . . and that, of
course, is Samba’s job. Samba <http://samba.org/> runs on Unix platforms, but
speaks to Windows clients like a native. It allows a Unix system to move into a Win-
dows ”Network Neighborhood” without causing a stir. Windows users can happily
access file and print services without knowing or caring that those services are being
offered by a Unix host.
All of this is managed through a protocol suite which is currently known as the
”Common Internet File System”, or CIFS <http://www.cifs.com>. This name was
introduced by Microsoft, and provides some insight into their hopes for the future. At
the heart of CIFS is the latest incarnation of the Server Message Block (SMB) protocol,
which has a long and tedious history. Samba is an open source CIFS implementation,
and is available for free from the http://samba.org/ mirror sites.
Samba and Windows are not the only ones to provide CIFS networking. OS/2
supports SMB file and print sharing, and there are commercial CIFS products for Mac-
intosh and other platforms (including several others for Unix). Samba has been ported
to a variety of non-Unix operating systems, including VMS, AmigaOS, & NetWare.
CIFS is also supported on dedicated file server platforms from a variety of vendors.
It started a long time ago, in the early days of the PC, when IBM and Sytec co-
developed a simple networking system designed for building small LANs. The sys-
tem included something called NetBIOS, or Network Basic Input Output System. Net-
BIOS was a chunk of software that was loaded into memory to provide an interface
between programs and the network hardware. It included an addressing scheme that
used 16-byte names to identify workstations and network-enabled applications. Next,
Microsoft added features to DOS that allowed disk I/O to be redirected to the NetBIOS
interface, which made disk space sharable over the LAN. The file-sharing protocol that
they used eventually became known as SMB, and now CIFS.
Lots of other software was also written to use the NetBIOS API (Application
Programmer’s Interface), which meant that it would never, ever, ever go away. Instead,
the workings beneath the API were cleverly gutted and replaced. NetBEUI (NetBIOS
Enhanced User Interface), introduced by IBM, provided a mechanism for passing Net-
BIOS packets over Token Ring and Ethernet. Others developed NetBIOS LAN emula-
tion over higher-level protocols including DECnet, IPX/SPX and, of course, TCP/IP.
NetBIOS and TCP/IP made an interesting team. The latter could be routed be-
tween interconnected networks (internetworks), but NetBIOS was designed for iso-
lated LANs. The trick was to map the 16-byte NetBIOS names to IP addresses so that
messages could actually find their way through a routed IP network. A mechanism
for doing just that was described in the Internet RFC1001 and RFC1002 documents. As
Windows evolved, Microsoft added two additional pieces to the SMB package. These
were service announcement, which is called ”browsing”, and a central authentication
and authorization service known as Windows NT Domain Control.
Andrew Tridgell, who is both tall and Australian, had a bit of a problem. He needed to
mount disk space from a Unix server on his DOS PC. Actually, this wasn’t the problem
at all because he had an NFS (Network File System) client for DOS and it worked just
fine. Unfortunately, he also had an application that required the NetBIOS interface.
Anyone who has ever tried to run multiple protocols under DOS knows that it can
be...er...quirky.
So Andrew chose the obvious solution. He wrote a packet sniffer, reverse en-
gineered the SMB protocol, and implemented it on the Unix box. Thus, he made the
Unix system appear to be a PC file server, which allowed him to mount shared filesys-
tems from the Unix server while concurrently running NetBIOS applications. Andrew
published his code in early 1992. There was a quick, but short succession of bug-fix
releases, and then he put the project aside. Occasionally he would get E’mail about it,
but he otherwise ignored it. Then one day, almost two years later, he decided to link
his wife’s Windows PC with his own Linux system. Lacking any better options, he
used his own server code. He was actually surprised when it worked.
Through his E’mail contacts, Andrew discovered that NetBIOS and SMB were ac-
tually (though nominally) documented. With this new information at his fingertips he
set to work again, but soon ran into another problem. He was contacted by a company
claiming trademark on the name that he had chosen for his server software. Rather
than cause a fuss, Andrew did a quick scan against a spell-checker dictionary, look-
ing for words containing the letters ”smb”. ”Samba” was in the list. Curiously, that
same word is not in the dictionary file that he uses today. (Perhaps they know it’s been
taken.)
The Samba project has grown mightily since then. Andrew now has a whole team
of programmers, scattered around the world, to help with Samba development. When
a new release is announced, thousands of copies are downloaded within days. Com-
mercial systems vendors, including Silicon Graphics, bundle Samba with their prod-
ucts. There are even Samba T-shirts available. Perhaps one of the best measures of the
success of Samba is that it was listed in the ”Halloween Documents”, a pair of internal
Microsoft memos that were leaked to the Open Source community. These memos list
Open Source products which Microsoft considers to be competitive threats. The abso-
lutely best measure of success, though, is that Andrew can still share the printer with
his wife.
Samba consists of two key programs, plus a bunch of other stuff that we’ll get to later.
The two key programs are smbd and nmbd. Their job is to implement the four basic
modern-day CIFS services, which are:
File and print services are, of course, the cornerstone of the CIFS suite. These
are provided by smbd, the SMB Daemon. Smbd also handles ”share mode” and ”user
mode” authentication and authorization. That is, you can protect shared file and print
services by requiring passwords. In share mode, the simplest and least recommended
scheme, a single password is assigned to the share itself; in user mode, access is granted
per user, each with an individual password.
It works like this: The clients send their NetBIOS names & IP addresses to the
NBNS server, which keeps the information in a simple database. When a client wants
to talk to another client, it sends the other client’s name to the NBNS server. If the
name is on the list, the NBNS hands back an IP address. You’ve got the name, look up
the number.
Clients on different subnets can all share the same NBNS server so, unlike broad-
cast, the point-to-point mechanism is not limited to the local LAN. In many ways the
NBNS is similar to the DNS, but the NBNS name list is almost completely dynamic
and there are few controls to ensure that only authorized clients can register names.
Conflicts can, and do, occur fairly easily.
Finally, there’s browsing. This is a whole ’nother kettle of worms, but Samba’s
nmbd handles it anyway. This is not the web browsing we know and love, but a brows-
able list of services (file and print shares) offered by the computers on a network.
On a LAN, the participating computers hold an election to decide which of them
will become the Local Master Browser (LMB). The ”winner” then identifies itself by
claiming a special NetBIOS name (in addition to any other names it may have). The
LMB’s job is to keep a list of available services, and it is this list that appears when you
click on the Windows ”Network Neighborhood” icon.
In addition to LMBs, there are Domain Master Browsers (DMBs). DMBs coordi-
nate browse lists across NT Domains, even on routed networks. Using the NBNS, an
LMB will locate its DMB to exchange and combine browse lists. Thus, the browse list
is propagated to all hosts in the NT Domain. Unfortunately, the synchronization times
are spread apart a bit. It can take more than an hour for a change on a remote subnet
to appear in the Network Neighborhood.
Other Stuff
Samba comes with a variety of utilities. The most commonly used are:
smbclient A simple SMB client, with an interface similar to that of the FTP utility. It
can be used from a Unix system to connect to a remote SMB share, transfer files,
and send files to remote print shares (printers).
nmblookup A NetBIOS name service client. Nmblookup can be used to find NetBIOS
names on a network, look up their IP addresses, and query a remote machine for
the list of names the machine believes it owns.
swat The Samba Web Administration Tool. Swat allows you to configure Samba re-
motely, using a web browser.
There are more, of course, but describing them would require explaining even
more bits and pieces of CIFS, SMB, and Samba. That’s where things really get tedious,
so we’ll leave it alone for now.
One of the cool things that you can do with a Windows box is use an SMB file share as
if it were a hard disk on your own machine. The N: drive can look, smell, feel, and act
like your own disk space, but it’s really disk space on some other computer somewhere
else on the network.
Linux systems can do this too, using the smbfs filesystem. Built from Samba
code, smbfs (which stands for SMB Filesystem) allows Linux to map a remote SMB
share into its directory structure. So, for example, the /mnt/zarquon directory might
actually be an SMB share, yet you can read, write, edit, delete, and copy the files in that
directory just as you would local files.
The smbfs is nifty, but it only works with Linux. In fact, it’s not even part of
the Samba suite. It is distributed with Samba as a courtesy and convenience. A more
general solution is the new smbsh (SMB shell, which is still under development at the
time of this writing). This is a cool gadget. It is run like a Unix shell, but it does some
funky fiddling with calls to Unix libraries. By intercepting these calls, smbsh can make
it look as though SMB shares are mounted. All of the read, write, etc. operations are
available to the smbsh user. Another feature of smbsh is that it works on a per-user,
per shell basis, while mounting a filesystem is a system-wide operation. This allows
for much finer-grained access controls.
Samba is configured using the smb.conf file. This is a simple text file designed to
look a lot like those *.ini files used in Windows. The goal, of course, is to give network
administrators familiar with Windows something comfortable to play with. Over time,
though, the number of things that can be configured in Samba has grown, and the
percentage of Network Admins willing to edit a Windows *.ini file has shrunk. For
some people, that makes managing the smb.conf file a bit daunting.
Still, learning the ins and outs of smb.conf is a worthwhile penance. Each of
the smb.conf variables has a purpose, and a lot of fine-tuning can be accomplished.
The file’s structure and contents are fully documented, so as to give administrators a running
head start, and smb.conf can be manipulated using swat, which at least makes it
nicer to look at.
The Present
Samba 2.0 was released in January 1999. One of the most significant and cool features
of the 2.0 release was improved speed. Ziff-Davis Publishing used their Netbench
software to benchmark Samba 2.0 on Linux against Windows NT4. They ran all of their
tests on the same PC hardware, and their results showed Samba’s throughput under
load to be at least twice that of NT. Samba is shipped with all major Linux distributions,
and Ziff-Davis tested three of those.
Another milestone was reached when Silicon Graphics (SGI) became the first
commercial Unix vendor to support Samba. In their December 1998 press release, they
claimed that their Origin series servers running Samba 2.0 were the most powerful line
of file servers for Windows clients available. SGI now offers commercial support for
Samba as do several other providers, many of which are listed on the Samba web site
(see http://samba.org/). Traditional Internet support is, of course, still available
via the comp.protocols.smb newsgroup and the [email protected] mailing list.
The Samba Team continues to work on new goodies. Current interests include
NT ACLs (Access Control Lists), support for LDAP (the Lightweight Directory Access
Protocol), NT Domain Control, and Microsoft’s DFS (Distributed File System).
The Future
Windows 2000 looms on the horizon like a lazy animal peeking its head over the edge
of its burrow while trying to decide whether or not to come out. No one is exactly sure
about the kind of animal it will be when it does appear, but folks are fairly certain that
it will have teeth.
Because of their dominance on the desktop, Microsoft gets to decide how CIFS
will grow. Windows 2000, like previous major operating system releases, will give us a
whole new critter to study. Based on the beta copies and the things that Microsoft has
said, here are some things to watch for:
CIFS Without NetBIOS Microsoft will attempt to decouple CIFS and NetBIOS. Net-
BIOS won’t go away, mind you, but it won’t be required for CIFS networking
either. Instead, the SMB protocol will be carried natively over TCP/IP. Name
lookups will occur via the DNS.
Dynamic DNS Microsoft will implement Dynamic DNS, a still-evolving system de-
signed by the IETF (Internet Engineering Task Force). Dynamic DNS allows names
to be added to a DNS server on-the-fly.
Kerberos V Microsoft has plans to use Kerberos V. The Microsoft K5 tickets are sup-
posed to contain a Privilege Attribute Certificate (PAC) <http://www.usenix.
org/publications/login/1997-11/embraces.html>, which will in-
clude user and group ID information from the Active Directory. Servers will
be looking for this PAC when they grant access to the services that they provide.
Thus, Kerberos may be used for both authentication and authorization.
Active Directory The Active Directory appears to be at the heart of Windows 2000
networking. It is likely that legacy NetBIOS services will register their names in
the Active Directory.
One certainty is that W2K (as it is often called) is, and will be, under close
scrutiny. Windows has already attracted the attention of some of the Internet Wonder-
land’s more curious inhabitants, including security analysts, standards groups, crack-
ers dens, and general all-purpose geeks. The business world, which has finally gotten
a taste of the freedom of Open Source Software, may be reluctant to return to the world
of proprietary, single-vendor solutions. Having the code in your hands is both reassur-
ing and empowering.
Whatever the next Windows animal looks like, it will be Samba’s job to help
it get along with its peers in the diverse world of the Internet. The Samba Team, a
microcosm of the Internet community, are among those watching W2K to see how it
develops. Watching does not go hand-in-hand with waiting, though, and Samba is
an on-going and open effort. Visit the Samba web site, join the mailing lists, and see
what’s going on.
Participate in the future
39.2 Configuring Samba
That said, configuring smbd is really easy. A typical LAN will require a UNIX
machine that can share /home/* directories to Windows clients, where each user
can log in as the name of their home directory. It must also act as a print share
that redirects print jobs through lpr, with jobs arriving in PostScript, the way we like
it. Consider a Windows machine divinian.cranzgot.co.za on a local LAN
192.168.3.0/24. The user of that machine would have a UNIX login psheer on
the server cericon.cranzgot.co.za.
The usual place for Samba’s configuration file on most distributions is
/etc/samba/smb.conf. A minimalist configuration file to perform the above functions
might be:
¨ ¥
[global]
workgroup = MYGROUP
server string = Samba Server
hosts allow = 192.168. 127.
printcap name = /etc/printcap
load printers = yes
printing = bsd
log file = /var/log/samba/%m.log
max log size = 0
security = user
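The [global] section above would be followed by share sections; a minimal sketch of [homes] and [printers] entries matching the stated goals (these exact parameters are an assumption, not a quoted listing):

```
[homes]
   comment = Home Directories
   browseable = no
   writable = yes

[printers]
   comment = All Printers
   path = /var/spool/samba
   printable = yes
   browseable = no
```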
The SMB protocol stores passwords differently from UNIX. It therefore needs its own
password file, usually /etc/samba/smbpasswd. There is also a mapping between
UNIX logins and Samba logins in /etc/samba/smbusers, but for simplicity we will
use the same UNIX name as the Samba login name. We can add a new UNIX user and
Samba user, and set both their passwords, with
¨ ¥
useradd psheer
smbadduser psheer:psheer
smbpasswd psheer
passwd psheer
§ ¦
Note that with SMB there are all sorts of issues with case interpretation — an incor-
rectly typed password could still work with Samba but obviously won’t with UNIX.
To start Samba, run the familiar,
¨ ¥
/etc/init.d/smbd start
( /etc/rc.d/init.d/smbd start )
( /etc/init.d/samba start )
§ ¦
For good measure, there should also be a proper DNS configuration with forward and
reverse lookups for all client machines.
At this point you can test your Samba server from the UNIX side. Linux has native
support for SMB shares with the smbfs filesystem. Try mounting a share served by
the local machine:
¨ ¥
mkdir -p /mnt/smb
mount -t smbfs -o username=psheer,password=12345 //cericon/psheer /mnt/smb
§ ¦
Your log file will be appended with:
¨ ¥
cericon (192.168.3.2) connect to service psheer as user psheer (uid=500, gid=500) (pid 10854)
§ ¦
The smbclient utility is a generic tool for issuing SMB requests, but here is
most handy for printing. Make sure your printer daemon is running (and working)
and then try,
¨ ¥
echo hello | smbclient //cericon/lp 12345 -U psheer -c ’print -’
§ ¦
which will create a small entry in the lp print queue. Your log file will be appended
with:
¨ ¥
cericon (192.168.3.2) connect to service lp as user psheer (uid=500, gid=500) (pid 13281)
§ ¦
39.3 Configuring Windows
Next, you need to Log Off from the Start menu and log back in as your Samba user.
Finally, go to Run. . . in the Start menu and enter \\cericon\psheer. You will be
prompted for a password, which you should enter as the same one given to the
smbpasswd program above.
This should bring up your home directory like you have probably never seen it before.
39.4 Configuring a Windows printer
Under Settings in your Start menu, you will be able to add new printers. Your UNIX
lp print queue is visible as the \\cericon\lp network printer, and should be entered
as such in the configuration wizard. For a printer driver, you should choose “Apple
Color LaserWriter”, since this driver just produces regular PostScript output. In the
printer driver options you should also select to optimise for “portability”.
39.5 Configuring swat
swat is a service run from inetd that listens for HTTP connections on port 901. It
allows complete remote management of Samba from a web browser. To configure it, add
the service swat 901/tcp to your /etc/services file, and the following to your
/etc/inetd.conf file:
¨ ¥
swat stream tcp nowait root /usr/sbin/tcpd /usr/sbin/swat
§ ¦
being careful who you allow connections from. If you are running xinetd, create a
file /etc/xinetd.d/swat:
¨ ¥
service swat
{
port = 901
socket_type = stream
wait = no
only_from = localhost 192.168.0.0/16
user = root
server = /usr/sbin/swat
server_args = -s /etc/samba/smb.conf
log_on_failure += USERID
disable = no
}
§ ¦
After restarting inetd (or xinetd) you can point your web browser to
http://cericon:901/. The web page interface is extremely easy to use and, be-
ing written by the Samba developers themselves, can be trusted to produce working
configurations. The web page also gives a convenient interface to all the documenta-
tion. Do note that it will completely write over your existing configuration file.
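The /etc/services step above can be scripted. The following is a hedged sketch that operates on a local copy of the file, so it is harmless to try; on the real file you would edit /etc/services itself and then signal inetd:

```shell
# Work on a copy of /etc/services so this can be run without side effects.
cp /etc/services ./services.copy 2>/dev/null || : > ./services.copy
# Append the swat entry only if one is not already present:
grep -q '^swat' ./services.copy || printf 'swat\t\t901/tcp\n' >> ./services.copy
grep '^swat' ./services.copy
# On the real system you would then run:  killall -HUP inetd
```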
Windows NT caveats
Windows SMB servers compete to be the name server of their domain by version num-
ber and uptime. By this we again mean the Windows name service and not the DNS
service. How exactly this works I will not cover here &Probably because I have no idea what
I am talking about.-, but do be aware that configuring a Samba server on a network of
many NT machines, and getting it to work, can be a nightmare. A solution once at-
tempted was to shut down all machines on the LAN, then pick one as the domain
server, then bring it up first after waiting an hour for all possible timeouts to have
elapsed. After verifying that it was working properly, the rest of the machines were
booted.
Then of course don’t forget your nmblookup command.
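An nmblookup query is the quickest way of seeing who is answering to a NetBIOS name. Hedged examples, reusing the cericon host from this chapter (these need a live Windows network, so no output is shown):

```
nmblookup cericon
nmblookup -M <workgroup-name>
```

The -M form asks for the master browser of a workgroup, which is useful when diagnosing the election problems just described.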
Chapter 40
named — Domain Name Server
Keep this window open throughout the entire setup and testing procedure. From now on,
when I refer to messages, I am referring to a message in this window.
Documentation
The man pages for named are hostname(7), named-xfer(8), named(8), and ndc(8).
The man pages reference a document called the “Name Server Operations Guide
for BIND”. What they actually mean is a text file /usr/doc/bind-8.2/bog/file.lst or a PostScript file
/usr/doc/bind-8.2/bog/file.psf for printing.
The problem with some of this documentation is that it is still based
on the old (now deprecated) named.boot configuration file. There is a
program /usr/doc/bind-8.2/named-bootconf/named-bootconf that reads a
named.boot file from stdin and writes a named.conf file to stdout. I found it useful
to echo "old config line" | named-bootconf to see what a new style equiv-
alent would be.
The most important info is in /usr/doc/bind-8.2/html which contains a
complete reference to configuration.
There are also FAQ documents in /usr/doc/bind-8.2/misc and various
theses on security. /usr/doc/bind-8.2/misc/style.txt contains the recommended
layout of the configuration files for consistent spacing and readability. Finally,
/usr/doc/bind-8.2/rfc contains the relevant RFCs (see Section 13.5).
Configuration files
There is only one main configuration file for named: /etc/named.conf. The named
service once used a file /etc/named.boot but this has been scrapped. If there is a
named.boot file in your /etc directory then it is not being used, except possibly by
a very old version of bind.
The named.conf file will have a line in it of the form directory "/var/named"; or
directory "/etc/named";. This directory holds various files containing textual
lists of name to IP address mappings. The following example is a nameserver for a
company that has been given a range of IP addresses (196.28.133.20–30), as well as
one single IP address (160.124.182.44). It also must support a range of internal IP
addresses (192.168.2.0–255). The trick is not to think about how everything works.
If you just copy and edit things in a consistent fashion, carefully reading the comments,
this will work fine.
¨ ¥
                IN      NS      ns1.cranzgot.co.za.
                IN      NS      ns2.cranzgot.co.za.
                IN      A       160.124.182.44
                IN      MX      10 mail1.cranzgot.co.za.
                IN      MX      20 mail2.cranzgot.co.za.

ns1             IN      A       196.28.144.1
ns2             IN      A       196.28.144.2
ftp             IN      A       196.28.133.3
pc1             IN      A       192.168.2.1
pc2             IN      A       192.168.2.2
pc3             IN      A       192.168.2.3
pc4             IN      A       192.168.2.4
§ ¦
¨ ¥
                IN      NS      localhost.
1               IN      PTR     localhost.
§ ¦
¨ ¥
                IN      NS      localhost.
1               IN      PTR     pc1.cranzgot.co.za.
2               IN      PTR     pc2.cranzgot.co.za.
3               IN      PTR     pc3.cranzgot.co.za.
4               IN      PTR     pc4.cranzgot.co.za.
§ ¦
¨ ¥
                IN      NS      ns1.cranzgot.co.za.
                IN      NS      ns2.cranzgot.co.za.
1               IN      PTR     ns1.cranzgot.co.za.
2               IN      PTR     ns2.cranzgot.co.za.
3               IN      PTR     ftp.cranzgot.co.za.
§ ¦
¨ ¥
                IN      NS      ns1.cranzgot.co.za.
                IN      NS      ns2.cranzgot.co.za.
                IN      PTR     www.cranzgot.co.za.
§ ¦
If you have made typing errors, or named files incorrectly, you will get appropriate
error messages. Novice administrators are wont to edit named configuration files
and restart named without checking /var/log/messages for errors. NEVER do
this.
The options section in our case specifies only one parameter: the directory for
locating any files.
/usr/doc/bind-8.2/html/options.html has a complete list of options.
The lines zone "." {. . . will be present in almost all nameserver configurations.
They tell named that the whole Internet is governed by the file named.ca. named.ca in
turn contains the list of root nameservers.
The lines zone "cranzgot.co.za" {. . . say that the information for forward lookups is
located in the file named.cranzgot.co.za.
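The full named.conf listing is not reproduced here, but the two zone declarations just described would follow this pattern (a sketch consistent with the file names mentioned in the text, not the original listing):

```
zone "." {
        type hint;
        file "named.ca";
};

zone "cranzgot.co.za" {
        type master;
        file "named.cranzgot.co.za";
};
```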
Each of the above named. files has a similar format. They begin with a $TTL line and
then an @ IN SOA record. TTL stands for Time To Live, the default expiry time for all
subsequent entries. This not only prevents a No default TTL set. . . warning message,
but really tells the rest of the Internet how long to cache an entry. If you plan on mov-
ing your site soon/often, set this to a smaller value. SOA stands for Start of Authority.
The hostname on the first line specifies the authority for that domain, and the adjacent
<user>.<hostname> specifies the email address of the responsible person.
The next few lines contain timeout specifications for cached data and data propa-
gation across the net. These are reasonable defaults, but if you would like to tune these
values, consult the relevant documentation listed above. The values are all in seconds.
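Putting these pieces together, the top of such a zone file looks like the following sketch. Only the host and domain names come from the text; the timeout values and the postmaster address are illustrative assumptions:

```
$TTL 259200
@       IN      SOA     ns1.cranzgot.co.za. postmaster.cranzgot.co.za. (
                        2000012101      ; serial
                        10800           ; refresh
                        3600            ; retry
                        604800          ; expire
                        86400 )         ; minimum
```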
The serial number for the file (e.g. 2000012101) is used to tell when a change
has been made and hence that new data should be propagated to other servers. When
updating the file in any way, this serial number should be incremented. The format
is conventionally YYYYMMDDxx — exactly ten digits. xx begins with, say, 01 and is
incremented with each change made during a day.
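The YYYYMMDDxx convention is easy to generate from the shell; a quick sketch:

```shell
# Build a conventional zone serial: today's date plus a two-digit revision.
serial="$(date +%Y%m%d)01"
echo "$serial"
# Check that it is exactly ten digits long:
printf '%s' "$serial" | wc -c
```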
Always be careful to properly end qualified hostnames with a dot, since failing to
do so causes named to append a further domain.
Empty hostnames
An omitted hostname is substituted with the domain itself. This notation also makes
for more elegant files. For example
¨ ¥
IN NS ns1.cranzgot.co.za.
§ ¦
is the same as
¨ ¥
cranzgot.co.za. IN NS ns1.cranzgot.co.za.
§ ¦
The most basic types of record are the A and PTR records. They simply associate a
hostname with an IP number, or an IP number with a hostname, respectively. You
should not have more than one host associated with a particular IP number.
The CNAME record says that a host is just an alias to another host. So rather have
¨ ¥
ns1 IN A 196.28.144.1
mail1 IN CNAME ns1.cranzgot.co.za.
§ ¦
than,
¨ ¥
ns1 IN A 196.28.144.1
mail1 IN A 196.28.144.1
§ ¦
Configuring named for dialup use
If you have a dialup connection, the nameserver should be configured as what is called
a caching-only nameserver. Of course there is no such thing as a caching-only name-
server — it just means that the named. files have only a few essential records in them.
The point of a caching server is to prevent spurious DNS lookups that may eat mo-
dem bandwidth or cause a dial-on-demand server to initiate a dialout. It also pre-
vents applications blocking waiting for DNS lookup. (A typical example of this is
sendmail, which blocks for a couple of minutes when a machine is turned on without
the network plugged in; and netscape 4, which tries to look up the IP address of
news.<localdomain>.)
The /etc/named.conf file should look as follows. Replace <nameserver> with
the IP address of the nameserver your ISP has given you. Your local machine name is
assumed to be cericon.priv.ate. (The following listings omit superfluous
comments and newlines for brevity.)
¨ ¥
options {
forwarders {
<nameserver>;
};
directory "/var/named";
};
§ ¦
Dynamic IP addresses
The one contingency of dialup machines is that IP addresses are often dynamically
assigned. So your 192.168. addresses aren’t going to apply. Probably one way to
get around this is to get a feel for what IP addresses you are likely to get by dialling in
a few times. Assuming you know that your ISP always gives you 196.26.x.x, you
can have a reverse lookup file named.196.26 with nothing in it. This will just cause
reverse lookups to fail instead of blocking.
This is actually a bad idea because an application may legitimately need to re-
verse lookup in this range. The real complete solution would involve creating a script
to modify the named.conf file and restart named upon each dialup.
For instance, pppd (from the ppp-2.x.x package) executes a user defined script
upon a successful dial. This script would be run by pppd after determining the new IP
address. The script should create a complete named configuration based on the current
IP and then restart named.
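As a sketch of what such a script must compute, here is how the reverse-zone name can be derived from the assigned address. In a real /etc/ppp/ip-up script the local IP arrives as $4; the 196.26 network is the hypothetical ISP range from above:

```shell
# The local IP address, as pppd would pass it to /etc/ppp/ip-up in $4:
IPLOCAL=196.26.1.2
# The first two octets give the network...
NET=$(echo "$IPLOCAL" | cut -d. -f1-2)
# ...and reversing them gives the in-addr.arpa zone name:
REV=$(echo "$NET" | awk -F. '{print $2 "." $1}')
echo "zone \"$REV.in-addr.arpa\" { type master; file \"named.$NET\"; };"
```

This prints a zone declaration for 26.196.in-addr.arpa referring to a file named.196.26, which the script would write into the named configuration before restarting named.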
In Section 41.3 we show a dynamic DNS configuration that does this.
Both of these plans may be unnecessary. It is probably best to identify the par-
ticular application that is causing a spurious dial-out, or causing a block, and then
apply your creativity for the particular case. For instance, in my own case, a setup had
netscape taking minutes to start up — rather irritating to the user. I immediately
diagnosed that netscape was trying to do a reverse lookup of some sort. An strace
revealed that it was actually trying to find a news server on the local domain. Simply
creating a news record pointing to the local machine fixed the problem &Actually it could
also have been fixed in the Netscape configuration where the news server can be specified.-.
Secondary or slave DNS servers
named can operate as a backup server to another server, also called a slave or secondary
server.
Like the caching-only server, there is really no such thing as a secondary server. It’s just
the same named running with reduced information.
Let’s say we would like ns2.cranzgot.co.za to be a secondary to
ns1.cranzgot.co.za. The named.conf file would look as follows:
¨ ¥
options {
directory "/var/named";
// query-source address * port 53;
};
zone "cranzgot.co.za" {
type slave;
file "named.cranzgot.co.za";
masters {
196.28.144.1;
};
};
§ ¦
Where an entry has a “master” in it, you must supply the appropriate file. Where
an entry has a “slave” in it, named will automatically download the file from
196.28.144.1 (i.e. ns1.cranzgot.co.za) the first time a lookup is required from
that domain.
And that’s DNS!
Chapter 41
Point to Point Protocol — Dialup Networking
Dialup networking is unreliable and difficult to configure. This is simply because tele-
phones were not designed for data. However, considering that the telephone network
is by far the largest electronic network on the globe, it makes sense to make use of it.
This is why modems were created. On the other hand, the more recent ISDN is
slightly more expensive and a better choice for all but home dialup. See Section 41.6
for more info.
Basic Dialup
Dialing in involves putting your username and password into /etc/ppp/pap-secrets
and /etc/ppp/chap-secrets (although only one of the files will be used), and then
running the following command at a shell prompt:
¨ ¥
pppd connect \
"chat -S -s -v \
'' 'AT S7=45 S0=0 L1 V1 X4 &c1 E1 Q0' \
OK ATDT<tel-number> CONNECT '' \
name: <username> assword: '\q<password>' \
con: ppp" \
/dev/<modem> 57600 debug crtscts modem lock nodetach \
hide-password defaultroute \
user <username> \
noauth
§ ¦
This is a minimalist’s dial-in command, and it is specific to my ISP only. Don’t use the
exact command unless you have an account with the Internet Solution ISP in South
Africa, prior to the year 2000.
The command-line options are explained as follows:
connect <script> This is the script that pppd is going to use to start things up.
When you use a modem manually (as you will be shown further below), you
need to go through the steps of initialising the modem, causing a dial, connecting,
logging in, and finally telling the remote computer that you would like to start
modem data communication mode, called the point to point protocol, or PPP. The
<script> is the automation of this manual procedure.
/dev/tty?? This is the device you are going to use. It will usually be
/dev/ttyS0, /dev/ttyS1, /dev/ttyS2 or /dev/ttyS3.
57600 The speed the modem is to be set to. This is only the speed between the PC and
the modem, and has nothing to do with the actual data throughput. It should be
set as high as possible except in the case of very old machines whose serial ports
may possibly only handle 38400. It’s best to choose 115200 unless it doesn’t
work.
lock Create a UUCP style lock file in /var/lock/. This is just a file of the form
/var/lock/LCK..tty?? that tells other applications that the serial device is in
use. For this reason, you must not call the device /dev/modem or /dev/cua?.
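The lock file name is derived mechanically from the device name, which is exactly why two names for one device defeat the locking. A quick sketch:

```shell
# Derive the UUCP-style lock file name for a given serial device. A symlink
# like /dev/modem would produce a different lock file for the same hardware,
# so the real device name must always be used.
DEV=/dev/ttyS0
LOCK="/var/lock/LCK..$(basename "$DEV")"
echo "$LOCK"
```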
nodetach Don’t go into the background. This allows you to watch pppd run and
stop it with ˆC.
defaultroute Create an IP route after PPP comes alive. Henceforth, packets will go
to the right place.
hide-password Do not show the password in the logs. This is important for security.
user <username> Specifies the line from the /etc/ppp/chap-secrets and
/etc/ppp/pap-secrets files to use. There is usually only one.
To determine the list of expect–send sequences, you need to do a manual dial in. The
command
¨ ¥
dip -t
§ ¦
¨ ¥
c2-ctn-icon:ppp
Entering PPP mode.
Async interface address is unnumbered (FastEthernet0)
Your IP address is 196.34.157.148. MTU is 1500 bytes
§ ¦
Now you can modify the above chat script as you need. The kinds of things that will
differ are trivial: like having login: instead of name:. Some also require you to type
something instead of ppp, and some require nothing to be typed after your password.
Some further require nothing to be typed at all, thus immediately entering PPP mode.
Note that dip creates UUCP lock files as explained in Section 34.4.
If you run the pppd command above, you will get output something like this:
¨ ¥
send (AT S7=45 S0=0 L1 V1 X4 &c1 E1 Q0ˆM)
expect (OK)
AT S7=45 S0=0 L1 V1 X4 &c1 E1 Q0ˆMˆM
OK
-- got it
send (ATDT4068500ˆM)
expect (CONNECT)
ˆM
ATDT4068500ˆMˆM
CONNECT
-- got it
send (ˆM)
expect (name:)
45333/ARQ/V90/LAPM/V42BISˆM
Checking authorization, Please wait...ˆM
username:
-- got it
send (psheerˆM)
expect (assword:)
psheerˆM
password:
-- got it
send (??????)
expect (con:)
ˆM
ˆM
c2-ctn-icon:
-- got it
send (pppˆM)
Serial connection established.
Using interface ppp0
Connect: ppp0 <--> /dev/ttyS0
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x88c5a54f> <pcomp> <accomp>]
rcvd [LCP ConfReq id=0x3d <asyncmap 0xa0000> <magic 0x3435476c> <pcomp> <accomp>]
sent [LCP ConfAck id=0x3d <asyncmap 0xa0000> <magic 0x3435476c> <pcomp> <accomp>]
rcvd [LCP ConfAck id=0x1 <asyncmap 0x0> <magic 0x88c5a54f> <pcomp> <accomp>]
sent [IPCP ConfReq id=0x1 <addr 192.168.3.9> <compress VJ 0f 01>]
sent [CCP ConfReq id=0x1 <deflate 15> <deflate(old#) 15> <bsd v1 15>]
rcvd [IPCP ConfReq id=0x45 <addr 168.209.2.67>]
sent [IPCP ConfAck id=0x45 <addr 168.209.2.67>]
rcvd [IPCP ConfRej id=0x1 <compress VJ 0f 01>]
sent [IPCP ConfReq id=0x2 <addr 192.168.3.9>]
rcvd [LCP ProtRej id=0x3e 80 fd 01 01 00 0f 1a 04 78 00 18 04 78 00 15 03 2f]
rcvd [IPCP ConfNak id=0x2 <addr 196.34.157.131>]
sent [IPCP ConfReq id=0x3 <addr 196.34.157.131>]
rcvd [IPCP ConfAck id=0x3 <addr 196.34.157.131>]
local IP address 196.34.25.95
remote IP address 168.209.2.67
Script /etc/ppp/ip-up started (pid 671)
Script /etc/ppp/ip-up finished (pid 671), status = 0x0
Terminating on signal 2.
Script /etc/ppp/ip-down started (pid 701)
sent [LCP TermReq id=0x2 "User request"]
rcvd [LCP TermAck id=0x2]
§ ¦
You can see the expect–send sequences working, so it’s easy to correct if you made a
mistake somewhere.
At this point you might want to type route -n and ifconfig in another terminal:
¨ ¥
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
168.209.2.67 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 168.209.2.69 0.0.0.0 UG 0 0 0 ppp0
§ ¦
¨ ¥
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:3924 Metric:1
RX packets:2547933 errors:0 dropped:0 overruns:0 frame:0
TX packets:2547933 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
§ ¦
This clearly shows what pppd has done: created both a network device and a route to it.
If your
name server is configured, you should now be able to ping metalab.unc.edu or
some well known host.
Note that pppd creates UUCP lock files as explained in Section 34.4.
Demand-dial, masquerading
Dial-on-demand really just involves adding the demand option to the pppd command-
line above. The other way of doing dial-on-demand is with the diald package, but
here we discuss the pppd implementation.
With the demand option, you will notice that spurious dial-outs take place. You
need to add some filtering rules to ensure that only the services you are interested in
cause a dial-out. This is not ideal since there is still the possibility of other services
connecting on ports outside of the 1-1024 range. In addition you should also make
sure there are no services running except the ones you are interested in.
A firewall script might look as follows. This uses the old ipfwadm command,
possibly called /sbin/ipfwadm-wrapper on your machine &The newer ipchains command
is soon to be superseded by a completely different packet filtering system in kernel 2.4, hence I see
no reason to change from ipfwadm at this point.-. The ports 21, 22, 25, 53, 80, 113 and 119
represent ftp, ssh (Secure Shell), smtp (Mail), domain (DNS), www, auth and nntp
(News) services respectively. The auth service is not needed, but should be kept open
so that connecting services get a failure instead of waiting for a timeout. You can com-
ment out the auth line in /etc/inetd.conf for security.
¨ ¥
# enable ip forwarding and dynamic address changing
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_dynaddr
§ ¦
The pppd script becomes (note that you need pppd-2.3.11 or later for this to work
as I have it here):
¨ ¥
pppd connect \
"chat -S -s -v \
'' 'AT S7=45 S0=0 L1 V1 X4 &c1 E1 Q0' \
OK ATDT<tel-number> CONNECT '' \
name: <username> assword: '\q<password>' \
con: ppp" \
/dev/ttyS0 57600 debug crtscts modem lock nodetach \
hide-password defaultroute \
user <username> \
demand \
:10.112.112.112 \
idle 180 \
holdoff 30
§ ¦
(See also Chapter 40 for other named setups, and Chapter 27 for configuring your ma-
chine’s DNS lookups.)
Dynamic DNS
Having pppd give IP connectivity on demand is not enough. You also need your
DNS configuration to change dynamically to reflect the current IP address that
your ISP has assigned you.
¨ ¥
# $1 $2 $3 $4 $5 $6
# interface-name tty-device speed local-IP-address remote-IP-address ipparam
mkdir /etc/named-dynamic/ >& /dev/null
IN NS $HOST.$DOMAIN.
IN PTR $HOST.$DOMAIN.
EOF
killall -1 named
§ ¦
The options dialup yes; notify no; forward first tell bind, respectively, to use
the link as little as possible; not to send notify messages (there are no slave servers on
our LAN to notify); and to try forwarding requests to the nameserver under forwarders
before trying to answer them itself.
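In named.conf these settings sit in the options section; a hedged sketch combining them with the forwarders setting shown earlier:

```
options {
        directory "/var/named";
        dialup yes;
        notify no;
        forward first;
        forwarders {
                <nameserver>;
        };
};
```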
There is one problem with this configuration. Queued DNS requests are flushed
when the configuration is reread with killall -1 named. When you try, say
ftp sunsite.unc.edu, the first DNS request by ftp causes a dial-out, but then
is discarded. The next DNS request (18 seconds later — options timeout:18
attempts:4) also doesn’t make it (dial-outs take more than 30 seconds on my ma-
chine). Only the third request gets through. What is really needed is a DNS program
designed especially for masquerading dynamically-assigned-IP servers.
The above scripts are probably overkill, so use them sparingly. For example,
there is probably no application that really needs forward and reverse lookups on the
ppp0 device, hence you can do with a DNS configuration that doesn’t need restarting
on every dial-out. The bind documentation promises better support for dialup servers
in the future.
There is a further option: to use dnrd, a DNS package written especially for
dial-out servers. It was not created with dial-on-demand in mind, though, hence it has
some limitations.
Dial-in servers
pppd is really just a way to initiate a network device over a serial port, regardless
of whether you initiate or listen for a connection. As long as there is a serial
connection between two machines, pppd will negotiate a link.
To listen for a pppd dial-in, you need only add the following line to your
/etc/inittab file:
¨ ¥
S0:2345:respawn:/sbin/mgetty -s 115200 ttyS0
§ ¦
The proxyarp setting adds the remote client to the ARP tables. This enables your client
to connect through to the Internet on the other side of the connection without extra
routes. The file /etc/ppp/chap-secrets can be filled with lines like,
¨ ¥
dialup * <passwd> 192.168.254.123
§ ¦
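The pppd options that accompany this dial-in setup are not reproduced above. The following is a hedged sketch of what an options file for the line might contain, reusing the 192.168.254.123 address from chap-secrets; the local address 192.168.254.1 and the file name are assumptions:

```
# /etc/ppp/options.ttyS0 (hypothetical file name)
192.168.254.1:192.168.254.123
auth
proxyarp
modem
crtscts
lock
```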
You should be careful to have a proper DNS configuration for forward and reverse
lookups of your pppd IP addresses. This is so that no services block with long timeouts,
and also so that other Internet machines will be friendly to your users’ connections.
Note that the above also supports faxes, logins, voice and uucp (see Section
34.3) on the same modem, because mgetty only starts pppd if it sees an LCP re-
quest (part of the PPP protocol). If you just want PPP, read the config files in
/etc/mgetty+sendfax/ (Debian /etc/mgetty/) to disable the other services.
Using tcpdump
If a dial-out does occur unexpectedly, you can run tcpdump to dump packets going
to your ppp0 device. This will probably highlight the error. You can then look at the
TCP port of the service and try to figure out what process the packet might have come
from. The command is:
¨ ¥
tcpdump -n -N -f -i ppp0
§ ¦
ISDN instead of Modems
A lot of companies see a regular modem as the best way to get connected to the
Internet. Because ISDN is considered esoteric, they may not have looked at it as an
option. In fact, ISDN is preferable everywhere except for single-user dialup (i.e. home
use).
For those who are not familiar with ISDN, this paragraph will give you a quick
summary. ISDN stands for Integrated Services Digital Network. ISDN lines are like regu-
lar telephone lines, except that an ISDN line comes with two analogue and two digital
channels. The analogue channels are regular telephone lines in every respect — just
plug your phone in and start making calls. The digital lines each support 64 kilo-
bits/second data transfer — only ISDN communication equipment is meant to plug
into these. To communicate over the digital line you need to dial an ISP just like with
a regular telephone. Now it used to be that only very expensive ISDN routers could
work with ISDN, but ISDN modems and ISDN ISA/PCI cards have become cheap
enough to allow anyone to use ISDN, while most telephone companies will install an
ISDN line as readily as a regular telephone line. So you may ask what’s with the “In-
tegrated Services”. I suppose it was thought that this service, in both allowing data
as well as regular telephone, would be the ubiquitous communications service. This
remains to be seen.
If you have a hundred ISDN boxes to setup, it would be well worth it to buy
internal ISDN cards: they are really low priced these days. Configuring these is not
covered here for now. However, if you have one ISDN box to configure and no clue
about ISDN, an internal card is going to waste your time. In this case an ISDN external
modem is the best option. These are devices designed as drop-in replacements for a
normal external modem — they plug into your serial port and accept (probably ignore)
the same AT command strings as a normal modem.
Although these are supposed to be drop-in replacements, ISDN is a completely
different technology. In particular, there are different protocols for different countries
which have to be specified to the modem. Some cheap tricks are:
For an Asyscom modem running on a particular ISP here in South Africa we had
to enter the AT commands:
¨ ¥
ATB4
ATP=17
§ ¦
This should give you an idea of what you may have to change to get ISDN work-
ing; it is by no means a product endorsement or an exhaustive treatment. There is also a
large amount of HOWTO information on ISDN out there on the Internet. You should
have no problem finding reading material.
Be wary when setting up ISDN. ISDN dials really fast: it can dial out a thousand
times in a few minutes, which is expensive.
Chapter 42
Kernel Source, Modules, Hardware
This chapter will explain how to configure, patch and build a kernel from source.
A kernel installation consists of the kernel boot image, the kernel modules, the
System.map file, the kernel headers (needed only for development) and various sup-
port daemons (already provided by your distribution). This constitutes everything
that is called “Linux” under L INUX , and is built from about 50 megabytes of C
code of around 1.5 million lines. This is also why you should rather call L INUX
“GNU/Linux”, because most of the important base libraries and utilities, including the
C compiler, were the efforts of the Free Software Foundation who began the GNU
project &On the other hand, I personally feel that a critical part of L INUX is the X Window System which
runs on all U NIX’s and is not GNU. It comprises over 2 million lines of code, containing copyrights that go
back to 1985. “GNU/X-Windows-System/Linux” anyone? . -
• The L INUX kernel image is a 400–600 kilobyte file that sits in /boot/ (See
Chapter 31). If you look in this directory you may see several of them; which one
to boot you can probably choose at boot time, through lilo.
The kernel in /boot/ is compressed. That is, it is gzip compressed, and is actually
about twice the size when unpacked into memory on boot.
• The kernel also has detached parts called modules. These all sit in
• Next there is the System.map file in /boot also. It is used by klogd to resolve
kernel address references to symbols, so as to write logs about them, and then
also by depmod to work out module dependencies (what modules need what other
modules to be loaded first).
• The “various support daemons” should be running already. Since 2.2 these have
been reduced to klogd only. The other kernel daemons that appear to be running
are generated by the kernel itself.
Modules, insmod command and siblings
A module is usually a device driver pertaining to some device node generated with
the mknod command, or already existing in the /dev/ directory. For instance, the
SCSI driver automatically locks onto device major = 8, minor = 0, 1,. . . , when it loads;
and the Sound module onto device major = 14, minor = 3 (/dev/dsp), and others. The
modules people most often play with are SCSI, Ethernet, and Sound modules. There
are also many modules that support extra features instead of hardware.
Modules are loaded with the insmod command, and removed with the rmmod
command. This is somewhat like the operation of linking shown in the Makefile on
page 219. To list currently loaded modules, use lsmod. Try the following (kernel 2.4
paths are different and are given in parentheses):
¨ ¥
insmod /lib/modules/<version>/fs/fat.o
( insmod /lib/modules/<version>/kernel/fs/fat/fat.o )
lsmod
rmmod fat
lsmod
§ ¦
depmod is normally run at boot time, although you can run it manually at any time.
The lsmod listing also shows module dependencies in square brackets:
¨ ¥
Module Size Used by
de4x5 41396 1 (autoclean)
parport_probe 3204 0 (autoclean)
parport_pc 5832 1 (autoclean)
lp 4648 0 (autoclean)
parport 7320 1 (autoclean) [parport_probe parport_pc lp]
slip 7932 2 (autoclean)
slhc 4504 1 (autoclean) [slip]
sb 33812 0
uart401 6224 0 [sb]
sound 57464 0 [sb uart401]
soundlow 420 0 [sound]
Interrupts, IO-ports and DMA Channels
A loaded module that drives hardware will often consume IO-ports, IRQs, and pos-
sibly a DMA channel as explained in Chapter 3. You can get a full list of occupied
resources from the /proc/ directory:
¨ ¥
[root@cericon]# cat /proc/ioports
0000-001f : dma1
0020-003f : pic1
0040-005f : timer
0060-006f : keyboard
0070-007f : rtc
0080-008f : dma page reg
00a0-00bf : pic2
00c0-00df : dma2
00f0-00ff : fpu
0170-0177 : ide1
01f0-01f7 : ide0
0220-022f : soundblaster
02f8-02ff : serial(auto)
0330-0333 : MPU-401 UART
0376-0376 : ide1
0378-037a : parport0
0388-038b : OPL3/OPL2
03c0-03df : vga+
03f0-03f5 : floppy
03f6-03f6 : ide0
03f7-03f7 : floppy DIR
03f8-03ff : serial(auto)
e400-e47f : DC21140 (eth0)
f000-f007 : ide0
f008-f00f : ide1
CPU0
0: 8409034 XT-PIC timer
1: SoundBlaster8
2: floppy
4: cascade
5: SoundBlaster16
§ ¦
The above configuration is typical. Note that the second column of the IRQ listing
shows the number of interrupt signals received from the device. Moving my mouse a
little, and listing the IRQs again, gives me:
¨ ¥
3: 104851 XT-PIC serial
§ ¦
showing that several hundred interrupts have since been received. Another useful entry is
/proc/devices, which shows which major device numbers are allocated and being
used. It is extremely useful for seeing which peripherals are “alive” on your system.
Module options and device configuration
Device modules often need information about their hardware configuration. For in-
stance, ISA device drivers need to know the IRQ and IO-port that the ISA card is phys-
ically configured to access. This information is passed to the module as module options
that the module uses to initialise itself. Note that most devices will not need options
at all. PCI cards mostly auto-detect; it is mostly ISA cards that require these options.
There are five ways to pass options to a module.
1. If a module is compiled into the kernel, then the module will be initialised at
boot time. lilo passes module options to the kernel from the command-line at
the LILO: prompt. For instance, at the LILO: prompt, you can type &See Section
4.4-:
¨ ¥
linux aha1542=<portbase>[,<buson>,<busoff>[,<dmaspeed>]]
§ ¦
to initialise the Adaptec 1542 SCSI driver. What these options are,
and exactly what goes in them can be gotten from reading the file
/usr/src/linux-<version>/drivers/scsi/aha1542.c. Near the top of
the file are comments explaining the meaning of these options.
2. If you are using LOADLIN.EXE or some other DOS or Windows kernel loader,
then it too can take similar options. I will not go into these.
3. /etc/lilo.conf can take the append = option as discussed on page 306. This
passes options to the kernel as though you had typed them at the LILO: prompt,
¨ ¥
append = aha1542=<portbase>[,<buson>,<busoff>[,<dmaspeed>]]
§ ¦
4. The insmod and modprobe commands can take options that are passed to the
module. These are vastly different from the options passed with append =. For
instance, with append = you can give a compiled-in Ethernet module options like,
¨ ¥
append = ether=9,0x300,0xd0000,0xd4000,eth0
append = ether=0,0,eth1
§ ¦
Note that the 0xd0000,0xd4000 are only applicable to a few Ethernet modules
and are usually omitted. Also, the 0’s in ether=0,0,eth1 mean to try auto-detect.
To find out what options a module takes, you can use the modinfo command. For
example, modinfo shows that the wd driver is one of the few Ethernet drivers for
which you can set the RAM usage &This has not been discussed, but cards can sometimes use
areas of memory directly.-.
¨ ¥
[root@cericon]# modinfo -p /lib/modules/<version>/net/wd.o
( [root@cericon]# modinfo -p /lib/modules/<version>/kernel/drivers/net/wd.o )
io int array (min = 1, max = 4)
irq int array (min = 1, max = 4)
mem int array (min = 1, max = 4)
mem_end int array (min = 1, max = 4)
§ ¦
You may like to see a complete summary of all module options with examples of each
of the five ways of passing options. No such summary exists at this point, simply
because there is no overall consistency, and because people are mostly interested in
getting one particular device to work which will doubtless have peculiarities best dis-
cussed in a specialised document. Further, some specialised modules are mostly used
in compiled-out form, while others are mostly used in compiled-in form.
To get an old or esoteric device working, it is best to read the appropriate HOWTO
documents. These are: BootPrompt-HOWTO, Ethernet-HOWTO and Sound-HOWTO.
The device may also be documented in /usr/src/linux-<version>/Documentation/ or
under one of its subdirectories, like sound/ and networking/. This is documen-
tation written by the driver authors themselves. Of particular interest is the file
/usr/src/linux/Documentation/networking/net-modules.txt, which, al-
though outdated, has a fairly comprehensive list of networking modules and the
module options they take. Another source of documentation is the driver C code
itself, like the aha1542.c example above. It may explain the /etc/lilo.conf or
/etc/modules.conf options to use, but will often be quite cryptic. A driver is often
written with only one of compiled-in or compiled-out support in mind (even though
it really supports both) — rather choose whether to compile-in or compile-out based
on what is implied in the documentation or C source.
Further examples on getting common devices to work now follow.

42.6 Configuring various devices

Only a few devices are discussed here. See the documentation sources above for more
info. I am going to concentrate here on what is normally done.
Plug and play (PnP) ISA sound cards (like Sound Blaster cards) are possibly the most
popular devices that people have gotten working under LINUX. Here we use
the sound card example to show how to get a PnP ISA card working in a few minutes.
This is of course applicable to cards other than sound.
There is a utility called isapnp. It takes one argument, the file
/etc/isapnp.conf, and configures all ISA plug and play devices to the IRQs and
IO-ports specified therein. /etc/isapnp.conf is a complicated file but can be
generated with another utility, pnpdump. pnpdump outputs an example isapnp.conf
file to stdout, which contains IRQ and IO-port values allowed by your devices. You
must edit these to unused values. Alternatively, you can use pnpdump --config to get a
/etc/isapnp.conf file with the correct IRQ, IO-port and DMA channels automatically
guessed based on an examination of the /proc/ entries. This comes down to,
¨ ¥
[root@cericon]# pnpdump --config | grep -v '^\(#.*\|\)$' > /etc/isapnp.conf
[root@cericon]# isapnp /etc/isapnp.conf
§ ¦
which gets any ISA PnP card configured with just two commands. Note that the
/etc/isapnp.gone file can be used to make pnpdump avoid using certain IRQ and
IO-ports. Mine contains,
¨ ¥
IO 0x378,2
IRQ 7
§ ¦
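To decide which IRQs to reserve in /etc/isapnp.gone (or which values to edit into isapnp.conf), you first need to know which IRQs are already taken. A small sketch of that check, run here against an inline sample of /proc/interrupts so it is reproducible (on a real system you would read /proc/interrupts directly; the sample values are hypothetical):

```shell
#!/bin/sh
# List the IRQ numbers currently claimed, from a saved copy of
# /proc/interrupts. The sample below is hypothetical.
f=$(mktemp)
cat <<'EOF' > "$f"
  0:    8409034   XT-PIC  timer
  1:       9243   XT-PIC  keyboard
  5:       1037   XT-PIC  SoundBlaster16
EOF
# The first field is "<irq>:"; strip the colon and print the number.
awk '{ sub(":", "", $1); print $1 }' "$f"
```

Any IRQ not in this list (and not reserved by your CMOS) is a candidate for the new card.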
If you get no kernel or other errors, then the devices are working.
Now we want to set up dynamic loading of the module. Remove all the sound and
other modules with rmmod -a and/or manually, and then try:
¨ ¥
aumix
§ ¦
Then try:
¨ ¥
playmidi <somefile>.mid
§ ¦
Note that if you had to comment out the alias lines, then a kernel message
like modprobe: Can’t locate module sound-slot-0 would result. This
indicates that the kernel is attempting a /sbin/modprobe sound-slot-0: a
cue to insert an alias line. Actually, sound-service-0-0,1,2,3,4 are the
/dev/mixer,sequencer,midi,dsp,audio devices respectively. sound-slot-0
means a card that should supply all of these. The post-install option means to run
an additional command after installing the sb module; this takes care of the Adlib
sequencer driver &I was tempted to try removing the post-install line and adding an alias
sound-service-0-1 adlib_card. This works, but not if you run aumix before playmidi — *shrug*-.
Parallel port
Merely make sure that your IRQ and IO-port match those in your CMOS (see Section
3.3), and that they do not conflict with any other devices.
Here I will demonstrate non-PnP ISA cards and PCI cards using Ethernet devices as an
example. (NIC stands for Network Interface Card, i.e. an Ethernet 10 or 100 Mb card.)
For old ISA cards with jumpers, you will need to check your /proc/ files for
unused IRQ and IO-ports and then physically set the jumpers. Now you can do a
modprobe as usual, for example:
¨ ¥
modinfo -p ne
modprobe ne io=0x300 irq=9
§ ¦
Of course, for dynamic loading, your /etc/modules.conf file must have the lines:
¨ ¥
alias eth0 ne
options ne io=0x300 irq=9
§ ¦
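The alias/options pair above always has the same shape, so a tiny helper (a sketch using the example's names; this is not a standard tool) can emit the two modules.conf lines for any driver, IO-port and IRQ:

```shell
#!/bin/sh
# Emit the "alias" and "options" lines for a modules.conf entry.
gen_modconf() {
    # $1=interface  $2=module  $3=io  $4=irq
    printf 'alias %s %s\noptions %s io=%s irq=%s\n' "$1" "$2" "$2" "$3" "$4"
}
gen_modconf eth0 ne 0x300 9
```

Appending the output to /etc/modules.conf reproduces the configuration shown above.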
On some occasions you will come across a card that has software configurable
jumpers, like PnP, but which can only be configured using a DOS utility. In this case
compiling the module into the kernel will cause it to be autoprobed on startup without
needing any other configuration.
A worst case scenario is a card whose make is unknown, as well as its IRQ/IO-ports.
The chip number on the card may sometimes give you a hint (grep the kernel sources
for this number), but not always. To get such a card working, compile in support for
the several drivers that you think the card is likely to match. Experience will help you make
better guesses. If one of your guesses is correct, your card will almost certainly be
discovered on reboot. You can find its IRQ/IO-port values in /proc/ and run a dmesg
to see the autoprobe message line; the message will begin with eth0: and contain
some info about the driver. This information can be used if you decide later to use
modules instead of your custom kernel.
As explained, PCI devices almost never require IRQ or IO-ports to be given as
options. So long as you have the correct module, a simple
¨ ¥
modprobe <module>
§ ¦
will always work. Finding the correct module can still be a problem however, because
suppliers will call a card all sorts of marketable things besides the actual chipset it is
compatible with. There is a utility scanpci (which is actually part of X) that checks
your PCI slots for PCI devices. Running scanpci might output something like:
¨ ¥
...
pci bus 0x0 cardnum 0x09 function 0x0000: vendor 0x1011 device 0x0009
Digital DC21140 10/100 Mb/s Ethernet
pci bus 0x0 cardnum 0x0b function 0x0000: vendor 0x8086 device 0x1229
Intel 82557/8/9 10/100MBit network controller
pci bus 0x0 cardnum 0x0c function 0x0000: vendor 0x1274 device 0x1371
Ensoniq es1371
§ ¦
Another utility is lspci from the pciutils package, which gives comprehensive
information where scanpci sometimes gives none. Then a simple script (kernel 2.4
paths in braces again),
¨ ¥
for i in /lib/modules/<version>/net/* ; do strings $i \
| grep -q -i 21140 && echo $i ; done
( for i in /lib/modules/<version>/kernel/drivers/net/* \
; do strings $i | grep -q -i 21140 && echo $i ; done )
§ ¦
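To see what the search above actually matches, you can exercise the same grep against a scratch directory of fake module files (a sketch; real modules are binaries, which is why the book's version pipes them through strings first):

```shell
#!/bin/sh
# Fake "module" files in a temporary directory; only tulip.o mentions
# the DC21140 chip, so only its name should be printed.
dir=$(mktemp -d)
printf 'DC21140 rev 2.0\n' > "$dir/tulip.o"
printf 'RTL8139 chip\n' > "$dir/rtl8139.o"
for i in "$dir"/*.o ; do
    grep -q -i 21140 "$i" && basename "$i"
done
```

This prints tulip.o, mirroring how the real search narrows the candidate drivers.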
faithfully outputs three modules de4x5.o, eepro100.o and tulip.o, of which two
are correct. On another system lspci gave,
¨ ¥
...
00:08.0 Ethernet controller: Macronix, Inc. [MXIC] MX987x5 (rev 20)
00:0a.0 Ethernet controller: Accton Technology Corporation SMC2-1211TX (rev 10)
§ ¦
and the same for ...grep... Accton gave rtl8139.o and tulip.o (the former of
which was correct), and for ...grep... Macronix (or even 987) gave tulip.o, which
hung the machine. I have yet to get that card working, although Eddie across the
room claims he got a similar card working fine. Cards are cheap — there are enough
working brands to not have to waste your time on difficult ones.
The scanpci output just above also shows the popular Ensoniq Sound card, some-
times built into motherboards. Simply adding the line
¨ ¥
alias sound es1371
§ ¦
to your modules.conf file will get this card working. It is relatively easy to find the
type of card from the card itself — Ensoniq cards actually have es1371 printed on one
of the chips.
There are a lot of sound (and other) cards whose manufacturers refuse to supply the
Free software community with specs. Disclosure of programming information would enable
L INUX users to buy their cards; Free software developers would produce a driver at no cost.
Actually, manufacturers’ reasons are often just pigheadedness.
If you have more than one Ethernet card it is easy to specify both in your
modules.conf file as shown in Section 42.5 above. Modules compiled into the kernel
only probe a single card (eth0) by default. Adding the line,
¨ ¥
append = "ether=0,0,eth1 ether=0,0,eth2 ether=0,0,eth3"
§ ¦
will cause eth1, eth2 and eth3 to be probed as well. Further, replacing the 0’s with
actual values can force certain interfaces to certain physical cards. If all your cards are
PCI however, you will have to get the order of assignment by experimentation &Do the
probes happen in order of PCI slot number?-.
If you have two of the same card, your kernel may complain when you try to
load the same module twice. The -o option to insmod specifies a different internal
name for the driver to trick the kernel into thinking that the driver is not really loaded:
¨ ¥
alias eth0 3c509
alias eth1 3c509
options eth0 -o 3c509-0 io=0x280 irq=5
options eth1 -o 3c509-1 io=0x300 irq=7
§ ¦
However with the following two PCI cards this was not necessary:
¨ ¥
alias eth0 rtl8139
alias eth1 rtl8139
§ ¦
SCSI disks
SCSI (pronounced scuzzy) stands for Small Computer System Interface. SCSI is a ribbon
spec and electronic protocol for communicating between devices and computers. Like
your IDE ribbons, SCSI ribbons can connect to their own SCSI hard disks. SCSI ribbons
have gone through several versions to make SCSI faster; the latest “Ultra-Wide” SCSI
ribbons are thin, with a dense array of pins. Unlike your IDE, SCSI can also connect tape
drives, scanners, and many other types of peripherals. SCSI theoretically allows mul-
tiple computers to share the same device, although I have not seen this implemented
in practice. Because many U NIX hardware platforms only support SCSI, it has become
an integral part of U NIX operating systems.
SCSIs also introduce the concept of LUNs (Logical Unit Numbers), buses and IDs.
These are just numbers given to each device in order of the SCSI cards you are using
(if more than one), the SCSI cables on those cards, and the SCSI devices on those
cables — the SCSI standard was designed to support a great many of these. The
kernel assigns each SCSI drive in sequence as it finds them: /dev/sda, /dev/sdb,
etc., so these details are usually irrelevant.
An enormous amount could be said about SCSI, but the bare bones is that, for 90%
of situations, insmod <pci-scsi-driver> is all you are going to need. You can
then immediately begin accessing the device through /dev/sd? for disks, /dev/st?
for tapes, /dev/scd? for CDROMs, or /dev/sg? for scanners &Scanner user programs
will have docs on what devices they access.-. SCSI cards often also come with their own BIOS
that you can enter on startup (like your CMOS). This will enable you to set certain things.
In some cases, where your distribution compiles-out certain modules, you may have to
load one of sd_mod.o, st.o, sr_mod.o or sg.o respectively. The core scsi_mod.o
module may also need loading, and /dev/ devices may need to be created. A safe bet
is to run,
¨ ¥
cd /dev
./MAKEDEV -v sd
./MAKEDEV -v st0 st1 st2 st3
./MAKEDEV -v scd0 scd1 scd2 scd3
./MAKEDEV -v sg
§ ¦
to ensure that all necessary device files exist in the first place.
It is recommended that you compile into your kernel support for the SCSI
card (also called the SCSI host adapter) that you have, as well as support for tapes,
CDROMs, etc. When your system next boots, everything will just autoprobe. An ex-
ample system with a SCSI disk and tape gives the following in bootup:
¨ ¥
(scsi0) <Adaptec AIC-7895 Ultra SCSI host adapter> found at PCI 0/12/0
(scsi0) Wide Channel A, SCSI ID=7, 32/255 SCBs
(scsi0) Cables present (Int-50 YES, Int-68 YES, Ext-68 YES)
(scsi0) Illegal cable configuration!! Only two
(scsi0) connectors on the SCSI controller may be in use at a time!
(scsi0) Downloading sequencer code... 384 instructions downloaded
(scsi1) <Adaptec AIC-7895 Ultra SCSI host adapter> found at PCI 0/12/1
(scsi1) Wide Channel B, SCSI ID=7, 32/255 SCBs
(scsi1) Downloading sequencer code... 384 instructions downloaded
scsi0 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.28/3.2.4
<Adaptec AIC-7895 Ultra SCSI host adapter>
scsi1 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.28/3.2.4
§ ¦
You should also check Section 31.5 to find out how to boot SCSI disks when the
needed module ... is on a file system ... inside a SCSI disk ... that needs the module.
This is the most important section to read regarding SCSI. You may be used to IDE
ribbons that just plug in and work. SCSI ribbons are not of this variety; they need to be
impedance matched and terminated. These are electrical technicians’ terms. Basically,
it means that you must use high quality SCSI ribbons and terminate your SCSI device.
SCSI ribbons allow many SCSI disks/tapes to be connected to one ribbon. Terminating
means setting certain jumpers or switches on the last devices on the ribbon. It may
also mean plugging the last cable connector into something else. Your adaptor
documentation and disk documentation should explain what to do. If you terminate
incorrectly, everything may work fine, but you may get disk errors later in the life of
the machine.
Also note that some SCSI devices have automatic termination.
CD Writers
A system with an ATAPI (IDE) CD-Writer and an ordinary CDROM will give a
message on bootup like,
¨ ¥
hda: FUJITSU MPE3084AE, ATA DISK drive
hdb: CD-ROM 50X L, ATAPI CDROM drive
hdd: Hewlett-Packard CD-Writer Plus 9300, ATAPI CDROM drive
§ ¦
We will explain how to get these to work in this section. (Note that these devices
should give BIOS messages before LILO: starts to indicate that they are correctly
installed.)
The /etc/modules.conf lines to get the CD Writer working are:
¨ ¥
alias scd0 sr_mod # load sr_mod upon access of /dev/scd0
alias scsi_hostadapter ide-scsi # SCSI hostadaptor emulation
options ide-cd ignore="hda hdc hdd" # Our normal IDE CD is on /dev/hdb
§ ¦
The alias scd0 line must be left out if sr_mod is compiled into the kernel — search
your /lib/modules/<version>/ directory. Note that the kernel does not support
ATAPI CD Writers directly. The ide-scsi module emulates a SCSI adaptor on behalf
of the ATAPI CDROM. CD Writer software expects to speak to /dev/scd?, and the
ide-scsi module makes this device appear like a real SCSI CD Writer &Real SCSI CD
Writers are much more expensive.-. There is one caveat: your ordinary IDE CDROM driver,
ide-cd, will also want to probe your CD Writer as though it were a normal CDROM. The
ignore option makes the ide-cd module overlook any drives that should not be
probed — on this system, this would be the hard disk, CD Writer and non-existent sec-
ondary master. However, there is no way of giving an ignore option to a compiled-in
ide-cd module (which is how many distributions ship), hence read on.
An alternative is to compile in support for ide-scsi and completely leave out
support for ide-cd. Your normal CDROM will work perfectly as a read only CDROM
under SCSI emulation &Even with music CD’s.-. This means setting the relevant sections
of your kernel configuration menu:
¨ ¥
<*> Enhanced IDE/MFM/RLL disk/cdrom/tape/floppy support
< > Include IDE/ATAPI CDROM support
<*> SCSI emulation support
§ ¦
No further configuration is needed, and on bootup, you will find messages like:
¨ ¥
scsi0 : SCSI host adapter emulation for IDE ATAPI devices
scsi : 1 host.
Vendor: E-IDE Model: CD-ROM 50X L Rev: 12
Type: CD-ROM ANSI SCSI revision: 02
Detected scsi CD-ROM sr0 at scsi0, channel 0, id 0, lun 0
Vendor: HP Model: CD-Writer+ 9300 Rev: 1.0b
Type: CD-ROM ANSI SCSI revision: 02
Detected scsi CD-ROM sr1 at scsi0, channel 0, id 1, lun 0
scsi : detected 2 SCSI generics 2 SCSI cdroms total.
sr0: scsi3-mmc drive: 4x/50x cd/rw xa/form2 cdda tray
Uniform CD-ROM driver Revision: 3.10
sr1: scsi3-mmc drive: 32x/32x writer cd/rw xa/form2 cdda tray
§ ¦
If you do have a real SCSI writer, compiling in support for your SCSI card will detect
it in a similar fashion. Then, for this example, the device on which to mount your
CDROM is /dev/scd0 and your CD-Writer, /dev/scd1.
For actually recording a CD, the cdrecord command-line program is simple and
robust, although there are also many pretty graphical frontends. To locate your CD
Writer’s ID, do
¨ ¥
cdrecord -scanbus
§ ¦
which will give a comma separated numeric sequence. You can then use this as the
argument to cdrecord’s dev= option. On my machine I type,
¨ ¥
mkisofs -a -A 'Paul Sheer' -J -L -r -P PaulSheer \
-p www.icon.co.za/~psheer/ -o my_iso /my/directory
cdrecord dev=0,1,0 -v speed=10 -isosize -eject my_iso
§ ¦
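The dev= argument is just the bus, SCSI ID and LUN reported by cdrecord -scanbus, joined with commas. A trivial helper (hypothetical; not part of cdrecord) makes the correspondence explicit:

```shell
#!/bin/sh
# Assemble cdrecord's dev= argument from bus, SCSI ID and LUN.
scsi_dev() {
    printf 'dev=%s,%s,%s\n' "$1" "$2" "$3"
}
scsi_dev 0 1 0    # prints dev=0,1,0 -- the CD Writer in the example above
```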
Serial devices
You don’t need to load any modules to get your mouse and modem to work. Regular
serial devices (COM1 through COM4 under DOS/Windows) will be auto-probed on boot
and are available as /dev/ttyS0 through /dev/ttyS3, giving a message on boot like,
¨ ¥
Serial driver version 4.27 with MANY_PORTS MULTIPORT SHARE_IRQ enabled
ttyS00 at 0x03f8 (irq = 4) is a 16550A
ttyS01 at 0x02f8 (irq = 3) is a 16550A
§ ¦
On the other hand, multi-port serial cards can be difficult to configure. These
devices are in a category all of their own. Most use a chip similar to your builtin serial
port, called the 16550A UART (Universal Asynchronous Receiver Transmitter). The
kernel’s generic serial code supports them, and you will not need a separate driver.
The UART really is the serial port, and comes in the flavours 8250, 16450, 16550,
16550A, 16650, 16650V2, and 16750.
To get these cards working requires using the setserial command. It is used
to configure the kernel’s builtin serial driver. A typical example is an 8 port non-PnP
ISA card with jumpers set to unused IRQ 5 and ports 0x180–0x1BF. Note that unlike
most devices, many serial devices can share the same IRQ &This is because serial devices set
an IO-port to tell which device is sending the interrupt. The CPU just checks every serial device whenever
an interrupt comes in.-. The card is configured with:
¨ ¥
cd /dev/
./MAKEDEV -v ttyS4
./MAKEDEV -v ttyS5
./MAKEDEV -v ttyS6
./MAKEDEV -v ttyS7
./MAKEDEV -v ttyS8
./MAKEDEV -v ttyS9
./MAKEDEV -v ttyS10
./MAKEDEV -v ttyS11
/bin/setserial -v /dev/ttyS4 irq 5 port 0x180 uart 16550A skip_test
/bin/setserial -v /dev/ttyS5 irq 5 port 0x188 uart 16550A skip_test
/bin/setserial -v /dev/ttyS6 irq 5 port 0x190 uart 16550A skip_test
/bin/setserial -v /dev/ttyS7 irq 5 port 0x198 uart 16550A skip_test
/bin/setserial -v /dev/ttyS8 irq 5 port 0x1A0 uart 16550A skip_test
/bin/setserial -v /dev/ttyS9 irq 5 port 0x1A8 uart 16550A skip_test
/bin/setserial -v /dev/ttyS10 irq 5 port 0x1B0 uart 16550A skip_test
/bin/setserial -v /dev/ttyS11 irq 5 port 0x1B8 uart 16550A skip_test
§ ¦
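The eight setserial lines above follow a regular pattern: consecutive IO-ports 8 bytes apart starting at 0x180, all sharing IRQ 5. A loop (a sketch, equivalent to the listing above) generates the same commands instead of typing them out:

```shell
#!/bin/sh
# Print the setserial commands for ttyS4..ttyS11, ports 0x180..0x1B8.
base=$((0x180))
for i in 0 1 2 3 4 5 6 7; do
    port=$(printf '0x%X' $((base + 8 * i)))
    echo "/bin/setserial -v /dev/ttyS$((i + 4)) irq 5 port $port uart 16550A skip_test"
done
```

Piping the output through sh would run the commands; printing them first lets you check the values against your jumper settings.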
You should immediately be able to use these devices as regular ports. Note that
you would expect to see the interrupt in use under /proc/interrupts. For serial
devices this is only true after data actually starts to flow. However, you can check
/proc/tty/driver/serial to get more status information. The setserial man
page contains more about different UARTs and their compatibility problems. It also
explains auto-probing of the UART, IRQ and IO-ports (although it is better to be sure
of your card and never use auto-probing).
42.7 More on LILO: options
The BootPrompt-HOWTO contains an exhaustive list of things that can be typed at the
boot prompt to do interesting things like NFS root mounts. This is important to read if
only to get an idea of the features that LINUX supports.
42.8 Building the kernel

Summary:
¨ ¥
cd /usr/src/linux/
make menuconfig
make dep
make clean
make bzImage
make modules
make modules_install
cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-<version>
cp /usr/src/linux/System.map /boot/System.map-<version>
§ ¦
Finally, edit /etc/lilo.conf and run lilo. Details on each of these steps follow.
42.8. Building the kernel 42. Kernel Source, Modules, Hardware
mv linux-2.4.0-test6 linux-2.4.0-test7
ln -sf linux-2.4.0-test7 linux
cd linux
make mrproper
§ ¦
Your 2.4.0-test6 kernel source tree is now a 2.4.0-test7 kernel source tree. You
will often want to patch the kernel with features that Linus did not include, like secu-
rity patches, or commercial hardware drivers.
It is important that the following include directories point to the correct directories in
the kernel source tree:
¨ ¥
ls -al /usr/include/{linux,asm} /usr/src/linux/include/asm
lrwxrwxrwx 1 root root 24 Sep 4 13:45 /usr/include/asm -> ../src/linux/include/asm
lrwxrwxrwx 1 root root 26 Sep 4 13:44 /usr/include/linux -> ../src/linux/include/linux
lrwxrwxrwx 1 root root 8 Sep 4 13:45 /usr/src/linux/include/asm -> asm-i386
§ ¦
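The layout above can be rehearsed safely under a scratch directory before touching /usr (a sketch; the temporary root stands in for the real file system):

```shell
#!/bin/sh
# Recreate the three symlinks from the listing under a temporary root.
root=$(mktemp -d)
mkdir -p "$root/src/linux/include/linux" \
         "$root/src/linux/include/asm-i386" \
         "$root/include"
ln -sf asm-i386 "$root/src/linux/include/asm"
ln -sf ../src/linux/include/linux "$root/include/linux"
ln -sf ../src/linux/include/asm "$root/include/asm"
# Following the chain: include/asm -> src/linux/include/asm -> asm-i386
ls -l "$root/include"
```

The same three ln -sf commands, with $root replaced by the real paths, establish the layout shown in the listing.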
Before continuing, you should read the /usr/src/linux/Documentation/Changes
file to find out what is required to build the kernel. If you have a kernel source tree
supplied by your distribution, everything will already be up to date.
42.8.2 Configuring
There are three kernel configuration interfaces. The old line-by-line y/n interface is
painful to use. For a better text mode interface, you can type
¨ ¥
make menuconfig
§ ¦
to get the text-mode configurator (make xconfig gives a graphical one under X). I
will assume that you are using the text mode interface.
The configure program enables you to specify an enormous number of features.
It is advisable to skim through all the sections to get a feel for the different
things you can do. Most options are about indicating whether you want a fea-
ture [*] compiled into the kernel image, [M] compiled as a module, or [ ]
not compiled at all. You can also turn off module support altogether from
Loadable module support --->. The kernel configuration is one LINUX
program that offers lots of help — select < Help > on any feature. The raw help file is
/usr/src/linux/Documentation/Configure.help (nearly 700 kilobytes) and
is worth reading.
When you are satisfied with your selection of options, < Exit > and save
your new kernel configuration.
The kernel configuration is saved in a file /usr/src/linux/.config. Next
time you run make menuconfig, it will default to these settings. The file
/usr/src/linux/arch/i386/defconfig contains defaults to use in the absence
of a .config.

42.9 Using packaged kernel source

Your distribution will probably have a kernel source package ready to build.
This is better than downloading the source yourself, because all the default
build options will be present. For instance, RedHat 7.0 comes with the file
/usr/src/linux-2.2.16/configs/kernel-2.2.16-i586-smp.config,
which can be copied over /usr/src/linux-2.2.16/.config to build a kernel
optimised for SMP with all of RedHat’s defaults enabled. It also comes with a custom
defconfig file to build kernels identical to RedHat’s. Finally, RedHat would have
applied many patches to add features that may be time-consuming to do yourself.
You should try to enable or compile-in features rather than disable anything,
since the default RedHat kernel supports almost every kernel feature, and later it may
be more convenient to have left it that way. On the other hand, a minimal kernel will
compile much faster.
Run the following commands to build the kernel. This may take anything from a few
minutes to several hours, depending on what you have enabled. After each command
completes, check the last few messages for errors (or check the return code, $?), rather
than blindly typing the next.
¨ ¥
make dep && \
make clean && \
make bzImage && \
make modules && \
make modules_install
§ ¦
The command make modules_install would have installed all modules into
/lib/modules/<version> &You may like to clear out this directory at some point and rerun
make modules_install, since stale modules cause problems with depmod -a.-
42.10. Building, installing 42. Kernel Source, Modules, Hardware
The other
files resulting from the build are /usr/src/linux/arch/i386/boot/bzImage
and /usr/src/linux/System.map. These must be copied to /boot/, possibly cre-
ating neat symlinks:
¨ ¥
cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-<version>
cp /usr/src/linux/System.map /boot/System.map-<version>
ln -sf System.map-<version> /boot/System.map
ln -sf vmlinuz-<version> /boot/vmlinuz
§ ¦
Finally, your lilo.conf may be edited as per Chapter 31. Most people now
forget to run lilo and find their system unbootable. Do run lilo making sure that
you have left your old kernel in as an option, in case you need to return to it. Also
make a boot floppy from your kernel as shown in Section 31.4.
Chapter 43

The X Window System

Before the X Window System (from now on called X), UNIX was terminal based,
and had no proper graphical environment, sometimes called a GUI &Graphical User
Interface.-. X was designed to fulfill that need and incorporate into graphics all the
power of a networked computer. You may imagine that allowing an application to
put graphics on a screen involves nothing more than creating a user library that can
perform various graphical functions like line drawing, font drawing and so on. To
understand why X is more than merely this, consider the example of character terminal
applications: these are programs which run on a remote machine while displaying to
a character terminal and receiving feedback (keystrokes) from that character terminal.
There are two distinct entities at work — firstly the application, and secondly the user’s
character terminal display; these two are connected by some kind of serial or network
link. Now what if the character terminal could display windows, and other graphics
(in addition to text), while giving feedback to the application using a mouse (as well as
a keyboard)? This is what X achieves. It is a protocol of commands that are sent and
received between an application and a special graphical terminal called an X Server
(from now on called the server) &The word “server” is confusing, because there are lots of servers
for each client machine, and the user sits on the server side. This is in the opposite sense to what we
usually mean by a server.-. How the server actually draws graphics on the hardware is
irrelevant to the developer; all the application needs to know is that if it sends a particular
sequence of bytes down the TCP/IP link, the server will interpret them to mean that
a line, circle, font, box or other graphics entity should be drawn on its screen. In the
other direction, the application needs to know that particular sequences of bytes mean
that a keyboard key was pressed or that a mouse has moved. This TCP communication
is called the X protocol.
43.1. The X protocol 43. X
When you are using X, you will probably not be aware that this interaction is
happening. The server and the application might very well be on the same machine.
The real power of X is evident when they are not on the same machine. Consider for
example that 20 users can be logged onto a single machine and be running different
programs which are displayed on 20 different remote X Servers. It is as though a single
machine was given multiple screens and keyboards.
It is for this reason that X is called a network transparent windowing system.
The developer of a graphical application can then dispense with having to know
anything about the graphics hardware itself (consider DOS applications where each
had to build in support for many different graphics cards), and also dispense with
having to know what machine the graphics is going to display on.
The precise program that performs this miracle is /usr/X11/bin/X. A typical
sequence of events to get a graphical program to run is as follows. (This is an illustra-
tion. In practice, numerous utilities will perform these functions in a more generalised
and user friendly way.)
Communication between the application and the server is somewhat more com-
plex than the mere drawing of lines and rectangles and reporting of mouse and key
events. The server has to be able to handle multiple applications connecting from mul-
tiple different machines, where these applications may interact between each other
(think of cutting and pasting between applications that are actually running on
different machines.) Some examples of the fundamental X Protocol requests that an
application can make to a server are:
In return, the server replies by sending Events back to the application. The application
is required to constantly poll the server for these events. Besides events detailing the
user’s mouse and keyboard input, there are, for example, events that indicate that a
window has been exposed (i.e. a window was on top of another window and was
moved, thus exposing the window beneath it. The application should then send the
appropriate commands needed to redraw the graphics within it), as well as events
indicating that another application has requested a paste from your application, etc.
The file /usr/include/X11/Xproto.h contains the full list of protocol requests
and events.
The programmer of an application need not be directly concerned with these
requests. A high level library handles the details of the server interaction. This library
is called the X Library, /usr/X11R6/lib/libX11.so.6.
One of the limitations of such a protocol is that one is restricted to the set of
commands that have been defined. X overcame this problem by making it extensible
&Being able to add extensions and enhancements without complicating or breaking compatibility.- from
the start. These days there are extensions to X to allow, for example, the display of 3D
graphics on the server, the interpretation of postscript commands, and many others
that improve graphics appeal and performance. Each extension comes with a new
group of protocol requests and events, as well as a programmers’ library interface
for the developer.
An example of a real X program is as follows. This is about the simplest an X program is ever going to get. It does the job of displaying a small XPM image file in a window, and waiting for a key press or mouse click before exiting. You can compile it with gcc -o splash splash.c -lX11 -L/usr/X11/lib. (You can see right away why there are few applications written directly in X.) You can see that all X Library functions are prefixed by an X:
¨ ¥
/* splash.c - display an image */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <X11/Xlib.h>
/* XPM */
static char *graham_splash[] = {
/* columns rows colors chars-per-pixel */
"28 32 16 1",
" c #34262e", ". c #4c3236", "X c #673a39", "o c #543b44",
"O c #724e4e", "+ c #6a5459", "@ c #6c463c", "# c #92706c",
"$ c #92685f", "% c #987e84", "& c #aa857b", "n c #b2938f",
"= c #bca39b", "- c #a89391", "; c #c4a49e", ": c #c4a8a4",
/* pixels */
"--%#%%nnnn#-nnnnnn=====;;=;:", "--------n-nnnnnn=n==;==;=:;:",
"----n--n--n-n-n-nn===:::::::", "-----&------nn-n=n====::::::",
"----------------n===;=::::::", "----%&-%--%##%---n===:::::::",
"------%#%+++o+++----=:::::::", "--#-%%#+++oo. oo+#--=:::::::",
"-%%%%++++o.. .++&-==:::::", "---%#+#+++o. oo+&n=::::",
"--%###+$+++Oo. o+#-:=::", "-&%########++Oo @$-==:",
"####$$$+###$++OX .O+&==", "&##$O+OXo+++$#+Oo. ..O&&-",
"&##+OX..... .oOO@@... o@+&&", "&###$Oo.o++ ..oX@oo@O$&-",
"n###$$$$O$o ...X.. .XXX@$$$&", "nnn##$$#$OO. .XX+@ .XXX@$$#&",
"nnn&&%####$OX.X$$@. XX$$$$&", "nnnnn&&###$$$OX$$X..XXX@O$&n",
"nnnnnn&&%###$$$$@XXXXX@O$&&n", ";n=;nnnn&&&#$$$$$@@@@@@O$&n;",
";n;=nn;nnnn#&$$$@X@O$@@$$&n;", "=n=;;;n;;nn&&&$$$$OO$$$$$&;;",
"n;=n;;=nn&n&&&&&&$$$$$##&&n;", "n;=;;;;;;;;&&&n&&&&&&&&#&n=;",
";n;n;;=n;&;&;&n&&&&&&&#nn;;;", "n;=;;;;;;;;n;&&n&&&n&nnnn;;;",
"n=;;:;;=;;nn;&n;&n&nnnnnnn=;", "nn;;;;;;;;;;;;;;n&nnnnnn===;",
"=nn;;:;n;;;;&&&&n&&nnnnnn;=;", "n====;;;;&&&&&&&nnnnnnnnnn;;"
};
int main (int argc, char **argv)
{
int i, j, x, y, width, height, n_colors;
XSetWindowAttributes xswa;
XGCValues gcv;
Display *display;
char *display_name = 0;
int depth = 0;
Visual *visual;
Window window;
Pixmap pixmap;
XImage *image;
Colormap colormap;
GC gc;
int bytes_per_pixel;
unsigned long colors[256];
unsigned char **p, *q;
for (i = 1; i < argc - 1; i++)
if (argv[i])
if (!strcmp (argv[i], "-display"))
display_name = argv[i + 1];
display = XOpenDisplay (display_name);
if (!display) {
printf ("splash: cannot open display\n");
exit (1);
}
depth = DefaultDepth (display, DefaultScreen (display));
visual = DefaultVisual (display, DefaultScreen (display));
p = (unsigned char **) graham_splash;
q = p[0];
width = atoi ((const char *) q);
q = (unsigned char *) strchr (q, ' ');
height = atoi ((const char *) ++q);
q = (unsigned char *) strchr (q, ' ');
n_colors = atoi ((const char *) ++q);
image =
XCreateImage (display, visual, depth, ZPixmap, 0, 0, width, height,
8, 0);
/* cope with servers having different byte ordering and depths */
for (q = (unsigned char *) image->data, j = 0; j < height; j++, p++) {
unsigned char *r;
unsigned long c;
r = *p;
if (image->byte_order == MSBFirst) {
switch (bytes_per_pixel) {
case 4:
for (i = 0; i < width; i++) {
c = colors[*r++];
*q++ = c >> 24;
*q++ = c >> 16;
*q++ = c >> 8;
*q++ = c;
}
break;
case 3:
for (i = 0; i < width; i++) {
c = colors[*r++];
*q++ = c >> 16;
*q++ = c >> 8;
*q++ = c;
}
break;
case 2:
for (i = 0; i < width; i++) {
c = colors[*r++];
*q++ = c >> 8;
*q++ = c;
}
break;
case 1:
for (i = 0; i < width; i++)
*q++ = colors[*r++];
break;
}
} else {
switch (bytes_per_pixel) {
case 4:
for (i = 0; i < width; i++) {
c = colors[*r++];
*q++ = c;
*q++ = c >> 8;
*q++ = c >> 16;
*q++ = c >> 24;
}
break;
case 3:
window =
XCreateWindow (display, DefaultRootWindow (display), x, y, width,
height, 0, depth, InputOutput, visual,
CWColormap | CWBackPixmap, &xswa);
XSelectInput (display, window, KeyPressMask | ButtonPressMask);
You can learn to program X from the documentation in the X Window System sources — see below. The above program is said to be “written directly in X-lib” because it links only with the lowest level library, libX11.so. The advantage of developing this way is that your program will work across every variant of UNIX without any modifications.
43.2 Widget libraries and desktops

To program in X is tedious. Therefore most developers will use a higher level widget library. Most users of GUIs will be familiar with buttons, menus, text input boxes and so on. These are called widgets. X programmers have to implement these manually. The reason widgets were not built into the X protocol is to allow different user interfaces to be built on top of X. This flexibility makes X the enduring technology that it is. &It is, however, questionable whether a single widget library is really the best way to go. Consider that most applications use only a small amount of the range of features of any widget library. AfterStep is a project that has built a NextStep clone for Linux. It has a consistent look and feel, is lightweight and fast, and uses no widget library.-
43.2.1 Background
The X Toolkit (libXt.so) is a widget library that has always come free with X. It is crude looking by today's standards. It doesn't feature 3D (shadowed) widgets, although it comes free with X. &The excellent xfig application, an X Toolkit application, was in fact used to do the diagrams in this book.- Motif (libXm.so) is a modern, full-featured widget library that has become an industry standard. Motif is, however, bloated and slow, and depends on the X Toolkit. It has always been an expensive proprietary library. Tk (tee-kay, libtk.so) is a library that is primarily used with the Tcl scripting language. It was probably the first platform-independent library (running on Windows, all UNIX variants, and the Apple Mac). It is, however, slow and has limited features (though this is progressively changing). Neither Tk nor Motif is very elegant looking.
Around 1996, there was a situation of a lot of widget libraries popping up with different licenses. V, xforms, and graphix come to mind. (This was when I started to write Coolwidgets — my own widget library.) There was no efficient, multipurpose, free, and elegant looking widget library for UNIX. This was a situation that sucked, and it was retarding Free software development.
43.2.2 Qt
At about that time a new GUI library was released. It was called Qt and was developed
by Troll Tech. It was not free, but was an outstanding technical accomplishment from
the point of view that it worked efficiently and cleanly on many different platforms. It
was shunned by some factions of the Free software community because it was written
in C++ &Which is not considered to be the standard development language by the Free Software Foun-
dation on account of it not being completely portable, as well as possibly other reasons.-, and was only
free for non-commercial applications to link with.
Nevertheless, advocates of Qt went ahead and began producing the outstanding
KDE desktop project — a set of higher level development libraries, a window manager,
and many core applications that together comprise the KDE Desktop. The licensing issues with Qt have since relaxed somewhat, and it is now available under both the GPL and a proprietary license.
43.2.3 Gtk
At one point, before KDE was substantially complete, Qt antagonists reasoned that since there were more lines of Qt code than KDE code, it would be better to develop a widget library from scratch — but that is an aside. The Gtk widget library was written especially for gimp (GNU Image Manipulation Program). It is GPL'd and written entirely in C in low-level calls (i.e. without the X Toolkit), object oriented, fast, clean, extensible, and has a staggering array of features. It comprises Glib, a library meant to extend standard C, providing higher level functions usually akin only to scripting languages, like hash tables and lists; Gdk, a wrapper around the raw X Library that follows GNU naming conventions and gives a slightly higher level interface to X; and the Gtk library itself.
Using Gtk, the Gnome project began, analogous to KDE, but written entirely in C.
43.2.4 GNUStep
OpenStep (based on NeXTStep) was a GUI specification published in 1994 by Sun Microsystems and NeXT Computers, meant for building applications with. It uses the Objective-C language, an object-oriented extension to C that is arguably more suited to this kind of development than C++.
OpenStep requires a PostScript display engine that is analogous to the X protocol, but is considered superior to X because all graphics are independent of the pixel resolution of the screen. In other words, high resolution screens just improve the picture quality rather than making the graphics smaller.
The GNUStep project has a working PostScript display engine, and is meant as a
Free replacement to OpenStep.
43.3 XFree86
Type X to start the X server (provided X is not already running). If X has been configured properly (including having /usr/X11R6/bin in your PATH), it will initiate the graphics hardware, and a black and white stippled background will appear with a single X as the mouse cursor. Contrary to intuition, this means that X is actually working properly.
To kill the X server, use the key combination Ctrl-Alt-Backspace.
To switch to a text console, use Ctrl-Alt-F1 . . . Ctrl-Alt-F6.
To switch back to the X console, use Alt-F7. The seven common virtual consoles of Linux are 1–6 as text terminals, and 7 as an X terminal (as explained in Section 2.7).
To zoom in or out of your X session, use Ctrl-Alt-+ and Ctrl-Alt--. (We are talking here of the + and - on your numeric keypad only.)
Running X utilities
To run an X program, you need to tell it what remote server to connect to. Most X programs take an option -display to specify the X server. With X running in your seventh virtual console, type into your first virtual console:
¨ ¥
xterm -display localhost:0.0
§ ¦
The localhost refers to the machine on which the X server is running — in this case our own. The first 0 is the display number (X supports multiple displays per machine in its specification). The second 0 is the screen number we would like to display on. Consider a multi-headed &For example, two adjacent monitors that behave as one continuous screen.- display — we would like to specify which monitor the application pops up on.
Switching to your X session should reveal a character terminal where you can type commands.
A better way to specify the display is using the DISPLAY environment variable:
¨ ¥
DISPLAY=localhost:0.0
export DISPLAY
§ ¦
The standard X utilities are pretty ugly and unintuitive. Try, for example, xclock, xcalc, and xedit. For fun, try xbill. Also do a
¨ ¥
rpm -qa | grep '^x'
§ ¦
A second X session can be started in virtual console 8. You can switch to it using Ctrl-Alt-F8 or Alt-F8.
You can also start up a second X server within your current display:
¨ ¥
/usr/X11R6/bin/Xnest :1 &
§ ¦
A smaller X server will be started that uses a subwindow of your existing display as its screen. You can easily create a third server within that, ad infinitum. You can connect an application to the nested server with, for example:
¨ ¥
xterm -display localhost:1.0
§ ¦
Manually starting X and then running an application is not the way to use X. We want a window manager to run applications properly. The best window manager available (sic) is icewm, available from icewm.cjb.net <http://icewm.cjb.net/>. Window managers enclose each application inside a resizable bounding box and give you the minimise, maximise, and close buttons, as well as possibly a task bar and a “start” button that you may be familiar with. A window manager is just another X application that has the additional task of managing the positions of applications on your desktop. Window managers are usually suffixed with wm. If you don't have icewm, the minimalist's twm window manager will almost always be installed.
X predates the cut and paste conventions of Windows and the Mac. X requires a three-button mouse, although pushing the two outer buttons simultaneously gives the same result as the middle button, provided X has been configured for this. Practice the following:
Dragging the left mouse button is the common way to select text. This automatically places the highlighted text into a cut buffer, also sometimes called the clipboard.
Dragging the right mouse button extends the selection, i.e. enlarges or reduces it.
Clicking the middle mouse button pastes the selection. Note that X becomes virtually unusable without being able to paste in this way.
Note that modern Gtk and Qt applications have tried to retain compatibility with these
mouse conventions.
43.4 The X distribution
The official X distribution comes as an enormous source package available in tgz format at www.xfree86.org <http://www.xfree86.org/>. It is traditionally packed as three tgz files to be unpacked over each other — the total of the three is about 50 megabytes compressed. This package has nothing really to do with the version number X11R6 — it is a subset of X11R6.
Downloading and installing the X distribution is a major undertaking, but should be done if you are interested in X development.
All UNIX distributions come with a compiled and (mostly) configured X installation, hence the official distribution should never be needed except by developers.
43.5 X documentation
Programming
The X Window System comes with tens of megabytes of documentation. For instance, all the books describing all of the programming API's are included inside the distribution. Most of these are of specialised interest and will not be included in your distribution by default — download the complete X distribution if you want these. You can then look inside xc/doc/specs (especially xc/doc/specs/X11) to begin learning how to program under X.
Debian also comes with the xbooks package, and RedHat with the XFree86-doc package.
Configuration documentation
As you can see, there is documentation for each type of graphics card. To learn how to configure X is a simple matter of reading the QuickStart guide and then checking the specifics for your card.
New graphics cards are coming out all the time. XFree86 <http://www.xfree86.org/> contains FAQ's about cards and the latest binaries, should you not be able to get your card working from the information below. Please always search the XFree86 site for info on your card and for newer releases before reporting a problem.
43.6 Configuring X
The above documentation is a lot to read. The simplest possible way to get X working is to decide what mouse you have, and then create a file /etc/X11/XF86Config (back up your original) containing the following. Adjust the "Pointer" section for your correct Device and Protocol. If you are running X version 3.3, you should also comment out the Driver "vga" line. You may also have to switch the line containing 25.175 to 28.32 for some laptop displays:
¨ ¥
Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
FontPath "/usr/X11R6/lib/X11/fonts/misc/"
EndSection
Section "ServerFlags"
EndSection
Section "Keyboard"
Protocol "Standard"
AutoRepeat 500 5
XkbDisable
XkbKeymap "xfree86(us)"
EndSection
Section "Pointer"
# Protocol "Busmouse"
# Protocol "IntelliMouse"
# Protocol "Logitech"
Protocol "Microsoft"
# Protocol "MMHitTab"
# Protocol "MMSeries"
# Protocol "MouseMan"
# Protocol "MouseSystems"
# Protocol "PS/2"
Device "/dev/ttyS0"
# Device "/dev/psaux"
EndSection
Section "Monitor"
Identifier "My Monitor"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 31.5 - 57.0
VertRefresh 50-90
# Modeline "640x480" 28.32 640 664 760 800 480 491 493 525
Modeline "640x480" 25.175 640 664 760 800 480 491 493 525
EndSection
Section "Device"
Identifier "Generic VGA"
VendorName "Unknown"
BoardName "Unknown"
Chipset "generic"
# Driver "vga"
Driver "vga"
EndSection
Section "Screen"
Driver "vga16"
Device "Generic VGA"
Monitor "My Monitor"
Subsection "Display"
Depth 4
Modes "640x480"
Virtual 640 480
EndSubsection
EndSection
§ ¦
Both of these will print out a status line containing clocks: . . . to indicate whether
your choice of 25.175 was correct &This is the speed that pixels can come from your card in
Megahertz, and is the only variable when configuring a 16 colour display.-.
You should now have a working grey-level display that is actually almost usable. It
has the advantage that it always works.
Proper X configuration
A simple and reliable way to get X working is given by the following steps (if this fails, then you will have to read some of the documentation described above):
2 Run SuperProbe. It will cause your screen to blank, and then spit out what graphics card you have. Leave that info on your screen and switch to a different virtual terminal. If SuperProbe fails to recognise your card, it usually means that XFree86 doesn't either.
3 Run xf86config. This is the official X configuration script. Run through all the options, being very sure not to guess. You can set your monitor to “31.5, 35.15, 35.5; Super VGA. . .” if you have no other information to go on. Vertical sync can be set to 50-90. Select your card from the card database (check the SuperProbe output), and check which X server the program recommends — this will be one of XF86_SVGA, XF86_S3, XF86_S3V, etc. Whether you “set the symbolic link” or not, or “modify the /etc/X11/Xserver file”, is irrelevant. Note that you do not need a “RAM DAC” setting with most modern PCI graphics cards. The same goes for the “Clockchip” setting.
¨ ¥
Section "<section-name>"
<config-line>
<config-line>
<config-line>
EndSection
§ ¦
Search for the "Monitor" section. A little way down you will see lots of lines like:
¨ ¥
# 640x480 @ 60 Hz, 31.5 kHz hsync
Modeline "640x480" 25.175 640 664 760 800 480 491 493 525
# 800x600 @ 56 Hz, 35.15 kHz hsync
ModeLine "800x600" 36 800 824 896 1024 600 601 603 625
# 1024x768 @ 87 Hz interlaced, 35.5 kHz hsync
Modeline "1024x768" 44.9 1024 1048 1208 1264 768 776 784 817 Interlace
§ ¦
These are timing settings for different monitors and screen resolutions. Choosing one too fast could blow an old monitor, but will usually just give you a lot of garbled fuzz on your screen. We are going to eliminate all but the three above — do so by commenting them out with # or deleting the lines entirely. (You may want to back up the file first.) You could leave it up to X to choose the correct modeline to match the capabilities of the monitor, but this doesn't always work. I always like to explicitly choose a selection of Modelines.
If you don't find modelines in your XF86Config, you can use this as your monitor section:
¨ ¥
Section "Monitor"
Identifier "My Monitor"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 30-40
VertRefresh 50-90
Modeline "320x200" 12.588 320 336 384 400 200 204 205 225 Doublescan
ModeLine "400x300" 18 400 416 448 512 300 301 302 312 Doublescan
Modeline "512x384" 20.160 512 528 592 640 384 385 388 404 -HSync -VSync
Modeline "640x480" 25.175 640 664 760 800 480 491 493 525
ModeLine "800x600" 36 800 824 896 1024 600 601 603 625
Modeline "1024x768" 44.9 1024 1048 1208 1264 768 776 784 817 Interlace
EndSection
§ ¦
6 Then edit your "Device" section. For XFree86 version 3.3 you can make it as follows (there should be only one "Device" section):
¨ ¥
Section "Device"
Identifier "My Video Card"
VendorName "Unknown"
BoardName "Unknown"
VideoRam 4096
EndSection
§ ¦
For XFree86 version 4 you must add the device driver module. On my laptop,
this is ati:
¨ ¥
Section "Device"
Identifier "My Video Card"
Driver "ati"
VendorName "Unknown"
BoardName "Unknown"
VideoRam 4096
EndSection
§ ¦
There are also several options that can be added to the "Device" section to tune your card. Three possible lines are:
¨ ¥
Option "no_accel"
Option "sw_cursor"
Option "no_pixmap_cache"
§ ¦
Subsection "Display"
ViewPort 0 0
Virtual 1024 768
Depth 16
Modes "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
EndSubsection
Subsection "Display"
ViewPort 0 0
Virtual 1024 768
Depth 24
Modes "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
EndSubsection
Subsection "Display"
ViewPort 0 0
Virtual 1024 768
Depth 8
Modes "1024x768" "800x600" "640x480" "512x384" "400x300" "320x240"
EndSubsection
EndSection
§ ¦
7 At this point you need to run the X server itself. For XFree86 version 3.3, there is a separate package for each type of video card, each containing a separate server binary with the appropriate driver code statically compiled into it. These binaries are of the form /usr/X11R6/bin/XF86_<cardname>. The relevant packages can be found with the command dpkg -l 'xserver-*' for Debian, and rpm -qa | grep XFree86 for RedHat 6 (or RedHat/RPMS/XFree86-* on your CDROM). You can then run,
¨ ¥
/usr/X11R6/bin/XF86_<card> -bpp 16
§ ¦
which also sets the display depth to 16, i.e. the number of bits per pixel, which
translates to the number of colours.
For XFree86 version 4, card support is compiled as separate modules named /usr/X11R6/lib/modules/drivers/<cardname>_drv.o. There is a single binary executable /usr/X11R6/bin/XFree86 which loads the appropriate module based on the Driver "cardname" line in the "Device" section. Having added this line, you can run
¨ ¥
/usr/X11R6/bin/XFree86
§ ¦
where the depth is set from the DefaultDepth 16 line in the "Screen" section. Which driver to use can be found by grepping the modules for the name of your card. This is similar to what we did with kernel modules on page 457.
8 A good idea is now to create a script /etc/X11/X.sh containing your -bpp option with the X server you would like to run. For example,
¨ ¥
#!/bin/sh
exec /usr/X11R6/bin/<server> -bpp 16
§ ¦
You can then symlink /usr/X11R6/bin/X to this script. It is also worth symlinking /etc/X11/X to this script, since some configurations look for it there. There should now be no chance that X could be started except in the way you want. Double-check by running X on the command-line by itself.
43.7 Visuals
TrueColor(4) The most obvious way of representing a colour is to use a byte for each of
the red, green and blue values that a pixel is composed of. Your video buffer will
hence have 3 bytes per pixel, or 24 bits. You will need 800 × 600 × 3 = 1440000
bytes to represent a typical 800 by 600 display. Another way is to use two bytes,
with 5 bits for red, 6 for green, and then 5 for blue. This gives you 32 shades of
red and blue, and 64 shades of green (green should have more levels because it
has the most influence over the pixel’s overall brightness). Displays that use 4
bytes usually discard the last byte, and are essentially 24 bit displays. Note also
that most displays using a full 8 bits per colour discard the trailing bits, so there
is often no appreciable difference between a 16 bit display and a 32 bit display —
if you have limited memory, 16 bits is preferable.
PseudoColor(3) If you want to display each pixel with only one byte, and still get a
wide range of colours, the best way is to make that pixel a lookup into a dynamic
table of 24 bit palette values: 256 of them exactly. 8 bit depths work this way. You
will have just as many possible colours, but applications will have to pick what
colours they want to display at once and compete for entries in the colour palette.
StaticGray(0) These are grey level displays, usually with 1 byte or 4 bits per pixel, or monochrome displays with 1 bit per pixel, like the legacy Hercules Graphics Card (HGC, or MDA — Monochrome Display Adapter). Legacy VGA cards can be set to 640x480 in 16 colour “black-and-white”. X is almost usable in this mode and has the advantage that it always works, regardless of what hardware you have.
StaticColor(2) This usually refers to 4 bit displays like the old CGA and EGA displays
having a small fixed number of colours.
DirectColor(5) This is rarely used, and refers to displays that have a separate palette
for each of red, green and blue.
GrayScale(1) These are like StaticGray, but the grey levels are programmable like
PseudoColor. This is also rarely used.
43.8. The startx and xinit commands 43. X
You can check what visuals your display supports with the xdpyinfo command. You will notice more than one visual listed, since X can effectively support a simple StaticColor visual with PseudoColor, or a DirectColor visual with TrueColor. The default visual is listed first, and can be set using the -cc option, as we did above for the 16 colour server. The argument to the -cc option is the number code given in parentheses above.
Note that good applications check the list of available visuals and choose an
appropriate one. There are also those that require a particular visual, and some that
take a -visual option on the command-line.
The action of starting an X server and then a window manager should obviously be automated. The classic way to start X is to run the xinit command on its own. On Linux this has been superseded by
¨ ¥
startx
§ ¦
which is a script that runs xinit after setting some environment variables. These
commands indirectly call a number of configuration scripts in /etc/X11/xinit/ and
your home directory, where you can specify your window manager and startup appli-
cations. See xinit(1) and startx(1) for more info.
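A minimal ~/.xinitrc might look like the following sketch (assuming the icewm window manager mentioned earlier is installed; substitute your own window manager and startup applications):

```shell
#!/bin/sh
# ~/.xinitrc - run by xinit (and hence startx) once the X server is up.
xsetroot -solid grey    # give the root window a plain background
xterm &                 # start one terminal in the background
exec icewm              # the X session ends when the window manager exits
```

The exec on the last line is the important part: the session lives exactly as long as that final process.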
init runs mgetty, which displays a login: prompt on every attached character terminal. init can also run xdm, which displays a graphical login box on every X server. Usually there will only be one X server: the one on your very machine.
The interesting lines inside your inittab file are
¨ ¥
id:5:initdefault:
§ ¦
and
¨ ¥
x:5:respawn:/usr/X11R6/bin/xdm -nodaemon
§ ¦
which states that the default run-level is 5 and that xdm should be started at run level 5. This should only be attempted if you are sure that X works (by running X on the command-line by itself). If it doesn't, then xdm will keep trying to start X, effectively disabling the console. On systems besides RedHat and Debian, these may be run-levels 2 versus 3, where run-level 5 is reserved for something else. In any event, there should be comments in your /etc/inittab file to explain your distribution's convention.

43.10 X Font naming conventions

Most X applications take a -fn or -font option to specify the font. In this section I'll give a partial guide to X font naming.
A font name is a list of words and numbers separated by hyphens. A typical
font name is
-adobe-courier-medium-r-normal--12-120-75-75-m-60-iso8859-1. Use
the xlsfonts command to obtain a complete list of fonts.
The font name fields have the following meanings:
43.11. Font configuration 43. X
¨ ¥
cooledit -font ’-*-times-medium-r-*--20-*-*-*-p-*-iso8859-1’
cooledit -font ’-*-times-medium-r-*--20-*-*-*-p-*’
cooledit -font ’-*-helvetica-bold-r-*--14-*-*-*-p-*-iso8859-1’
cooledit -font ’-*-helvetica-bold-r-*--14-*-*-*-p-*’
§ ¦
These invoke a newspaper font and an easy-reading font respectively. A * means that the X server can fill in default values for those fields. This way you do not have to specify a font exactly. The showfont command also dumps fonts as ASCII text.
You can rerun this command at any time for good measure.
To tell X to use these directories, add the following lines to your "Files" section. A typical configuration will contain,
¨ ¥
Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
FontPath "/usr/X11R6/lib/X11/fonts/misc/:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
FontPath "/usr/X11R6/lib/X11/fonts/Speedo/"
FontPath "/usr/X11R6/lib/X11/fonts/Type1/"
FontPath "/usr/X11R6/lib/X11/fonts/misc/"
FontPath "/usr/X11R6/lib/X11/fonts/75dpi/"
EndSection
§ ¦
Often you will want to add a directory without restarting X. The command to add a directory to the font path is:
¨ ¥
xset +fp /usr/X11R6/lib/X11/fonts/<new-directory>
§ ¦
43. X 43.12. The font server
or reset with
¨ ¥
xset fp default
§ ¦
inside each one. Note that the ttmkfdir utility is needed to catalogue TrueType fonts as scalable fonts.
Having all fonts stored on all machines is expensive. Ideally, you would like a large font database installed on one machine, with fonts read off this machine, over the network, on demand. You may also have an X server that does not support a particular font type; if it can read the font off the network, built-in support may not be necessary. The daemon xfs (the X font server) facilitates all of this.
xfs reads its own simple configuration file from /etc/X11/fs/config or /etc/X11/xfs/config. It might contain a similar list of directories:
¨ ¥
client-limit = 10
clone-self = on
catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
/usr/X11R6/lib/X11/fonts/75dpi:unscaled,
/usr/X11R6/lib/X11/fonts/ttf,
/usr/X11R6/lib/X11/fonts/Speedo,
491
43.12. The font server 43. X
/usr/X11R6/lib/X11/fonts/Type1,
/usr/X11R6/lib/X11/fonts/misc,
/usr/X11R6/lib/X11/fonts/75dpi
default-point-size = 120
default-resolutions = 75,75,100,100
deferglyphs = 16
use-syslog = on
no-listen = tcp
§ ¦
and change your font paths in /etc/X11/XF86Config to include only a minimal set
of fonts:
¨ ¥
Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
FontPath "/usr/X11R6/lib/X11/fonts/misc/:unscaled"
FontPath "unix/:7100"
EndSection
§ ¦
Note that no other machines can use your font server, because of the no-listen = tcp option. Deleting this line (and restarting xfs) allows you to use
¨ ¥
FontPath "inet/127.0.0.1:7100"
§ ¦
instead, which implies an open TCP connection to your font server, along with all
its security implications. Remote machines can use the same setting after changing
127.0.0.1 to your IP address.
Finally, note that for XFree86 version 3.3, which does not have TrueType support, there is a font server xfstt available from freshmeat <http://www.freshmeat.net/>.
Chapter 44
Unix Security
Note that this chapter is concerned with machines exposed to the Internet — mail, DNS, and web servers, and the like. For firewalling examples, see Section 41.2.
Linux has been touted as both the most secure and the most insecure of all operating systems. The truth is both: take no heed of advice from the Linux community, and your server will be hacked eventually; follow a few simple precautions, and it will be safe for years without much maintenance.
The attitude of most novice administrators is, “since the UNIX system is so large and complex, and since there are so many millions of them on the Internet, it is unlikely that my machine will get hacked.” Of course it won't necessarily be a person targeting your organisation that is the problem. It could be a person who has written an automatic scanner that tries to hack every computer in your city. It could also be a person who is not an expert in hacking at all, who has merely downloaded a small utility to do it for him. Many seasoned experts write such utilities for public distribution, while so-called script-kiddies (so called because the means to execute a script is all the expertise needed) use these to do real damage.
In this chapter you will first get an idea of the ways a UNIX system gets hacked. Then you will know what to be cautious of, and how you can minimise risk.
44.1 Common attacks
I personally divide attacks into two types: attacks that can be attempted by being a
user on the system, and network attacks that come from outside of system. If a server
is, say, only used for mail and web, shell logins may not be allowed at all, hence the
former type of security breach is less of a concern. Here are some of the ways security
is compromised, just to give an idea of what U NIX security is about. In some cases I
will indicate when it is more of a concern to multi-user systems.
Note also that attacks from users become an issue when a remote attack succeeds and
a hacker gains user privileges to your system (even as a nobody user). This is an
issue even if you do not host logins.
Consider the following C program. If you don’t understand C that well, it doesn’t
matter — it's the concept that is important. (Before doing this example, you should
unplug your computer from the network.)
¨ ¥
#include <stdio.h>
wait = no
user = root
server = /usr/local/sbin/myechod
log_on_failure += USERID
}
§ ¦
while for inetd, add the following line to your /etc/inetd.conf file:
¨ ¥
myechod stream tcp nowait root /usr/local/sbin/myechod
§ ¦
Of course the service myechod does not exist. Add the following line to your
/etc/services file,
¨ ¥
myechod 400/tcp # Temporary demo service
§ ¦
You can now telnet localhost 400 and type away happily. As you can see,
the myechod service simply prints lines back to you.
Now someone reading the code will realise that typing more than 256 characters will
write into uncharted memory of the program. How can they use this effect to cause the
program to behave outside of its design? The answer is simple. Should they be able to
write processor instructions into an area of memory that may get executed later, they
can cause the program to do anything at all. The process runs with root privileges,
hence a few instructions sent to the kernel could, for example, cause the passwd file
to be truncated, or the file-system superblock to be erased. A particular technique that
works on a particular program is known as an exploit for a vulnerability. In general,
an attack of this type is known as a buffer overflow attack.
Preventing such attacks is easy when writing new programs: simply
treat any incoming data as dangerous. In the above case, the
fgets function should preferably be used, since it limits the number of characters that
can be written to the buffer. There are, however, many C library functions that behave
dangerously: strcpy writes up to a null character that may
not be present; sprintf writes a formatted string that could be longer than the buffer;
and getwd likewise does no bounds checking.
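As a sketch of this principle, a bounded-copy helper (the name copy_bounded is hypothetical, not from the listing above) can stand in for an unchecked strcpy on untrusted input:

```c
#include <assert.h>
#include <string.h>

/* Copy untrusted input into dst, never writing past dstsz bytes.
   Unlike strcpy, which keeps writing until a NUL that may never
   arrive, this truncates and always NUL-terminates. (Hypothetical
   helper, not part of the original myechod listing.) */
size_t copy_bounded(char *dst, size_t dstsz, const char *src)
{
    size_t n = strlen(src);
    if (n >= dstsz)
        n = dstsz - 1;          /* leave room for the terminator */
    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;                   /* number of bytes actually copied */
}

/* Usage: char buf[256]; copy_bounded(buf, sizeof buf, attacker_data);
   No input, however long, can write outside buf. */
```

With this discipline, a 10,000-character line sent to the daemon simply gets truncated instead of overwriting the stack.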
However, when programs get long and complicated, it becomes difficult to analyse
where there may be loopholes that can be exploited indirectly.
A program like su must be setuid (see Section 14.1). Such a program has to run with
root privileges in order to switch UIDs to another user. The onus is, however, on su
not to give privileges to anyone who isn't trusted; hence it requests a password and checks
it against the passwd file before doing anything.
Once again, the logic of the program has to hold up for security to be ensured, and the
program must also be proof against buffer overflow attacks. Should su have a flaw in its
authentication logic, it would enable someone to change to a UID that they were not
privileged to hold.
Setuid programs should hence be considered with the utmost suspicion. Most setuid
programs try to be small and simple, to make it easy to verify the security of their logic.
A vulnerability is more likely to be found in any setuid program that is large and
complex.
Consider your FTP client connecting to a remote untrusted site. If the server
returns a response that the FTP client cannot handle (say, a response that is too long — a
buffer overflow), it could allow the server to execute malicious code
on your machine.
Hence it is quite possible to exploit a security hole in a client program by just waiting
for that program to connect to your site.
If a program creates a temporary file in your /tmp/ directory, and it is possible to pre-
dict the name of the file it is going to create, then it may be possible to create that file in
advance or quickly modify it without the program’s knowledge. Programs that create
temporary files in a predictable fashion, or those that do not set correct permissions
(with exclusive access) to temporary files, are liable to be exploited.
(Slightly more of a concern in systems that host many untrusted user logins.)
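A minimal sketch of the safe approach, using the standard mkstemp function (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Create a temporary file safely: mkstemp() picks an unpredictable
   name and opens it with O_CREAT|O_EXCL and mode 0600, so a file or
   symlink planted in advance by an attacker makes the call fail
   instead of being silently followed. (Hypothetical helper name.) */
int open_private_tmp(char *template_path)
{
    int fd = mkstemp(template_path);   /* template must end in XXXXXX */
    if (fd == -1)
        perror("mkstemp");
    return fd;
}

/* Usage:
 *   char path[] = "/tmp/myprogXXXXXX";
 *   int fd = open_private_tmp(path);
 *   ... write to fd, then close(fd) and unlink(path) ...
 */
```

Contrast this with the exploitable pattern of an fopen("/tmp/prog.1234", "w") on a name any local user can predict and pre-create.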
It is easy to see that a directory with permissions 660 and ownerships root:admin
cannot be accessed by user jsmith if he is outside of the admin group. Not so easy
to see is when you have 1000 directories and hundreds of users and groups. Who
can access what, when and why becomes complicated, and often requires scripts to be
written to do permission tests and sets. Even a badly set /dev/tty* device can cause
a user’s terminal connection to become vulnerable.
(Slightly more of a concern in systems that host many untrusted user logins.)
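As a sketch of what such a permission-test script must compute, here is the owner/group/other read check in C (the function name is hypothetical; a real audit must also test every parent directory in the path, and a user's supplementary groups):

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Decide whether a given uid/gid pair may read a file, from its
   stat info -- the same owner/group/other test the kernel applies. */
bool may_read(const struct stat *st, uid_t uid, gid_t gid)
{
    if (uid == 0)
        return true;                    /* root reads anything */
    if (uid == st->st_uid)
        return st->st_mode & S_IRUSR;   /* owner bits apply first */
    if (gid == st->st_gid)
        return st->st_mode & S_IRGRP;   /* then group bits */
    return st->st_mode & S_IROTH;       /* then everyone else */
}
```

Note that the checks short-circuit: if you are the owner, the group and other bits are never consulted, which is exactly why oddly set ownerships produce surprising access results at scale.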
There are lots of ways of creating and reading environment variables to either exploit
a vulnerability, or attain some information which will compromise security. Environ-
ment variables should never hold secret information like passwords.
On the other hand, when handling environment variables, programs should consider
the data they contain to be potentially malicious, and do proper bounds checking
and verification of their contents.
(More of a concern in systems that host many untrusted user logins.)
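A sketch of such defensive handling (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Fetch an environment variable while treating it as hostile:
   absent, empty and overlong values are all rejected instead of
   being handed straight to strcpy. (Hypothetical helper.) */
int get_env_bounded(const char *name, char *dst, size_t dstsz)
{
    const char *val = getenv(name);
    if (val == NULL || val[0] == '\0')
        return -1;              /* absent or empty: caller supplies a default */
    if (strlen(val) >= dstsz)
        return -1;              /* too long: refuse rather than overflow */
    strcpy(dst, val);           /* safe: length verified above */
    return 0;
}
```

A setuid program that copies, say, TERM or HOME into a fixed buffer without this kind of check is a classic buffer overflow target, since the attacker fully controls the variable's length and contents.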
When using telnet, ftp, rlogin or in fact any program at all that authenticates
over the network without encryption, the password is transmitted over the network
in plaintext, i.e. human readable form. These are all the common network utilities that
old U NIX hands were used to using. The sad fact is that what is being transmitted can
easily be read off the wire using the most elementary tools (see tcpdump on page 253).
None of these services should be exposed to the Internet. Use within a local LAN is
safe, provided the LAN is firewalled in, and your local users are trusted.
A denial of service attack is one which does not compromise the system, but prevents
other users from using a service legitimately. This can involve repetitively loading
a service to the point where no one else can use it. In each particular case, logs or
TCP traffic dumps will reveal the point of origin. You can then deny access with a
firewall rule. DoS attacks are becoming a serious concern and are very difficult to
protect against.
44.2 Other types of attack
The above is far from an exhaustive list. It never ceases to amaze me how new loopholes
are discovered in program logic. Not all of these exploits can be classified; indeed,
it is precisely because new and innovative ways of hacking systems are always being
found that security needs constant attention.
Security first involves removing known risks, then removing potential risks, then (pos-
sibly) making life difficult for a hacker, then using custom U NIX security paradigms,
and finally being proactively cunning in thwarting hack attempts.
It is especially sad to see naive administrators install packages that are well known to be
vulnerable, and for which “script-kiddy” exploits are readily available on the Internet.
If a security hole is discovered, the package will usually be updated by the distri-
bution vendor or the author. There is a bugtraq <http://www.securityfocus.
com/forums/bugtraq/intro.html> mailing list which announces the latest ex-
ploits, and has many thousands of subscribers worldwide. You should get on this
mailing list to be aware of new discoveries. The Linux Weekly News <http://lwn.
net/> is a possible source of security announcements if you only want to read
once a week. You can then download and install the binary or source distribution
provided for that package. Watching security announcements is critical. &I often ask “administrators”
if they have upgraded their xxx service, and get the response that they are not sure whether
they need it, do not believe it is vulnerable, do not know if it is running, where to get a current package, or
even how to perform the upgrade; as if their ignorance absolves them of their responsibility. If the janitor
were to duct-tape your safe keys to a window pane, would you fire him?-
This goes equally for new systems that you install: never install outdated pack-
ages. RedHat and some others ship updates to their older distributions. This means
that you can install from an old distribution, and then possibly update all the packages
from an “update” package list. This means, from a security point of view, that packages
are as secure as the most current distribution. For instance, at the moment I am able
to install RedHat 6.2 from a six month old CD, then download a list of 6.2 “update”
packages. However, RedHat also ships a version 7.0 with a completely different set of
current packages incompatible with 6.2. On the other hand some other vendors may
“no-longer-support” an older distribution, meaning that those packages will never be
updated. In this case you should be sure to install or upgrade with the vendor’s most
current distribution, or manually compile vulnerable packages by yourself.
Over and above this, remember that vendors are sometimes slow to respond to
security alerts. Hence trust the Free software community’s alerts over anything they
may fail to tell you.
Alternatively, if you discover that a service is insecure, you may just want to
disable it (or better still uninstall it) if it's not really needed.
Packages that are modified by a hacker can allow him a back door into your system:
so-called trojans. Use the package verification commands discussed in Section 24.2 to
check package integrity.
It is easy to locate world-writable files. There should be only a few, in the /dev and /tmp
directories:
¨ ¥
find / -perm -2 ! -type l -ls
§ ¦
Services that are inherently insecure are those that allow the password to be sniffed
over the Internet, or provide no proper authentication to begin with. Any service that
does not encrypt traffic should not be used for authentication over the Internet. These
are ftp, telnet, rlogin, uucp, imap, pop3 and others. They all require a password.
Instead, you should use ssh and scp. There are secure versions of pop and imap
(spop3 and simap), but you may not be able to find client programs. If you really
have to use such a service, you should limit the networks that are allowed to connect to it, as
described on pages 281 and 285.
Old U NIX hands are notorious for exporting NFS shares (/etc/exports) that are
readable (and writable) from the Internet. The group of functions to do Sun Microsys-
tems’ port mapping and NFS — the nfs-utils and portmap packages — don’t give
me a warm fuzzy feeling. Don’t use these on machines exposed to the Internet.
Install libsafe. This is a library that wraps all the vulnerable C functions discussed
above, testing for a buffer overflow attempt with each call. It is trivial
to install and emails the administrator on hack attempts. Go to http://www.bell-labs.com/org/11356/libsafe.html
for more information, or email [email protected]. It
effectively solves 90% of the buffer overflow problem. There is, however, a very slight
performance penalty.
Disable all services that you are not using. Then try to evaluate whether the
remaining services are really needed. For instance, do you really need IMAP, or would
POP3 suffice? IMAP has had far more security alerts than POP3, on account of its being
a much more complex service. Is the risk worth it?
xinetd (or inetd) runs numerous services of which only a few are needed.
You should trim your /etc/xinetd.d directory (or /etc/inetd.conf file) to a
minimum. For xinetd, you can add the line disable = yes to the relevant file.
There should only be one or two files enabled. Alternatively, your /etc/inetd.conf
should have only a few lines in it. A real-life example is:
¨ ¥
ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
imap stream tcp nowait root /usr/sbin/tcpd imapd
§ ¦
This advice should be taken quite literally. The rule of thumb is that if you don't know
what a service does, you should disable it. Take the ntalk service: I myself have no
idea what it's for, so I'll be damned before I trust my security to it — simply delete that
line. See also Section 29.5.
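Under xinetd, disabling a service is a one-line edit to its file; a sketch, using the myechod demo service from earlier (the remaining fields are as configured above):

```
service myechod
{
        disable         = yes
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/local/sbin/myechod
}
```

After editing, restart or reload xinetd for the change to take effect.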
In the above real-life case, the services were additionally limited to permit only
certain networks to connect (see pages 281 and 285).
xinetd (or inetd) is not the only problem. There are many other services. Entering
netstat -nlp gives initial output like,
¨ ¥
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 2043/exim
tcp 0 0 0.0.0.0:400 0.0.0.0:* LISTEN 32582/xinetd
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 32582/xinetd
tcp 0 0 172.23.80.52:53 0.0.0.0:* LISTEN 30604/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 30604/named
tcp 0 0 0.0.0.0:6000 0.0.0.0:* LISTEN 583/X
tcp 0 0 0.0.0.0:515 0.0.0.0:* LISTEN 446/
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 424/sshd
udp 0 0 0.0.0.0:1045 0.0.0.0:* 30604/named
but doesn’t show that PID 446 is actually lpd. For that just type
ls -al /proc/446/.
You can see that there are actually ten services open: 1, 6, 21, 22, 25, 53, 400,
515, 1045 and 6000. 1 and 6 are kernel ports, while 21 and 400 are FTP and our echo
daemon respectively. Such a large number of open ports provides ample opportunity
for attack.
At this point you should go through each of these services and decide (1) whether you
really need them; then (2) make sure you have the latest version; and finally (3) consult the
package's documentation so that you can limit the networks that are allowed to connect
to those services.
It is interesting that people are wont to make assumptions about packages along the lines
of: “this service is so popular it can't possibly be vulnerable”. The exact opposite is in
fact the truth: the more obscure and esoteric a service is, the less likely that someone
has taken the trouble to find a vulnerability. In the case of named, a number of
serious vulnerabilities were made public regarding every Bind release prior to 9.
Hence upgrading to the latest version (9.1 at the time of writing) from source was
prudent for all the machines I administered (a most time-consuming process).
There is nothing wrong with taking a decision that ordinary users are not allowed to
use even the ping command.
There is much that you can do that is not “security” per se, but will make life consid-
erably more difficult for a hacker, and certainly impossible for a stock standard attack,
even if your system is vulnerable. A hack attempt often relies on a system being configured
a certain way. Making your system different from the standard can go a long
way.
Read-only partitions: It is allowable to mount your /usr partition (and critical top-
level directories like /bin) read-only since these are, by definition, static data. Of
course anyone with root access can remount it as writable, but a generic attack
script may not know this. Some SCSI disks can be configured as read-only via
DIP switches (or so I hear). The /usr partition can be made from an ISO 9660 partition
(CDROM file-system), which is read-only by design. You can also mount
your CDROM as a /usr partition: access will be slow, but completely unmodifi-
able. Then finally you can manually modify your kernel code to fail write-mount
attempts on /usr.
Read-only attributes: L INUX has additional file attributes to make a file unmodi-
fiable over and above the usual permissions. These are controlled by the com-
mands chattr and lsattr. They can be used to make a log file append-only
with chattr +a /var/log/messages, or to make files immutable with
chattr +i /bin/login — both a good idea. The command
¨ ¥
chattr -R +i /bin /boot /lib /sbin /usr
§ ¦
is a better idea still. Of course, anyone with superuser privileges can switch them
back.
Periodic system monitoring: It is useful to write your own crond scripts to check if
files have changed. They can check for new setuid programs, permissions, or
changes to binary files; or reset permissions to what you think is secure. Just
remember that cron programs can be modified by anyone who hacks into the
system. A simple command
¨ ¥
find / -mtime -2 -o -ctime -2
§ ¦
will search for all files that have been modified in the last two days.
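For example, a hypothetical /etc/crontab entry (the baseline path is a placeholder) that mails root whenever the set of setuid files differs from a saved known-good list:

```
# nightly setuid audit -- /var/lib/suid.base holds a known-good list
30 2 * * * root find / -perm -4000 -type f 2>/dev/null | diff /var/lib/suid.base - | mail -s "setuid changes" root
```

Remember the caveat above: a hacker with root can edit this entry too, so it protects against careless changes and unsophisticated attacks more than determined ones.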
Non-standard packages: If you notice many security alerts for a package, switch to a
different one. There are alternatives to bind, wu-ftpd, sendmail (as covered in
Chapter 30) and almost every service you can think of. You can also try installing
an uncommon or security-specialised distribution. Switching entirely to FreeBSD
is also one way of reducing your risk considerably &This is not a joke.-.
Minimal kernels: It's easy to compile your kernel without module support, with an
absolutely minimal set of features. Loading of trojan modules has been a source
of insecurity in the past. Doing this will make you safer.
Non-Intel architecture: Hackers need to learn assembly language to exploit many vul-
nerabilities. The most common assembly language is Intel. Using a non-Intel
platform adds that extra bit of obscurity.
OpenWall project: This has a kernel patch that makes the stack of a process non-
executable (which will thwart most kinds of buffer overflow attempts) and does
some other cute things with the /tmp directory, and process IO.
Hackers have limited resources. Take one-upmanship away, and security is about the
cost of hacking a system versus the reward of success. If you feel the machine you
administer is bordering on this category, you need to start billing far more for your
hours and doing things like what follows. It is possible to go to lengths that will make
a Linux system secure against a large government's defence budget.
Capabilities: This is a system of security that gives limited kinds of superuser access
to programs that would normally need to be full-blown setuid root executables.
Think: most processes that run with root (setuid) privileges do so because of the
need to access only a single privileged function. For instance, the ping program
does not need complete superuser privileges (do a ls -l /bin/ping and note
the setuid bit). Capabilities are a fine grained set of privileges that say that a
process is able to do particular things that an ordinary user can’t, without ever
having full root access. In the case of ping, this would be certain networking
capabilities that only root is normally allowed to use.
DTE: Domain and Type Enforcement is a system whereby, when a program gets
executed, it is categorised and only allowed to do certain things, even if it is
running as root; any further programs that it executes are only allowed
to do certain other things. This is real security, and there are kernel patches to do
this. The NSA &National Security Agency- (in all their commercial glory) actually
have a L INUX distribution built around DTE.
medusa: Medusa is a security system that causes the kernel to query a user daemon
before letting any process on the system do anything. It is the most flexible
security system of all, because it is entirely configurable — you can make the user
daemon restrict anything however you like.
VXE: Virtual eXecuting Environment says that a program executes in its own protected
space, and executes a Lisp program to check if a system call is allowed. This is
effectively a lot like medusa.
MAC: Mandatory Access Controls. This is also about virtual environments for pro-
cesses. MAC is a POSIX standard.
RSBAC and RBAC: Rule Set Based Access Controls and Role Based Access Controls. These
look like a combination of some of the above (??).
LIDS: The Linux Intrusion Detection System implements some meagre preventative measures to
restrict module loading, file modifications and process information.
Kernel patches exist to do all of the above. Many of these projects are well out
of the test phase, but are not in the mainstream kernel possibly because developers are
not sure of the most enduring approach to U NIX security. They all have one thing in
common: double checking what a privileged process does, which can only be a good
thing.
Proactive cunningness
Proactive cunningness means attack monitoring and reaction, and intrusion monitor-
ing and reaction. Utilities that do this come under a general class called network in-
trusion detection software. The idea that one might detect and react to a hacker has an
emotional appeal, but it automatically implies that your system is insecure to begin
with — which is probably true, considering the rate at which new vulnerabilities are
being reported. I am wary of so-called intrusion detection systems that administrators
implement before even the most elementary of security measures. Really, one
must implement all of the above security measures combined before thinking about
intrusion monitoring.
To explain the most basic form of monitoring, consider this: In order to hack a
system, one usually needs to test for open services. To do this, one tries to connect to
every port on the system to see which are open. This is known as a port scan. There
are simple tools to detect a port scan, which will then start a firewall rule that will
deny further access from the offending host (although this can work against you if
the hacker has spoofed your own IP address — is this possible?). More importantly,
they will report the IP address from which the attack arose. A reverse lookup will
give the domain name, and then a whois query on the appropriate authoritative DNS
registration site will reveal the physical address and telephone number of the domain
owner.
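The core idea of a port-scan detector can be sketched in a few lines of C (a toy for a single source host; real tools also keep per-host tables, a time window, and then insert the firewall rule):

```c
#include <assert.h>

/* Toy port-scan detector for one source host: flag the host once
   it has probed more than THRESHOLD distinct ports. Only the core
   idea -- real detectors track many hosts and expire old entries. */
#define THRESHOLD 10
#define NPORTS 65536

static unsigned char seen[NPORTS];  /* ports already touched */
static int distinct;                /* how many distinct ports so far */

/* Record one connection attempt; returns 1 once it looks like a scan. */
int saw_port(int port)
{
    if (port < 0 || port >= NPORTS)
        return 0;
    if (!seen[port]) {
        seen[port] = 1;
        distinct++;
    }
    return distinct > THRESHOLD;
}
```

Repeated connections to the same port (a busy web server, say) never trip the alarm; only touching many distinct ports does, which is what distinguishes a scan from legitimate traffic.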
Port scan monitoring is the most elementary form of monitoring and reaction.
From there up you can find innumerable bizarre tools to try and read into all sorts of
network and process activity. I leave this to your own research, although you might
want to start with the Snort traffic scanner <http://www.snort.org/>, the Trip-
wire intrusion detection system <http://www.tripwiresecurity.com/> and IDSA
<http://jade.cs.uct.ac.za>.
Such monitoring also serves as a deterrent to hackers. A network should
be able to find the origin of an attack and thereby trace the attacker. The threat of
discovery makes hacking a far less attractive pastime, and you should look into what
legal recourse you may have against people who try to compromise your system.
44.4 Important reading
Above is a practical guide. It gets much more interesting than this. A place to start
is the comp.os.linux.security FAQ. This FAQ gives the most important Unix security
references available on the net. You can download it from http://www.memeticcandiru.com/colsfaq.html,
http://www.linuxsecurity.com/docs/colsfaq.html
or http://www.geocities.com/swan daniel/colsfaq.html. The Linux Security <http://www.
linuxsecurity.com/> web page also has a security quick reference card that sum-
marises most everything you need to know in two pages.
How many security reports have you read? How many packages have you upgraded because
of vulnerabilities? How many services have you disabled because you were unsure of their
security? How many access limit rules do you have in your hosts.*/xinetd services?
If your answer to any of these questions is less than 5, you are not being conscientious
about security.
44.6 Soap
The National Security Agency (NSA) of the USA recently did the unthinkable. They
came down from their ivory tower and visited the IT marketplace, flaunting their own
secure version of Linux. Actually, when one considers that most services clearly
only access a limited part of the system, creating a restricted “virtual” environment for
every network service is so obvious and logical that it amazes me why such a system
is not a standard feature. The very idea of leaving security to daemon authors will
hopefully soon become a thing of the past; at which point, we can hopefully see a
Appendix A
Lecture Schedule
The following describes a 36-hour lecture schedule in 12 lessons, two per week, of 3
hours each. The lectures are interactive, following the text very closely, but sometimes
giving straightforward chapters as homework.
The course requires that students have a Linux system to use for their
homework assignments. Most people were willing to repartition their home machines,
buy a new hard drive, or use one of their employer's machines.
The classroom itself should have 4 to 10 places. It is imperative that each student
have their own machine, since the course is highly interactive. The lecturer need not
have a machine. I myself prefer to write everything on a “white-board”. The machines
should be networked with Ethernet, and configured so that machines can telnet to
each other's IPs. A full Linux installation is preferred — everything covered by the
lectures must be installed. This would include all services, several desktops, as well as
C and kernel development packages.
Linux CDs should also be available for those who need to set up their home
computers.
Most notably, each student should have his own copy of this text.
A.2 Student selection
This lecture layout is designed for seasoned administrators of DOS or Windows systems,
or at least people with some kind of programming background, or at the very least
those experienced in assembling hardware and installing operating systems. At the
other end of the scale, “end users” with knowledge of neither command-line interfaces,
programming, hardware assembly, nor networking, would require a far less intensive
lecture schedule, and would certainly not cope with the abstraction of a shell interface.
Of course anyone who has a high intelligence can cover this material quite
quickly, regardless of their IT experience, and it is smoothest where the class is of the
same level. The most controversial method would be to simply place a tape measure
around the cranium (since the latest data puts the correlation between IQ and brain
size at about 0.4).
A less intensive lecture schedule would probably cover about half of the material,
with more personalised tuition, and having more in-class assignments.
Lessons are three hours each. In my own course, these were in the evenings from 6
to 9, with two 10-minute breaks on the hour. It is important that there are a few
days between each lecture for students to internalise the concepts and practice them
by themselves.
The course is completely interactive, following a “type this now class...” genre.
The text is riddled with examples, so these should be followed in sequence. In some
cases, repetitive examples are skipped. Examples are written on the white-board, per-
haps with slight changes for variety. Long examples are not written out: “now class,
type in the example on page...”.
Occasional diversions from the lecturer's own experiences are always fun when
the class gets weary.
The lecturer will also be aware that students get stuck occasionally. I myself check
their screens from time to time, typing in the odd command for them to speed the class
along.
Lesson 1
A background to U NIX and L INUX history is explained, crediting the various respon-
sible persons and organisations. The various copyrights are explained, with emphasis
on the GPL.
Chapter 4 will then occupy the remainder of the first three hours.
Lesson 2
Chapter 5 (regular expressions) will occupy the first hour, then Chapter 7 (shell script-
ing) the remaining time. Lecturers should doubly emphasise to the class the impor-
tance of properly understanding regular expressions, as well as their wide use in U NIX.
Lesson 3
First hour covers Chapter 8. Second hour covers Chapters 9 and 10. Third hour covers
Chapter 11.
Lesson 4
First two hours cover Chapters 12, 13, 14 and 15. Third hour covers Chapters 16 and 17.
Lesson 5
First hour covers Chapter 22, second hour covers Chapter 24. For the third hour,
students are given Chapter 25 through Chapter 26 to read, asking questions on any unclear
points.
Lesson 6
Lectured coverage of Chapter 25 through Chapter 26. Also demonstrated was an attempt
to sniff the password of a telnet session using tcpdump, then the same attempt
with ssh.
Lesson 7
Chapters 27 through 29 covered in first and second hours. A DNS server should be up
for students to use. Last hour explains how Internet mail works, in theory only, as well
as the structure of the exim configuration file.
Lesson 8
First and second hour covers Chapter 30. Students to configure their own mail server.
A DNS server should be present to test MX records for their domain. Last hour covers
Chapters 31 and 32, excluding anything about modems.
Lesson 9
First hour covers Chapter 37. Second and third hours cover Chapter 40. Students to
configure their own name servers with forward and reverse lookups. Note that Samba
was not covered, on account of there being no Windows machines and printers to
properly demonstrate it. An alternative would be to setup printing and file-sharing
using smbmount etc.
Homework: Chapter 41 — students to configure dialup networking
for themselves. Read through Chapter 42 in preparation for the next lesson.
Lesson 10
First and second hours cover Chapter 42. Students to at least configure their own
network cards, since most other hardware devices will not be present on the system. Kernel
build performed. Third hour covers the X Window System in theory, and use of the
DISPLAY environment variable to display applications on each other's X servers.
Homework: Studying the NFS HOWTO.
Lesson 11
First hour covers configuring of NFS, noting the need for a nameserver with forward
and reverse lookups. Second and third hours cover Chapter 38.
Homework: Download and read the Python tutorial.
Lesson 12
First and second hours cover an introduction to the Python programming language.
Last hour comprised the course evaluation. The final lesson could possibly hold an
examination instead; for this particular course, however, no certification was offered.
Appendix B
LPI Certification Cross-reference
These requirements are quoted verbatim from the LPI web page <http://www.lpi.org/>.
Each objective is assigned a weighting value. The weights range roughly from 1 to 8,
and indicate the relative importance of each objective. Objectives with higher weights
will be covered by more exam questions.
This is a required exam for certification level I. It covers fundamental system adminis-
tration activities that are common across all flavors of Linux.
Verify the integrity of filesystems, monitor free space and inodes, fix simple filesystem problems.
Includes commands fsck, du, df.
Mount and unmount filesystems manually, configure filesystem mounting on bootup, configure
user-mountable removable file systems. Includes managing file /etc/fstab.
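A representative /etc/fstab sketch; the `user` option is what makes a removable filesystem user-mountable (device names are examples):

```
# device      mount point   type     options        dump pass
/dev/hda1     /             ext2     defaults       1 1
/dev/hda2     swap          swap     defaults       0 0
/dev/cdrom    /mnt/cdrom    iso9660  noauto,ro,user 0 0
```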
Setup disk quota for a filesystem, edit user quota, check user quota, generate reports of user
quota. Includes quota, edquota, repquota, quotaon commands.
Set permissions on files, directories, and special files, use special permission modes such as suid
and sticky bit, use the group field to grant file access to workgroups, change default file creation
mode. Includes chmod and umask commands. Requires understanding symbolic and numeric
permissions.
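A short session illustrating numeric permissions, umask arithmetic, and the setgid bit, run in a scratch directory:

```shell
cd "$(mktemp -d)"          # work in a throw-away directory
umask 022                  # new files get 666-022=644, new directories 777-022=755
touch notes
stat -c %a notes           # prints: 644
mkdir shared
chmod 2775 shared          # leading 2 = setgid bit: files created inside inherit the group
stat -c %a shared          # prints: 2775
```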
Change the owner or group for a file, control what group is assigned to new files created in a
directory. Includes chown and chgrp commands.
Create hard and symbolic links, identify the hard links to a file, copy files by following or not
following symbolic links, use hard and symbolic links for efficient system administration.
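Hard versus symbolic links in a scratch directory; the link count reported by `stat -c %h` counts the names referring to an inode:

```shell
cd "$(mktemp -d)"
echo data > original
ln original hardcopy       # hard link: a second name for the same inode
ln -s original softcopy    # symbolic link: a small file pointing at the name
stat -c %h original        # prints: 2   (two names now reference the inode)
find . -samefile original  # lists all hard links to the file
cp softcopy copy.txt       # cp follows the symlink: copy.txt holds the data
```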
Obj 8: Find system files and place files in the correct location
Weight of objective: 2
Understand the filesystem hierarchy standard, know standard file locations, know the purpose
of various system directories, find commands and files. Involves using the commands: find,
locate, which, updatedb. Involves editing the file: /etc/updatedb.conf
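A quick sketch of the file-finding commands, using a scratch directory so the find result is predictable:

```shell
cd "$(mktemp -d)"
mkdir -p etc
touch etc/hosts etc/hosts.allow
find . -name 'hosts*'      # walk the tree, matching names against a glob
which sh                   # search $PATH for the executable `sh`
# locate hosts             # would query the database built nightly by updatedb
```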
Guide the system through the booting process, including giving options to the kernel at
boot time, and check the events in the log files. Involves using the commands: dmesg
(lilo). Involves reviewing the files: /var/log/messages, /etc/lilo.conf, /etc/conf.modules —
/etc/modules.conf
Securely change the runlevel of the system, specifically to single user mode, halt (shutdown) or
reboot. Make sure to alert users beforehand, and properly terminate processes. Involves using
the commands: shutdown, init
Use and administer the man facility and the material in /usr/doc/. Includes finding relevant
man pages, searching man page sections, finding commands and manpages related to one, con-
figuring access to man sources and the man system, using system documentation stored in
/usr/doc/ and related places, determining what documentation to keep in /usr/doc/.
Find and use Linux documentation at sources such as the Linux Documentation Project, vendor
and third-party websites, newsgroups, newsgroup archives, mailing lists.
Write documentation and maintain logs for local conventions, procedures, configuration and
configuration changes, file locations, applications, and shell scripts.
Provide technical assistance to users via telephone, email, and personal contact.
Obj 1: Manage users and group accounts and related system files
Weight of objective: 7
Add, remove, suspend user accounts, add and remove groups, change user/group info in
passwd/group databases, create special purpose and limited accounts. Includes commands
useradd, userdel, groupadd, gpasswd, passwd, and file passwd, group, shadow, and gshadow.
Modify global and user profiles to set environment variables, maintain skel directories for new
user accounts, place proper commands in path. Involves editing /etc/profile and /etc/skel/.
Obj 3: Configure and use system log files to meet administrative and security needs
Weight of objective: 3
Configure the type and level of information logged, manually scan log files for notable activ-
ity, arrange for automatic rotation and archiving of logs, track down problems noted in logs.
Involves editing /etc/syslog.conf
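Illustrative /etc/syslog.conf lines (the two fields must be separated by tabs; `loghost` is a hypothetical remote log server):

```
# facility.priority        action
*.info;mail.none           /var/log/messages
mail.*                     /var/log/maillog
kern.crit                  @loghost
authpriv.*                 /var/log/secure
```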
Obj 4: Automate system administration tasks by scheduling jobs to run in the future
Weight of objective: 4
Use cron to run jobs at regular intervals, use at to run jobs at a specific time, manage cron and at
jobs, configure user access to cron and at services
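A sample user crontab, as edited with `crontab -e`; the five leading fields are minute, hour, day-of-month, month and day-of-week, and the command paths are hypothetical:

```
# min hour day month weekday  command
0     2    *   *     *        /usr/local/bin/backup-home
30    6    *   *     1        /usr/bin/find /tmp -atime +7
```

A one-off job would instead use at, e.g. `echo /usr/local/bin/backup-home | at 23:00`.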
Plan a backup strategy, backup filesystems automatically to various media, perform partial and
manual backups, verify the integrity of backup files, partially or fully restore backups.
Involves using the commands and programs: dpkg, dselect, apt, apt-get, alien . Involves review-
ing or editing the files and directories: /var/lib/dpkg/* .
Learn which functionality is available through loadable kernel modules, and manually load and
unload the modules as appropriate. Involves using the commands: lsmod, insmod, rmmod,
modinfo, modprobe. Involves reviewing the files: /etc/modules.conf — /etc/conf.modules (*
depends on distribution *), /lib/modules/{kernel-version}/modules.dep .
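The aliasing and optioning these files provide looks like this classic sketch for an ISA NE2000 card (I/O address is an example):

```
# /etc/modules.conf (or /etc/conf.modules on older distributions)
alias eth0 ne            # `modprobe eth0` now loads the ne driver
options ne io=0x300      # parameter passed when the module loads
```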
Obtain and install approved kernel sources and headers (from a repository at your site, CD, ker-
nel.org, or your vendor); Customize the kernel configuration (i.e., reconfigure the kernel from
the existing .config file when needed, using oldconfig, menuconfig or xconfig); Make a new
Linux kernel and modules; Install the new kernel and modules at the proper place; Reconfigure
and run lilo. N.B.: This does not require upgrading the kernel to a new version (neither
full source nor patch). Requires the commands: make (dep, clean, menuconfig, bzImage,
modules, modules_install), depmod, lilo. Requires reviewing or editing the files:
/usr/src/linux/.config, /usr/src/linux/Makefile, /lib/modules/{kernel-version}/modules.dep,
/etc/conf.modules or /etc/modules.conf, /etc/lilo.conf.
Edit text files using vi. Includes vi navigation, basic modes, inserting, editing and deleting text,
finding text, and copying text.
Monitor and manage print queues and user print jobs, troubleshoot general printing problems.
Includes the commands: lpc, lpq, lprm and lpr . Includes reviewing the file: /etc/printcap .
Submit jobs to print queues, convert text files to postscript for printing. Includes lpr command.
Install a printer daemon, install and configure a print filter (e.g.: apsfilter, magicfilter). Make
local and remote printers accessible for a Linux system, including postscript, non-postscript,
and Samba printers. Involves the daemon: lpd . Involves editing or reviewing the files and
directories: /etc/printcap , /etc/apsfilterrc , /usr/lib/apsfilter/filter/*/ , /etc/magicfilter/*/ ,
/var/spool/lpd/*/
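The printcap entries these objectives refer to look like the following minimal sketch (queue name, device and filter path are illustrative):

```
# /etc/printcap -- one entry per print queue
# sd = spool directory, lp = device, if = input filter, sh = suppress header pages
lp|ljet:\
        :sd=/var/spool/lpd/lp:\
        :lp=/dev/lp0:\
        :if=/var/spool/lpd/lp/filter:\
        :sh:
```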
Topic 2.10: X
Configure menus for the window manager, select and configure the desired x-terminal (xterm, rxvt,
aterm etc.), verify and resolve library dependency issues for X applications, export an X-display
to a client workstation. Files: .xinitrc, .Xdefaults, various .rc files.
Demonstrate an understanding of network masks and what they mean (i.e. determine a network
address for a host based on its subnet mask), understand basic TCP/IP protocols (TCP, UDP,
ICMP) and also PPP, demonstrate an understanding of the purpose and use of the more common
ports found in /etc/services (20, 21, 23, 25, 53, 80, 110, 119, 139, 143, 161), demonstrate a correct
understanding of the function and application of a default route. Execute basic TCP/IP tasks:
FTP, anonymous FTP, telnet, host, ping, dig, traceroute, whois.
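The network-address calculation asked for here is a bitwise AND of address and mask, which shell arithmetic can demonstrate (addresses chosen arbitrarily):

```shell
ip=192.168.3.130  mask=255.255.255.192
oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4    # split the dotted quads on "."
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS
net="$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
echo "$net"        # prints: 192.168.3.128
```

A mask of 255.255.255.192 leaves 6 host bits, so this host sits in the 62-address subnet starting at 192.168.3.128.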
Obj 2: (superseded)
Obj 3: TCP/IP Troubleshooting and Configuration
Weight of objective: 10
Define the chat sequence to connect (given a login example), setup commands to be run au-
tomatically when a PPP connection is made, initiate or terminate a PPP connection, initiate or
terminate an ISDN connection, set PPP to automatically reconnect if disconnected.
Configure which services are available through inetd, use tcpwrappers to allow or deny ser-
vices on a host-by-host basis, manually start, stop, and restart internet services, configure ba-
sic network services including telnet and ftp. Includes managing inetd.conf, hosts.allow, and
hosts.deny.
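Host-by-host control with tcpwrappers lives in two files; this pair allows telnet only from one network, ftp only from local hosts, and denies everything else (the network number is an example):

```
# /etc/hosts.allow -- consulted first; first match wins
in.telnetd: 192.168.1.0/255.255.255.0
in.ftpd:    LOCAL

# /etc/hosts.deny -- anything not matched above is refused
ALL: ALL
```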
use setgid on dirs to keep group ownership consistent, change a user’s password, set expiration
dates on user’s passwords, obtain, install and configure ssh
Appendix C
RHCE Certification
Cross-reference
These courses are beneath the scope of this book. They cover Linux from a user and
desktop perspective. Although they include administrative tasks, they keep away from
technicalities, often preferring graphical configuration programs for administrative
tasks. One of the objectives of one of these courses is configuring Gnome panel applets;
another is learning the pico text editor.
RH300
This certification seems to be for administrators of non-Linux systems who want to
extend their knowledge. The requirements below lean toward understanding available
Linux alternatives and features, rather than expecting the user to actually configure
anything complicated.
- Concepts of the /etc/passwd and /etc/group files, and /etc/skel and its contents.
- Editing bashrc, .bashrc, /etc/profile, /etc/profile.d
- General use of linuxconf.
- Using cron and anacron; editing /etc/crontab and
/var/spool/cron/<username>. The tmpwatch, logrotate and locate cron
jobs.
- syslogd, klogd, /etc/syslog.conf, swatch, logcheck.
- rpm concepts and usage. Checksums, file listing, forcing, dependencies, querying,
verifying, query tags, provides and requires. FTP and HTTP installs,
rpmfind, gnorpm and kpackage.
- Building .src.rpm files. Customising and rebuilding packages.
- /usr/sbin/up2date
- Documentation sources.
Unit 4: Kernel
- /proc file-system concepts and purpose of various subdirectories. Tuning param-
eters with /etc/sysctl.conf
- Disk quotas. quota, quotaon, quotaoff, edquota, repquota, quotawarn,
quotastats.
- System startup scripts and initialisation sequences. inittab, switching runlevels.
Conceptual understanding of various /etc/rc.d/ files. SysV scripts,
chkconfig, ntsysv, tksysv, ksysv.
- Configuring software RAID. Using raidtools to activate and test RAID devices.
- Managing modules. modprobe, depmod, lsmod, insmod, rmmod commands.
kernelcfg. Editing /etc/conf.modules, aliasing and optioning modules.
- Concepts of kernel source, .rpm versions, kernel versioning system. Configuring,
compiling and installing kernels.
Unit 7: Security
- Using tcp wrappers. User and host based access restrictions. PAM access. Port
restriction with ipchains.
- PAM concepts. Editing /etc/pam.d, /etc/security config files. PAM docu-
mentation.
RH220 is the networking module. It covers services sparsely, possibly intending that
the student learn only the bare bones of what is necessary to configure a service. It
covers the esoteric pppd login facility, only ever used by ISPs.
Unit 1: DNS
A treatment of bind, analogous to Topic 1.13, Obj 5 of LPI (Page 523). Expects an
exhaustive understanding of the Domain Name System; an understanding of SOA, NS,
A, CNAME, PTR, MX and HINFO records; the ability to create master domain servers from
scratch and caching-only servers; and to configure round-robin load sharing.
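The record types named above appear in a zone file along these lines (domain and addresses are invented for illustration; the two A records for one name give the round-robin load sharing mentioned):

```
$TTL 86400
@       IN  SOA   ns1.example.com. hostmaster.example.com. (
                  2001050100 ; serial
                  10800      ; refresh
                  3600       ; retry
                  604800     ; expire
                  86400 )    ; minimum TTL
        IN  NS    ns1.example.com.
        IN  MX    10 mail.example.com.
ns1     IN  A     192.168.1.1
mail    IN  A     192.168.1.2
www     IN  CNAME ns1
ftp     IN  A     192.168.1.3
ftp     IN  A     192.168.1.4   ; round-robin between the two addresses
```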
Unit 2: Samba
Overview of SMB services and concepts. Configuring Samba for file and print sharing.
Using Samba client tools. Using linuxconf and swat. Editing /etc/smb.conf.
Types of shares. Wins support. Setting authentication method. Using client utilities.
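A bare-bones /etc/smb.conf sketch of the file and print sharing described (the workgroup name is a placeholder):

```
[global]
   workgroup = MYGROUP
   security = user              ; authenticate against Unix accounts
   wins support = yes

[homes]                         ; each user sees their own home directory
   read only = no
   browseable = no

[printers]                      ; export the printers from /etc/printcap
   printable = yes
   path = /var/spool/samba
```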
Unit 3: NIS
Conceptual understanding of NIS. Configure NIS master and slave. Use client utilities.
LDAP concepts. OpenLDAP package, slapd, ldapd, slurpd and config files.
Unit 5: Apache
Configuring virtual hosts. Adding MIME types. Manipulating directory access and di-
rectory aliasing. Allowing and restricting CGI access. Setting up user and password databases.
Understanding of important modules.
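The items above translate into httpd.conf directives like these (hostnames and paths are placeholders; syntax as in Apache 1.3):

```
NameVirtualHost 192.168.1.10

<VirtualHost 192.168.1.10>
    ServerName   www.example.com
    DocumentRoot /home/httpd/example

    # map URL prefixes onto directories
    Alias        /icons/   /home/httpd/icons/
    ScriptAlias  /cgi-bin/ /home/httpd/cgi-bin/

    # permit CGI execution only under the ScriptAlias directory
    <Directory /home/httpd/cgi-bin>
        Options ExecCGI
        AllowOverride None
    </Directory>

    # an extra MIME type mapping
    AddType application/pdf pdf
</VirtualHost>
```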
Setting up a basic pppd server. Adding dial-in user accounts. Restricting users. dhcpd
and BOOTP, config files and concepts. Configuring with netcfg, netconfig or
linuxconf. Using pump. Editing /etc/dhcpd.conf.
RH250 is the security module. It goes through basic administration from a security
perspective. Much of this would be obvious to someone with a thorough knowledge
of Unix. It's probably good that RedHat has placed so much emphasis on security.
I myself rarely use any of what is required by this module, since it mostly applies to
large systems with many logins, and there are few such systems in the field.
Unit 1: Introduction
User accounts concepts, restricting access based on groups. Editing pam config files.
/etc/nologin; editing /etc/security/ files. Console group, cug; configuring
and using clobberd and sudo. Checking logins in log files. last.
Understand encryption terms: Public/Private Key, GPG, One-way hash, MD5. xhost,
xauth. ssh concepts and features. Password cracking concepts.
Use PAM to set resource limits. Monitor process memory usage and CPU consump-
tion; top. gtop, kpm, xosview, xload, xsysinfo. last, ac, accton, lastcomm.
Monitoring logs with swatch.
ipchains and ruleset concepts. Adding, deleting, listing, flushing rules. Forwarding,
many-to-one and one-to-one masquerading. Kernel options for firewall support.
Static and dynamic routing concepts. /etc/sysconfig/static-routes. Using
linuxconf and netcfg to edit routes. tcp wrappers.
Appendix D
Linux Advocacy
Frequently-Asked-Questions
Please consult the various Internet resources listed for up-to-date information.
What is Linux?
Linux is the core of a free Unix operating system for the PC and other hardware plat-
forms. Development of this operating system started in 1984 with the GNU project
of the Free Software Foundation (FSF). The Linux core (or kernel) is named after its
author, Linus Torvalds. It began development in 1991; the first usable releases were
made in 1993. Linux is often called GNU/Linux because much of the OS comprises
the efforts of the GNU project.
Unix systems have been around since the 1960s and are a proven standard in
industry. Linux is said to be POSIX compliant, meaning that it conforms to a definite
computing standard laid down by academia and industry. This means that
Linux is largely compatible with other Unix systems (the same program can easily be
ported to run on another Unix system with few, sometimes no, modifications) and will
network seamlessly with other Unix systems.
Some commercial Unix systems are IRIX for Silicon Graphics machines; Solaris or
SunOS for Sun Microsystems' SPARC workstations; HP-UX for Hewlett-Packard's
servers; SCO for the PC; OSF/1 for the DEC Alpha machine; and AIX for the Pow-
erPC/RS6000.
Some freely available Unix systems are NetBSD, FreeBSD and OpenBSD; these also
enjoy widespread popularity.
Unix systems are multitasking and multiuser systems - meaning that multiple
concurrent users running multiple concurrent programs can connect to and use the
same machine.
What are Unix systems used for? What can Linux do?
Unix systems are the backbone of the Internet. Heavy industry, mission critical ap-
plications, and universities have always used Unix systems. High end servers and
multiuser mainframes are traditionally Unix based. Today Unix systems are used by
large ISPs through to small businesses as a matter of course. A Unix system is the
standard choice when a hardware vendor comes out with a new computer platform
because Unix is most amenable to being ported. Unix systems are used as database,
file, and Internet servers. Unix is used for visualization and graphics rendering (like
some Hollywood productions). Industry and universities use Unix systems for sci-
entific simulations, and Unix clusters for number crunching. The embedded market
(small computers without operators that exist inside appliances) has recently turned
toward Linux systems which are being produced in their millions.
The widespread use of Unix is not well advertised because of a failure on the part
of the media, and because Unix systems are unjustifiably thought to be more expensive
and complicated, and therefore not suited for mainstream audiences.
Linux itself can operate as a web, file, smb (WinNT), Novell, printer, ftp, mail,
sql, masquerading, firewall, and pop server to name but a few. It can do anything that
any other network server can do faster and more reliably.
Linux's up-and-coming graphical user interfaces are among the most functional and
aesthetically pleasing ever to have graced the computer screen. Linux has now moved
into the world of the desktop.
Linux runs on
• 386/486/Pentium Processors.
• Sun Sparc workstations, including sun4c and sun4m as well as sun4d and
Sun4u. Multiprocessor machines are supported, as is full 64-bit operation on
the UltraSparc.
• PowerPC machines.
• IA 64.
• ETRAX-100 Processor.
Other projects are in various stages of completion - eg, you may get Linux up
and running on many other hardware platforms, but it would take some time and
expertise to install, and you may not have graphics capabilities. Every month or so
one sees support announced for some new esoteric hardware platform. Watch the
Linux Weekly News lwn.net <http://lwn.net/> to catch these.
There are hundreds of web pages devoted to Linux. There are thousands of web pages
devoted to different free software packages. A net search will reveal the enormous
amount of info available.
– www.linux.org.uk <http://www.linux.org.uk/>
– www.linux.org <http://www.linux.org/>
– Linux International <http://www.li.org/>
– The Free Software Foundation <http://www.gnu.org/>, whose home page
explains their purpose and the philosophy of software that can be freely
modified and redistributed.
• Large indexes of reviewed free and proprietary Linux software include:
– www.freshmeat.net <http://www.freshmeat.net/>
• The Linux weekly news brings up to date info covering a wide range of Linux
issues:
– lwn.net <http://lwn.net/>
What are Debian, RedHat, Caldera and SuSE etc.? What are the different
Linux distributions?
Linux is really just the ‘kernel’ of the operating system. Linux in itself is just a 1
megabyte file that runs the rest of the system. Its function is to interface with hard-
ware, multitask and run real programs which do tangible things. All applications,
network server programs, and utilities that go into a full Linux machine are really just
free software programs recompiled to run on Linux - most existed even before Linux.
They are not part of Linux and can (and do) actually work on any other of the Unix
systems mentioned above.
Hence many efforts have been taken to package all of the utilities needed for a
Unix system into a single collection, usually on a single easily installable CD.
Each of these efforts combines hundreds of ‘packages’ (eg the Apache web server
is one package, the Netscape web browser is another) into a Linux ‘distribution’.
Some of the popular Linux distributions are:
There are now about 200 distributions of Linux. Some of these are single floppy
routers or rescue disks, others are modifications of popular existing distributions,
while others have a specialised purpose, like real-time work or high security.
This listing contains the top 20 contributors by number of projects contributed to:
The above is a very rough table. It does, however, serve to give an approximate
idea of the spread of contributions.
If you are a private individual with no Unix expertise available to help you when you
come into problems, and you are not interested in learning about the underlying work-
This section covers questions about the nature of free software and the concepts of
GNU.
The Linux kernel is distributed under the GNU General Public License (GPL), available
from the Free Software Foundation:
Most (95% ?) of all other software in a typical Linux distribution is also under the
GPL or the LGPL (see below).
There are many other types of free software licenses. Each of these is based on
particular commercial or moral outlooks. Their acronyms are as follows (as defined by
the Linux Software Map database) in no particular order:
• ftp://metalab.unc.edu/pub/Linux/LICENSE
What is GNU?
GNU is an acronym for GNU's Not Unix. A gnu is a large beast and is the motif of the
Free Software Foundation (FSF). GNU is a ‘recursive’ acronym.
Richard Stallman is the founder of the FSF and the creator of the GNU General
Public License. One of the purposes of the FSF is to promote and develop free alter-
natives to proprietary software. The GNU project is an effort to create a free Unix-like
operating system from scratch and was started in 1984.
GNU represents this free software licensed under the GNU General Public Li-
cense. GNU software is software designed to meet a higher set of standards than its
proprietary counterparts.
GNU has also become a movement in the computing world. When the word
GNU is mentioned, it usually evokes images of extreme left-wing geniuses who produce
free software in their spare time that is far superior to anything even large
corporations can come up with through years of dedicated development. It also means
distributed and open development, encouraging peer review and consistency. GNU
means doing things once in the best way possible, providing solutions instead of quick
fixes, and looking exhaustively at possibilities instead of going for the most brightly
coloured or expedient approach.
GNU also means a healthy disrespect for the concept of a deadline and a release
schedule.
Proprietary software is often looked down upon in the free software world for many
reasons:
• is buggy.
• cannot be fixed.
• costs far more than it is worth.
• can do anything behind your back without you knowing.
• is insecure.
• tries to be better than other proprietary software without meeting real technical
needs.
• wastes a lot of time duplicating the effort of other proprietary software.
• often does not build on existing software because of licensing issues.
GNU software, on the other hand, is open for anyone to scrutinize. Users can
(and do) freely fix and enhance software for their own needs, then allow others the
benefit of their extensions. Many developers of different expertise collaborate to find
the best way of doing things. Open industry and academic standards are adhered
to, to make software consistent and compatible. Collaborative effort between different
developers means that code is shared and effort is not replicated. Users have close and
direct contact with developers, ensuring that bugs are fixed quickly and users' needs
are met. Because source code can be viewed by anyone, developers write code more
carefully and are more inspired and more meticulous.
Possibly the most important reason for the superiority of Free software is peer
review. Sometimes this means that development takes longer, as more people quibble
over the best way of doing things. However, most of the time it results in a more reliable
product.
Another partial reason for this superiority is that GNU software is often written
by people from academic institutions who are in the centre of IT research, and are most
qualified to dictate software solutions. In other cases authors write software for their
own use out of dissatisfaction with existing proprietary software - a powerful
motivation.
can change the software or use pieces of it in new free programs; and that
you know you can do these things.
If Linux is free, how do companies have the right to make money off
selling CDs?
This is not possible. Because of the legal terms of the GPL, for Linux to be distributed
under a different copyright would require the consent of all 200+ persons that have
ever contributed to the Linux source code. These people come from such a variety of
places, that such a task is logistically infeasible. Even if it did happen, new developers
would probably rally in defiance and continue work on the kernel as it is. This free
kernel would amass more followers and would quickly become the standard, with or
without Linus.
There are many kernel developers who have sufficient knowledge to do the job of
Linus. Most probably a team of core developers would take over the task if Linus
no longer worked on the kernel. Linux might even split into different development
teams if a disagreement did break out about some programming issue. It may rejoin
later on. This is a process that many GNU software packages are continually going
through to no ill effect. It doesn’t really matter much from the end user’s perspective
since GNU software by its nature always tends to gravitate towards consistency and
improvement, one way or the other. It also doesn't matter to the end user because
the end user has selected a popular Linux distribution packaged by someone who has
already dealt with these issues.
Open Source is a new catch phrase that is ambiguous in meaning but is often used syn-
onymously with Free. It sometimes refers to any proprietary vendor releasing source
code to their package, even though that source code is not ‘free’ in the sense of users be-
ing able to modify it and redistribute it. Sometimes it means ‘public domain’ software
which anyone can modify, but which can be incorporated into commercial packages
where later versions will be unavailable in source form.
Open Source advocates vie for the superiority of the Open Source development
model.
GNU supporters don’t like to use the term ‘Open Source’. ‘Free’ software, in the
sense of ‘freedom’ to modify and redistribute, is the preferred term and necessitates a
copyright license along the same lines as the GPL. Unfortunately, it's not a marketable
term, because it requires this very explanation, which tends to bore people who don't
really care about licensing issues.
Free software advocates vie for the ethical responsibility of making source code
available, and encouraging others to do the same.
This section covers questions about how Linux software is packaged and dis-
tributed, and how to obtain Linux.
If everyone is constantly modifying the source, isn’t this bad for the
consumer? How is the user protected from bogus software?
You as the user are not going to download arbitrary untested software any more than
you would if you were using Win95.
When you get Linux, it will be inside a standard distribution, probably on a CD.
Each of these packages is selected by the distribution vendors to be a genuine and
stable release of that package. This is the responsibility taken on by those who create
Linux distributions.
Note that there is no ‘corporate body’ that oversees Linux. Everyone is on their
own mission. BUT, a package will not find its way into a distribution unless someone
feels that it is a useful one. For people to feel it is useful means that they have to have
used it over a period of time, and in this way only good, thoroughly reviewed software
gets included.
Maintainers of packages ensure that official releases are downloadable from their
home pages, and will upload original versions onto well established ftp servers.
It is not the case that any person is free to modify original distributions of pack-
ages and thereby hurt the names of the maintainers of that package.
For those who are paranoid that the software that they have downloaded is not
the genuine article distributed by the maintainer of that software, digital signatures can
verify the packager of that software. Cases where vandals have managed to substitute
a bogus package for a real one are extremely rare, and entirely preventable.
There are so many different Linux versions - is this not confusion and
incompatibility?
The Linux kernel is now on 2.4.3 as of this writing. The only other stable release
of the kernel was the previous 2.2 series, which was the standard for more than a year.
The Linux kernel version does not affect the Linux user. Linux programs will
work regardless of the kernel version. Kernel versions speak of features, not compati-
bility.
Each Linux distribution has its own versioning system. RedHat has just released
version 7.0 of its distribution, Caldera, 2.2, Debian, 2.1, and so forth. Each new incar-
nation of a distribution will have newer versions of packages contained therein, and
better installation software. There may also have been subtle changes in the filesystem
layout.
The Linux Unix C library implementation is called glibc. When RedHat brought
out version 5.0 of its distribution, it changed to glibc from the older ‘libc5’ library.
Because all packages require this library, this was said to introduce incompatibility.
However, multiple versions of libraries can coexist on the same system, and
hence no serious compatibility problem was ever introduced in this transition. Other
vendors have since followed suit in making the transition to glibc (also known as
libc6).
The Linux community has also produced a document called the Linux Filesys-
tem Standard. Most vendors try to be compliant with this standard, and hence Linux
systems will look very similar from one distribution to another.
The different distributions are NOT like different operating systems (compare Sun vs
IRIX). They are very similar and share binary compatibility (provided that they are for
the same type of processor of course) - i.e. Linux binaries compiled on one system will
work on another. Utilities also exist to convert packages meant for one distribution
to be installed on a different distribution. Some distributions are however created for
specific hardware and hence their packages will only run on that hardware. However
all software specifically written for Linux will recompile without any modifications
on another Linux platform in addition to compiling with ‘few’ modifications on other
Unix systems.
The rule is basically this: if you have three packages that you would need to get
working on a different distribution, then it is trivial to make the adjustments to do
this. If you have a hundred packages that you need to get working, then it becomes a
problem.
If you are an absolute beginner and don’t really feel like thinking about what distribu-
tion to get, one of the most popular and easiest to install is Mandrake. RedHat is also
supported quite well in industry.
RedHat: The most popular. What's nice about RedHat is that almost all devel-
opers provide RedHat rpms (the file format that a RedHat package comes in); Mandrake and
other distributions are also rpm based. Debian deb files are usually provided, but not
as often as rpms.
Slackware: This was the first Linux distribution and is supposed to be the most
current (software is always the latest). It's a pain to install and manage, although school
kids who don't know any better love it.
TurboLinux, SUSE and some others are also very popular. You can find reviews
on the Internet.
There are many other popular distributions worth mentioning. Especially worth-
while are distributions developed in your own country that specialise in support
for the local language.
Once you have decided on a distribution (see the previous question), you need to
download it or buy/borrow it on CD. Commercial distributions may contain
proprietary software that you may not be allowed to install multiple times. However,
Mandrake, RedHat, Debian and Slackware are all committed to freedom and will not
include any software that is non-redistributable. Hence if you get one of these on CD,
feel free to install it as many times as you like.
Note that the GPL does not say that GNU software is without cost. You are
allowed to charge for the service of distributing, installing and maintaining software.
It is the absence of any prohibition on redistributing and modifying GNU software
that is meant by the word free.
An international mirror for Linux distributions is
• ftp://metalab.unc.edu/pub/Linux/distributions/
You would have to have a lot of free time to download from this link though, so
rather use our (if you are South African) local mirrors on ftp.is.co.za, ftp.linux.co.za
and ftp.sdn.co.za. Some universities also have mirrors: ftp.wits.co.za (Wits) and
ftp.sun.ac.za (Stellenbosch).
(It's a good idea to browse around all these servers to get a feel for what software
is available. Also check the date of each file, since some software can sit around for
years while a more recent version is available elsewhere on the Internet.)
Downloading from these ftp sites is going to take a long time unless you have
a really fast link, so rather ask around to find out who sells Linux on CD locally. Also make
sure you have the LATEST VERSION of whatever it is you're buying or downloading.
Under no circumstances install from a distribution that has been superseded by a
newer version.
It helps to think more laterally when trying to get information about Linux.
D.4 Linux Support
This section explains where to get free and commercial help with Linux.
Linux is supported by the community that uses Linux. With commercial systems, users
are often reluctant to share their knowledge, feeling that they owe nothing to others
after having paid for the software.
Linux users, on the other hand, are very supportive of other Linux users. A person
can get FAR BETTER SUPPORT from the Internet community than they would from
their commercial software vendors. Most packages have email lists where the
developers themselves are available for questions. Most cities have mailing lists
(Gauteng and the Western Cape in South Africa each have one) where email questions
are answered within hours. The new Linux user discovers that help abounds and that
they will never want for a friendly discussion about any computing problem they may
have. Remember that Linux is YOUR operating system.
Newsgroups, of which there are many devoted to Linux, are places where Linux
issues are discussed and help is given to new users. Using a newsgroup has the
benefit of reaching the widest possible audience.
The web is also an excellent place for support. Because users constantly interact
and discuss Linux issues, 99% of the problems a user is likely to have will already
have been documented or covered in mailing-list archives, often obviating the need
to ask anyone at all.
Finally, many professional companies will provide assistance at comparable
hourly rates.
For the Cape Linux Users Group (CLUG): Send a one line message
subscribe clug-tech
D.5 Linux Compared to Other Systems
This section discusses the relative merits of various Unixes and NT.
It has long since been agreed that Linux has several times the install base of any other Unix.
How many users Linux has is a question nobody can really answer. Various estimates
have been put forward based on statistical considerations; 10-20 million is the current
figure. As Linux begins to dominate the embedded market, that number will soon
exceed the number of users of all other operating systems combined.
What is clear is that the number of Linux users is doubling consistently every
year. This is evident from user interest and industry involvement in Linux - journal
subscriptions, web hits, media attention, support requirements, software ports etc.
It is well established that over 25% of all web servers run Linux; this figure is reliable
simply because it is easy to survey machines that are online.
Although Linux is free (or costs at most R100 for a CD), a good knowledge of Unix is
required to install and configure a reliable server. This tends to cost you in time or
support charges.
On the other hand, your Win2000/NT workstation has to be licensed.
Many arguments have been put forward regarding server costs that fail to take
into account the complete lifetime of the server. This has resulted in contrasting
reports that either claim that Linux costs nothing, or claim that it is impossible to use
because of the expense of the expertise required. Neither of these extreme views is
true.
The total cost of a server includes the following:
• Cost of hardware
• Cost of installation
• Cost of support
• Cost of maintenance
• Cost of repair
• Linux can run many services (mail, file, web) off the same server rather than
having dedicated servers - this can be a tremendous saving.
When all these factors are considered, any company will probably make a truly
enormous saving by choosing a Linux server over a commercial operating system.
What is the TOTAL cost of installing and running a Linux system com-
pared to a proprietary Unix system?
Linux will typically perform 50% to 100% better than other operating systems on the
same hardware. There are no commercial exceptions to this rule for a basic PC.
There have been a great many misguided attempts to show that Linux performs
better or worse than other platforms. I have never read a completely conclusive study.
Usually these studies are done with one or other competing system having better ex-
pertise at its disposal, and are hence grossly biased. In some supposedly independent
tests, Linux tended to outperform NT as a web server, file server and database server
by an appreciable margin.
In our experience (from both discussions and development), Linux’s critical op-
erations are always pedantically optimised - far more than would normally be encour-
aged in a commercial organisation. Hence if your hardware is not performing the
absolute best it can, it’s by a very small margin.
It's also probably not worthwhile debating these kinds of speed issues when
there are so many other good reasons to prefer Linux.
Linux is supposed to lack proper SMP support and therefore not be as scalable as other
OSs. This was somewhat true until kernel 2.4 was released in January 2001.
Linux has a proper journalling file system called ReiserFS. This means that in the
event of a power failure, there is very little chance that the file system will be
corrupted or that manual intervention will be required to fix it.
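As a concrete illustration (not from the original text), a ReiserFS partition is mounted like any other file system once created; the device name /dev/hda3 and the mount point below are hypothetical examples:

```
# Hypothetical /etc/fstab line mounting a ReiserFS partition on /home.
# /dev/hda3 is an assumed device; the last two fields are the usual
# dump and fsck-order flags.
/dev/hda3    /home    reiserfs    defaults    1 2
```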
Does Linux only support 2 Gig of memory and 128 Meg of swap?
Linux supports a full 64 Gig (sixty four gigabytes) of memory, with 1 Gig of unshared
memory per process.
If you really need this much memory, you should be using a 64 bit system, like a
DEC Alpha, or Sun UltraSparc machine.
On 64 bit systems, Linux supports more memory than most first world govern-
ments can afford to buy.
Linux supports as much swap space as you like. For technical reasons, however,
the swap space used to require division into separate partitions of 128 Meg each.
The principles underlying OS development have not changed since the concept of an
OS was invented some 30+ years ago. It is really academia that develops the theoretical
models for computer science; industry only implements them.
It has been claimed that UNIX is antiquated. This would be a fair criticism only if
the critics had taken into account the available technology when developing their
own systems. It is quite obvious that NT was developed to be somewhat compatible
with Win95, and hence probably owes some of its limitations to the original MSDOS.
UNIX has a one-administrator, many-users security model. NT is supposed to
have improved upon this: if you know of any worthwhile examples of effective use of
multiple administrators under NT, please let me know. On the other hand there are
Linux systems which have been fiddled to appear as multiple Unix systems on one
machine, each with their own administrator, web server, mail server etc. This is quite
remarkable.
FreeBSD is like a Linux distribution in that it also relies on a large number of GNU
packages. Most of the packages available in Linux distributions are also available for
FreeBSD.
FreeBSD is not merely a kernel but also a distribution, a development model, an
operating system standard, and a community infrastructure. FreeBSD should actually
be compared to Debian and NOT to Linux.
The arguments comparing the FreeBSD kernel to the Linux kernel center around
the differences between how various kernel functions are implemented. Depending on
the area you look at, either Linux or FreeBSD will have a better implementation. On
the whole, FreeBSD is thought to have a better architecture, although Linux has had
the benefit of being ported to many platforms and has a great many more features and
supports far more hardware. It is questionable whether the performance penalties we
are talking about are of real concern in most practical situations.
GPL advocates take issue with FreeBSD because its licensing allows a commercial
organisation to use FreeBSD without disclosing additions to the source code.
None of this offsets the fact that either of these systems is preferable to proprietary
ones.
D.6 Technical
Yes. This will allow you to browse the installation documentation on the CD.
Yes. Linux will occupy two or more partitions, while Win95 will sit in one of the
primary partitions. At boot time, a boot prompt will ask you to select which operating
system you would like to boot into.
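A sketch of how such a boot prompt is typically arranged with LILO (the device names and kernel path here are assumptions for illustration, not taken from the text):

```
# Hypothetical /etc/lilo.conf for dual-booting: Win95 on the first
# primary partition, Linux root on the second. Run lilo after editing.
boot=/dev/hda        # install the boot loader in the master boot record
prompt               # ask which system to boot
timeout=50           # wait 5 seconds, then boot the default entry
image=/boot/vmlinuz  # the Linux kernel
    label=linux
    root=/dev/hda2
    read-only
other=/dev/hda1      # chain-load Windows 95
    label=win95
```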
A useful selection of packages that includes the X Window System (Unix's graphical
environment) will occupy less than 1 gigabyte. A network server that does not have
to run X can get away with about 300 megabytes. Linux can run on as little as a single
stiffy disk - that's 1.4 megabytes - and still perform various network services.
Linux runs on many different hardware platforms, as explained above. The typical
user should purchase an entry-level PC with at least 16 megabytes of RAM if they are
going to run the X Window System smoothly.
A good Linux machine is a PII 300 (or AMD, K6, Cyrix etc.) with 64 megabytes
of RAM and a 2 megabyte graphics card (i.e. capable of running a 1024x768 screen
resolution in 15/16 bit color). 1 gigabyte of free disk space is also necessary.
If you are using scrap hardware, an adequate machine for the X Window System
should have no less than a 486-100MHz processor and 8 megabytes of RAM. Network
servers can run on a 386 with 4 megabytes of RAM and a 200 megabyte hard drive.
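As a rough sketch (the commands and thresholds are illustrative, not from the text), you can compare a machine against the suggested 64 MB RAM / 1 GB free disk figures from the shell:

```shell
# Read total RAM from /proc/meminfo and free space on the root
# file system from df, then compare against the FAQ's suggested
# minimums for a comfortable X workstation.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(df -kP / | awk 'NR==2 {print $4}')
echo "RAM: ${mem_kb} kB, free disk on /: ${free_kb} kB"
if [ "$mem_kb" -ge 65536 ]; then
    echo "RAM is enough to run X smoothly"
else
    echo "RAM is below the suggested 64 MB"
fi
if [ "$free_kb" -ge 1048576 ]; then
    echo "disk is enough for a full install"
fi
```

The thresholds are this FAQ's suggestions, not hard limits enforced by Linux.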
Note that some distributions have recently come out with Pentium-only compilations.
This means that your old 386 will no longer work; you will have to compile your own
kernel for the processor you are using, and possibly recompile packages.
About 90% of all hardware available for the PC is supported under Linux. In general,
well-established brand names will always work, though they tend to cost more. New
graphics and network cards are always being released onto the market; if you buy
one of these, you may have to wait many months before support becomes available (if
ever).
• Hardware-HOWTO <http://users.bart.nl/~patrickr/hardware-howto/Hardware-HOWTO.html>
This may not be up to date, so it's best to go to the various references listed in this
document and get the latest info.
Linux has read and write support for all these file systems, hence your other partitions
will be readable from Linux. In addition, Linux has support for a wide range of other
file systems, like those of OS/2, Amiga and other Unix systems.
Linux contains a highly advanced DOS emulator. It will run almost any 16 or 32 bit
DOS application, and runs a great number of 32 bit DOS games as well.
The DOS emulator package for Linux is called dosemu. It will typically run
applications much faster than normal DOS because of Linux's faster file system access
and system calls.
It can run in an X window, just like a DOS window under Win95.
Yes. WineLib is a part of the Wine package (see below) and allows Windows C
applications to be recompiled to work under Linux. Apparently this works extremely
well, with virtually no changes to the source code being necessary.
I have heard that Linux does not suffer from virus attacks. Is it true
that there is no threat of viruses with Unix systems?
A virus is a program that replicates itself by modifying the system on which it runs;
it may do other damage. Viruses are small programs that exploit social engineering,
logistics, and the inherent flexibility of a computer system to do undesirable things.
Because a Unix system does not allow this kind of flexibility in the first place,
there is categorically NO such thing as a virus for it. For example, Unix inherently
restricts access to files outside of the user's privilege space, hence a virus would have
nothing to infect.
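A small illustration of that restriction (a sketch, not from the original text; note that root is exempt from these checks):

```shell
# Create a scratch file, drop its write permission, and show the mode
# bits. A program run by an ordinary user could no longer write to it,
# so self-modifying code has nothing outside its own space to infect.
f=$(mktemp)
chmod 444 "$f"
ls -l "$f" | cut -c1-10    # prints: -r--r--r--
rm -f "$f"
```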
However, although Linux cannot itself execute a virus, it may be able to pass on
a virus meant for a Windows machine should a Linux box act as a mail or file server.
To avoid this problem, numerous virus-detection packages for Linux are becoming
available. This is what is meant by virus software for Linux.
On the other hand, conditions sometimes allow an intelligent hacker to target
a machine and eventually gain access. The hacker may also mechanically attack a
large number of machines using custom-written programs, and may go one step
further by causing the compromised machines to begin executing those same
programs. At some point this crosses the definition of what is called a "worm": a
security exploit that attacks the same security hole recursively through a network.
See the question on security below.
At some point in the future, a large number of users may be using the same
proprietary desktop application that has some security vulnerability in it. If this were
to support a virus, the virus would only be able to damage the user's restricted space;
but then it would be the application that is insecure, and not Linux per se.
One should also remember that with Linux, a sufficient understanding of the
system makes it possible to detect and repair such corruption easily, without having
to do anything drastic like reinstalling or buying expensive virus-detection software.
There are various issues that make it both more and less secure:
Because GNU software is open source, any hacker can easily research the internal
workings of critical system services.
On the one hand, they may find a flaw in these internals that can be indirectly
exploited to compromise the security of a server. In this way, Linux is LESS secure,
because security holes can be discovered by arbitrary individuals.
On the other hand, they may find a flaw in these internals that they can report
to the authors of that package, who will quickly (sometimes within hours) correct the
insecurity and release a new version on the Internet. This makes Linux MORE secure,
because security holes are discovered and reported by a wide network of
programmers.
It is therefore questionable whether free software is more secure or not. I
personally prefer to have access to the source code so that I know what my software
is doing.
Another issue is that Linux servers are often installed by lazy people who do not
take the time to follow the simplest of security guidelines, even though these
guidelines are widely available and easy to follow. Such systems are sitting ducks and
are often attacked. (See the question on viruses above.)
A further issue is that when a security hole is discovered, system administrators
may fail to heed the warnings announced to the Linux community. By not upgrading
the affected service, they leave a window open to opportunistic hackers.
It is possible to make a Linux system completely airtight by following a few
simple guidelines, like being careful about what system services you expose, not
allowing passwords to be compromised, and installing utilities that close common
security exploits.
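One common form of the "be careful what you expose" guideline is TCP wrappers: deny everything by default in /etc/hosts.deny and then permit only specific services and networks in /etc/hosts.allow. The service names and network address below are hypothetical examples, not from the text:

```
# /etc/hosts.deny -- refuse every wrapped service by default:
ALL: ALL

# /etc/hosts.allow -- then permit only what you really need,
# e.g. telnet and ftp sessions from the local 192.168.1.* network:
in.telnetd: 192.168.1.
in.ftpd: 192.168.1.
```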
Because of the community nature of Linux users, there is openness and honesty
with regard to security issues. It is not found, for instance, that security holes are
covered up by maintainers for commercial reasons. In this way you can trust Linux
far more than commercial institutions that think they have a lot to lose by disclosing
flaws in their software.
Appendix E
Most of the important components of a Free UNIX system (like LINUX) were developed
by the Free Software Foundation (FSF) <http://www.gnu.org/>. Further, most
of a typical LINUX distribution comes under the FSF's copyright, called the GNU
General Public License. It is therefore important to study this license in full to
understand the ethos of Free (meaning the freedom to be modified and redistributed)
development, and the culture under which LINUX continues to evolve.
Preamble
The licenses for most software are designed to take away your freedom to share
and change it. By contrast, the GNU General Public License is intended to guarantee
your freedom to share and change free software–to make sure the software is free for all
its users. This General Public License applies to most of the Free Software Foundation’s
software and to any other program whose authors commit to using it. (Some other Free
Software Foundation software is covered by the GNU Library General Public License
instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our Gen-
eral Public Licenses are designed to make sure that you have the freedom to distribute
copies of free software (and charge for this service if you wish), that you receive source
code or can get it if you want it, that you can change the software or use pieces of it in
new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone
to deny you these rights or to ask you to surrender the rights. These restrictions trans-
late to certain responsibilities for you if you distribute copies of the software, or if you
modify it.
For example, if you distribute copies of such a program, whether gratis
or for a fee, you must give the recipients all the rights that you have. You must make
sure that they, too, receive or can get the source code. And you must show them these
terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy, distribute and/or
modify the software.
Also, for each author’s protection and ours, we want to make certain that ev-
eryone understands that there is no warranty for this free software. If the software is
modified by someone else and passed on, we want its recipients to know that what
they have is not the original, so that any problems introduced by others will not reflect
on the original authors’ reputations.
Finally, any free program is threatened constantly by software patents. We wish
to avoid the danger that redistributors of a free program will individually obtain patent
licenses, in effect making the program proprietary. To prevent this, we have made it
clear that any patent must be licensed for everyone’s free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification fol-
low.
0. This License applies to any program or other work which contains a notice placed
by the copyright holder saying it may be distributed under the terms of this Gen-
eral Public License. The "Program", below, refers to any such program or work,
and a "work based on the Program" means either the Program or any derivative
work under copyright law: that is to say, a work containing the Program
or a portion of it, either verbatim or with modifications and/or translated into
another language. (Hereinafter, translation is included without limitation in the
term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not covered by
this License; they are outside its scope. The act of running the Program is not
restricted, and the output from the Program is covered only if its contents consti-
tute a work based on the Program (independent of having been made by running
the Program). Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program’s source code as you
receive it, in any medium, provided that you conspicuously and appropriately
publish on each copy an appropriate copyright notice and disclaimer of war-
ranty; keep intact all the notices that refer to this License and to the absence of
any warranty; and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at
your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus form-
ing a work based on the Program, and copy and distribute such modifications or
work under the terms of Section 1 above, provided that you also meet all of these
conditions:
a) You must cause the modified files to carry prominent notices stating that
you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or
in part contains or is derived from the Program or any part thereof, to be
licensed as a whole at no charge to all third parties under the terms of this
License.
c) If the modified program normally reads commands interactively when run,
you must cause it, when started running for such interactive use in the most
ordinary way, to print or display an announcement including an appropri-
ate copyright notice and a notice that there is no warranty (or else, saying
that you provide a warranty) and that users may redistribute the program
under these conditions, and telling the user how to view a copy of this Li-
cense. (Exception: if the Program itself is interactive but does not normally
print such an announcement, your work based on the Program is not re-
quired to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections
of that work are not derived from the Program, and can be reasonably considered
independent and separate works in themselves, then this License, and
its terms, do not apply to those sections when you distribute them as separate
works. But when you distribute the same sections as part of a whole which is a
work based on the Program, the distribution of the whole must be on the terms
of this License, whose permissions for other licensees extend to the entire whole,
and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to
work written entirely by you; rather, the intent is to exercise the right to control
the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with
the Program (or with a work based on the Program) on a volume of a storage
or distribution medium does not bring the other work under the scope of this
License.
3. You may copy and distribute the Program (or a work based on it, under Section 2)
in object code or executable form under the terms of Sections 1 and 2 above
provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source
code, which must be distributed under the terms of Sections 1 and 2 above
on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give
any third party, for a charge no more than your cost of physically performing
source distribution, a complete machine-readable copy of the corresponding
source code, to be distributed under the terms of Sections 1 and 2
above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to distribute
corresponding source code. (This alternative is allowed only for noncommercial
distribution and only if you received the program in object code or
executable form with such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making
modifications to it. For an executable work, complete source code means all
the source code for all modules it contains, plus any associated interface defi-
nition files, plus the scripts used to control compilation and installation of the ex-
ecutable. However, as a special exception, the source code distributed need not
include anything that is normally distributed (in either source or binary form)
with the major components (compiler, kernel, and so on) of the operating sys-
tem on which the executable runs, unless that component itself accompanies the
executable.
If distribution of executable or object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the source code
from the same place counts as distribution of the source code, even though third
parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense
or distribute the Program is void, and will automatically terminate your rights
under this License. However, parties who have received copies, or rights, from
you under this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it. However,
nothing else grants you permission to modify or distribute the Program or its
derivative works. These actions are prohibited by law if you do not accept this
License. Therefore, by modifying or distributing the Program (or any work based
on the Program), you indicate your acceptance of this License to do so, and all
its terms and conditions for copying, distributing or modifying the Program or
works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the
recipient automatically receives a license from the original licensor to copy, dis-
tribute or modify the Program subject to these terms and conditions. You may not
impose any further restrictions on the recipients’ exercise of the rights granted
herein. You are not responsible for enforcing compliance by third parties to this
License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for
any other reason (not limited to patent issues), conditions are imposed on you
(whether by court order, agreement or otherwise) that contradict the conditions
of this License, they do not excuse you from the conditions of this License. If
you cannot distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may not
distribute the Program at all. For example, if a patent license would not permit
royalty-free redistribution of the Program by all those who receive copies directly
or indirectly through you, then the only way you could satisfy both it and this
License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular
circumstance, the balance of the section is intended to apply and the section as a
whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other
property right claims or to contest validity of any such claims; this section has
the sole purpose of protecting the integrity of the free software distribution sys-
tem, which is implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed through that
system in reliance on consistent application of that system; it is up to the au-
thor/donor to decide if he or she is willing to distribute software through any
other system and a licensee cannot impose that choice.
This section is intended to make thoroughly clear what is believed to be a conse-
quence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either
by patents or by copyrighted interfaces, the original copyright holder who places
the Program under this License may add an explicit geographical distribution
limitation excluding those countries, so that distribution is permitted only in or
among countries not thus excluded. In such case, this License incorporates the
limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the Gen-
eral Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or con-
cerns.
Each version is given a distinguishing version number. If the Program specifies
a version number of this License which applies to it and "any later version", you
have the option of following the terms and conditions either of that version or
of any later version published by the Free Software Foundation. If the Program
does not specify a version number of this License, you may choose any version
ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose
distribution conditions are different, write to the author to ask for permission.
For software which is copyrighted by the Free Software Foundation, write to the
Free Software Foundation; we sometimes make exceptions for this. Our decision
will be guided by the two goals of preserving the free status of all derivatives of
our free software and of promoting the sharing and reuse of software generally.
NO WARRANTY
If you develop a new program, and you want it to be of the greatest possible use
to the public, the best way to achieve this is to make it free software which everyone
can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to
the start of each source file to most effectively convey the exclusion of warranty; and
each file should have at least the "copyright" line and a pointer to where the full notice
is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 19yy <name of author>

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts
in an interactive mode:

Gnomovision version 69, Copyright (C) 19yy name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands ‘show w’ and ‘show c’ should show the appropri-
ate parts of the General Public License. Of course, the commands you use may be
called something other than ‘show w’ and ‘show c’; they could even be mouse-clicks
or menu items–whatever suits your program.
You should also get your employer (if you work as a programmer) or your school,
if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample;
alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider it
more useful to permit linking proprietary applications with the library. If this is what
you want to do, use the GNU Library General Public License instead of this License.
Index
.
postgres's internal tables, 401
LATEX
  bibliography source file, 24
LATEX, 26
TEX, 26
--help, 19
.1, 26
.C, 24
.Xdefaults, 521
.Z, 26
.alias, 24
.au, 24
.avi, 24
.awk, 24
.a, 23
.bash_login, 521
.bash_logout, 521
.bash_profile, 521
.bib, 24
.bmp, 24
.bz2, 24
.cc, 24
.cf, 24
.cgi, 24
.conf, 24
.cpp, 24
.csh, 24
.cxx, 24
.c, 24
.db, 24
.deb, 24
.diff, 24
.dir, 24
.dvi, 24
.el, 24
.forward, 523
.gif, 24
.gz, 24
.htm, 24
.h, 24
.info, 24
.inputrc, 521
.in, 24
.i, 24
.jpg, 25
.lj, 25
.log, 25
.lsm, 25
.lyx, 25
.man, 25
.mf, 25
.pbm, 25
.pcf, 25
.pcx, 25
.pdf, 25
.pfb, 25
.php, 25
.pl, 25
.profile, 521
.ps, 25
.py, 25
.rpm, 25
.sgml, 25
.sh, 25
.so, 25
.spd, 25
.tar, 25
.tcl, 25
.texinfo, 25
.texi, 25
.tex, 26
.tfm, 26
.tga, 26
.tgz, 26
.tiff, 26
.ttf, 26
.txt, 26
.voc, 26
.wav, 26
.xinitrc, 521
.xpm, 26
.y, 26
.zip, 26
/etc/HOSTNAME, 522
/etc/X11/XF86Config, 521
/etc/conf.modules, 516
/etc/fstab, 515
/etc/host.conf, 522, 523
/etc/hostname, 522
/etc/hosts.allow, 522
/etc/hosts.deny, 522
/etc/hosts, 522, 523
/etc/inetd.conf, 522
/etc/ld.so.conf, 519
/etc/lilo.conf, 516
/etc/modules.conf, 516, 519
/etc/named.boot, 523
/etc/named.conf, 523
/etc/networks, 522
/etc/nsswitch.conf, 523
/etc/printcap, 520
/etc/profile, 517
/etc/resolv.conf, 522, 523
/etc/services, 522
/etc/skel/, 517
/etc/smb.conf, 523
/etc/syslog.conf, 517
/usr/doc/, 516
/var/log/messages, 516
AUTHORS, 26
BUGS, 26
COPYING, 26
ChangeLog, 26
INSTALL, 26
NEWS, 27
README, 26
TCP wrappers, 522
THANKS, 27
TODO, 26
VERSION, 27
alien, 519
apt-get, 519
apt, 519
aterm, 521
at, 517
awk program, 24
bash, 521
bash functions, 521
bzip2, 24
cat, 11
cd, 28
compress, 26
configure, 24
cron, 517
cut, 514
df, 515
dhcpd, 522
dig, 522
dmesg, 516
dnsdomainname, 522
domainname, 522
dpkg, 519
driver, 291
dselect, 519
du, 515
errors_address, 290
exim_group, 290
exim_user, 290
expand, 514
file, 291
find, 515
fmt, 514
fsck, 515
ftp, 522
gpasswd, 517
grep, 519
groupadd, 517
    SCSI, 16
CERT
    LPI, 524
certification
    LPI, 3
    RHCE, 3
change directory
    cd, 11
characters
    filenames, 11
CMOS
    boot sequence, 18
    builtin devices, 18
    configuration, 18
    Harddrive auto-detection, 18
    hardware clock, 18
column typing
    postgres, 402
COM1, 16
COM2, 16
combating
    spam, 297
command history
    LPI, 513
command line
    LPI, 513
command-line options, 19
commands, 8
computer
    programming, 3
concatenate
    cat, 11
configuration
    exim, 288
    CMOS, 18
Configuration file, 24
consecutive, 1
copy
    directories, 514
    files, 514
    wildcards, 514
copying, 560
course
    notes, 2
    training, 2
CPU, 15
creating
    files, 11
Creating tables
    postgres, 402
data
    file, 7
database, 397, 398
Database file, 24
database table directory, 399
Debian
    deb, 3
Debian package, 24
definition
    spam, 297
Delete/dropping a column
    postgres, 404
Delete/dropping a table
    postgres, 404
delivery, 287
Device independent file, 24
directories, 11
    copy, 514
Directors
    exim, 292
disk drive
    IDE, 16
    SCSI, 16
disk partitions
    LPI, 514
distinguishing directories
    ls, 12
distribute, 560
DLL, 25
DMA, 518
DMA channels, 15
documentation
    postgres, 398
    reference, 2
    tutorial, 2
dumping and restoring tables
    postgres, 406
relay
    mail, 290
    untrusted hosts, 290
requirements
    LPI, 3
    RHCE, 3
responsibilities
    administrator, 299
    spam, 299
Return, 8
RHCE
    certification, 3
    requirements, 3
ribbon
    IDE, 16
    SCSI, 16
rmail, 288
ROM, 13
ROM BIOS, 18
Routers
    exim, 292
routing, 287
SCSI
    CDROM, 16
    disk drive, 16
    ribbon, 16
    termination, 16
SCSI BIOS
    LPI, 518
Serial ports, 16
server program
    postgres, 398
shadowed passwords
    LPI, 524
shell
    prompt, 8
shell commands
    LPI, 513
Shell script, 24
simple filesystem problems
    LPI, 515
Slackware, 26
slots
    ISA, 15
SMTP, 288
source code
    GPL, 562
spam, 299
    combating, 297
    definition, 297
    prevention, 298
    Realtime Blocking List, 299
    responsibilities, 299
Speed font, 25
spooling, 287
spooling mailserver, 288
SQL, 397
    introduction, 402
SQL programming language, 397
SQL requests, 397
SQL server, 397
SQL92 standard, 398
start stop scripts
    postgres, 399
static
    library, 23
streams
    LPI, 514
Structured Query Language, 397
SUID bit
    LPI, 523
suid-rights
    LPI, 521
SWIG, 24
TARGA, 26
Tcl/Tk, 25
TCP
    LPI, 522
TCP wrappers
    LPI, 523
telephone assistance
    LPI, 516
template database
    postgres, 400
termination
    SCSI, 16
verification
    header, 299
version numbers
    GPL, 564
video card
    LPI, 521
Video format, 24
web page, 24
    LPI, 513
websites
    LPI, 516
Why?