The Linux System: Practice Exercises
20.1 Dynamically loadable kernel modules give flexibility when drivers are
added to a system, but do they have disadvantages too? Under what
circumstances would a kernel be compiled into a single binary file, and
when would it be better to keep it split into modules? Explain your
answer.
Answer:
There are two principal drawbacks to the use of modules. The first is
size: module management consumes unpageable kernel memory, and
a basic kernel with a number of modules loaded will consume more
memory than an equivalent kernel with the drivers compiled into the
kernel image itself. This can be a very significant issue on machines
with limited physical memory.
The second drawback is that modules can increase the complexity
of the kernel bootstrap process. It is hard to load up a set of modules
from disk if the driver needed to access that disk is itself a module that
needs to be loaded. As a result, managing the kernel bootstrap with
modules can require extra work on the part of the administrator: the
modules required to bootstrap need to be placed into a ramdisk image
that is loaded alongside the initial kernel image when the system is
initialized.
In certain cases it is better to use a modular kernel, and in other
cases it is better to use a kernel with its device drivers prelinked.
Where minimizing the size of the kernel is important, the choice will
depend on how often the various device drivers are used. If they are
in constant use, then modules are unsuitable. This is especially true
where drivers are needed for the boot process itself. On the other hand,
if some drivers are not always needed, then the module mechanism
allows those drivers to be loaded and unloaded on demand, potentially
offering a net saving in physical memory.
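To make the mechanism concrete, the following is a minimal sketch of a loadable module, assuming the standard module entry points (module_init and module_exit); the file name hello_mod.c and the log messages are purely illustrative. The kernel invokes the init function when the module is inserted (for example with insmod) and the exit function when it is removed (with rmmod).

/* hello_mod.c -- minimal sketch of a dynamically loadable kernel module.
 * Built against the kernel headers with an ordinary module Makefile
 * (obj-m += hello_mod.o); loaded with insmod, unloaded with rmmod.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a dynamically loadable module");

/* Called when the module is loaded (e.g., by insmod or modprobe). */
static int __init hello_init(void)
{
        printk(KERN_INFO "hello_mod: loaded\n");
        return 0;               /* a nonzero return would abort the load */
}

/* Called when the module is unloaded (e.g., by rmmod). */
static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);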
Where a kernel must be usable on a large variety of very different
machines, building it with modules is clearly preferable to using a
single kernel with dozens of unnecessary drivers.
Dynamic linkage can increase the startup time for a program, as the
linking must now be done at run time rather than at compile time. It
can also sometimes increase the maximum working set size of a program
(the total number of physical pages of memory required to run the
program). In a shared library, the user has no control over where in
the library binary file the various functions reside. Since most
functions do not precisely fill a full page or pages of the library,
loading a function will usually result in loading parts of the surrounding
functions, too. With static linkage, absolutely no functions that are not
referenced (directly or indirectly) by the application need to be loaded
into memory.
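As a concrete illustration of the two approaches, the sketch below shows a trivial program built both ways with gcc; the file and program names are arbitrary. The default build links against the shared C library and resolves it at run time (which ldd will show), while the -static build copies the referenced library code into the executable itself.

/* linkdemo.c -- illustrative only; the file and program names are arbitrary.
 *
 * Dynamic linking (the default):
 *     gcc -o linkdemo_dyn linkdemo.c
 *     ldd linkdemo_dyn        # lists the shared libraries needed at run time
 *
 * Static linking:
 *     gcc -static -o linkdemo_static linkdemo.c
 *     ldd linkdemo_static     # reports that the binary is not dynamic
 *
 * The statically linked executable is larger, but it carries its own copy
 * of the library code, so the recipient needs no particular libc installed.
 */
#include <stdio.h>

int main(void)
{
        printf("hello from linkdemo\n");  /* printf comes from libc either way */
        return 0;
}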
Other issues surrounding static linkage include ease of distribution:
it is easier to distribute an executable file with static linkage than with
dynamic linkage if the distributor is not certain whether the recipient
will have the correct libraries installed in advance. There may also be
commercial restrictions against redistributing some binaries as shared
libraries. For example, the license for the UNIX “Motif” graphical
environment allows binaries using Motif to be distributed freely as long
as they are statically linked, but the shared libraries may not be used
without a license.
20.5 Compare the use of networking sockets with the use of shared memory
as a mechanism for communicating data between processes on a single
computer. What are the advantages of each method? When might each
be preferred?
Answer:
Using network sockets rather than shared memory for local communication
has a number of advantages. The main advantage is that the socket
programming interface provides a rich set of synchronization features. A
process can easily determine when new data has arrived on a
socket connection, how much data is present, and who sent it. Processes
can block until new data arrives on a socket, or they can request that a
signal be delivered when data arrives. A socket also manages separate
connections. A process with a socket open for receive can accept multiple
connections to that socket and will be told when new processes try
to connect or when old processes drop their connections.
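A minimal sketch of this kind of synchronization, assuming the POSIX socketpair(), fork(), read(), and write() calls (the message text is arbitrary): the child simply blocks in read() until the parent's data arrives, and the kernel provides the wakeup.

/* sockdemo.c -- sketch of local communication over a socket pair.
 * Assumes POSIX sockets; error handling is abbreviated for clarity.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
                return 1;

        if (fork() == 0) {              /* child: receiver */
                char buf[64];
                close(fds[0]);
                /* read() blocks until the parent sends data -- the kernel
                 * supplies the synchronization "for free". */
                ssize_t n = read(fds[1], buf, sizeof(buf) - 1);
                if (n > 0) {
                        buf[n] = '\0';
                        printf("child received: %s\n", buf);
                }
                return 0;
        }

        close(fds[1]);                  /* parent: sender */
        const char *msg = "hello over a socket";
        write(fds[0], msg, strlen(msg));
        close(fds[0]);
        wait(NULL);                     /* reap the child */
        return 0;
}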
Shared memory offers none of these features. There is no way for a
process to determine whether another process has delivered or changed
data in shared memory other than by going to look at the contents
of that memory. It is impossible for a process to block and request a
wakeup when new data arrives in shared memory, and there is no standard
mechanism for other processes to establish a shared memory link to an
existing process.
However, shared memory has the advantage that it is very much
faster than socket communications in many cases. When data is sent
over a socket, it is typically copied from memory to memory multiple
times. Shared memory updates require no data copies: if one process
updates a data structure in shared memory, that update is immediately
visible to all other processes sharing that memory. Sending or
receiving data over a socket requires that a kernel system service call be
invoked.
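For comparison, the sketch below sets up a POSIX shared-memory region (the object name /demo_shm and the message are arbitrary). Once the region is mapped, an update is an ordinary store instruction, with no system call and no copying; but nothing notifies any other process that maps the same object, which would have to poll or use a separate mechanism such as a semaphore.

/* shmdemo.c -- sketch of communication through POSIX shared memory.
 * May need -lrt when linking on older glibc; the object name is arbitrary.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"
#define SHM_SIZE 4096

int main(void)
{
        /* Create (or open) the shared-memory object and size it. */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd == -1)
                return 1;
        ftruncate(fd, SHM_SIZE);

        /* Map it; another process mapping the same name sees the same pages. */
        char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED)
                return 1;

        /* An "update" is just a memory write: no system call, no copy.
         * Nothing tells a reader that it happened -- the reader must poll
         * or synchronize through something else (e.g., a POSIX semaphore). */
        strcpy(region, "hello through shared memory");

        munmap(region, SHM_SIZE);
        close(fd);
        /* shm_unlink(SHM_NAME) would remove the object when finished. */
        return 0;
}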
The amount of data stored on a track varies with the position of the
cylinder: more data can be squeezed into the longer tracks nearer the
edge of the disk than at the center of the disk. For
an operating system to optimize the rotational position of data on such
disks, it would need a complete understanding of this geometry, as
well as the timing characteristics of the disk and its controller. In
general, only the disk’s internal logic can determine the optimal scheduling
of I/Os, and the disk’s geometry is likely to defeat any attempt by the
operating system to perform rotational optimizations.