OS Chapter 6
Q.1 What is demand paging? Explain the data structures used for it.
Machines whose memory architecture is based on pages and whose CPU has restartable
instructions can support a kernel that implements a demand paging algorithm, swapping pages of
memory between main memory and a swap device.
Demand paging systems free processes from size limitations otherwise imposed by the amount of
physical memory available on a machine. For instance, machines that contain 1 or 2 megabytes
of physical memory can execute processes whose sizes are 4 or 5 megabytes.
The kernel still imposes a limit on the virtual size of a process, dependent on the amount of
virtual memory the machine can address. Since a process may not fit into physical memory, the
kernel must load its relevant portions into memory dynamically and execute it even though other
parts are not loaded. Demand paging is transparent to user programs except for the virtual size
permissible to a process.
Processes tend to execute instructions in small portions of their text space, such as program loops
and frequently called subroutines, and their data references tend to cluster in small subsets of the
total data space of the process. This is known as the principle of "locality."
The kernel contains 4 major data structures to support low-level memory management functions
and demand paging: page table entries, disk block descriptors, the page frame data table (called
pfdata for short), and the swap-use table. The kernel allocates space for the pfdata table once for
the lifetime of the system but allocates memory pages for the other structures dynamically.
The pfdata table describes each page of physical memory and is indexed by page number. The
fields of an entry are
• The page state, indicating that the page is on a swap device or executable file, that
DMA is currently underway for the page (reading data from a swap device), or that
the page can be reassigned.
• The number of processes that reference the page. The reference count equals the
number of valid page table entries that reference the page. It may differ from the
number of processes that share regions containing the page.
• The logical device (swap or file system) and block number that contains a copy of the
page.
• Pointers to other pfdata table entries on a list of free pages and on a hash queue of
pages.
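A minimal sketch in C of a pfdata table entry, based on the four fields above, might look like the following; the field and flag names here are assumptions for illustration, not the actual kernel declarations.

    #include <sys/types.h>      /* dev_t */

    #define PG_ONSWAP  0x01     /* a copy of the page is on a swap device      */
    #define PG_INFILE  0x02     /* a copy of the page is in an executable file */
    #define PG_DMAIN   0x04     /* DMA underway: page being read from swap     */
    #define PG_FREE    0x08     /* page can be reassigned                      */

    struct pfdata {
        unsigned short pf_state;      /* page state flags (see above)            */
        short          pf_count;      /* valid page table entries referencing it */
        dev_t          pf_dev;        /* logical (swap or file system) device    */
        unsigned long  pf_blkno;      /* block number holding a copy of the page */
        struct pfdata *pf_freenext;   /* next entry on the free list             */
        struct pfdata *pf_hashnext;   /* next entry on the hash queue            */
    };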
The kernel links entries of the pfdata table onto a free list and a hashed list, analogous to the
linked lists of the buffer cache. The free list is a cache of pages that are available for
reassignment, but a process may fault on an address and still find the corresponding page intact
on the free list.
The free list thus allows the kernel to avoid unnecessary read operations from the swap device.
The kernel allocates new pages from the list in least recently used order. The kernel also hashes
the pfdata table entry according to its (swap) device number and block number. Thus, given a
device and block number, the kernel can quickly locate a page if it is in memory.
To assign a physical page to a region, the kernel removes a free page frame entry from the head of
the free list, updates its swap device and block numbers, and puts it onto the correct hash queue.
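Building on the pfdata sketch above, the following illustrative C routines show how the free list and hash queues could work together: pfind() locates a page given a (device, block number) pair, and pfalloc() assigns a free page to a region by taking the least recently used entry from the head of the free list, recording its new swap device and block number, and rehashing it. The names, sizes, and missing locking are simplifications, not the real kernel code.

    #include <stddef.h>     /* NULL */
    #include <sys/types.h>  /* dev_t */

    #define NPFHASH 64                       /* hypothetical number of hash queues */

    static struct pfdata *pfhash[NPFHASH];   /* hash queues by (device, block)     */
    static struct pfdata *pffreelist;        /* head of free list = LRU page       */

    static int pfhashno(dev_t dev, unsigned long blkno)
    {
        return (int)((dev + blkno) % NPFHASH);
    }

    /* Given a device and block number, find the page if it is still in memory. */
    struct pfdata *pfind(dev_t dev, unsigned long blkno)
    {
        struct pfdata *pf;

        for (pf = pfhash[pfhashno(dev, blkno)]; pf != NULL; pf = pf->pf_hashnext)
            if (pf->pf_dev == dev && pf->pf_blkno == blkno)
                return pf;
        return NULL;
    }

    /* Assign a physical page to a region: take the head of the free list,
     * record its new swap device and block number, and rehash it.
     * (Removing the page from its old hash queue, and all locking, is
     * omitted here for brevity.) */
    struct pfdata *pfalloc(dev_t dev, unsigned long blkno)
    {
        struct pfdata *pf = pffreelist;
        int h;

        if (pf == NULL)
            return NULL;                     /* no free page: caller must wait */
        pffreelist = pf->pf_freenext;        /* remove from head (LRU order)   */

        pf->pf_dev = dev;
        pf->pf_blkno = blkno;
        pf->pf_count = 1;

        h = pfhashno(dev, blkno);
        pf->pf_hashnext = pfhash[h];         /* put it on the correct hash queue */
        pfhash[h] = pf;
        return pf;
    }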
The swap-use table contains an entry for every page on a swap device. Each entry holds a
reference count of how many page table entries point to that page on the swap device.
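A minimal sketch of a swap-use table entry, again with assumed names, could be as simple as:

    #define NSWAPPAGES 10000                /* hypothetical size of the swap area    */

    struct swapuse {
        short su_count;    /* page table entries pointing to this swap page */
    };

    struct swapuse swaptab[NSWAPPAGES];     /* one entry per page on the swap device */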
Q.2 Explain the working of the page stealer process.
Q.3 Explain page faults. Explain the handling of a validity fault.
The system can incur two types of page faults: validity faults and protection faults. Because the
fault handlers may have to read a page from disk to memory and sleep during the I/O operation,
fault handlers are an exception to the general rule that interrupt handlers cannot sleep. However,
because the fault handler sleeps in the context of the process that caused the memory fault, the
fault relates to the running process; hence, no arbitrary processes are put to sleep.
Validity Fault Handler:
If a process attempts to access a page whose valid bit is not set, it incurs a validity fault and the
kernel invokes the validity fault handler (Figure 7.10). The valid bit is not set for pages outside
the virtual address space of a process, nor is it set for pages that are part of the virtual address
space but do not currently have a physical page assigned to them. The hardware supplies the
kernel with the virtual address that was accessed to cause the memory fault, and the kernel finds
the page table entry and disk block descriptor for the page.
The page that caused the fault is in one of five states:
1. On a swap device and not in memory,
2. On the free page list in memory,
3. In an executable file,
4. Marked "demand zero",
5. Marked "demand fill".
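The skeleton below sketches how a validity fault handler might dispatch on these five states; every type and helper function in it (find_pte, page_state, swapin, and so on) is an assumed name used only to show the control flow, not the actual kernel routine.

    struct pte;                     /* page table entry (layout not needed here) */
    struct dbd;                     /* disk block descriptor                     */

    enum pgstate {
        P_ONSWAP,                   /* 1. on a swap device and not in memory     */
        P_ONFREELIST,               /* 2. still intact on the free page list     */
        P_INFILE,                   /* 3. in an executable file                  */
        P_DEMANDZERO,               /* 4. marked "demand zero"                   */
        P_DEMANDFILL                /* 5. marked "demand fill"                   */
    };

    extern struct pte  *find_pte(unsigned long vaddr);
    extern struct dbd  *find_dbd(unsigned long vaddr);
    extern enum pgstate page_state(struct pte *pte, struct dbd *dbd);
    extern void swapin(struct pte *pte, struct dbd *dbd);    /* read from swap     */
    extern void reclaim(struct pte *pte);                    /* grab off free list */
    extern void readfile(struct pte *pte, struct dbd *dbd);  /* read from the file */
    extern void zeropage(struct pte *pte);                   /* clear a new page   */
    extern void fillpage(struct pte *pte, struct dbd *dbd);  /* fill, no clearing  */
    extern void set_valid(struct pte *pte);

    void vfault(unsigned long vaddr)
    {
        struct pte *pte = find_pte(vaddr);    /* page table entry for the address */
        struct dbd *dbd = find_dbd(vaddr);    /* and its disk block descriptor    */

        switch (page_state(pte, dbd)) {
        case P_ONSWAP:     swapin(pte, dbd);   break;  /* process sleeps during I/O */
        case P_ONFREELIST: reclaim(pte);       break;  /* no disk read is needed    */
        case P_INFILE:     readfile(pte, dbd); break;  /* process sleeps during I/O */
        case P_DEMANDZERO: zeropage(pte);      break;  /* page is cleared to zeros  */
        case P_DEMANDFILL: fillpage(pte, dbd); break;  /* page will be overwritten  */
        }
        set_valid(pte);                       /* finally, turn on the valid bit */
    }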
Q.4 Explain allocation of swap space.
There are three parts to the description of the swapping algorithm: managing space on the swap
device, swapping processes out of main memory, and swapping processes into main memory.
Allocation of Swap Space
The swap device is a block device in a configurable section of a disk. Whereas the kernel
allocates space for files one block at a time, it allocates space on the swap device in groups of
contiguous blocks.
Space allocated for files is used statically; since it will exist for a long time, the allocation
scheme is flexible to reduce the amount of fragmentation and, hence, unallocatable space in the
file system. But the allocation of space on the swap device is transitory, depending on the pattern
of process scheduling.
A process that resides on the swap device will eventually migrate back to main memory, freeing
the space it had occupied on the swap device. Since speed is critical and the system can do I/O
faster in one multiblock operation than in several single block operations, the kernel allocates
contiguous space on the swap device without regard for fragmentation.
Because the allocation scheme for the swap device differs from the allocation scheme for file
systems, the data structures that catalog free space differ too. The kernel maintains free space for
file systems in a linked list of free blocks, accessible from the file system super block, but it
maintains the free space for the swap device in an in-core table, called a map.
Maps, used for other resources besides the swap device, allow a first-fit allocation of contiguous
“blocks” of a resource.
A map is an array where each entry consists of an address of an allocatable resource and the
number of resource units available there; the kernel interprets the address and units according to
the type of map.
Figure 7.6 illustrates an initial swap map that consists of 10,000 blocks starting at address 1.
As the kernel allocates and frees resources, it updates the map so that it continues to contain
accurate information about free resources.
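The sketch below illustrates first-fit allocation from such a map, seeded with the initial swap map of 10,000 blocks starting at address 1; the structure layout and the function names are assumptions for illustration, not the kernel's own routine.

    #include <stdio.h>

    struct mapent {
        unsigned long addr;      /* start of a free run of resource units           */
        unsigned long units;     /* number of contiguous units free at that address */
    };

    #define MAPSIZE 50

    static struct mapent swapmap[MAPSIZE];   /* a zero-unit entry terminates the map */

    /* First fit: allocate 'units' contiguous units; return their address, 0 if none. */
    unsigned long map_malloc(struct mapent *map, unsigned long units)
    {
        struct mapent *m;

        for (m = map; m->units != 0; m++) {
            if (m->units >= units) {
                unsigned long addr = m->addr;
                m->addr  += units;           /* shrink the entry from the front ... */
                m->units -= units;           /* ... leaving the remainder free      */
                /* (an entry shrunk to zero units should be deleted from the map;
                 * that compaction is omitted here for brevity) */
                return addr;
            }
        }
        return 0;                            /* no contiguous run is large enough */
    }

    int main(void)
    {
        /* the initial swap map of the example: 10,000 free blocks at address 1 */
        swapmap[0].addr  = 1;
        swapmap[0].units = 10000;

        printf("%lu\n", map_malloc(swapmap, 100));   /* prints 1: blocks 1..100     */
        printf("%lu\n", map_malloc(swapmap, 50));    /* prints 101: blocks 101..150 */
        return 0;
    }

When swap space is freed, the kernel puts the run back into the map and merges it with adjacent free entries where possible, so the map continues to hold accurate information about free resources.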
Q.5 Explain the swapping of processes between swap space and main memory.