Unit II Complete OS
2.1 Memory Management:
• It is the functionality of an operating system which handles or manages primary memory and moves
processes back and forth between main memory and disk during execution.
• It keeps track of each and every memory location, regardless of whether it is allocated to some
process or free, and checks how much memory is to be allocated to processes.
• It decides which process will get memory at what time. It tracks whenever memory is freed or
becomes unallocated and updates the status accordingly.
• The set of all logical addresses generated by a program is referred to as a logical address space.
• The set of all physical addresses corresponding to these logical addresses is referred to as a physical
address space.
• The runtime mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device.
• Address binding is the process of mapping from one address space to another address space.
• A logical address is an address generated by the CPU during execution, whereas a physical address
refers to a location in the memory unit (the address that is actually loaded into memory).
• The user deals only with logical addresses (virtual addresses). A logical address is translated by the
MMU, in particular by its address translation unit.
• The output of this process is the appropriate physical address or the location of code/data in RAM.
• Compile Time
o If it is known at compile time where the process will reside in memory, then absolute
addresses are generated.
o Loading the executable as a process in memory is very fast.
o But if the generated address space is preoccupied by another process, the program crashes,
and it becomes necessary to recompile the program to change the address space.
• Load Time
o If it is not known at compile time where the process will reside, then relocatable addresses
are generated.
o The loader translates the relocatable addresses to absolute addresses: the base address of the
process in main memory is added to all logical addresses to generate the absolute address.
o In this scheme, if the base address of the process changes, the process must be reloaded.
• Execution Time
o The instructions are in memory and are being processed by the CPU.
o Additional memory may be allocated or deallocated at this time.
o This binding is used if the process can be moved from one memory segment to another
during execution.
• The degree of multiprogramming describes the maximum number of processes that a single-processor
system can accommodate efficiently.
• The primary factor affecting the degree of multiprogramming is the amount of memory available to
be allocated to executing processes.
2.1.4 Swapping
• It is a mechanism in which a process can be swapped temporarily out of main memory to secondary
storage and make that memory available to other processes.
• At some later time, the system swaps back the process from the secondary storage to main memory.
• Though performance is usually affected by the swapping process, it helps in running multiple large
processes in parallel.
2.1.5 Loading and Linking
The choice between static and dynamic loading is made at the time the computer program is being
developed.
• If you load your program statically, then at compilation time the complete program will be
compiled and linked, leaving no external program or module dependency.
• If you are writing a dynamically loaded program, then your compiler will compile the program,
and for all the modules which you want to include dynamically only references will be provided;
the rest of the work is done at execution time.
• When static linking is used, the linker combines all other modules needed by a program into a single
executable program to avoid any runtime dependency.
• When dynamic linking is used, it is not required to link the actual module or library with the
program, rather a reference to the dynamic module is provided at the time of compilation and
linking.
2.2 Contiguous Memory allocation schemes:
• All the available memory space remains together in one place; free memory partitions are not
scattered across the whole memory space.
• Both the operating system and the user program must reside in main memory. Main memory is
divided into two portions: one portion for the OS and the other for the user program.
• When a user process requests memory, a single section of a contiguous memory block is given to
that process according to its need.
• In fixed partitioning, a single process is allocated to each fixed-size partition. The degree of
multiprogramming (the number of processes in main memory) is therefore bounded by the number
of fixed partitions in memory.
2.2.1 First Fit
• This approach allocates the first free partition (hole) large enough to accommodate the process.
The search finishes as soon as the first suitable free partition is found.
• Advantage
o It is fast, since the search stops at the first suitable hole.
• Disadvantage
o The remaining unused memory areas left after allocation become wasted if they are too
small, so requests for larger memory cannot be satisfied from them.
2.2.2 Next Fit
• It begins like first fit to find a free partition. When called the next time, it starts searching from
where it left off, not from the beginning.
2.2.3 Best Fit
• This approach allocates the smallest free partition that meets the requirement of the requesting
process.
• The algorithm searches the entire list of free partitions and chooses the smallest hole that is
adequate, i.e., one close to the actual process size needed.
• Advantage
o Memory utilization is much better than first fit, as it allocates the smallest adequate free
partition.
• Disadvantage
o It is slower and may even tend to fill up memory with tiny useless holes.
2.2.4 Worst Fit
• This approach locates the largest available free portion, so that the portion left over will be big
enough to be useful. It is the reverse of best fit.
• Advantage
o The leftover hole after allocation is large, so it is more likely to be useful for another
process.
• Disadvantage
o If a process requiring larger memory arrives at a later stage, it cannot be accommodated,
as the largest hole has already been split and occupied.
2.2.5 Quick Fit
• The quick fit algorithm maintains separate lists of holes of frequently requested sizes.
• It is not practically advisable, because the procedure takes much time to create and maintain the
different lists and to split holes when loading a process.
2.3 Free Space Management:
The system keeps track of the free disk blocks for allocating space to files when they are created. Also, to
reuse the space released by deleting files, free space management becomes crucial. The system maintains a
free space list which keeps track of the disk blocks that are not allocated to any file or directory. The free
space list can be implemented mainly as:
2.3.1 Bitmap
A bitmap or bit vector is a series or collection of bits in which each bit corresponds to a disk block. A bit
can take two values: 0 indicates that the block is allocated, and 1 indicates a free block.
Advantages
• Simple to understand.
• Finding the first free block is efficient. It requires scanning the bitmap word by word for a
non-zero word (a 0-valued word has all its bits 0, i.e., all its blocks allocated). The first free
block is then found by scanning for the first 1 bit in the non-zero word, as in the sketch below.
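A minimal C sketch of this scan, assuming a small disk of 64 blocks with the bitmap stored as an array of
bytes; the bit-within-word numbering is a convention chosen for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define NBLOCKS 64               /* total disk blocks (assumption) */
    #define WORDS   (NBLOCKS / 8)    /* bitmap stored as bytes here    */

    /* Find the first free block: skip all-zero words (every block in
     * that word allocated, since 1 = free as in the text), then scan
     * the first non-zero word bit by bit. Returns -1 if none free.   */
    int first_free_block(const uint8_t bitmap[WORDS]) {
        for (int w = 0; w < WORDS; w++) {
            if (bitmap[w] == 0)
                continue;                     /* no free block in this word */
            for (int b = 0; b < 8; b++)
                if (bitmap[w] & (1u << b))
                    return w * 8 + b;         /* block number */
        }
        return -1;
    }

    int main(void) {
        uint8_t bitmap[WORDS] = {0};
        bitmap[2] = 0x10;    /* mark block 20 free: word 2, bit 4 */
        printf("first free block: %d\n", first_free_block(bitmap));
        return 0;
    }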
2.3.2 Linked List
In this approach, the free disk blocks are linked together, i.e., a free block contains a pointer to the next free
block. The block number of the very first free disk block is stored at a separate location on disk and is also
cached in memory. A drawback of this method is the I/O required to traverse the free space list.
2.3.3 Grouping
This approach stores the address of the free blocks in the first free block. The first free block stores the
address of some, say n free blocks. Out of these n blocks, the first n-1 blocks are actually free and the last
block contains the address of next free n blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be found easily.
2.3.4 Counting
This approach stores the address of the first free disk block and a number n of free contiguous disk blocks
that follow the first block.
Every entry in the list would contain a disk address (the first free block) and a count (the number of
contiguous free blocks that follow it), as in the sketch below.
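A tiny C sketch of what such an entry could look like; the struct and field names are illustrative, not from
any particular file system:

    #include <stdio.h>

    /* One entry of the counting free list: the address of the first
     * free block and how many contiguous free blocks follow it.     */
    struct free_extent {
        unsigned long first_block;   /* first free disk block          */
        unsigned long count;         /* contiguous free blocks in run  */
    };

    int main(void) {
        /* Hypothetical list: blocks 17-20 and 42-43 are free. */
        struct free_extent list[] = { {17, 4}, {42, 2} };
        for (int i = 0; i < 2; i++)
            printf("blocks %lu..%lu free\n", list[i].first_block,
                   list[i].first_block + list[i].count - 1);
        return 0;
    }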
2.4 Fragmentation:
In a computer storage system, as processes are loaded into and removed from memory, the free memory
space is broken into small pieces. In this way memory space is used inefficiently, so the capacity or
performance of the system may degrade. In most cases, memory space is wasted. Sometimes memory
blocks cannot be allocated to processes due to their small size, and the blocks remain unused. This problem
is known as fragmentation.
Cause:
• User processes are loaded into and removed from main memory. At the time of process loading
and swapping, many small spaces are left which cannot hold any other process because of their
size.
• Due to dynamic allocation of main memory to processes, memory may be available, but the
contiguous space is not sufficient to load another process.
External fragmentation happens when there is a sufficient total quantity of free memory to satisfy a
process's request, but the request cannot be fulfilled because the available memory is non-contiguous.
Whether the first-fit or the best-fit memory allocation strategy is applied, external fragmentation will occur.
2.5 Virtual Memory:
• It is a storage allocation scheme in which secondary memory can be addressed as though it were
part of main memory.
• The addresses a program may use to reference memory are distinguished from the addresses the
memory system uses to identify physical storage sites, and program generated addresses are
translated automatically to the corresponding machine addresses.
• The size of virtual storage is limited by the addressing scheme of the computer system and by the
amount of secondary memory available, not by the actual number of main storage locations.
• It is a technique that is implemented using both hardware and software. It maps memory addresses
used by a program, called virtual addresses, into physical addresses in computer memory.
2.6 Paging:
• Concept
Paging is a memory management technique in which process address space is broken into blocks of the
same size called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of the process is
measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames and
the size of a frame is kept the same as that of a page to have optimum utilization of the main memory and to
avoid external fragmentation.
• Address Translation
Page address is called logical address and represented by page number and the offset.
Frame address is called physical address and represented by a frame number and the offset.
A data structure called the page map table is used to keep track of the relation between a page of a process
and a frame in physical memory; a minimal sketch of the translation follows.
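A minimal C sketch of this translation, assuming a 4-KB page size and a hypothetical single-level page
table; the mapping values are made up for illustration:

    #include <stdio.h>

    #define PAGE_SIZE 4096u                 /* assumed page size (power of 2) */

    /* Hypothetical page table: page_table[p] holds the frame of page p. */
    static unsigned page_table[8] = {5, 2, 7, 0, 3, 1, 6, 4};

    /* Split a logical address into (page number, offset) and combine the
     * mapped frame number with the offset to form the physical address. */
    unsigned translate(unsigned logical) {
        unsigned page   = logical / PAGE_SIZE;   /* high-order bits */
        unsigned offset = logical % PAGE_SIZE;   /* low-order bits  */
        unsigned frame  = page_table[page];      /* page-table lookup */
        return frame * PAGE_SIZE + offset;
    }

    int main(void) {
        unsigned logical = 2 * PAGE_SIZE + 123;  /* page 2, offset 123 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }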
• Working
When the system allocates a frame to any page, it translates this logical address into a physical address and
creates an entry in the page table to be used throughout execution of the program.
When a process is to be executed, its corresponding pages are loaded into any available memory frames.
Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a given point in
time; then the paging concept comes into the picture. When a computer runs out of RAM, the operating
system (OS) moves idle or unwanted pages of memory to secondary memory to free up RAM for other
processes, and brings them back when needed by the program.
This process continues during the whole execution of the program: the OS keeps removing idle pages from
main memory, writing them to secondary memory, and bringing them back when required by the program.
2.6.1 Page Table
The page table is a data structure used by the virtual memory system to store the mapping between logical
addresses and physical addresses. Logical addresses are generated by the CPU for the pages of the
processes, so they are generally used by the processes. Physical addresses are the actual frame addresses of
the memory; they are generally used by the hardware, or more specifically by the RAM subsystem.
2.6.2 TLB
The page table can be too large to keep in fast hardware registers, so the entire page table is kept in main
memory; but then two main memory references are required for every access: one to fetch the page table
entry and one to fetch the data itself.
To overcome this problem, a high-speed cache for page table entries is set up, called a Translation
Lookaside Buffer (TLB). The TLB is a special cache used to keep track of recently used translations; it
contains the page table entries that have been most recently used. Given a virtual address, the processor
examines the TLB: if the page table entry is present (TLB hit), the frame number is retrieved and the real
address is formed. If the entry is not found in the TLB (TLB miss), the page number is used to index the
process page table; if the page is not in main memory, a page fault is issued, and afterwards the TLB is
updated to include the new page entry. A minimal sketch follows.
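A minimal C sketch of the lookup path, assuming a tiny fully associative TLB with FIFO replacement and a
stand-in page table function; all names and the mapping are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    #define TLB_SIZE 4

    struct tlb_entry { unsigned page, frame; bool valid; };
    static struct tlb_entry tlb[TLB_SIZE];
    static int next_slot;                 /* simple FIFO replacement in the TLB */

    /* Stand-in for the in-memory page table (hypothetical mapping). */
    static unsigned page_table_lookup(unsigned page) { return page + 100; }

    unsigned lookup_frame(unsigned page) {
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].page == page) {
                printf("page %u: TLB hit\n", page);
                return tlb[i].frame;
            }

        /* TLB miss: index the process page table and cache the entry. */
        printf("page %u: TLB miss\n", page);
        unsigned frame = page_table_lookup(page);
        tlb[next_slot] = (struct tlb_entry){ page, frame, true };
        next_slot = (next_slot + 1) % TLB_SIZE;
        return frame;
    }

    int main(void) {
        lookup_frame(3);   /* miss: goes to the page table */
        lookup_frame(3);   /* hit: served from the TLB     */
        return 0;
    }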
2.6.3 Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in
secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs,
the operating system does not copy any of the old program's pages out to disk or any of the new program's
pages into main memory. Instead, it just begins executing the new program after loading the first page, and
fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in main memory
because it was swapped out earlier, the processor treats this invalid memory reference as a page fault and
transfers control from the program to the operating system, which demands the page back into memory.
Advantages
• Large virtual memory can be provided.
• Memory is used more efficiently, since only the pages that are actually referenced are loaded.
Disadvantages
• Number of tables and the amount of processor overhead for handling page interrupts are greater
than in the case of the simple paged management techniques.
Prepaging is the opposite policy: when a process requests a page, the OS loads the following consecutive
pages into memory as well.
• It saves time when large contiguous structures are used, but wastes memory and time when the
pages are not needed.
• It can reduce the large number of page faults that occur at process startup; however, if prepaged
pages are unused, the I/O and memory spent on them are wasted.
2.6.4 Multilevel Paging
It is a paging scheme which consists of two or more levels of page tables arranged hierarchically. The
entries of the level 1 page table are pointers to level 2 page tables, the entries of the level 2 page tables are
pointers to level 3 page tables, and so on. The entries of the last-level page table store the actual frame
information. Level 1 contains a single page table, and the address of that table is stored in the PTBR (Page
Table Base Register).
In multilevel paging, whatever the number of levels, all the page tables are stored in main memory. So
more than one memory access is required to obtain the physical address of a page frame: one access for
each level. Each page table entry, except at the last level, contains the base address of the next-level page
table. The sketch below shows how a virtual address is split into the per-level indices.
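A minimal C sketch of splitting a virtual address for a two-level walk, assuming a 32-bit address divided as
10 + 10 + 12 bits (a common textbook split, not a fixed rule):

    #include <stdio.h>

    /* Assumed 32-bit virtual address split as 10 | 10 | 12:
     * 10 bits index the level-1 table, 10 bits the level-2 table,
     * and 12 bits are the offset within a 4-KB page.            */
    #define L1_BITS     10
    #define L2_BITS     10
    #define OFFSET_BITS 12

    int main(void) {
        unsigned va = 0x00403004;                     /* example address */
        unsigned l1     = va >> (L2_BITS + OFFSET_BITS);
        unsigned l2     = (va >> OFFSET_BITS) & ((1u << L2_BITS) - 1);
        unsigned offset = va & ((1u << OFFSET_BITS) - 1);
        /* A full walk would be: l2_table = l1_table[l1];
         * frame = l2_table[l2]; physical = frame | offset. */
        printf("L1 index %u, L2 index %u, offset %u\n", l1, l2, offset);
        return 0;
    }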
Disadvantage
• Extra memory references to access the address translation tables can slow programs down by a
factor of two or more. A translation lookaside buffer (TLB) is used to speed up address translation
by caching page table entries.
2.6.5 Inverted Page Table
The inverted page table is a global page table maintained by the operating system for all processes; the
number of entries equals the number of frames in main memory. It can be used to overcome the drawback
of ordinary page tables: in a conventional page table, space is always reserved for a page regardless of
whether it is present in main memory, which is simply a waste of memory if the page is not present.
We can save this wastage by inverting the page table: we keep details only for the pages which are present
in main memory. Frames are the indices, and the information saved in each entry is the process ID and the
page number.
2.7 Page replacement algorithms:
Page replacement algorithms are the techniques by which an operating system decides which memory
pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens
whenever a page fault occurs and no free page can be used for the allocation, either because no pages are
free or because the number of free pages is lower than a required threshold.
When a page that was selected for replacement and paged out is referenced again, it has to be read back in
from disk, and this requires waiting for I/O completion. This determines the quality of a page replacement
algorithm: the less time spent waiting for page-ins, the better the algorithm.
2.7.1 FIFO
• Oldest page in main memory is the one which will be selected for replacement.
• Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
2.7.1.1 Belady’s anomaly
Generally, increasing the number of frames allocated to a process's virtual memory speeds up its execution,
as fewer page faults occur. Sometimes the reverse happens, i.e., more page faults occur when more frames
are allocated to a process. This most unexpected result is termed Belady's anomaly: the phenomenon where
increasing the number of page frames results in an increase in the number of page faults for a given memory
access pattern. The FIFO sketch below demonstrates it.
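A minimal C sketch of FIFO replacement that counts page faults. Run on the classic reference string
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, it reproduces Belady's anomaly: 9 faults with 3 frames but 10 faults with 4:

    #include <stdio.h>
    #include <stdbool.h>

    /* Count page faults for a reference string under FIFO replacement.
     * frames[] holds resident pages; the oldest is replaced in circular
     * order. nframes must be at most 16 for this sketch.               */
    int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16], head = 0, loaded = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            bool hit = false;
            for (int j = 0; j < loaded; j++)
                if (frames[j] == refs[i]) { hit = true; break; }
            if (hit) continue;
            faults++;
            if (loaded < nframes)
                frames[loaded++] = refs[i];      /* free frame available */
            else {
                frames[head] = refs[i];          /* evict the oldest page */
                head = (head + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }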
2.7.2 LRU
• Page which has not been used for the longest time in main memory is the one which will be selected
for replacement.
• Easy to implement: keep a list and replace pages by looking backwards in time (see the sketch
below).
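A minimal C sketch of LRU replacement, using a last-use timestamp per resident page to pick the victim;
the reference string is illustrative:

    #include <stdio.h>
    #include <stdbool.h>

    /* Count page faults under LRU: each resident page carries the time
     * of its last use; the victim is the page with the smallest stamp.
     * nframes must be at most 16 for this sketch.                      */
    int lru_faults(const int *refs, int n, int nframes) {
        int frames[16], stamp[16], loaded = 0, faults = 0;
        for (int t = 0; t < n; t++) {
            bool hit = false;
            for (int j = 0; j < loaded; j++)
                if (frames[j] == refs[t]) { stamp[j] = t; hit = true; break; }
            if (hit) continue;
            faults++;
            if (loaded < nframes) {
                frames[loaded] = refs[t];        /* free frame available */
                stamp[loaded++] = t;
            } else {
                int victim = 0;                  /* least recently used slot */
                for (int j = 1; j < nframes; j++)
                    if (stamp[j] < stamp[victim]) victim = j;
                frames[victim] = refs[t];
                stamp[victim] = t;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2};
        printf("LRU faults: %d\n",
               lru_faults(refs, sizeof refs / sizeof refs[0], 3));
        return 0;
    }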
2.7.3 Optimal
• An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
• Replace the page that will not be used for the longest period of time. This requires future
knowledge of when each page will next be used, so it serves as a benchmark rather than a
practical algorithm.
2.7.4 NRU
• Not Recently Used (NRU) uses two status bits kept with each page: a referenced bit (R) and a
modified bit (M).
• The bits classify pages into four classes: (R = 0, M = 0), (R = 0, M = 1), (R = 1, M = 0) and
(R = 1, M = 1); a page with Referenced Bit = 0 and Modified Bit = 0 is the best candidate for
replacement.
• Periodically (e.g., on each clock interrupt) the R bits are cleared, to distinguish pages that have
not been referenced recently.
• On a page fault, NRU removes a page at random from the lowest-numbered non-empty class.
2.7.5 Second Chance
• A simple modification to FIFO: inspect the R bit of the oldest page.
• If R = 0, the page is both old and unused, so it is replaced immediately.
• Else (R = 1), the bit is cleared, the page is put at the end of the list as though it had just arrived,
and the search repeats with the next page.
• If every page has been referenced, the algorithm eventually cycles back to the first page; this time,
its ref. bit will be 0 and we'll select it.
2.7.6 Clock
• The clock algorithm keeps the page frames on a circular list, with a hand pointing to the current
position (the oldest page).
• On a page fault, the page the hand points to is inspected: if its R bit is 0 the page is evicted; if R
is 1 the bit is cleared and the hand advances, repeating until a page with R = 0 is found.
• It behaves like second chance but avoids moving pages around a list; see the sketch below.
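A minimal C sketch of the hand movement, assuming 4 frames and illustrative page numbers:

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 4

    /* One frame on the circular list: which page it holds and its R bit. */
    struct frame { int page; bool referenced; };
    static struct frame frames[NFRAMES];
    static int hand;                 /* current position of the clock hand */

    /* Choose a victim frame: the first frame whose R bit is 0.
     * Frames with R = 1 get a second chance: clear the bit and advance. */
    int clock_select(void) {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;   /* give a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        for (int i = 0; i < NFRAMES; i++)
            frames[i] = (struct frame){ i + 10, true };
        frames[2].referenced = false;          /* frame 2 not recently used */
        printf("victim frame: %d\n", clock_select());   /* prints 2 */
        return 0;
    }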
2.7.7 NFU
• Not Frequently Used (NFU) keeps a software counter with each page, initially zero. At each clock
interrupt, the R bit (0 or 1) is added to the counter of every page, so the counters roughly track
how often each page has been referenced.
• On a page fault, the page with the lowest counter is replaced.
• Problem: NFU never forgets anything, so a page that was heavily used long ago keeps a high
count and may stay in memory even though it is no longer needed. (The aging variant fixes this by
shifting the counters right before adding R.)
2.7.8 WS Clock
• WSClock combines the clock algorithm with working-set information: the frames are kept on a
circular list, and each entry records the time of last use.
• On a page fault, the page pointed to by the hand is examined. If its reference bit is 1, the bit is
cleared, the time of last use is updated, and the hand advances.
• If Reference Bit = 0 and the page is older than the working-set window, it is a candidate for
removal: if the page is clean, it is reclaimed immediately; if the page is modified, a disk write is
scheduled and the hand advances to look for a clean page.
2.8 Design Issues for Paging Systems
In the previous sections we have explained how paging works and have given a few of the basic page
replacement algorithms and shown how to model them. But knowing the bare mechanics is not enough: to
design a system, you have to know a lot more to make it work well. In the following sections we will look
at other issues that operating system designers must consider in order to get good performance from a
paging system.
2.8.1 Local versus Global Allocation Policies
Local algorithms effectively correspond to allocating every process a fixed fraction of the memory. Global
algorithms dynamically allocate page frames among the runnable processes. Thus the number of page
frames assigned to each process varies in time.
In general, global algorithms work better, especially when the working set size can vary over the lifetime of
a process. If a local algorithm is used and the working set grows, thrashing will result, even if there are
plenty of free page frames. If the working set shrinks, local algorithms waste memory. If a global algorithm
is used, the system must continually decide how many page frames to assign to each process. One way is to
monitor the working set size as indicated by the aging bits, but this approach does not necessarily prevent
thrashing. The working set may change size in microseconds, whereas the aging bits are a crude measure
spread over a number of clock ticks.
If a global algorithm is used, it may be possible to start each process up with some number of pages
proportional to the process' size, but the allocation has to be updated dynamically as the processes run. One
way to manage the allocation is to use the PFF (Page Fault Frequency) algorithm. It tells when to increase
or decrease a process' page allocation but says nothing about which page to replace on a fault. It just
controls the size of the allocation set.
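A minimal C sketch of the PFF idea; the threshold values and function name are assumptions for
illustration, not from any particular system:

    #include <stdio.h>

    /* Grow or shrink a process's frame allocation to keep its measured
     * page-fault rate inside an acceptable band (assumed thresholds). */
    #define PFF_UPPER 10.0   /* faults/sec above which we add a frame   */
    #define PFF_LOWER  2.0   /* faults/sec below which we reclaim one   */

    int adjust_allocation(int frames, double fault_rate) {
        if (fault_rate > PFF_UPPER)
            return frames + 1;        /* too many faults: give a frame  */
        if (fault_rate < PFF_LOWER && frames > 1)
            return frames - 1;        /* too few faults: take one back  */
        return frames;                /* rate acceptable: leave alone   */
    }

    int main(void) {
        printf("%d\n", adjust_allocation(8, 15.0));  /* 9: faulting hard */
        printf("%d\n", adjust_allocation(8, 0.5));   /* 7: memory spare  */
        return 0;
    }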
2.8.2 Page Size
The page size is often a parameter that can be chosen by the operating system. Even if the hardware has
been designed with, for example, 512-byte pages, the operating system can easily regard pages 0 and 1, 2
and 3, 4 and 5, and so on, as 1-KB pages by always allocating two consecutive 512-byte page frames for
them. Determining the best page size requires balancing several competing factors; as a result, there is no
overall optimum.
• To start with, there are two factors that argue for a small page size.
o A randomly chosen text, data, or stack segment will not fill an integral number of pages. On
the average, half of the final page will be empty. The extra space in that page is wasted.
o Another argument for a small page size becomes apparent if we think about a program
consisting of eight sequential phases of 4 KB each. With a 32-KB page size, the program
must be allocated 32 KB all the time. With a 16-KB page size, it needs only 16 KB. With a
page size of 4 KB or smaller, it requires only 4 KB at any instant. In general, a large page
size will cause more unused program to be in memory than a small page size.
• On the other hand, there are factors that argue for a large page size.
o Small pages mean that programs will need many pages and hence a large page table. A
32-KB program needs only four 8-KB pages, but 64 512-byte pages.
o On some machines, the page table must be loaded into hardware registers every time the
CPU switches from one process to another. On these machines having a small page size
means that the time required to load the page registers gets longer as the page size gets
smaller.
2.8.3 Shared Pages
In large systems it is common for several processes to run the same program at the same time, and it is more
efficient to share the pages between them. But a problem arises in that not all pages are shareable.
Generally, read-only pages are shareable, for example program text, but data pages are not. With shared
pages, a problem occurs whenever two or more processes share some code.
To increase the TLB reach (the amount of memory accessible from the TLB), two approaches are possible:
• Increase the page size: this may lead to an increase in fragmentation, as not all applications require
a large page size.
• Provide multiple page sizes: this allows applications that require larger page sizes the opportunity
to use them without an increase in fragmentation.
2.9 Thrashing:
When a program needs more space than RAM has, or needs space when RAM is full, the operating system
tries to allocate space from secondary memory and behaves as if it had that much memory, by serving that
program. This concept is called virtual memory.
If page faults and the subsequent swapping happen very frequently, at a high rate, then the operating system
has to spend more time swapping these pages than doing useful work. This state is called thrashing.
Because of this, CPU utilization is reduced.
2.10 Segmentation
A process is divided into segments: the chunks into which a program is divided, which are not necessarily
all of the same size. Segmentation gives the user's view of the process, which paging does not; here the
user's view is mapped onto physical memory.
There is no simple relationship between logical addresses and physical addresses in segmentation. A table
stores the information about all such segments and is called Segment Table.
Segment Table
• It maps the two-dimensional logical address into a one-dimensional physical address. Each table
entry has:
o Base Address: It contains the starting physical address where the segments reside in
memory.
o Limit: It specifies the length of the segment.
• Segment offset (d): the number of bits required to represent the size of the segment; the offset
must be less than the segment limit. A minimal sketch of the translation follows this list.
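A minimal C sketch of segmentation translation, using an illustrative segment table (the base and limit
values are made up):

    #include <stdio.h>

    /* Segment-table entry as described above: base address and limit. */
    struct segment { unsigned base, limit; };

    /* Hypothetical segment table for one process. */
    static struct segment seg_table[] = {
        { 1400, 1000 },   /* segment 0 */
        { 6300,  400 },   /* segment 1 */
        { 4300, 1100 },   /* segment 2 */
    };

    /* Translate a two-dimensional logical address (s, d) into a
     * one-dimensional physical address; offsets past the limit trap. */
    int translate(unsigned s, unsigned d, unsigned *phys) {
        if (d >= seg_table[s].limit)
            return -1;                        /* addressing error: trap */
        *phys = seg_table[s].base + d;
        return 0;
    }

    int main(void) {
        unsigned phys;
        if (translate(2, 53, &phys) == 0)
            printf("(2, 53) -> %u\n", phys);  /* 4300 + 53 = 4353 */
        if (translate(0, 1222, &phys) != 0)
            printf("(0, 1222) -> trap (offset exceeds limit)\n");
        return 0;
    }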
Advantages of Segmentation
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation
• As processes are loaded and removed from the memory, the free memory space is broken into little
pieces, causing External fragmentation.