Os Unit - 4
To improve both CPU utilization and the speed of its response to users, the computer must keep several processes in memory at the same time (sharing). This requires memory-management schemes, and the choice of scheme for a particular system depends on many factors.
Since main memory is usually too small to accommodate all the data and programs permanently,
the computer system must provide secondary storage to back up main memory.
Many algorithms require hardware support, although recent designs have closely integrated the
hardware and operating system.
Memory is central to the operation of a modern computer system. It consists of a large array of
words or bytes, each with its own address. The CPU fetches instructions from memory according
to the value of the program counter.
Main memory is usually divided into two partitions: kernel memory and user memory.
Input Queue: Collection of processes on disk that are waiting to be brought into memory to run
the program.
In most cases, a user program will go through several steps before being run.
ADDRESS BINDING
Most of the systems allow a user process to reside in any part of the physical memory. Although
the address space of the computer starts at 00000, the first address of the user process does not
need to be 00000.
• Compile time: If the memory location of a process is known a priori, absolute code (addresses
are physical) can be generated.
• Load time: Must generate relocatable code (addresses are virtual) if memory location is
not known at compile time.
• Execution time: Binding is delayed until run time if a process can move from one memory
segment to another during execution.
Keep in memory only those instructions and data that are needed at any given time.
Dynamic Linking and Dynamic Loading
Linking and Loading are utility programs that play an important role in the execution of a
program. Linking intakes the object codes generated by the assembler and combines them to
generate the executable module. On the other hand, the loading loads this executable module to
the main memory for execution.
Dynamic linking is similar to dynamic loading; here, though, linking, rather than loading, is
postponed until execution time. This feature is usually used with system libraries, such as
language subroutine libraries.
Without this facility, each program on a system must include a copy of its language library (or at
least the routines referenced by the program) in the executable image.
This requirement wastes both disk space and main memory.
With dynamic linking, a stub is included in the image for each library-routine reference.
The set of all logical addresses generated by a program is a Logical-address space; the set of all
physical addresses corresponding to these logical addresses is a Physical-address space.
The run-time mapping from virtual to physical addresses is done by a hardware device called the
memory-management unit (MMU).
Suppose the base is at 14000, then an attempt by the user to address location 0 is relocated
dynamically to 14000; thus access to location 356 is mapped to 14356.
It is important to note that the user program never sees the real physical addresses. The program
can create a pointer to location 356, store it in memory, manipulate it, and compare it with other
addresses, all as the number 356.
The base register is now called a relocation register. The value in the relocation register is
added to every address generated by a user process at the time it is sent to memory.
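The relocation-register mapping described above can be sketched as a tiny simulation. The base value 14000 follows the example in the text; the limit value is an assumed size for the process's address space:

```python
# Sketch of dynamic relocation with a relocation (base) register.
# The MMU adds the base to every logical address the CPU generates.

RELOCATION_REGISTER = 14000  # base address from the example above
LIMIT_REGISTER = 4000        # assumed size of the process's address space

def translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        raise MemoryError(f"trap: logical address {logical_address} out of range")
    return RELOCATION_REGISTER + logical_address

print(translate(0))    # 14000
print(translate(356))  # 14356
```

The limit check mirrors what the hardware does: any address outside the process's range raises a trap instead of silently touching another process's memory.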
Swapping is a mechanism in which a process can be temporarily swapped out of main memory to
secondary storage (disk), making that memory available to other processes. At some later time,
the system swaps the process back from secondary storage into main memory.
Although swapping usually affects performance, it helps in running multiple large processes in
parallel, and for that reason swapping is also known as a technique for memory compaction.
EXAMPLE:
Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When
a quantum expires, the memory manager starts to swap out the process that just finished and to
swap in another process into the memory space that has been freed.
• Swap-in: moving a process from secondary storage (hard disk) into main memory (RAM).
• Swap-out: taking a process out of main memory and placing it in secondary storage.
Swapping requires a backing store. The backing store is commonly a fast disk. It must be large
enough to accommodate copies of all memory images for all users, and it must provide direct
access to these memory images.
The system maintains a ready queue consisting of all processes whose memory images are on the
backing store or in memory and are ready to run.
Whenever the CPU scheduler decides to execute a process, it calls the dispatcher, which checks
whether the next process in the queue is in memory. If it is not, and there is no free memory
region, the dispatcher swaps out a process and swaps in the desired process.
Advantages of Swapping:
• Swapping can help to make more room and allow your programs to run more smoothly.
• Using a swap file, you can ensure that each program has its own dedicated chunk of
memory, which can help improve overall performance.
• Improve the degree of multi-programming.
• Better RAM utilization.
Disadvantages of Swapping:
• If the computer loses power during high swapping activity, the user may lose all
information related to the program.
• The number of page faults may increase, which can reduce overall processing performance.
The primary role of the memory management system is to satisfy requests for memory
allocation. Sometimes this is implicit, as when a new process is created. At other times,
processes explicitly request memory. Either way, the system must locate enough unallocated
memory and assign it to the process.
A process can be allocated into memory using one of the following methods:
Contiguous memory allocation is a technique where the operating system allocates a contiguous
block of memory to a process. This memory is allocated in a single, continuous chunk, making it
easy for the operating system to manage and for the process to access the memory. Contiguous
memory allocation is suitable for systems with limited memory sizes and where fast access to
memory is important.
Fixed Partitioning − In fixed partitioning, the memory is divided into fixed-size partitions, and
each partition is assigned to a process. This technique is easy to implement but can result in
wasted memory if a process does not fit perfectly into a partition.
Advantages of fixed partitioning:
• Simplicity
• Efficiency
• Low fragmentation
Disadvantages of fixed partitioning:
• Limited flexibility
• Memory wastage
• Difficulty in managing larger memory sizes
• External fragmentation
Non-contiguous memory allocation, on the other hand, is a technique where the operating system
allocates memory to a process in non-contiguous blocks. The blocks of memory allocated to the
process need not be contiguous, and the operating system keeps track of the various blocks
allocated to the process. Non-contiguous memory allocation is suitable for larger memory sizes
and where efficient use of memory is important.
Non-contiguous memory allocation can be done in two ways
Paging − In paging, the memory is divided into fixed-size pages, and each page is assigned to a
process. This technique is more efficient as it allows the allocation of only the required memory
to the process.
Disadvantages of paging include:
• Internal fragmentation
• Increased overhead
• Slower access
Fragmentation is an unwanted problem in the operating system in which the processes are
loaded and unloaded from memory, and free memory space is fragmented. Processes can't be
assigned to memory blocks due to their small size, and the memory blocks stay unused.
User processes are loaded into and unloaded from main memory, where they are kept in memory
blocks. After repeated loading and swapping, many small spaces remain into which another
process cannot fit because of its size. Main memory is available, but its free space is
insufficient to load another process, because of the dynamic allocation of main-memory
processes.
There are mainly two types of fragmentation in the operating system. These are as follows:
1. Internal Fragmentation
2. External Fragmentation
Internal Fragmentation:
When a process is allocated a memory block that is larger than the memory the process actually
requires, free space is left over inside the block. This leftover space inside the allocated
block goes unused, which causes internal fragmentation.
External Fragmentation
External fragmentation happens when a dynamic memory-allocation method allocates memory but
leaves small pieces of memory unusable. If there is too much external fragmentation, the quantity
of usable memory is substantially reduced: there is enough total memory space to satisfy a
request, but it is not contiguous. This is known as external fragmentation.
Types of Memory Allocation Partitioning Algorithms
• First-Fit: This is a fairly straightforward technique where we start at the beginning and
assign the first hole, which is large enough to meet the needs of the process.
• Best-Fit: This greedy method allocates the smallest hole that meets the needs of the
process, aiming to minimize the memory that would otherwise be wasted as leftover space.
• Worst-Fit: This is the opposite of best fit. Once the holes are examined, the largest hole
is chosen and assigned to the incoming process.
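The three placement strategies above can be sketched as follows. The free list of `(start, size)` holes and the request size are made-up illustrative values:

```python
# Sketch: choosing a hole for a request under first-, best-, and worst-fit.
# 'holes' is a hypothetical free list of (start, size) pairs.

def first_fit(holes, size):
    # first hole large enough, scanning from the beginning
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # smallest hole that still fits the request
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    # largest hole overall
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 212), (1100, 300)]
print(first_fit(holes, 212))  # (200, 500) - first hole big enough
print(best_fit(holes, 212))   # (800, 212) - tightest fit
print(worst_fit(holes, 212))  # (200, 500) - largest hole
```

Note how best-fit leaves no leftover in this case, while first-fit and worst-fit each split the 500-byte hole and leave a 288-byte remainder.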
PAGING in OS
Virtual memory is the ability of a computer to address more memory than is physically present
on the system.
Paging is a memory management technique, in which process address space is broken into
blocks of same size called Pages.
The size of a process can be measured in number of pages. Main memory is physically divided
into small fixed-size blocks called frames.
The size of main memory can be measured in number of frames. Paging brings processes from
secondary storage into main memory, whose space is split up into frames.
Page size = Frame size
One page of the process is stored in one of the memory frames. The pages can be positioned
anywhere in memory; there is no need to find contiguous frames or gaps.
The Frame has the same size as that of a Page. A frame is basically a place where a (logical)
page can be (physically) placed.
Each process is mainly divided into parts where the size of each part is the same as the page size.
There is a possibility that the size of the last part may be less than the page size.
Pages of a process are brought into the main memory only when there is a requirement otherwise
they reside in the secondary storage.
One page of a process is stored in one of the frames of memory. The pages of a process can be
stored at different, scattered locations in memory; the frames holding them need not be
contiguous.
The CPU always generates a logical address, but a physical address is needed to access main
memory. A logical address is divided into two parts: a page number (p) and a page offset (d).
The page number specifies the page of the process from which the CPU wants to read data, and it
is used as an index into the page table. The page offset specifies the particular word on that
page that the CPU wants to read.
Page Table in OS
The Page table mainly contains the base address of each page in the Physical memory. The base
address is then combined with the page offset in order to define the physical memory address
which is then sent to the memory unit.
Thus page table mainly provides the corresponding frame number (base address of the frame)
where that page is stored in the main memory.
The physical address consists of two parts:
• Frame Number(f)
• Page offset(d)
The frame number indicates the specific frame where the required page is stored, and the page
offset indicates the specific word to be read from that page.
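The page-number/offset split and page-table lookup can be sketched as a small simulation. The 1 KB page size and the page-to-frame mapping here are assumptions for illustration:

```python
# Sketch of paging address translation, assuming a 1 KB page size
# and a made-up page table mapping page numbers to frame numbers.

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page -> frame mapping

def translate(logical_address):
    # split the logical address into page number p and offset d
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page_number]   # index the page table with p
    return frame * PAGE_SIZE + offset # physical address = f*size + d

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2054
```

Because the page size is a power of two, real hardware performs the `divmod` as a simple bit split: the high bits are the page number and the low bits are the offset.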
Paging Hardware:
The PTBR in the above diagram means page table base register and it basically holds the base
address for the page table of the current process. It is a processor register and is managed by the
operating system. Commonly, each process running on a processor needs its own logical address
space.
Advantages of Paging
• There is no external fragmentation, since any free frame can hold any page.
• Swapping is simple, because pages and frames are the same size.
Disadvantages of Paging
• Paging may cause internal fragmentation in the last page of a process.
• Page tables consume extra memory, and address translation adds overhead to every memory access.
SEGMENTATION in OS
Like paging, segmentation is another non-contiguous memory-allocation technique. The process
is divided into modules (logical units), which gives a better view of the program's structure.
Segmentation is a variable-size partitioning scheme, meaning secondary memory and main
memory are divided into partitions of unequal size.
The size of the partitions depends on the length of the modules. The partitions of secondary
memory are called segments.
A Program is basically a collection of segments. And a segment is a logical unit such as:
• main program
• procedure
• function
• method
• object
• local variable and global variables.
• symbol table
• common block
• stack
• arrays
Types of Segmentation
A computer system that uses segmentation has a logical address space that can be viewed as a
collection of segments. The size of each segment is variable; it may grow or shrink.
As noted earlier, each segment has a name and a length. An address specifies both the name of
the segment and the displacement within the segment. For implementation, segments are
numbered, so a logical address consists of two parts:
• Segment Number(s)
• Segment Offset (d)
Segment Table in OS
Segment Base:
The segment base contains the starting physical address where the segment resides in memory.
Segment Limit:
The segment limit is mainly used to specify the length of the segment.
Segmentation Hardware:
Segment Table Base Register (STBR) points to the segment table's location in memory.
Segment Table Length Register (STLR) indicates the number of segments used by a program.
The segment number s is legal if s<STLR.
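The two checks described above, the STLR bound and the segment-limit bound, can be sketched together. The segment table values here are made up for illustration:

```python
# Sketch of segmentation hardware: each table entry holds (base, limit),
# and STLR bounds the legal segment numbers. Values are illustrative.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)  # number of segments used by the program

def translate(s, d):
    if s >= STLR:
        raise MemoryError("trap: illegal segment number")
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d  # physical address = segment base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```

Unlike paging, the offset here must be compared against a per-segment limit, because segments have variable sizes.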
Advantages of Segmentation
• There is no internal fragmentation, since each segment is exactly as large as the module it holds.
• Segments correspond to logical units of the program, which makes protection and sharing easier.
SEGMENTED PAGING
Non-contiguous memory allocation divides a process into blocks, either pages or segments.
Segmented paging is a scheme that implements the combination of segmentation and paging.
Process is first divided into segments and then each segment is divided into pages which are then
stored in the frames of main memory.
A page table exists for each segment that keeps track of the frames storing the pages of that
segment. Each page table occupies one frame in the main memory.
Number of entries in the page table of a segment = Number of pages into which that segment is divided.
A segment table exists that keeps track of the frames storing the page tables of segments.
Number of entries in the segment table of a process = Number of segments into which that process is divided.
The base address of the segment table is stored in the segment table base register.
The CPU always generates a logical address, while a physical address is needed to access main
memory. In segmented paging, the logical address consists of:
• Segment Number
• Page Number
• Page Offset
and the corresponding physical address consists of:
• Frame Number
• Page Offset
Segment Number specifies the specific segment from which the CPU wants to read the data.
Page Number specifies the specific page of that segment from which CPU wants to read the data.
Page Offset specifies the specific word on that page that CPU wants to read.
The frame number combined with the page offset forms the required physical address.
For the generated page offset, corresponding word is located in the page and read.
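The two-level lookup described above can be sketched as a small simulation. The per-segment page tables and the 256-byte page size are assumptions for illustration:

```python
# Sketch of segmented-paging translation: the segment number selects a
# page table, the page number indexes it, and the offset completes the
# physical address. Tables and the page size are made up.

PAGE_SIZE = 256
# segment_table[s] is the page table for segment s (page -> frame)
segment_table = [
    {0: 3, 1: 7},   # page table of segment 0
    {0: 1, 1: 4},   # page table of segment 1
]

def translate(s, p, d):
    frame = segment_table[s][p]   # two lookups: segment, then page
    return frame * PAGE_SIZE + d

print(translate(1, 0, 10))  # frame 1 -> 256 + 10 = 266
```

This matches the text: the segment table tracks page tables, each page table tracks frames, and the frame number combined with the offset forms the physical address.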
Advantages
• Segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of Page Table is limited by the segment size.
• It solves the problem of external fragmentation.
Disadvantages
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to paging.
DEMAND PAGING
A process is a collection of pages, and each page is a collection of instructions. The CPU
executes a process only if it resides in main memory. Sometimes the size of a process is larger
than main memory.
Virtual memory is a technique that allows the execution of processes that may not be
completely in main memory. It is the separation of user logical memory from physical memory.
A demand-paging system is similar to a Paging system with Swapping. It allows the system to
swap out pages that are not currently in use, freeing up memory for other processes.
Demand paging is also called lazy swapping: it avoids reading into memory pages that will not
be used anyway, decreasing the swap time and the amount of physical memory needed.
A swapper that deals with the individual pages of a process is referred to as a pager.
The OS checks whether a page is available in main memory in its active state; if it is not, a
request must be made for that page, and for this purpose an interrupt is generated.
To implement demand paging, some form of hardware support is required to keep track of which
pages are on the disk and which are in memory. This is done using a valid-invalid bit scheme:
the page table contains a valid-invalid bit for each virtual page of the process.
Page fault occurs if the process tries to access a page that was not swapped in memory.
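The valid-invalid bit check can be sketched as follows; the table entries and frame numbers are made up for illustration:

```python
# Sketch of the valid-invalid bit check a demand-paging system performs
# on every memory reference. Entries and frame numbers are illustrative.

# page table: page -> (frame, valid_bit); invalid pages live on disk
page_table = {0: (4, True), 1: (None, False), 2: (6, True)}

def access(page):
    frame, valid = page_table[page]
    if not valid:
        return "page fault"   # OS must bring the page in from disk
    return frame

print(access(0))  # 4
print(access(1))  # page fault
```

In real hardware, the invalid case raises a trap to the operating system, which then locates the page on disk, loads it into a free frame, updates the table, and restarts the instruction.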
Page Fault Handling
When the demanded page is not present in main memory, we call it a page fault.
To load the required page into memory when no frame is free, an existing page has to be
replaced to create room for it.
In case of a Page Fault, OS might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which page
to replace. The target for all algorithms is to reduce the number of page faults.
Page replacement is needed in the operating systems that use virtual memory using Demand
Paging.
1. FIFO (first-in-first-out)
2. LRU (least recently used)
3. Optimal page replacement
4. MFU (most frequently used)
FIFO (first-in-first-out)
This is the simplest page replacement algorithm. The operating system keeps all pages in
memory in a queue, with the oldest page at the front. When a page needs to be replaced, the
page at the front of the queue is selected for removal.
This is the first basic page replacement algorithm, and its behavior depends on the number of
frames available. The frames are initially filled with the first pages referenced; once all
frames are full, the real work of the algorithm begins. Bringing pages into frames on reference
is handled through demand paging.
Example:
Terms to Remember:
Reference String:
The ratio of page hits to page faults = 8 : 12 = 2 : 3 ≈ 0.66
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation:
First, the frames are filled with the initial pages. Then, once the frames are full, space must
be created for each new page. With the First-In-First-Out page replacement algorithm, we remove
the page that is the oldest among the pages currently in the frames. Removing the oldest page
frees a frame for the new page to occupy.
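The FIFO scheme described above can be sketched as a short simulation that counts page faults. The reference string and frame count below are illustrative, not the ones from the worked example:

```python
from collections import deque

# Sketch of FIFO page replacement: the oldest resident page is evicted.
# The reference string and frame count here are illustrative only.

def fifo_faults(references, num_frames):
    frames = deque()          # front of the queue = oldest page
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
        # on a hit, FIFO does nothing: arrival order is unchanged
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))  # 10 faults out of 13 references
```

Note that a hit does not reorder the queue; only the time of arrival matters, which is exactly what makes FIFO simple and sometimes poor.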
LRU (least recently used)
This is another basic page replacement algorithm, and its behavior likewise depends on the
number of frames available. The frames are initially filled with the first pages referenced;
once all frames are full, the real work of the algorithm begins. Bringing pages into frames on
reference is handled through demand paging.
After the frames fill up, the next page in the reference string tries to enter a frame. If the
page is already present in a frame, no problem occurs, because the page being searched for is
already in one of the allocated frames.
Example:
Terms to Remember:
Reference String:
The ratio of page hits to page faults = 8 : 12 = 2 : 3 ≈ 0.66
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation:
First, the frames are filled with the initial pages; after that, space must be created in the
frames for each new page.
While empty frames remain, incoming pages simply fill them. The problem arises when there is no
free frame left to hold a page. When a page fault occurs in this situation, the Least Recently
Used (LRU) page replacement algorithm comes into the picture. It works on a simple principle:
replace the page that has not been used for the longest time in the past.
Example:
Terms to Remember:
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
Explanation:
First, the frames are filled with the initial pages; after that, space must be created in the
frames for each new page.
While empty frames remain, incoming pages simply fill them. The problem arises when there is no
free frame left. In that case we replace the page that has not been used for the longest time,
that is, the page whose most recent use lies furthest in the past.
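The LRU principle, evicting the page whose most recent use lies furthest in the past, can be sketched as a short simulation. The reference string and frame count are illustrative, not the ones from the worked example:

```python
# Sketch of LRU page replacement: evict the page unused for the longest
# time in the past. Reference string and frame count are illustrative.

def lru_faults(references, num_frames):
    frames = []               # ordered least- to most-recently used
    faults = 0
    for page in references:
        if page in frames:
            frames.remove(page)   # hit: refresh its recency below
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # page is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3))  # 9 faults out of 13 references
```

Unlike FIFO, a hit reorders the list, so a frequently used page keeps escaping eviction. On this reference string LRU incurs 9 faults where FIFO incurs 10.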