
UNIT – IV

Memory Management and Virtual Memory - Logical versus Physical Address Space, Swapping, Contiguous Allocation, Paging, Segmentation, Segmentation with Paging, Demand Paging, Page Replacement, Page Replacement Algorithms.

INTRODUCTION TO MEMORY MANAGEMENT


The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory (at least partially) during execution.

To improve both CPU utilization and the speed of the computer's response to users, several processes must be kept in memory at once, sharing it. This requires memory management schemes, and the choice of scheme for a particular system depends on many factors.

Since main memory is usually too small to accommodate all the data and programs permanently,
the computer system must provide secondary storage to back up main memory.

Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system.

Many algorithms require hardware support, although recent designs have closely integrated the
hardware and operating system.

Memory is central to the operation of a modern computer system. It consists of a large array of
words or bytes, each with its own address. The CPU fetches instructions from memory according
to the value of the program counter.

Main memory is usually divided into two partitions: kernel memory and user memory.

Input Queue: Collection of processes on disk that are waiting to be brought into memory to run
the program.

In most cases, a user program will go through several steps before being run.

ADDRESS BINDING
Most of the systems allow a user process to reside in any part of the physical memory. Although
the address space of the computer starts at 00000, the first address of the user process does not
need to be 00000.

Address binding is the process of assigning memory addresses to program objects such as instructions and data.


Address Binding can happen at three different stages:

• Compile time: If the memory location of a process is known a priori, absolute code (with physical addresses) can be generated.
• Load time: Relocatable code (with virtual addresses) must be generated if the memory location is not known at compile time.
• Execution time: Binding is delayed until run time if the process can be moved during execution from one memory segment to another.

A general principle is to keep in memory only those instructions and data that are needed at any given time.
Dynamic Linking and Dynamic Loading
Linking and loading are utility programs that play an important role in the execution of a program. Linking takes the object code generated by the assembler and combines it into an executable module. Loading then places this executable module into main memory for execution.

Dynamic Loading in OS:


Dynamic loading means loading a routine into main memory only when it is called at run time. Without it, the entire program and all the data of a process must be in physical memory to execute the program, so the size of a process is limited to the size of physical memory. To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format; the main program is loaded into memory and executed, and other routines are loaded on demand.

Dynamic linking in OS:

It is similar to dynamic loading; here, however, linking, rather than loading, is postponed until execution time. This feature is usually used with system libraries, such as language subroutine libraries.
Without this facility, each program on a system must include a copy of its language library (or at least the routines referenced by the program) in the executable image.
This requirement wastes both disk space and main memory.

With dynamic linking, a stub is included in the image for each library routine reference. The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present.
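
To make the stub idea concrete, here is a minimal sketch in Python (an analogy only, not how a real OS loader works); the LazyRoutine wrapper is hypothetical and simply loads the target module on the first call and reuses it afterwards:

import importlib

class LazyRoutine:
    """Stub-like wrapper: the real routine is located and loaded on its first call."""
    def __init__(self, module_name, func_name):
        self.module_name = module_name
        self.func_name = func_name
        self._func = None                      # not loaded yet

    def __call__(self, *args, **kwargs):
        if self._func is None:                 # first call: load the module and bind
            module = importlib.import_module(self.module_name)
            self._func = getattr(module, self.func_name)
        return self._func(*args, **kwargs)

# The math module is loaded only when sqrt(16.0) is first invoked.
sqrt = LazyRoutine("math", "sqrt")
print(sqrt(16.0))                              # 4.0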

LOGICAL - VERSUS PHYSICAL-ADDRESS SPACE

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.
In simple terms, the physical address is the hardware address, and the logical address is the virtual (or relative) address.
The compile-time and load-time address-binding methods generate identical logical and physical addresses; the execution-time binding scheme results in logical and physical addresses that differ.

The set of all logical addresses generated by a program is a Logical-address space; the set of all
physical addresses corresponding to these logical addresses is a Physical-address space.
The run-time mapping from virtual to physical addresses is done by a hardware device called the
memory-management unit (MMU).

Suppose the base is at 14000, then an attempt by the user to address location 0 is relocated
dynamically to 14000; thus access to location 356 is mapped to 14356.

It is important to note that the user program never sees the real physical addresses. The program can create a pointer to location 356, store it in memory, manipulate it, and compare it with other addresses, all as the number 356.

The base register is now called a relocation register. The value in the relocation register is
added to every address generated by a user process at the time it is sent to memory.
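
A minimal sketch of this dynamic relocation in Python, assuming a hypothetical relocation-register value of 14000 and a limit register used for protection:

RELOCATION_REGISTER = 14000    # base value from the example above
LIMIT_REGISTER = 3000          # hypothetical size of the process's logical address space

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if not (0 <= logical_address < LIMIT_REGISTER):   # protection check done by the MMU
        raise MemoryError(f"addressing error: {logical_address} is outside the logical space")
    return RELOCATION_REGISTER + logical_address      # relocation value added on every access

print(translate(0))      # 14000
print(translate(356))    # 14356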

Difference between Logical Address and Physical Address:

• A logical address is generated by the CPU, while a physical address is the actual location in the memory unit.
• The logical address is also called a virtual or relative address; the physical address is the hardware address.
• The user program works only with logical addresses and never sees the real physical addresses.
• The set of all logical addresses of a program forms its logical-address space; the set of corresponding physical addresses forms its physical-address space.
• The MMU performs the run-time mapping from logical to physical addresses.

SWAPPING IN OS

Swapping is a mechanism in which a process can be temporarily swapped (moved) out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Although swapping usually affects performance, it helps in running multiple large processes in parallel; for this reason, swapping is also known as a technique for memory compaction.

EXAMPLE:
Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager starts to swap out the process that just finished and to swap in another process into the memory space that has been freed.

Swapping involves two steps:

• Swap-in: Moving a process from secondary storage (hard disk) into main memory (RAM).
• Swap-out: Taking a process out of main memory and placing it in secondary storage.

Swapping requires a backing store. The backing store is commonly a fast disk. It must be large
enough to accommodate copies of all memory images for all users, and it must provide direct
access to these memory images.
The system maintains a ready queue consisting of all processes whose memory images are on the
backing store or in memory and are ready to run.

Whenever the CPU scheduler decides to execute a process, it calls the dispatcher, which checks whether the next process in the queue is in memory. If there is no free memory region, the dispatcher swaps out a process and swaps in the desired process.

Advantages of Swapping:

• Swapping can help to make more room and allow your programs to run more smoothly.
• Using a swap file, you can ensure that each program has its own dedicated chunk of
memory, which can help improve overall performance.
• Improve the degree of multi-programming.
• Better RAM utilization.

Disadvantages of Swapping:

• If the computer system loses power during high swapping activity, the user may lose all information related to the program.
• The number of page faults increases, which can reduce overall processing performance.
• During heavy swapping activity, a power failure can cause users to lose data from processes that were swapped out.

MEMORY ALLOCATION METHODS

The primary role of the memory management system is to satisfy requests for memory
allocation. Sometimes this is implicit, as when a new process is created. At other times,
processes explicitly request memory. Either way, the system must locate enough unallocated
memory and assign it to the process.

A process can be allocated into memory using one of the following methods:

• Contiguous Memory Allocation
• Non-Contiguous Memory Allocation

Contiguous Memory Allocation

Contiguous memory allocation is a technique where the operating system allocates a contiguous
block of memory to a process. This memory is allocated in a single, continuous chunk, making it
easy for the operating system to manage and for the process to access the memory. Contiguous
memory allocation is suitable for systems with limited memory sizes and where fast access to
memory is important.

Contiguous memory allocation can be done in two ways:

Fixed Partitioning − In fixed partitioning, the memory is divided into fixed-size partitions, and
each partition is assigned to a process. This technique is easy to implement but can result in
wasted memory if a process does not fit perfectly into a partition.

Dynamic Partitioning − In dynamic partitioning, the memory is divided into variable-size partitions, and each partition is assigned to a process. This technique is more efficient because only the required amount of memory is allocated to the process, but it requires more overhead to keep track of the available memory.

Advantages of Contiguous Memory Allocation:

• Simplicity
• Efficiency
• Low fragmentation

Disadvantages of Contiguous Memory Allocation:

• Limited flexibility
• Memory wastage
• Difficulty in managing larger memory sizes
• External Fragmentation

Non-contiguous Memory Allocation

Non-contiguous memory allocation, on the other hand, is a technique where the operating system
allocates memory to a process in non-contiguous blocks. The blocks of memory allocated to the
process need not be contiguous, and the operating system keeps track of the various blocks
allocated to the process. Non-contiguous memory allocation is suitable for larger memory sizes
and where efficient use of memory is important.
Non-contiguous memory allocation can be done in two ways

Paging − In paging, a process's address space is divided into fixed-size pages and main memory into frames of the same size; pages are loaded into whatever frames are free. This technique is efficient because only the required pages of a process need to be given memory.

Segmentation − In segmentation, the memory is divided into variable-sized segments, and each segment is assigned to a process. This technique is more flexible than paging but requires more overhead to keep track of the allocated segments.

Non-contiguous memory allocation is a memory management technique that divides memory into non-contiguous blocks, allowing a process to be allocated memory that is not necessarily contiguous.

Advantages of Non-Contiguous Memory Allocation

• Reduced External Fragmentation


• Increased Memory Utilization
• Flexibility
• Memory Sharing

Disadvantages of Non-Contiguous Memory Allocation

• Internal Fragmentation
• Increased Overhead
• Slower Access

Fragmentation in Operating System

Fragmentation is an unwanted problem in which, as processes are loaded into and unloaded from memory, the free memory space becomes broken into small pieces. Processes cannot be assigned to these memory blocks because the blocks are too small, so the blocks stay unused.

User processes are loaded into and unloaded from main memory, where they are kept in memory blocks. After repeated loading and swapping, many holes remain that are too small for another process to use. Main memory is available, but its free space is insufficient to load another process, because memory has been allocated to processes dynamically.
There are mainly two types of fragmentation in the operating system. These are as follows:

1. Internal Fragmentation
2. External Fragmentation

Internal Fragmentation:

When a process is allocated a memory block and the process is smaller than the block it receives, the leftover space inside that block remains unused. This unused space within an allocated block is internal fragmentation.

External Fragmentation

External fragmentation happens when a dynamic memory allocation method leaves small, scattered holes of memory that are individually unusable. If there is too much external fragmentation, the quantity of usable memory is substantially reduced: there may be enough total memory space to satisfy a request, but it is not contiguous. This situation is known as external fragmentation.
Types of Memory Allocation Partitioning Algorithms

• First-Fit: A straightforward technique: start at the beginning of the free list and assign the first hole that is large enough to meet the needs of the process.

• Best-Fit: This greedy method allocates the smallest hole that meets the needs of the process, aiming to minimize the memory that would otherwise be wasted (internal fragmentation in the case of static partitioning).

• Worst-Fit: The opposite of best fit: the holes are examined by size, and the largest hole is assigned to the incoming process.
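
The three policies differ only in which free hole they pick. A minimal sketch in Python, assuming the free memory is tracked as a simple list of (start, size) holes (a simplification of real allocator bookkeeping):

def pick_hole(holes, request, policy):
    """Return the index of the hole to use, or None if no hole is big enough.

    holes  : list of (start_address, size) tuples describing free memory
    request: amount of memory the process needs
    policy : "first", "best", or "worst"
    """
    candidates = [(i, size) for i, (start, size) in enumerate(holes) if size >= request]
    if not candidates:
        return None                                     # request cannot be satisfied
    if policy == "first":
        return candidates[0][0]                         # first hole large enough
    if policy == "best":
        return min(candidates, key=lambda c: c[1])[0]   # smallest adequate hole
    if policy == "worst":
        return max(candidates, key=lambda c: c[1])[0]   # largest hole
    raise ValueError("unknown policy")

free_holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(pick_hole(free_holes, 210, "first"))   # 1 -> hole of size 500
print(pick_hole(free_holes, 210, "best"))    # 3 -> hole of size 300
print(pick_hole(free_holes, 210, "worst"))   # 1 -> hole of size 500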

PAGING in OS

Virtual memory allows a computer to address more memory than is physically installed on the system.

Paging is a memory management technique, in which process address space is broken into
blocks of same size called Pages.

The size of a process can be measured in the number of pages. Main memory is divided into small fixed-size physical blocks called frames, so the size of main memory can be measured in the number of frames. Paging brings pages of a process from secondary storage into main memory as they are needed.

Page size = Frame size

One page of a process is stored in one of the memory frames. The pages can be positioned anywhere in memory; the frames holding a process's pages do not need to be contiguous.

Basic Method of Paging:

Paging permits the physical address space of a process to be non-contiguous. It is a fixed-size partitioning scheme: in the paging technique, both secondary memory and main memory are divided into equal fixed-size partitions.

The Frame has the same size as that of a Page. A frame is basically a place where a (logical)
page can be (physically) placed.
Each process is mainly divided into parts where the size of each part is the same as the page size.
There is a possibility that the size of the last part may be less than the page size.

Pages of a process are brought into the main memory only when there is a requirement otherwise
they reside in the secondary storage.

One page of a process is stored in one of the frames of memory. The pages of a process can be stored at different locations in memory; the frames used do not need to be contiguous.

Translation of Logical Address into Physical Address

The CPU always generates a logical address. In order to access the main memory always a
physical address is needed.

The logical address generated by the CPU consists of two parts:

• Page Number (p)
• Page Offset (d)

Here, the page number identifies the page of the process that the CPU wants to access and serves as an index into the page table, while the page offset identifies the specific word on that page that the CPU wants to read.

Page Table in OS

The Page table mainly contains the base address of each page in the Physical memory. The base
address is then combined with the page offset in order to define the physical memory address
which is then sent to the memory unit.

Thus the page table provides the corresponding frame number (the base address of the frame) where each page is stored in main memory.

The physical address consists of two parts:

• Frame Number (f)
• Page Offset (d)

Here, the frame number indicates the frame where the required page is stored, and the page offset indicates the specific word to be read from that page.
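
A minimal sketch of this translation in Python, assuming a hypothetical page size of 1024 words and a page table stored as a simple list (index = page number, value = frame number):

PAGE_SIZE = 1024                      # assumed page size (= frame size)
page_table = [5, 2, 7, 0]             # hypothetical: page 0 -> frame 5, page 1 -> frame 2, ...

def translate(logical_address):
    """Split a logical address into (p, d) and combine the frame number with d."""
    p = logical_address // PAGE_SIZE          # page number: index into the page table
    d = logical_address % PAGE_SIZE           # page offset within the page
    if p >= len(page_table):
        raise MemoryError("page number outside the process's logical address space")
    f = page_table[p]                         # frame number from the page table
    return f * PAGE_SIZE + d                  # physical address

print(translate(1030))   # page 1, offset 6 -> frame 2 -> physical 2*1024 + 6 = 2054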

Paging Hardware:
The PTBR (page-table base register) holds the base address of the page table of the current process. It is a processor register and is managed by the operating system. Each process running on a processor needs its own logical address space, and therefore its own page table.

Advantages of Paging

• Paging allows parts of a single process to be stored in a non-contiguous fashion.
• With the help of paging, the problem of external fragmentation is solved.
• Paging is one of the simplest memory-management algorithms.

Disadvantages of Paging

• The page table can sometimes consume a significant amount of memory.
• Internal fragmentation is caused by this technique.
• The time taken to fetch an instruction increases, since two memory accesses are now required.

SEGMENTATION in OS

Like Paging, Segmentation is another non-contiguous memory allocation technique. The process
is divided into modules for better visualization.

Segmentation is a variable-size partitioning scheme, meaning that secondary memory and main memory are divided into partitions of unequal size. The size of each partition depends on the length of the corresponding module, and these partitions are called segments.

A program is basically a collection of segments, and a segment is a logical unit such as:

• main program
• procedure
• function
• method
• object
• local variable and global variables.
• symbol table
• common block
• stack
• arrays
Types of Segmentation

Given below are the types of segmentation:

• Virtual Memory Segmentation: Each process is divided into n segments, but the segments are not all brought into memory at once; they can be loaded and unloaded at run time.
• Simple Segmentation: Each process is divided into n segments, and all of them are loaded at once at run time, although they may be placed non-contiguously in memory.

Basic Method of Segmentation

A computer system that uses segmentation has a logical address space that can be viewed as a collection of segments. The size of a segment is variable; it may grow or shrink. Each segment has a name (or number) and a length, and a logical address specifies both the segment name and the displacement (offset) within that segment.

The logical address always consists of two parts:

• Segment Number(s)
• Segment Offset (d)

Segment Table in OS

In the segment table each entry has:


Segment Base/base address:

The segment base mainly contains the starting physical address where the segments reside in the
memory.

Segment Limit:

The segment limit is mainly used to specify the length of the segment.

Segmentation Hardware:

The Segment Table Base Register (STBR) points to the segment table's location in memory.

Segment Table Length Register (STLR) indicates the number of segments used by a program.
The segment number s is legal if s<STLR.
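
A minimal sketch of this lookup in Python, using a hypothetical segment table of (base, limit) pairs; the checks mirror the STLR and segment-limit tests described above:

# Hypothetical segment table: entry s = (base address, limit/length of segment s)
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)      # number of segments used by the program

def translate(s, d):
    """Map a logical address (segment number s, offset d) to a physical address."""
    if s >= STLR:
        raise MemoryError("invalid segment number (s >= STLR)")
    base, limit = segment_table[s]
    if d >= limit:                         # offset must be within the segment
        raise MemoryError("segment offset beyond segment limit (trap)")
    return base + d

print(translate(2, 53))   # 4300 + 53 = 4353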

Advantages of Segmentation

• There is no internal fragmentation.
• Segmentation allows us to divide the program into modules, which provides better visualization.
• Segments are of variable size.
Disadvantages of Segmentation

• Maintaining a segment table for each process leads to overhead.
• This technique is expensive.
• The time taken to fetch an instruction increases, since two memory accesses are now required.
• Segments are of unequal size, which makes them unsuitable for swapping.
• This technique leads to external fragmentation.

Difference between Paging and Segmentation in OS:

• Paging divides a process into fixed-size pages, whereas segmentation divides it into variable-size segments.
• Paging is invisible to the programmer, whereas segments correspond to logical units of the program (main program, functions, stack, arrays).
• Paging suffers from internal fragmentation, whereas segmentation suffers from external fragmentation.
• In paging, the page table maps page numbers to frame numbers; in segmentation, the segment table stores a base address and a limit for each segment.

SEGMENTATION WITH PAGING

Non-contiguous memory allocation divides a process into blocks, either pages or segments. Segmented paging is a scheme that combines segmentation and paging.

Process is first divided into segments and then each segment is divided into pages which are then
stored in the frames of main memory.

A page table exists for each segment that keeps track of the frames storing the pages of that
segment. Each page table occupies one frame in the main memory.

Basic Method of Segmentation with Paging

Number of entries in the page table of a segment = Number of pages into which that segment is divided.

A segment table exists that keeps track of the frames storing the page tables of the segments.

Number of entries in the segment table of a process = Number of segments into which that process is divided.

The base address of the segment table is stored in the segment table base register.

CPU always generates a logical address. A physical address is needed to access the main
memory.

CPU generates a logical address consisting of three parts-

• Segment Number
• Page Number
• Page Offset

Physical address consists of two parts-

• Frame Number
• Page Offset

Segment Number specifies the segment from which the CPU wants to read data.
Page Number specifies the page of that segment from which the CPU wants to read data.
Page Offset specifies the specific word on that page that the CPU wants to read.
The frame number combined with the page offset forms the required physical address.
For the generated page offset, corresponding word is located in the page and read.
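
A minimal sketch combining the two lookups in Python, assuming a hypothetical segment table whose entries are per-segment page tables (lists of frame numbers) and a page size of 1024:

PAGE_SIZE = 1024
# Hypothetical per-segment page tables: segment_table[s] is the page table of segment s,
# and segment_table[s][p] is the frame number holding page p of that segment.
segment_table = [
    [3, 8, 1],      # segment 0 has 3 pages
    [6, 0],         # segment 1 has 2 pages
]

def translate(s, p, d):
    """Map (segment number, page number, page offset) to a physical address."""
    if s >= len(segment_table):
        raise MemoryError("invalid segment number")
    page_table = segment_table[s]            # one page table per segment
    if p >= len(page_table):
        raise MemoryError("invalid page number for this segment")
    f = page_table[p]                        # frame number
    return f * PAGE_SIZE + d                 # frame number combined with page offset

print(translate(1, 1, 20))   # frame 0 -> physical address 20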
Advantages
• Segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of Page Table is limited by the segment size.
• It solves the problem of external fragmentation.
Disadvantages
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to paging.

DEMAND PAGING
A process can be viewed as a collection of pages, where each page is a collection of instructions and data. The CPU can execute a process only if it resides in main memory, but sometimes the size of a process is larger than main memory.

Virtual memory is a technique that allows the execution of processes that may not be
completely in main memory. It is the separation of user logical memory from physical memory.

Virtual memory is commonly implemented by Demand Paging.


Demand paging is a memory management technique used by operating systems to optimize the use of memory: only the required pages of a process are loaded into main memory, instead of loading the entire process.

A demand-paging system is similar to a paging system with swapping. It allows the system to swap out pages that are not currently in use, freeing up memory for other processes.

A swapper that deals with the individual pages of a process is referred to as a pager. The pager is also called a lazy swapper, because it never brings a page into memory unless that page is needed. Thus it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

If a process references a page that is not available in main memory, the hardware generates an interrupt (trap) to the operating system so that the page can be brought in.

To implement demand paging, some form of hardware support is required to distinguish the pages that are on the disk from those that are in memory. This is done using the valid–invalid bit scheme.

The page table contains a valid–invalid bit for each virtual page of the process.

A page fault occurs if the process tries to access a page that has not been brought into main memory.
Page Fault Handling
When the demanded page is not present in main memory, we call it a page fault.

The OS looks at an internal table (kept with the PCB) to decide:

- if the reference is invalid - abort the process

- if the reference is valid but the page is not yet in memory - load the page

To load the page into memory:

- find a free frame (taking one from the free-frame list)

- schedule a disk operation to read the desired page into that frame (swap the page in)

- reset the page table to indicate that the page is now in memory, setting the valid bit = v

Restart the instruction that caused the page fault.
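
A minimal sketch of this handling in Python, with a hypothetical page table of valid bits, a free-frame list, and a stubbed-out disk read (a real handler lives inside the kernel and involves the MMU and the scheduler):

INVALID, VALID = "i", "v"

# Hypothetical state: page_table[p] = (valid bit, frame number or None)
page_table = {0: (VALID, 3), 1: (INVALID, None), 2: (INVALID, None)}
free_frames = [5, 7]                       # free-frame list

def read_page_from_disk(page, frame):
    print(f"disk I/O: reading page {page} into frame {frame}")   # stand-in for real I/O

def access(page):
    """Return the frame holding `page`, handling a page fault if needed."""
    if page not in page_table:
        raise MemoryError("invalid reference - abort")
    valid, frame = page_table[page]
    if valid == VALID:
        return frame                       # page hit
    # Page fault: bring the page in from the backing store.
    if not free_frames:
        raise MemoryError("no free frame - a page replacement would be needed")
    frame = free_frames.pop(0)             # find a free frame
    read_page_from_disk(page, frame)       # swap the page in via a disk operation
    page_table[page] = (VALID, frame)      # set the valid bit = v
    return frame                           # the faulting instruction would now be restarted

print(access(1))   # page fault -> loads page 1 into frame 5
print(access(1))   # now a page hit -> 5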


PAGE REPLACEMENT
Page replacement is the process of swapping an existing page out of a frame of main memory and replacing it with the required page.

Page replacement is required when:

- all the frames of main memory are already occupied, and

- a page therefore has to be replaced to make room for the required page.

In case of a Page Fault, OS might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which page
to replace. The target for all algorithms is to reduce the number of page faults.

PAGE REPLACEMENT ALGORITHMS


In an operating system, page replacement refers to a scenario in which a page in main memory is replaced by a page from secondary memory. Page replacement occurs because of page faults.

Page replacement is needed in the operating systems that use virtual memory using Demand
Paging.

The following are the Page Replacement algorithms:

1. FIFO (first-in-first-out)
2. LRU (least recently used)
3. Optimal page replacement
4. MFU (most frequently used)

FIFO (first-in-first-out)
This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue (the one that entered memory first) is selected for removal.

FIFO behaviour depends on the number of frames available. Demand paging first fills the empty frames with the first pages referenced; replacement decisions are needed only once all the frames are full.
Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames, and calculate the number of page faults using the FIFO (First In First Out) page replacement algorithm.

Terms to Remember:

Page Not Found - - - > Page Fault / Page Miss

Page Found - - - > Page Hit

Reference String:

Number of Page Hits = 8

Number of Page Faults = 12

The Ratio of Page Hit to the Page Fault = 8 : 12 - - - > 2 : 3 - - - > 0.66

The Page Hit Percentage = 8 *100 / 20 = 40%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%

Explanation:

First, the frames are filled with the initial pages. After the frames are full, space must be created for each new page. With the First In First Out page replacement algorithm, we remove the page that has been in memory the longest (the oldest page); the new page then occupies the frame freed by this removal.
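
A small FIFO simulation in Python for the reference string above; it simply counts hits and faults with three frames:

from collections import deque

def fifo(reference_string, num_frames):
    """Return (hits, faults) for FIFO replacement with `num_frames` frames."""
    frames = deque()                       # front of the deque = oldest page
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1                      # page hit
        else:
            faults += 1                    # page fault
            if len(frames) == num_frames:
                frames.popleft()           # evict the oldest page
            frames.append(page)
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo(ref, 3))    # (8, 12): 8 hits, 12 faults, matching the worked example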
Optimal Page Replacement
The Optimal algorithm replaces the page that will not be used for the longest period of time in the future. Like the other algorithms, its behaviour depends on the number of frames available: demand paging first fills the empty frames with the first pages referenced, and replacement decisions are needed only once all the frames are full.

When the next page in the reference string is already present in one of the allocated frames, no replacement is needed, because the page being looked for is already in memory (a page hit).

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0 for a memory with three frames, and calculate the number of page faults using the Optimal page replacement algorithm.

Terms to Remember:

Page Not Found - - - > Page Fault / Page Miss

Page Found - - - > Page Hit

Reference String:

Number of Page Hits = 9

Number of Page Faults = 11

The Ratio of Page Hit to the Page Fault = 9 : 11 - - - > 0.82

The Page Hit Percentage = 9 * 100 / 20 = 45%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%
Explanation:

First, the frames are filled with the initial pages. After the frames are full, space must be created for each new page. When there is no free frame left, we replace the page that will not be used for the longest period of time in the future.
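
A small simulation of the Optimal policy in Python for the reference string above (ties among pages that are never used again are broken arbitrarily):

def optimal(reference_string, num_frames):
    """Return (hits, faults) for Optimal (Belady) replacement."""
    frames, hits, faults = [], 0, 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (or that is never used again).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
print(optimal(ref, 3))   # (9, 11): 9 hits, 11 faults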

LRU (Least Recently Used)

The LRU algorithm replaces the page that has not been used for the longest period of time in the past. Like the other algorithms, its behaviour depends on the number of frames available: demand paging first fills the empty frames with the first pages referenced, and replacement decisions are needed only once all the frames are full.

When a page fault occurs and no frame is free, the Least Recently Used (LRU) page replacement algorithm comes into the picture. It works on the following principle: replace the page whose most recent use lies farthest in the past, i.e., the least recently used page.

Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a memory with three frames, and calculate the number of page faults using the Least Recently Used (LRU) page replacement algorithm.

Terms to Remember:

Page Not Found - - - > Page Fault / Page Miss


Page Found - - - > Page Hit
Reference String:
Number of Page Hits = 7

Number of Page Faults = 13

The Ratio of Page Hit to the Page Fault = 7 : 13 - - - > 0.54

The Page Hit Percentage = 7 * 100 / 20 = 35%

The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%

Explanation:

First, the frames are filled with the initial pages. After the frames are full, space must be created for each new page. When there is no free frame left, we replace the page that has not been used for the longest time in the past, that is, the page whose most recent use is farthest in the past.
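
A small LRU simulation in Python for the reference string above, using an OrderedDict as the recency list:

from collections import OrderedDict

def lru(reference_string, num_frames):
    """Return (hits, faults) for LRU replacement."""
    frames = OrderedDict()                 # keys in order from least to most recently used
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False) # evict the least recently used page
            frames[page] = True
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru(ref, 3))    # (7, 13): 7 hits, 13 faults, matching the worked example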

MFU (most frequently used)


MFU replaces the page that has been accessed the most. This algorithm aims to prioritize pages
based on their frequency of usage.
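
A minimal MFU sketch in Python; this common variant counts every reference to a page and evicts the resident page with the highest count (ties broken arbitrarily):

from collections import Counter

def mfu(reference_string, num_frames):
    """Return (hits, faults) for MFU replacement."""
    frames, counts = [], Counter()
    hits = faults = 0
    for page in reference_string:
        counts[page] += 1                  # track how often each page is referenced
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = max(frames, key=lambda p: counts[p])   # most frequently used page
            frames.remove(victim)
        frames.append(page)
    return hits, faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(mfu(ref, 3))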
