
UNIT-III

Memory Management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs so as to optimize the overall performance of the system.

It is one of the most important functions of an operating system, as it manages primary memory. It allows processes to move back and forth between main memory and the disk during execution, and it lets the OS keep track of every memory location, irrespective of whether it is allocated to some process or remains free.

Swapping in Operating System


Swapping is a memory management scheme in which a process can be temporarily swapped out from main memory to secondary memory so that the main memory can be made available for other processes. It is used to improve main memory utilization. In secondary memory, the place where a swapped-out process is stored is called swap space.
The purpose of swapping in an operating system is to access data present on the hard disk and bring it into RAM so that application programs can use it. The thing to remember is that swapping is used only when the data is not present in RAM.
Although swapping affects the performance of the system, it helps to run larger processes and more processes than would otherwise fit in main memory. This is the reason why swapping is also referred to as memory compaction.
Swapping consists of two operations: swap-in and swap-out.
o Swap-out is the method of removing a process from RAM and placing it on the hard disk.
o Swap-in is the method of removing a process from the hard disk and putting it back into main memory (RAM).
Example: Suppose the user process's size is 2048 KB and the standard hard disk used for swapping has a data transfer rate of 1 MBps. Now we will calculate how long it takes to transfer the process from main memory to secondary memory.
1. User process size is 2048 KB
2. Data transfer rate is 1 MBps = 1024 KBps
3. Time = process size / transfer rate
4. = 2048 / 1024
5. = 2 seconds
6. = 2000 milliseconds
7. Counting both swap-out and swap-in, the swap will take about 4000 milliseconds.
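A minimal sketch of this calculation in Python (the process size and transfer rate are simply the assumed figures from the example above):

def swap_time_ms(process_size_kb, transfer_rate_kbps):
    # Time for one transfer (swap-out or swap-in), in milliseconds
    return process_size_kb / transfer_rate_kbps * 1000

one_way = swap_time_ms(2048, 1024)   # 2048 KB at 1024 KBps -> 2000 ms
total = 2 * one_way                  # swap-out + swap-in -> 4000 ms
print(one_way, total)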
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore,
processes do not have to wait very long before they are executed.
4. It improves the main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power, the user may lose all information related to
the program in case of substantial swapping activity.
2. If the swapping algorithm is not good, it can increase the number of page faults and decrease the overall processing performance.
Contiguous Memory Allocation
In Contiguous Memory Allocation, each process is contained in a single contiguous section of memory. In this scheme, all the available memory space remains together in one place, which implies that the free memory partitions are not scattered here and there across the whole memory space.

Contiguous memory allocation is a memory management technique in which, whenever a user process requests memory, a single contiguous block of memory is given to that process according to its requirement. This is achieved by dividing the memory into partitions.

The memory can be divided either into fixed-sized partitions or into variable-sized partitions in order to allocate contiguous space to user processes.

Fixed-size Partition Scheme


This technique is also known as static partitioning. In this scheme, the system divides the memory into fixed-size partitions. The partitions may or may not all be the same size. The size of each partition is fixed, as indicated by the name of the technique, and it cannot be changed.

In this partition scheme, each partition may contain exactly one process. The problem is that this technique limits the degree of multiprogramming, because the number of partitions basically decides the maximum number of processes in memory.
Whenever any process terminates, its partition becomes available for another process.
Example
Let's take an example of the fixed-size partitioning scheme: we divide a memory of size 15 KB into fixed-size partitions:

It is important to note that these partitions are allocated to processes as they arrive, and the partition allocated to an arriving process depends on the placement algorithm followed.
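A minimal sketch of one such placement algorithm, first fit, applied to fixed partitions (the partition sizes and process sizes below are assumed purely for illustration):

def first_fit(partitions, process_size):
    # partitions: list of (size_kb, occupied) pairs; returns index of the chosen partition
    for i, (size, occupied) in enumerate(partitions):
        if not occupied and size >= process_size:
            partitions[i] = (size, True)    # mark the partition as used by this process
            return i
    return None                             # no free partition is large enough

partitions = [(4, False), (8, False), (3, False)]   # a 15 KB memory split into fixed partitions
print(first_fit(partitions, 6))   # fits in the 8 KB partition -> index 1
print(first_fit(partitions, 5))   # no remaining free partition fits -> None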
Paging
In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.

The main idea behind the paging is to divide each process in the form of pages. The main
memory will also be divided in the form of frames.

One page of the process is stored in one of the frames of memory. The pages of a process can be stored at different locations in memory, although the system may still prefer to find contiguous frames (holes) when they are available.

Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage.

Different operating systems define different frame sizes, but within a system all frames have the same size. Since pages are mapped onto frames in paging, the page size must be the same as the frame size.
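A minimal sketch of how a logical address is split into a page number and an offset and translated through a page table (the page size and page-table contents below are assumed for illustration):

PAGE_SIZE = 1024                      # assumed page/frame size in bytes

def translate(logical_address, page_table):
    page_number = logical_address // PAGE_SIZE   # which page of the process
    offset = logical_address % PAGE_SIZE         # position within that page
    frame_number = page_table[page_number]       # the page table maps page -> frame
    return frame_number * PAGE_SIZE + offset     # physical address

page_table = {0: 5, 1: 2, 2: 7}       # assumed mapping of pages to frames
print(translate(2100, page_table))    # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220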

Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In Segmented Paging, the main memory is divided into variable-size segments, which are further divided into fixed-size pages.
1. Pages are smaller than segments.
2. Each segment has a page table, which means every program has multiple page tables.
3. The logical address is represented as a Segment Number (base address), Page Number and Page Offset.
Segment Number → It selects the appropriate segment.

Page Number → It Points to the exact page within the segment


Page Offset → Used as an offset within the page frame

Each page table contains various information about every page of the segment. The segment table contains information about every segment. Each segment table entry points to a page table, and every page table entry is mapped to one of the pages within that segment.

Translation of logical address to physical address

The CPU generates a logical address which is divided into two parts: the Segment Number and the Segment Offset. The Segment Offset must be less than the segment limit. The offset is further divided into a Page Number and a Page Offset. To locate the correct entry in the page table, the page number is added to the page table base address.
The frame number obtained from that entry, combined with the page offset, gives the physical address of the desired word in the page of that segment of the process.
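A minimal sketch of the translation just described (the segment table, page tables and sizes are assumed values, not taken from any real system):

PAGE_SIZE = 256                                    # assumed page size

# Assumed segment table: segment number -> (page table, segment limit in bytes)
segment_table = {
    0: ({0: 9, 1: 4}, 512),
    1: ({0: 3},        256),
}

def translate(segment_number, segment_offset):
    page_table, limit = segment_table[segment_number]
    if segment_offset >= limit:                    # the offset must be less than the segment limit
        raise MemoryError("segment limit exceeded")
    page_number = segment_offset // PAGE_SIZE      # page within the segment
    page_offset = segment_offset % PAGE_SIZE       # offset within the page
    frame = page_table[page_number]                # look up the frame in the segment's page table
    return frame * PAGE_SIZE + page_offset         # physical address

print(translate(0, 300))    # page 1, offset 44 -> frame 4 -> 4*256 + 44 = 1068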

Advantages of Segmented Paging


1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
Disadvantages of Segmented Paging
1. Internal fragmentation will be there.
2. The complexity level is much higher as compared to plain paging.
3. Page tables need to be stored contiguously in memory.

Allocation of Frames
The main memory in the operating system is divided into frames. The pages of a process are stored in these frames, and once a process's pages are in frames the CPU can execute the process. Therefore the operating system has to allocate a sufficient number of frames to each process, and various algorithms are used by the operating system to allocate frames. There are mainly five ways to allocate frames:
• Equal frame allocation
• Proportional frame allocation
• Priority frame allocation
• Global replacement allocation
• Local replacement allocation
Equal frame allocation
As the name suggests, the available frames are divided equally among all the processes in the operating system. The disadvantage of equal frame allocation is that a process may need more frames than its equal share, in which case it receives an insufficient number of frames, while a process that needs fewer frames than its share wastes the extra ones. Let's say the main memory has 40 frames and there are two processes: the first requires only 10 frames and the second requires 30 frames for execution. With equal allocation each process gets 20 frames, so the first process wastes 10 frames while the second process is short by 10 frames. This problem is solved by proportional frame allocation.
Proportional frame allocation
Equal frame allocation has two drawbacks: it either wastes frames or provides an insufficient number of them. Proportional frame allocation instead allocates frames on the basis of the size each process needs for execution and the total number of frames that main memory has, so a larger process receives proportionally more frames. The disadvantage of proportional frame allocation is that there is no notion of priority; frames are allocated purely on the basis of size. That problem is solved by priority frame allocation.
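A minimal sketch of proportional allocation, where a process of size s_i out of a total size S receives roughly (s_i / S) * m of the m available frames (the process sizes and frame count below are assumed for illustration):

def proportional_allocation(process_sizes, total_frames):
    total_size = sum(process_sizes)
    # Each process gets a share of frames proportional to its size (at least one frame each)
    return [max(1, (size * total_frames) // total_size) for size in process_sizes]

# 62 frames shared between a 10-page process and a 127-page process (assumed numbers)
print(proportional_allocation([10, 127], 62))   # roughly [4, 57]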
Priority frame allocation
In proportional frame allocation, frames are allocated on the basis of size. In priority frame allocation, frames are allocated on the basis of the priority of the process as well as the number of frames it requires. A high-priority process that needs more frames is allotted that many frames in main memory first, and processes with lower priority are allocated afterwards. The allocation can also be based on a combination of the priority and the number of frames required by the process.
Global replacement allocation
To understand global replacement allocation you have to know what paging is in the operating system. While executing programs, the operating system works with pages: some pages are kept in primary memory because they are currently needed, but when a required page is not found in a frame, a page fault occurs and the page is brought in from secondary memory, replacing some other page. Global replacement allocation governs which frames may be taken during such replacements. The global replacement policy allows a high-priority process to take frames from a lower-priority process, so that fewer page faults occur for the high-priority process.
Local replacement allocation
Whereas global replacement allows a process to take frames belonging to other processes, local replacement allocation restricts each process to replacing pages only within its own set of allocated frames. As a result, the paging behaviour of one process does not influence the behaviour of another process, unlike in global replacement allocation.

Thrashing in Operating System


If page faults and swapping happen very frequently, the operating system has to spend most of its time swapping pages. This state of the operating system is termed thrashing. Because of thrashing, CPU utilization is greatly reduced.
Let's understand this with an example: if a process does not have the number of frames it needs to support its pages in active use, then it will quickly page fault. At this point, the process must replace some page. As all of its pages are actively in use, it must replace a page that will be needed again right away. Consequently, the process quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity by a process is called thrashing.
During thrashing, the CPU spends less time on actual productive work and more time swapping.

Figure: Thrashing

Causes of Thrashing

Thrashing affects the performance of execution and results in severe performance problems in the operating system.

When CPU utilization is low, the process scheduling mechanism tries to load many processes into memory at the same time, so that the degree of multiprogramming increases. In this situation, there are more processes in memory than the available number of frames can comfortably support, and each process is allocated only a limited number of frames.

Whenever a process with high priority arrives in memory and no frame is freely available at that time, another process currently occupying a frame is moved to secondary storage, and the freed frame is then allocated to the higher-priority process.

We can also say that as soon as memory fills up, processes start spending a lot of time waiting for their required pages to be swapped in. CPU utilization again becomes low because most of the processes are waiting for pages.
Thus a high degree of multiprogramming and lack of frames are two main causes of thrashing
in the Operating system.

File System
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.
File Structure
A File Structure should be according to a required format that the operating system can
understand.
• A file has a certain defined structure according to its type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of procedures and functions.
• An object file is a sequence of bytes organized into blocks that are understandable by
the machine.
• When an operating system defines different file structures, it also contains the code to support these file structures. UNIX and MS-DOS support a minimum number of file structures.

File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −

Ordinary files

• These are the files that contain user information.


• These may contain text, databases or executable programs.
• The user can apply various operations on such files like add, modify, delete or even
remove the entire file.

Directory files

• These files contain a list of file names and other information related to these files.

Special files
• These files are also known as device files.
• These files represent physical devices like disks, terminals, printers, networks, tape drives etc.
These files are of two types −
• Character special files − data is handled character by character, as in the case of terminals or printers.
• Block special files − data is handled in blocks, as in the case of disks and tapes.
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files −
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access

Sequential access is that in which the records are accessed in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.

Direct/Random access

• Random access file organization provides access to the records directly.

• Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.
• The records need not be in any sequence within the file and they need not be in adjacent locations on the storage medium.

Indexed sequential access

• This mechanism is built on top of sequential access.

• An index is created for each file, which contains pointers to various blocks.
• The index is searched sequentially, and its pointer is then used to access the file directly (a small sketch follows this list).
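A minimal sketch of indexed sequential access: an index mapping keys to block positions is searched first, and the chosen block is then read directly (the index layout and file contents below are assumed for illustration):

# Assumed index: first key stored in each block -> block number
index = [("apple", 0), ("mango", 1), ("peach", 2)]
blocks = [["apple", "grape"], ["mango", "melon"], ["peach", "plum"]]

def lookup(key):
    target = 0
    for first_key, block_no in index:        # search the (small) index sequentially
        if first_key <= key:
            target = block_no
    return key in blocks[target]             # then access the chosen block directly

print(lookup("melon"))   # True
print(lookup("kiwi"))    # False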
Protection in File System

Computer systems store a lot of user information, and the objective of the operating system is to keep the user's data safe from improper access to the system. Protection can be provided in a number of ways. For a single laptop system, we might provide protection by locking the computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are used for protection.
Types of Access :
Files that can be accessed directly by other users need protection, while files that are not accessible to other users do not require any kind of protection. The protection mechanism provides controlled access by limiting the types of access that can be made to a file. Whether access is granted to a user depends on several factors, one of which is the type of access required. Several different types of operations can be controlled:
• Read –
Reading from a file.
• Write –
Writing or rewriting the file.
• Execute –
Loading the file into memory; after loading, execution of the process starts.
• Append –
Writing new information at the end of an already existing file; modification is allowed only at the end of the existing file.
• Delete –
Deleting a file which is of no use and reusing its space for other data.
• List –
List the name and attributes of the file.
Operations like renaming, editing the existing file and copying can also be controlled. There are many protection mechanisms; each has different advantages and disadvantages, and each must be appropriate for its intended application.
Access Control:
Different methods are used by different users to access a file. The general approach to protection is to associate identity-dependent access with all files and directories through a list called the access-control list (ACL), which specifies the names of the users and the types of access associated with each user. The main problem with access lists is their length. If we want to allow everyone to read a file, we must list all users with read access. This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if we do not know in advance the list of users in the system.
The directory entry, previously of fixed size, now needs to be of variable size, which complicates space management. These problems can be resolved by using a condensed version of the access list. To condense the length of the access-control list, many systems recognize three classifications of users in connection with each file:
• Owner –
Owner is the user who has created the file.
• Group –
A group is a set of members who have similar needs and share the same file.
• Universe –
In the system, all other users are under the category called universe.
The most common recent approach is to combine access-control lists with the more general owner, group, and universe access-control scheme. For example, Solaris uses the three categories of access by default but allows access-control lists to be added to specific files and directories when more fine-grained access control is desired.
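A minimal sketch of the owner/group/universe scheme: each file carries a set of permissions for each class of user, and the check picks the class that matches the requesting user (the users, groups and permission strings below are assumed for illustration):

# Assumed file metadata: owner, group, and permissions for owner/group/universe
file_meta = {"owner": "alice", "group": "staff",
             "perms": {"owner": "rw", "group": "r", "universe": ""}}

def can_access(user, user_groups, op):
    if user == file_meta["owner"]:
        cls = "owner"
    elif file_meta["group"] in user_groups:
        cls = "group"
    else:
        cls = "universe"
    return op in file_meta["perms"][cls]     # e.g. "r" for read, "w" for write

print(can_access("alice", ["staff"], "w"))   # True, the owner may write
print(can_access("bob", ["staff"], "w"))     # False, group members may only read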
Other Protection Approaches:
Access to a file can also be controlled by a password. If the password is chosen randomly and changed often, this can effectively limit access to a file.
The use of passwords has a few disadvantages:
• If a separate password is used for every file, the number of passwords becomes very large and difficult to remember.
• If one password is used for all the files, then once it is discovered, all files are accessible; protection is on an all-or-none basis.

File System Implementation in Operating System


A file is a collection of related information. The file system resides on secondary storage and
provides efficient and convenient access to the disk by allowing data to be stored, located, and
retrieved.
The file system is organized in many layers :

• I/O Control level –
Device drivers act as an interface between devices and the OS; they help to transfer data between the disk and main memory. A driver takes a block number as input and produces low-level, hardware-specific instructions as output.
• Basic file system –
It issues general commands to the device driver to read and write physical blocks on disk. It manages the memory buffers and caches: a block in the buffer can hold the contents of a disk block, and the cache stores frequently used file system metadata.
• File organization module –
It has information about files, their locations, and their logical and physical blocks. Physical block numbers do not necessarily match the logical block numbers, which run from 0 to N. It also has a free-space manager which tracks unallocated blocks.
• Logical file system –
It manages metadata information about a file, i.e. all details about a file except the actual contents of the file. It maintains this information via file control blocks. A file control block (FCB) has information about a file – owner, size, permissions, and location of the file contents.
Advantages :
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.
Disadvantages :
If we access many files at the same time, then it results in low performance. We can implement a file system by using two types of data structures:
1. On-disk Structures –
Generally they contain information about the total number of disk blocks, free disk blocks, and their locations. Given below are different on-disk structures:
1. Boot Control Block –
It is usually the first block of a volume and contains the information needed to boot an operating system. In UNIX it is called the boot block and in NTFS it is called the partition boot sector.
2. Volume Control Block –
It has information about a particular partition, e.g. free block count, block size and block pointers. In UNIX it is called the superblock and in NTFS it is stored in the master file table.
3. Directory Structure –
It stores file names and the associated inode numbers. In UNIX, a directory entry includes the file name and its inode number; in NTFS, this information is stored in the master file table.
4. Per-File FCB –
It contains details about a file and has a unique identifier number to allow association with a directory entry. In NTFS it is stored in the master file table.

2. In-Memory Structures –
They are maintained in main memory and are helpful for file system management and for caching. Several in-memory structures are given below:
1. Mount Table –
It contains information about each mounted volume.
2. Directory-Structure Cache –
This cache holds the directory information of recently accessed directories.
3. System-wide Open-File Table –
It contains a copy of the FCB of each open file.
4. Per-process Open-File Table –
It contains information about the files opened by that particular process and maps each of them to the appropriate entry in the system-wide open-file table.
Directory Implementation :
1. Linear List –
It maintains a linear list of file names with pointers to the data blocks. This is simple but time-consuming. To create a new file, we must first search the directory to be sure that no existing file has the same name, and then add the new entry at the end of the directory. To delete a file, we search the directory for the named file and release its space. To reuse the directory entry, we can either mark the entry as unused or attach it to a list of free directory entries.
2. Hash Table –
The hash table takes a value computed from the file name and returns a pointer to the file entry. It decreases the directory search time, and insertion and deletion of files are easy. The major difficulties with hash tables are their generally fixed size and the dependence of the hash function on that size (a small sketch follows this list).
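A minimal sketch of a hashed directory: the file name is hashed to a bucket that holds (name, inode) pairs, so a lookup does not have to scan the whole directory (the bucket count and entries below are assumed for illustration):

BUCKETS = 8                                   # assumed fixed table size

directory = [[] for _ in range(BUCKETS)]      # each bucket holds (file name, inode number) pairs

def add_entry(name, inode):
    directory[hash(name) % BUCKETS].append((name, inode))

def find_entry(name):
    for entry_name, inode in directory[hash(name) % BUCKETS]:
        if entry_name == name:
            return inode                      # found without scanning every directory entry
    return None

add_entry("notes.txt", 42)
print(find_entry("notes.txt"))    # 42
print(find_entry("missing.txt"))  # None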

Free Space Management


A file system is responsible for allocating free blocks to files, therefore it has to keep track of all the free blocks present on the disk. There are mainly two approaches by which the free blocks on the disk are managed.
1. Bit Vector
In this approach, the free-space list is implemented as a bit map (bit vector). It contains one bit for each block on the disk.

If a block is free then its bit is 1, otherwise it is 0. Initially all blocks are free, therefore each bit in the bit map contains 1.

As space allocation proceeds, the file system allocates blocks to files and sets the corresponding bits to 0.
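A minimal sketch of a bit-vector free-space manager (the number of blocks is assumed; 1 means free and 0 means allocated, as in the description above):

bitmap = [1] * 16                      # assumed disk of 16 blocks, all initially free

def allocate_block():
    for block, bit in enumerate(bitmap):
        if bit == 1:                   # first free block found
            bitmap[block] = 0          # mark it as allocated
            return block
    return None                        # disk full

def free_block(block):
    bitmap[block] = 1                  # mark the block as free again

print(allocate_block())   # 0
print(allocate_block())   # 1
free_block(0)
print(allocate_block())   # 0 again, since it was freed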
2. Linked List
It is another approach to free space management. This approach links together all the free blocks, keeping a pointer in the cache which points to the first free block.

Therefore, all the free blocks on the disk are linked together with pointers. Whenever a block gets allocated, the free block before it is linked to the free block after it.
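A minimal sketch of the linked-list scheme: each free block records the number of the next free block, and a cached head pointer identifies the first free block (the block numbers below are assumed for illustration):

# next_free[b] gives the next free block after b; None ends the list (assumed layout)
next_free = {3: 7, 7: 12, 12: None}
free_head = 3                              # cached pointer to the first free block

def allocate_block():
    global free_head
    block = free_head
    if block is not None:
        free_head = next_free.pop(block)   # unlink the block and advance the head
    return block

print(allocate_block())   # 3
print(allocate_block())   # 7
print(free_head)          # 12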

File Recovery Process

The file recovery process can be briefly described as scanning a drive or folder to find deleted entries in the Root Folder (FAT) or Master File Table (NTFS), then, for a particular deleted entry, defining the chain of clusters to be recovered, and finally copying the contents of these clusters to a newly created file.
Different file systems maintain their own specific logical data structures; however, basically each file system:
• Has a list or catalogue of file entries, so we can iterate through this list and find entries marked as deleted
• Keeps for each entry a list of data clusters, so we can try to find the set of clusters composing the file
After finding the proper file entry and assembling the set of clusters composing the file, we read and copy these clusters to another location.
Step by Step with examples:
• Disk Scanning
• Cluster chain
• Cluster chain recovery for the deleted entry
However, not every deleted file can be recovered; there are some assumptions:
• First, we assume that the file entry still exists (it has not been overwritten with other data). The fewer files that have been created on the drive where the deleted file resided, the greater the chance that the space for the deleted file entry has not been reused for other entries.
• Second, we assume that the file entry still points more or less reliably to the place where the file clusters are located. In some cases (noticed in Windows XP, on large FAT32 volumes) the operating system damages file entries right after deletion, so that the first data cluster becomes invalid and further entry restoration is not possible.
• Third, we assume that the file data clusters are safe (not overwritten with other data). The fewer write operations that have been performed on the drive where the deleted file resided, the greater the chance that the space occupied by the data clusters of the deleted file has not been reused for other data.
As general advice after data loss:
1. DO NOT WRITE ANYTHING ONTO THE DRIVE CONTAINING THE IMPORTANT DATA THAT YOU HAVE JUST DELETED ACCIDENTALLY! Even installing data recovery software could spoil your sensitive data. If the data is really important to you and you do not have another logical drive to install software to, take the whole hard drive out of the computer and plug it into another computer where data recovery software is already installed, or use recovery software that does not require installation, for example recovery software capable of running from a bootable floppy.
2. DO NOT TRY TO SAVE DATA THAT YOU HAVE FOUND AND ARE TRYING TO RECOVER ONTO THE SAME DRIVE! When saving recovered data onto the same drive where the sensitive data is located, you may interfere with the recovery process by overwriting FAT/MFT records for this and other deleted entries. It is better to save the data onto another logical, removable, network or floppy drive.
