OS Unit-3
Memory management is one of the most important functions of an operating system: it manages primary memory. It helps processes move back and forth between main memory and the disk during execution, and it lets the OS keep track of every memory location, irrespective of whether it is allocated to some process or remains free.
The memory can be divided either into fixed-sized partitions or into variable-sized partitions in order to allocate contiguous space to user processes.
In the fixed-size partition scheme, each partition may contain exactly one process. The problem with this technique is that it limits the degree of multiprogramming, because the number of partitions decides the number of processes that can be in memory at once.
Whenever any process terminates, its partition becomes available for another process.
Example
Let's take an example of the fixed-size partitioning scheme: we will divide a memory of size 15 KB into fixed-size partitions.
It is important to note that these partitions are allocated to processes as they arrive, and the partition that is allocated to an arriving process depends on the allocation algorithm followed.
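A minimal sketch of how an allocator might place arriving processes into fixed-size partitions using first fit; the partition sizes, process sizes and first-fit policy are illustrative assumptions, not taken from the notes:

#include <stdio.h>
#include <stdbool.h>

#define NUM_PARTITIONS 4

/* Illustrative fixed partition sizes (in KB) summing to 15 KB. */
int partition_size[NUM_PARTITIONS] = {2, 3, 4, 6};
bool partition_used[NUM_PARTITIONS] = {false};

/* First fit: place the process in the first free partition large enough. */
int allocate(int process_size)
{
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (!partition_used[i] && partition_size[i] >= process_size) {
            partition_used[i] = true;
            return i;               /* index of the partition allocated */
        }
    }
    return -1;                      /* no free partition can hold the process */
}

int main(void)
{
    int sizes[] = {3, 5, 2};        /* sizes (in KB) of arriving processes */
    for (int i = 0; i < 3; i++) {
        int p = allocate(sizes[i]);
        if (p >= 0)
            printf("Process of %d KB placed in partition %d (%d KB)\n",
                   sizes[i], p, partition_size[p]);
        else
            printf("Process of %d KB could not be placed\n", sizes[i]);
    }
    return 0;
}

Note that a 3 KB process placed in a 4 KB partition leaves 1 KB unused inside the partition; this internal fragmentation is exactly the drawback of fixed-size partitioning.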
Paging
In Operating Systems, Paging is a storage mechanism used to retrieve processes from the
secondary storage into the main memory in the form of pages.
The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, but the preference is always to find contiguous frames or holes.
Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage.
Different operating systems define different frame sizes, but all frames within a system must be of equal size. Since pages are mapped onto frames in paging, the page size needs to be the same as the frame size.
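As a rough illustration of the page-to-frame mapping, the sketch below splits a logical address into a page number and an offset and looks the page up in a per-process page table; the 1 KB page size and the page-table contents are assumptions made only for this example:

#include <stdio.h>

#define PAGE_SIZE 1024                 /* assumed page/frame size: 1 KB */

/* Assumed page table: page_table[p] holds the frame that stores page p. */
int page_table[] = {5, 2, 7, 0};

int translate(int logical_address)
{
    int page   = logical_address / PAGE_SIZE;   /* which page of the process */
    int offset = logical_address % PAGE_SIZE;   /* position inside the page  */
    int frame  = page_table[page];              /* frame holding that page   */
    return frame * PAGE_SIZE + offset;          /* resulting physical address */
}

int main(void)
{
    /* Logical address 2100 lies in page 2 at offset 52; page 2 is in frame 7. */
    printf("physical address: %d\n", translate(2100));
    return 0;
}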
Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.
In Segmented Paging, the main memory is divided into variable-size segments, which are further divided into fixed-size pages.
1. Pages are smaller than segments.
2. Each segment has a page table, which means every program has multiple page tables.
3. The logical address is represented as a Segment Number (base address), a Page Number and a Page Offset.
Segment Number → It selects the appropriate entry in the segment table.
Each page table contains information about every page of the segment, and the segment table contains information about every segment. Each segment table entry points to a page table, and every page table entry maps to one of the pages within the segment.
The CPU generates a logical address, which is divided into two parts: Segment Number and Segment Offset. The Segment Offset must be less than the segment limit. The offset is further divided into a Page Number and a Page Offset. To locate the exact entry in the page table, the page number is added to the page table base address.
The frame number obtained from the page table, combined with the page offset, gives the location in main memory of the desired word in that page of the given segment of the process.
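A hedged sketch of the two-level lookup just described: the segment number selects a page table, the page number selects a frame, and the page offset is appended. All table contents, the page size and the segment limits are made up for illustration:

#include <stdio.h>

#define PAGE_SIZE 256                          /* assumed page size */

/* Assumed per-segment page tables: page_tables[s][p] = frame holding page p. */
int seg0_pages[] = {3, 8};
int seg1_pages[] = {1, 4, 6};
int *page_tables[] = {seg0_pages, seg1_pages};
int seg_limit[]    = {2 * PAGE_SIZE, 3 * PAGE_SIZE};   /* segment lengths */

int translate(int segment, int offset)
{
    if (offset >= seg_limit[segment])          /* offset must be below the limit */
        return -1;                             /* addressing error (trap)        */
    int page        = offset / PAGE_SIZE;      /* page number within the segment */
    int page_offset = offset % PAGE_SIZE;      /* offset within the page         */
    int frame       = page_tables[segment][page];
    return frame * PAGE_SIZE + page_offset;    /* physical address               */
}

int main(void)
{
    /* Segment 1, offset 300: page 1, page offset 44, frame 4. */
    printf("physical address: %d\n", translate(1, 300));
    return 0;
}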
Allocation of Frames
The main memory in the operating system is divided into frames. These frames store the
process and once the process is stored as a frame the CPU can execute the process. Therefore
the operating system has to allocate a sufficient number of frames for each corresponding
process. Therefore there exist various algorithms that are used by the operating system in order
to allocate the frame. There are mainly five ways to allocate the frame.
• Equal frame allocation
• Proportional frame allocation
• Priority frame allocation
• Global replacement allocation
• Local replacement allocation
Equal frame allocation
As the name suggests, the available frames are allocated equally among all the processes in the operating system. The disadvantage of equal frame allocation is that a process may need more frames than the fixed share it receives, in which case it gets an insufficient number of frames for its execution, while a process that needs fewer frames than the fixed share wastes the rest. Let's say the main memory has 40 frames for allocation, the first process requires only 10 frames and the next process requires 30 frames for execution. With an equal split of 20 frames each, the first process wastes 10 frames and the next process falls 10 frames short. This problem is solved by proportional frame allocation.
Proportional frame allocation
Equal frame allocation has two drawbacks: it either wastes frames or leaves a process with an insufficient number of them. Proportional frame allocation allocates frames on the basis of the size each process requires for execution and the total number of frames that main memory has. The disadvantage of proportional frame allocation is that there is no notion of priority; frames are allocated purely by size. That problem is solved by priority frame allocation.
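The 40-frame example above can be worked out with the usual proportional formula, frames_i = (size_i / total size) x total frames; the short sketch below compares it with equal allocation, using the process sizes from the example:

#include <stdio.h>

int main(void)
{
    int total_frames = 40;
    int size[] = {10, 30};              /* frame demand of each process */
    int n = 2, total_size = 40;

    for (int i = 0; i < n; i++) {
        int equal        = total_frames / n;                       /* 20 each      */
        int proportional = size[i] * total_frames / total_size;    /* 10 and 30    */
        printf("P%d: equal = %d frames, proportional = %d frames\n",
               i + 1, equal, proportional);
    }
    return 0;
}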
Priority frame allocation
In proportional frame allocation, frames are allocated on the basis of size alone. In priority frame allocation, frames are allocated on the basis of the priority of the process as well as the number of frames it needs. If a process has high priority and needs many frames, it is allotted that many frames in main memory first, and lower-priority processes are allocated afterwards. Frame allocation can therefore be based on both the priority and the number of frames required by the process.
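The notes do not fix a single formula for priority frame allocation; one simple way to combine priority with size is to weight each process's demand by its priority, as in this sketch (the weights and numbers are assumptions for illustration only):

#include <stdio.h>

int main(void)
{
    int total_frames = 40;
    int size[]     = {10, 30};          /* frames each process asks for        */
    int priority[] = {3, 1};            /* higher number = higher priority     */
    int n = 2, total_weight = 0;

    for (int i = 0; i < n; i++)
        total_weight += size[i] * priority[i];

    for (int i = 0; i < n; i++) {
        /* Share of frames proportional to size weighted by priority. */
        int frames = size[i] * priority[i] * total_frames / total_weight;
        printf("P%d gets %d frames\n", i + 1, frames);
    }
    return 0;
}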
Global replacement allocation
To understand global replacement allocation you have to know what paging is in the operating system. When the operating system executes a program it needs the program's pages; some pages are already in primary memory, but when a required page is not found in memory a page fault occurs and the page must be brought in from secondary memory. That is page replacement. Global replacement allocation governs which frames may be taken when this happens: a process with low priority can give up frames to a process with higher priority, so that the higher-priority process suffers fewer page faults.
Local replacement allocation
Whereas global replacement allocation allows a faulting process to take frames belonging to any process, local replacement allocation restricts it to replacing pages only within its own set of allocated frames. Unlike global replacement, this means one process's paging behaviour does not influence the behaviour of other processes.
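To make the difference concrete, the sketch below picks a victim frame for a faulting process either from all frames (global) or only from the faulting process's own frames (local). The frame table and the FIFO-style choice of victim are assumptions made for illustration:

#include <stdio.h>

#define NUM_FRAMES 6

/* Assumed frame table: owner[f] = process that currently holds frame f. */
int owner[NUM_FRAMES]     = {1, 1, 2, 2, 2, 3};
int load_time[NUM_FRAMES] = {5, 9, 1, 4, 8, 2};   /* when the frame was filled */

/* Choose the oldest eligible frame as the victim (FIFO-style). */
int pick_victim(int faulting_process, int global)
{
    int victim = -1;
    for (int f = 0; f < NUM_FRAMES; f++) {
        if (!global && owner[f] != faulting_process)
            continue;                   /* local: only the process's own frames qualify */
        if (victim < 0 || load_time[f] < load_time[victim])
            victim = f;
    }
    return victim;
}

int main(void)
{
    printf("global victim for P1: frame %d\n", pick_victim(1, 1));  /* frame 2 */
    printf("local victim for P1:  frame %d\n", pick_victim(1, 0));  /* frame 0 */
    return 0;
}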
Figure: Thrashing
Causes of Thrashing
Thrashing severely degrades execution performance in the operating system.
When CPU utilization is low, the process scheduling mechanism tries to load many processes into memory at the same time, so that the degree of multiprogramming can be increased. In this situation there are more processes in memory than there are available frames, so each process can be allocated only a limited number of frames.
Whenever a process with high priority arrives in memory and no frame is free at that time, another process that is occupying a frame is moved to secondary storage, and the freed frame is then allocated to the higher-priority process.
We can also say that as soon as memory fills up, processes start spending a lot of time waiting for their required pages to be swapped in. CPU utilization again becomes low because most of the processes are waiting for pages.
Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing in the operating system.
File System
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.
File Structure
A File Structure should be according to a required format that the operating system can
understand.
• A file has a certain defined structure according to its type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of procedures and functions.
• An object file is a sequence of bytes organized into blocks that are understandable by
the machine.
• When an operating system defines different file structures, it also contains the code to support these file structures. UNIX and MS-DOS support a minimum number of file structures.
File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −
Ordinary files
• These are the files that contain user information.
Directory files
• These files contain a list of file names and other information related to these files.
Special files
• These files are also known as device files.
• These files represent physical devices like disks, terminals, printers, networks, tape drives etc.
These files are of two types −
• Character special files − data is handled character by character, as in the case of terminals or printers.
• Block special files − data is handled in blocks, as in the case of disks and tapes.
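On a UNIX-like system the distinction between ordinary, directory and special files can be observed through stat(); the sketch below classifies a path passed to it (the example paths are assumptions about a typical system):

#include <stdio.h>
#include <sys/stat.h>

/* Print which of the file categories above a given path belongs to. */
void classify(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0) {
        perror(path);
        return;
    }
    if (S_ISREG(st.st_mode))       printf("%s: ordinary file\n", path);
    else if (S_ISDIR(st.st_mode))  printf("%s: directory file\n", path);
    else if (S_ISCHR(st.st_mode))  printf("%s: character special file\n", path);
    else if (S_ISBLK(st.st_mode))  printf("%s: block special file\n", path);
    else                           printf("%s: other file type\n", path);
}

int main(void)
{
    classify("/dev/null");   /* character special file on most UNIX systems */
    classify("/tmp");        /* directory file */
    return 0;
}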
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files −
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access
Sequential access is that in which the records are accessed in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.
Direct/Random access
Direct (or random) access allows records to be read or written in any order, by addressing a record directly by its position in the file rather than reading everything that precedes it.
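A small C sketch contrasting the two access methods on a file of fixed-size records; the record size and the file name are assumptions for the example:

#include <stdio.h>

#define RECORD_SIZE 64        /* assumed fixed record length in bytes */

int main(void)
{
    char record[RECORD_SIZE];
    FILE *fp = fopen("data.rec", "rb");   /* hypothetical file of records */
    if (!fp)
        return 1;

    /* Sequential access: read records one after another, in order. */
    while (fread(record, RECORD_SIZE, 1, fp) == 1) {
        /* process the record ... */
    }

    /* Direct (random) access: jump straight to record number 7. */
    fseek(fp, 7L * RECORD_SIZE, SEEK_SET);
    fread(record, RECORD_SIZE, 1, fp);

    fclose(fp);
    return 0;
}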
File Protection
In computer systems a lot of user information is stored, and the objective of the operating system is to keep the user's data safe from improper access to the system. Protection can be provided in a number of ways. For a single-user laptop system, we might provide protection by locking the computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are used for protection.
Types of Access :
Files that any user can access directly need protection; files that are not accessible to other users do not require any kind of protection. The protection mechanism provides controlled access by limiting the types of access that can be made to a file. Access can be granted or denied to a user depending on several factors, one of which is the type of access required. Several different types of operations can be controlled:
• Read –
Reading from a file.
• Write –
Writing or rewriting the file.
• Execute –
Loading the file into memory and starting its execution.
• Append –
Writing new information to an already existing file; the writing must occur at the end of the existing file.
• Delete –
Deleting a file that is of no use and reusing its space for other data.
• List –
Listing the name and attributes of the file.
Operations like renaming, editing an existing file and copying can also be controlled. There are many protection mechanisms; each mechanism has different advantages and disadvantages and must be appropriate for its intended application.
Access Control:
There are different methods by which different users may access a file. The most general way of protection is to associate with each file and directory an identity-dependent access list, called an access-control list (ACL), which specifies the names of the users and the types of access allowed for each user. The main problem with access lists is their length: if we want to allow everyone to read a file, we must list all users with read access. This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if we do not know in advance the list of users in the system.
Previously, the directory entry was of fixed size, but it now becomes variable-sized, which complicates space management. These problems can be resolved by using a condensed version of the access list. To condense the length of the access-control list, many systems recognize three classifications of users in connection with each file:
• Owner –
Owner is the user who has created the file.
• Group –
A group is a set of users who have similar needs and share the same file.
• Universe –
In the system, all other users are under the category called universe.
The most common recent approach is to combine access-control lists with the more general owner, group and universe access-control scheme. For example, Solaris uses the three categories of access by default but allows access-control lists to be added to specific files and directories when more fine-grained access control is desired.
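The owner/group/universe scheme can be pictured as three sets of read-write-execute bits per file, as in UNIX permission bits; the sketch below checks whether a requesting user may perform an operation. The structure and the sample values are assumptions made for illustration:

#include <stdio.h>

#define READ  4
#define WRITE 2
#define EXEC  1

struct file_perm {
    int owner_uid, group_gid;
    int owner_bits, group_bits, universe_bits;   /* rwx encoded as 0..7 */
};

/* Decide which class the user falls into, then test the matching bits. */
int access_allowed(struct file_perm *f, int uid, int gid, int wanted)
{
    int bits;
    if (uid == f->owner_uid)       bits = f->owner_bits;
    else if (gid == f->group_gid)  bits = f->group_bits;
    else                           bits = f->universe_bits;
    return (bits & wanted) == wanted;
}

int main(void)
{
    struct file_perm f = {1000, 50, 7, 5, 4};    /* rwx r-x r-- */
    printf("owner write: %d\n", access_allowed(&f, 1000, 50, WRITE));   /* 1 */
    printf("other write: %d\n", access_allowed(&f, 2000, 60, WRITE));   /* 0 */
    return 0;
}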
Other Protection Approaches:
Access to the system can also be controlled by a password. If the password is random and is changed often, it can effectively limit access to a file.
The use of passwords has a few disadvantages:
• The number of passwords can become very large, which makes them difficult to remember.
• If one password is used for all the files, then once it is discovered, all files become accessible; protection is on an all-or-none basis.
2. In-Memory Structure :
These structures are maintained in main memory and are helpful for file-system management and for caching. Several in-memory structures are given below:
5. Mount Table –
It contains information about each mounted volume.
6. Directory-Structure cache –
This cache holds the directory information of recently accessed
directories.
7. System wide open-file table –
It contains a copy of the FCB of each open file.
8. Per-process open-file table –
It contains information about the files opened by that particular process and maps each entry to the corresponding entry in the system-wide open-file table.
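A rough sketch of how the system-wide and per-process open-file tables relate; all field names and sizes here are invented for illustration, not a real kernel's layout:

#define MAX_OPEN_SYS  64
#define MAX_OPEN_PROC 16

/* System-wide open-file table: one entry per open file, holding the FCB copy. */
struct sys_open_file {
    int  fcb_block;        /* where the file's FCB lives on disk        */
    int  open_count;       /* how many processes currently have it open */
    long size;
};

/* Per-process open-file table: one entry per descriptor, pointing into the
   system-wide table plus per-process state such as the current file offset. */
struct proc_open_file {
    struct sys_open_file *sys_entry;
    long offset;
};

struct sys_open_file  system_table[MAX_OPEN_SYS];
struct proc_open_file process_table[MAX_OPEN_PROC];

int main(void)
{
    /* A process opens a file: descriptor 0 points at system-wide entry 0. */
    system_table[0].open_count = 1;
    process_table[0].sys_entry = &system_table[0];
    process_table[0].offset = 0;
    return 0;
}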
Directory Implementation :
1. Linear List –
It maintains a linear list of file names with pointers to the data blocks. It is also time-consuming. To create a new file, we must first search the directory to be sure that no existing file has the same name, and then add the new file at the end of the directory. To delete a file, we search the directory for the named file and release the space allocated to it. To reuse the directory entry, we can either mark the entry as unused or attach it to a list of free directory entries.
2. Hash Table –
A hash table takes a value computed from the file name and returns a pointer to the file. It decreases the directory search time, and insertion and deletion of files are easy. The major difficulty is that a hash table generally has a fixed size and the hash function depends on that size.
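A minimal sketch of a hash-table directory with a fixed number of buckets and chaining for collisions; the hash function, bucket count and entry fields are assumptions for illustration:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define BUCKETS 64            /* fixed table size: the drawback noted above */

struct dir_entry {
    char name[32];
    int  first_block;          /* pointer to the file's data blocks            */
    struct dir_entry *next;    /* chain for names hashing to the same bucket   */
};

struct dir_entry *table[BUCKETS];

unsigned hash(const char *name)
{
    unsigned h = 0;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h % BUCKETS;
}

void add_entry(const char *name, int first_block)
{
    struct dir_entry *e = malloc(sizeof *e);
    strncpy(e->name, name, sizeof e->name - 1);
    e->name[sizeof e->name - 1] = '\0';
    e->first_block = first_block;
    e->next = table[hash(name)];       /* insert at the head of the chain */
    table[hash(name)] = e;
}

struct dir_entry *lookup(const char *name)
{
    for (struct dir_entry *e = table[hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}

int main(void)
{
    add_entry("notes.txt", 120);
    struct dir_entry *e = lookup("notes.txt");
    if (e)
        printf("%s starts at block %d\n", e->name, e->first_block);
    return 0;
}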
Free Space Management
1. Bitmap or Bit Vector
Each block on the disk is represented by one bit. If the block is empty then the bit is 1, otherwise it is 0. Initially all the blocks are empty, therefore each bit in the bit map vector contains 1.
As space allocation proceeds, the file system starts allocating blocks to the files and setting the respective bits to 0.
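Under the convention above (1 = free, 0 = allocated), finding and allocating a free block with a bit vector might look like this sketch; the disk size is an assumption:

#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS 64

unsigned char bitmap[NUM_BLOCKS / 8];   /* one bit per disk block */

int  is_free(int b)  { return (bitmap[b / 8] >> (b % 8)) & 1; }
void set_used(int b) { bitmap[b / 8] &= ~(1 << (b % 8)); }

int allocate_block(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++) {
        if (is_free(b)) {
            set_used(b);        /* clear the bit: block now holds file data */
            return b;
        }
    }
    return -1;                  /* disk full */
}

int main(void)
{
    memset(bitmap, 0xFF, sizeof bitmap);   /* initially every block is free */
    printf("allocated block %d\n", allocate_block());
    return 0;
}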
2. Linked List
It is another approach for free space management. This approach suggests linking together all
the free blocks and keeping a pointer in the cache which points to the first free block.
Therefore, all the free blocks on the disks will be linked together with a pointer. Whenever a
block gets allocated, its previous free block will be linked to its next free block.
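A brief sketch of the linked-list approach, where the head pointer is kept in memory and each free block stores the index of the next free block; the block contents are simulated with an array purely for illustration:

#include <stdio.h>

#define NUM_BLOCKS 8

/* Simulated disk: for a free block b, next_free[b] is the next free block. */
int next_free[NUM_BLOCKS] = {1, 2, 3, 4, 5, 6, 7, -1};
int free_head = 0;                       /* cached pointer to the first free block */

int allocate_block(void)
{
    if (free_head < 0)
        return -1;                       /* no free blocks left */
    int b = free_head;
    free_head = next_free[b];            /* unlink the block from the free list */
    return b;
}

int main(void)
{
    printf("allocated block %d\n", allocate_block());   /* block 0 */
    printf("allocated block %d\n", allocate_block());   /* block 1 */
    return 0;
}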
The file recovery process can briefly be described as scanning the drive or folder to find deleted entries in the Root Folder (FAT) or Master File Table (NTFS), then, for a particular deleted entry, defining the cluster chain to be recovered, and finally copying the contents of those clusters to a newly created file.
Different file systems maintain their own specific logical data structures; however, each file system basically:
• Has a list or catalogue of file entries, so we can iterate through this list looking for entries marked as deleted
• Keeps, for each entry, a list of data clusters, so we can try to find the set of clusters composing the file
After finding the proper file entry and assembling the set of clusters composing the file, we read and copy these clusters to another location.
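The recovery flow described above can be summarised as a loop over catalogue entries. The sketch below is deliberately abstract: every structure and helper in it (catalogue_entry, copy_cluster, and so on) is hypothetical and does not correspond to a real FAT/NTFS API.

#include <stdio.h>
#include <string.h>

/* Hypothetical catalogue entry, loosely modelled on a deleted directory record. */
struct catalogue_entry {
    char name[64];
    int  deleted;
    int  clusters[16];      /* cluster chain reconstructed for the entry */
    int  cluster_count;
};

/* Hypothetical helper: copy one cluster of the old file into the new file. */
void copy_cluster(FILE *out, int cluster)
{
    char buf[512];
    memset(buf, 0, sizeof buf);          /* stand-in for reading the cluster */
    fwrite(buf, sizeof buf, 1, out);
}

void recover(struct catalogue_entry *e, const char *dest_path)
{
    if (!e->deleted)
        return;
    FILE *out = fopen(dest_path, "wb");  /* must be on a DIFFERENT drive */
    if (!out)
        return;
    for (int i = 0; i < e->cluster_count; i++)
        copy_cluster(out, e->clusters[i]);
    fclose(out);
}

int main(void)
{
    struct catalogue_entry e = {"report.doc", 1, {10, 11, 12}, 3};
    recover(&e, "recovered_report.doc");
    return 0;
}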
Step by Step with examples:
• Disk Scanning
• Cluster chain definition
• Cluster chain recovery for the deleted entry
However, not every deleted file can be recovered; there are some assumptions, for sure:
• First, we assume that the file entry still exists (has not been overwritten with other data). The fewer files that have been created on the drive where the deleted file resided, the better the chances that the space for the deleted file entry has not been reused for other entries.
• Second, we assume that the file entry is more or less intact and points to the proper place where the file's clusters are located. In some cases (this has been noticed in Windows XP, on large FAT32 volumes) the operating system damages file entries right after deletion, so that the first data cluster becomes invalid and further entry restoration is not possible.
• Third, we assume that the file's data clusters are intact (not overwritten with other data). The fewer write operations that have been performed on the drive where the deleted file resided, the better the chances that the space occupied by the data clusters of the deleted file has not been reused for other data.
As general advice after data loss:
1. DO NOT WRITE ANYTHING ONTO THE DRIVE CONTAINING YOUR IMPORTANT DATA THAT YOU HAVE JUST DELETED ACCIDENTALLY! Even installing data recovery software could spoil your sensitive data. If the data is really important to you and you do not have another logical drive to install the software to, take the whole hard drive out of the computer and plug it into another computer where data recovery software is already installed, or use recovery software that does not require installation, for example recovery software capable of running from a bootable floppy.
2. DO NOT TRY TO SAVE THE DATA THAT YOU FOUND AND ARE TRYING TO RECOVER ONTO THE SAME DRIVE! When saving recovered data onto the same drive where the sensitive data is located, you can interfere with the recovery process by overwriting the FAT/MFT records for this and other deleted entries. It is better to save the data onto another logical, removable, network or floppy drive.