OS Placement Notes
When a computer runs out of RAM, the operating system (OS) moves idle or unwanted pages of
memory to secondary storage to free up RAM for other processes, and brings them back
when the program needs them.
This process continues throughout the execution of the program: the OS keeps
removing idle pages from main memory, writing them to secondary storage, and
bringing them back when the program requires them.
Implementation of Page Table
Memory Protection
Memory protection is implemented by associating a protection bit with each frame.
A valid-invalid bit is attached to each entry in the page table:
“valid” indicates that the associated page is in the process's logical address space, and is thus a legal page.
“invalid” indicates that the page is not in the process's logical address space.
Valid (v) or Invalid (i) Bit In A Page Table
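The lookup can be illustrated with a small sketch. The following Python example is illustrative only; the page size, table contents, and function name are assumptions, not values from these notes. It shows how a paging unit might consult the valid-invalid bit before translating a logical address.

```python
# Minimal sketch of a page-table lookup with a valid-invalid bit.
# PAGE_SIZE and the example table are made-up values for illustration.

PAGE_SIZE = 1024  # bytes per page (assumed)

# page table: page number -> (frame number, valid bit)
page_table = {
    0: (5, True),       # page 0 resides in frame 5
    1: (9, True),       # page 1 resides in frame 9
    2: (None, False),   # page 2 is not in the process's logical address space
}

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None or not entry[1]:
        # "invalid" bit: the reference is illegal and traps to the OS
        raise MemoryError(f"illegal access: page {page} is invalid")
    frame = entry[0]
    return frame * PAGE_SIZE + offset

print(translate(1030))   # page 1, offset 6 -> frame 9 * 1024 + 6
```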
Shared Pages
Shared code
One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers,
window systems).
Shared code must appear in the same location in the logical address space of all processes.
Private code and data
Each process keeps a separate copy of the code and data
The pages for the private code and data can appear anywhere in the logical address space
Shared Pages Example
The page table can be structured in several ways: hierarchical paging, hashed page tables, and inverted page tables.
Hierarchical Paging
Break up the logical address space into multiple page tables. A simple technique
is a two-level page table.
Two-Level Page-Table Scheme
A logical address is divided into a page number and a page offset; because the page table is itself
paged, the page number is further divided into p1 and p2, where p1 is an index into the outer page
table and p2 is the displacement within the page of the outer page table.
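As a rough sketch of this scheme, the walk below assumes a 32-bit address with a 10-bit p1, a 10-bit p2 and a 12-bit offset; the bit widths and table contents are assumptions for illustration, not values from these notes.

```python
# Two-level page-table walk (illustrative sketch, assumed 10/10/12-bit split).
P1_BITS, P2_BITS, OFFSET_BITS = 10, 10, 12

# outer_table[p1] is an inner page table; inner[p2] is a frame number.
outer_table = {3: {7: 42}}   # made-up contents for illustration

def translate(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = vaddr >> (OFFSET_BITS + P2_BITS)
    inner = outer_table[p1]   # p1 indexes the outer page table
    frame = inner[p2]         # p2 is the displacement within that page of the outer table
    return (frame << OFFSET_BITS) | offset

# virtual address with p1 = 3, p2 = 7, offset = 5
vaddr = (3 << 22) | (7 << 12) | 5
print(hex(translate(vaddr)))  # frame 42 -> 0x2a005
```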
Address-Translation Scheme
In an inverted page table there is one entry for each real frame of memory. Each entry consists of
the virtual address of the page stored in that real memory location, with information
about the process that owns that page.
This decreases the memory needed to store each page table, but increases the time needed to search
the table when a page reference occurs.
A hash table can be used to limit the search to one — or at most a few — page-table entries.
Inverted Page Table Architecture
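A minimal sketch of the idea follows; the frame count, table contents, and the way the hash anchors are built are assumptions for illustration, not details from these notes.

```python
# Inverted page table: one entry per physical frame, searched via a hash table.

NUM_FRAMES = 8

# inverted_table[frame] = (pid, page) that currently occupies the frame, or None
inverted_table = [None] * NUM_FRAMES
inverted_table[3] = (12, 0)   # process 12, page 0 lives in frame 3
inverted_table[6] = (12, 1)   # process 12, page 1 lives in frame 6

# hash table limits the search to one (or a few) entries
hash_anchor = {}
for frame, entry in enumerate(inverted_table):
    if entry is not None:
        hash_anchor.setdefault(entry, []).append(frame)

def lookup(pid, page):
    for frame in hash_anchor.get((pid, page), []):
        if inverted_table[frame] == (pid, page):
            return frame
    raise MemoryError("page fault: (pid, page) not resident")

print(lookup(12, 1))   # -> 6
```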
Segmentation
Segmentation is a memory-management scheme that supports the user view of memory. A program is a
collection of segments.
A segment is a logical unit such as:
main program
procedure
function
method
object
Segmentation Architecture
A logical address consists of a two-tuple:
<segment-number, offset>
Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR
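The checks described above can be sketched as follows; the segment-table contents in this two-segment example are assumptions for illustration.

```python
# Segment-table translation with STLR and limit checks (illustrative values).

# segment table: index s -> (base, limit)
segment_table = [
    (1400, 1000),   # segment 0 starts at 1400 and is 1000 bytes long
    (6300, 400),    # segment 1 starts at 6300 and is 400 bytes long
]
STLR = len(segment_table)   # number of segments used by the program

def translate(s, offset):
    if s >= STLR:
        raise MemoryError(f"trap: segment number {s} >= STLR")
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError(f"trap: offset {offset} beyond segment limit {limit}")
    return base + offset

print(translate(1, 53))   # -> 6353
```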
Protection
With each entry in the segment table, associate:
validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Protection bits are associated with segments; code sharing occurs at the segment level.
Since segments vary in length, memory allocation is a dynamic storage-allocation
problem. A segmentation example is shown in the following diagram.
Segmentation Hardware
Example of Segmentation
Segmentation with Paging
Instead of an actual memory location, the segment information includes the address of a page
table for the segment. When a program references a memory location, the offset is translated
to a memory address using the page table. A segment can be extended simply by allocating
another memory page and adding it to the segment's page table.
An implementation of virtual memory on a system using segmentation with paging usually
only moves individual pages back and forth between main memory and secondary storage,
similar to a paged non-segmented system. Pages of the segment can be located anywhere in
main memory and need not be contiguous. This usually results in a reduced amount of
input/output between primary and secondary storage and reduced memory fragmentation.
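A rough sketch of segmentation combined with paging is shown below: the segment entry holds a page table rather than a base address, so the segment's pages need not be contiguous. The page size, limits, and frame numbers are assumptions for illustration.

```python
# Segmentation with paging: the segment entry holds a page table, not a base.
PAGE_SIZE = 256   # assumed page size

# segment table: segment -> (limit in bytes, page table mapping page -> frame)
segment_table = {
    0: (600, {0: 11, 1: 4, 2: 9}),   # a 600-byte segment spread over 3 frames
}

def translate(segment, offset):
    limit, page_table = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    page, page_offset = divmod(offset, PAGE_SIZE)
    frame = page_table[page]        # pages need not be contiguous in memory
    return frame * PAGE_SIZE + page_offset

print(translate(0, 300))   # page 1, offset 44 -> frame 4 * 256 + 44
```

Extending the segment then amounts to appending another page-to-frame entry to the segment's page table.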
Virtual Memory
Virtual memory is a space where large programs can store themselves in the form of pages
during their execution, while only the required pages or portions of processes are loaded into
main memory. This technique is useful because it provides a large virtual memory for user
programs even when only a very small physical memory is available.
In real scenarios, most processes never need all their pages at once, for the following reasons:
Error handling code is not needed unless that specific error occurs, some of which
are quite rare.
Arrays are often over-sized for worst-case scenarios, and only a small fraction of the
arrays are actually used in practice.
Certain features of certain programs are rarely used.
Fig. Diagram showing virtual memory that is larger than physical memory.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a
segmentation system. Demand segmentation can also be used to provide virtual memory.
Demand Paging
Demand paging is similar to a paging system with swapping (Fig 5.2). When we want to execute a
process, we swap it into memory; rather than swapping the entire process into memory, however, we use a pager.
When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages
into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the
swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the pages
that are on disk, using the valid-invalid bit scheme. Valid and invalid pages can be distinguished by
checking this bit, and marking a page invalid will have no effect if the process never attempts to access
that page. While the process executes and accesses pages that are memory resident, execution proceeds
normally.
Fig. Transfer of a paged memory to contiguous disk space
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory.
Initially, only those pages are loaded which the process will require immediately.
The pages that are not moved into memory are marked as invalid in the page table. For
an invalid entry, the rest of the table is empty. Pages that are loaded in
memory are marked as valid, along with the information about where to find the
swapped-out page.
When the process requires any page that is not loaded into memory, a page-fault
trap is triggered and the following steps are followed:
1. The memory address which is requested by the process is first checked, to verify the
request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a
free-frame list, where the required page will be moved.
4. A disk operation is scheduled to move the necessary page from disk to the specified
memory location. (This will usually block the process on an I/O wait, allowing some other
process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the
new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the beginning.
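These steps can be summarized in a small simulation sketch; the data structures, the pretend backing store, and the free-frame list are assumptions used only to make the example runnable, not details from these notes.

```python
# Simplified page-fault handling loop (simulation only).
import collections

backing_store = {0: b"code", 1: b"data", 2: b"stack"}  # pretend disk contents
valid_pages = {}            # page -> frame (pages currently in memory)
free_frames = collections.deque([7, 8, 9])

def access(page):
    if page in valid_pages:                 # memory resident: proceed normally
        return valid_pages[page]
    # page fault: verify the request, then service it
    if page not in backing_store:
        raise MemoryError("invalid reference: terminate process")   # step 2
    frame = free_frames.popleft()           # step 3: locate a free frame
    _data = backing_store[page]             # step 4: "I/O" brings the page in
    valid_pages[page] = frame               # step 5: update table, mark valid
    return frame                            # step 6: restart the access

print(access(1))   # first access faults and loads the page
print(access(1))   # second access is memory resident
```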
There are cases when no pages are loaded into memory initially; pages are loaded only
when demanded by the process, by generating page faults. This is called Pure Demand
Paging.
The major issue with demand paging is the overhead of servicing page faults: after a new page is
loaded, the instruction that caused the fault must be restarted. This is not a big issue for programs
that fault rarely, but for programs that fault frequently it affects performance drastically.
When a page (or cache block) is modified by the CPU and not yet written back to storage, it is
marked with a dirty bit. This bit is present in the memory cache or the virtual storage space.
Advantages of Demand Paging:
1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming. There is no limit on degree of multiprogramming.
Disadvantages of Demand Paging:
1. The number of tables and the amount of processor overhead for handling page interrupts are greater
than in the case of simple paged management techniques.
2. Due to the lack of an explicit constraint on a job's address space size.
Page Replacement
As studied in demand paging, only certain pages of a process are loaded initially into
memory. This allows us to get more processes into memory at the same time.
But what happens when a process requests more pages and no free memory is available
to bring them in? The following steps can be taken to deal with this problem:
1. Put the process in the wait queue, until any other process finishes its execution
thereby freeing frames.
2. Or, remove some other process completely from the memory to free frames.
3. Or, find some pages that are not being used right now and move them to the disk to get free
frames. This technique is called page replacement and is the most commonly used approach. We have
several algorithms to carry out page replacement efficiently.
Page Replacement Algorithm
Page replacement algorithms are the techniques an operating system uses to decide
which memory pages to swap out (write to disk) when a page of memory needs to be
allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used for
the allocation, either because no free pages are available or because the number of free
pages is lower than required.
When the page that was selected for replacement and paged out is referenced again, it
has to be read in from disk, and this requires waiting for I/O completion. This waiting determines the
quality of the page replacement algorithm: the less time spent waiting for page-ins, the better
the algorithm.
A page replacement algorithm looks at the limited information about page accesses
provided by the hardware and tries to select which pages should be replaced so as to minimize the
total number of page misses, while balancing this against the costs of primary storage and
the processor time of the algorithm itself. There are many different page replacement
algorithms. We evaluate an algorithm by running it on a particular string of memory
references and computing the number of page faults.
Reference String
The string of memory references is called a reference string. Reference strings are generated
artificially or by tracing a given system and recording the address of each memory reference.
The latter choice produces a large amount of data, from which we note two things.
For a given page size, we need to consider only the page number, not the entire address.
If we have a reference to a page p, then any immediately following references
to page p will never cause a page fault. Page p will be in memory after the first reference; the
immediately following references will not fault.
For example, consider the following sequence of addresses − 123,215,600,1234,76,96
If the page size is 100, then the reference string is
1, 2, 6, 12, 0, 0
First In First Out (FIFO) algorithm
Oldest page in main memory is the one which will be selected for replacement.
Easy to implement, keep a list, replace pages from the tail and add new pages at
the head.
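The following sketch counts page faults for a FIFO policy on the reference string derived above; the choice of 3 frames is an assumption for the example, not a value from these notes.

```python
# FIFO page replacement: evict the page that has been in memory the longest.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()              # head = newest page, tail = oldest page
    faults = 0
    for page in reference_string:
        if page in frames:
            continue              # hit: no fault
        faults += 1
        if len(frames) == num_frames:
            frames.pop()          # replace the oldest page (tail)
        frames.appendleft(page)   # add the new page at the head
    return faults

print(fifo_faults([1, 2, 6, 12, 0, 0], num_frames=3))   # -> 5 faults
```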
Optimal Page Replacement Algorithm
Replace the page that will not be used for the longest period of time. Use the time
when a page is to be used.
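The selection rule just described can be sketched as follows; since it needs future knowledge of the reference string, it is mainly used as a benchmark. The reference string and the 3-frame count in the example are assumptions.

```python
# Optimal replacement: evict the page whose next use is farthest in the future.
def opt_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # choose the resident page not needed for the longest period of time
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], num_frames=3))
```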
Second Chance Algorithm
We can see notably that the bad replacement decision made by FIFO is not present in Second Chance.
There are a total of 9 page read operations to satisfy the total of 18 page requests - just as good as
the more computationally expensive LRU method.
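A minimal sketch of the second-chance policy referred to above: pages are kept in FIFO order, but a page whose reference bit is set gets a second chance instead of being evicted. The reference string and frame count below are made up for illustration and are not the (missing) example from these notes.

```python
# Second-chance replacement: FIFO order plus a reference bit.
from collections import deque

def second_chance_faults(reference_string, num_frames):
    queue = deque()                       # entries are [page, ref_bit]; head = oldest
    faults = 0
    for page in reference_string:
        for entry in queue:
            if entry[0] == page:
                entry[1] = 1              # hit: set the reference bit
                break
        else:
            faults += 1                   # miss: a page must be brought in
            if len(queue) == num_frames:
                while True:
                    victim = queue.popleft()
                    if victim[1] == 0:
                        break             # evict a page whose bit is clear
                    victim[1] = 0         # give it a second chance
                    queue.append(victim)
            queue.append([page, 0])
    return faults

print(second_chance_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], num_frames=3))
```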
NRU (Not Recently Used) Page Replacement Algorithm - This algorithm requires that each page
have two additional status bits 'R' and 'M' called reference bit and change bit respectively. The reference
bit(R) is automatically set to 1 whenever the page is referenced. The change bit (M) is set to 1 whenever
the page is modified. These bits are stored in the PMT and are updated on every memory reference.
When a page fault occurs, the memory manager inspects all the pages and divides them into 4 classes
based on R and M bits.
Class 1: (0,0) − neither recently used nor modified - the best page to replace.
Class 2: (0,1) − not recently used but modified - the page will need to be written out before
replacement.
Class 3: (1,0) − recently used but clean - probably will be used again soon.
Class 4: (1,1) − recently used and modified - probably will be used again, and write out will be
needed before replacing it.
This algorithm removes a page at random from the lowest numbered non-empty class.
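The class computation can be sketched directly from the R and M bits; the page set below is a made-up example, and the class numbering follows the listing above.

```python
# NRU: classify pages by (R, M) and evict a random page from the lowest class.
import random

# page -> (referenced bit R, modified bit M); made-up example values
pages = {"A": (0, 0), "B": (0, 1), "C": (1, 0), "D": (1, 1)}

def nru_victim(pages):
    # class 1: (0,0), class 2: (0,1), class 3: (1,0), class 4: (1,1)
    classes = {1: [], 2: [], 3: [], 4: []}
    for page, (r, m) in pages.items():
        classes[1 + 2 * r + m].append(page)
    for c in (1, 2, 3, 4):
        if classes[c]:
            return random.choice(classes[c]), c

print(nru_victim(pages))   # picks "A", the (0,0) page, from class 1
```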
UNIT-IV
File Management: Concept of File, Access methods, File types, File operation, Directory structure,
File System structure, Allocation methods (contiguous, linked, indexed), Free-space management (bit
vector, linked list, grouping), directory implementation (linear list, hash table), efficiency and
performance.
I/O Hardware: I/O devices, Device controllers, Direct memory access.
Principles of I/O Software: Goals of Interrupt handlers, Device drivers, Device independent I/O software.
File System
File Concept:
Computers can store information on various storage media such as magnetic disks,
magnetic tapes, and optical disks. The physical storage is converted into a logical storage
unit by the operating system. The logical storage unit is called a FILE. A file is a collection of
similar records. A record is a collection of related fields that can be treated as a unit by
some application program. A field is some basic element of data. Any individual field
contains a single value. A database is a collection of related data.
Student name, marks in sub1, marks in sub2, and Fail/Pass are fields. The collection of fields is
called a RECORD, for example:
LAKSH 93 92 P
A collection of these records is called a data file.
FILE ATTRIBUTES :
1. Name : A file is named for the convenience of the user and is referred to by its
name. A name is usually a string of characters.
2. Identifier : This unique tag, usually a number, identifies the file within the file system.
3. Type : Files are of many types. The type depends on the extension of the file.
Example:
.exe Executable file
.obj Object file
.src Source file
4. Location : This information is a pointer to a device and to the location of
the file on that device.
FILE OPERATIONS
1. Creating a file : Two steps are needed to create a file:
Check whether the space is available or not.
If the space is available, then make an entry for the new file in the
directory. The entry includes the name of the file, the path of the file, etc.
2. Writing a file : To write a file, we have to know two things: one is the name of the
file and the second is the information or data to be written to the file. The system searches
the directory for the given file name. If the file is found, the system must keep a write
pointer to the location in the file where the next write is to take place.
3. Reading a file : To read a file, first of all we search the directories for the file; if
the file is found, the system needs to keep a read pointer to the location in the file where
the next read is to take place. Once the read has taken place, the read pointer is updated.
4. Repositioning within a file : The directory is searched for the appropriate
entry and the current file position pointer is repositioned to a given value. This
operation is also called file seek.
5. Deleting a file : To delete a file, first search the directory for the named
file, then release the file space and erase the directory entry.
6. Truncating a file : To truncate a file, remove the file contents only; the
attributes are left as they are.
FILE TYPES: The name of a file is split into two parts: the name and the
extension. The file type depends on the extension of the file.
FILE STRUCTURE
File types also can be used to indicate the internal structure of the file. The operating
system requires that an executable file have a specific structure so that it can determine
where in memory to load the file and what the location of the first instruction is. If the OS
supports multiple file structures, the resulting size of the OS is large: if the OS defines 5
different file structures, it needs to contain the code to support each of these file structures. Every
OS must support at least one structure, that of an executable file, so that the system is able
to load and run programs.
The UNIX OS defines all files to be simply streams of bytes. Each byte is individually
addressable by its offset from the beginning (or end) of the file. In this case, the logical
record size is 1 byte. The file system automatically packs and unpacks bytes into
physical disk blocks, say 512 bytes per block.
The logical record size, the physical block size, and the packing technique determine how many logical
records are in each physical block. The packing can be done by the user's application
program or by the OS. A file may be considered a sequence of blocks. If each block were 512
bytes, a file of 1949 bytes would be allocated 4 blocks (2048 bytes); the last 99 bytes
would be wasted. This is called internal fragmentation. All file systems suffer from internal
fragmentation; the larger the block size, the greater the internal fragmentation.
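The arithmetic in the example can be checked with a couple of lines (the 512-byte block size and the 1949-byte file come from the text above):

```python
# Internal fragmentation for a 1949-byte file stored in 512-byte blocks.
import math

block_size = 512
file_size = 1949

blocks = math.ceil(file_size / block_size)   # 4 blocks
allocated = blocks * block_size              # 2048 bytes
wasted = allocated - file_size               # 99 bytes lost to internal fragmentation

print(blocks, allocated, wasted)             # 4 2048 99
```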
FILE ACCESS METHODS
Files store information, and this information must be accessed and read into computer
memory. There are several ways in which the information in a file can be accessed.
1. Sequential access:
Information in the file is processed in order, i.e., one record after the other.
Magnetic tapes support this type of file access.
Eg : Consider a file consisting of 100 records where the current position of the read/write head is the
45th record. Suppose we want to read the 75th record; then the file is accessed sequentially through
records 45, 46, 47, ..., 74, 75, so the read/write head traverses all the records between 45 and 75.
2. Direct access:
Direct access is also called relative access. Here records can be read or written randomly,
without any order. The direct access method is based on a disk model of a file, because
disks allow random access to any file block.
Eg : Consider a disk containing 256 blocks where the position of the read/write head is the 95th block,
and the block to be read or written is the 250th block. Then we can access the 250th block directly,
without any restriction.
3. Indexed sequential access:
The main disadvantage of the sequential file is that it takes more time to access a record.
In the indexed sequential method, records are organized in sequence based on a key field.
Eg :
Consider a file consisting of 60,000 records. The master index divides the total records into 6 blocks,
each block containing a pointer to a secondary index. The secondary index divides its
10,000 records into 10 indexes, each index containing a pointer to its original
location. Each record in the index file consists of 2 fields: a key field and a pointer field.
DIRECTORY STRUCTURE
Sometimes the file system consists of millions of files; in that situation it is very hard
to manage the files. To manage these files, they are grouped, and one group is loaded into
one partition, called a directory.
E.g :- If user 1 creates a file called sample and then later user 2 creates a file
called sample, then user 2's file will overwrite user 1's file. That is why a single level
directory is not used in multi-user systems.
The problem in a single level directory is that different users may accidentally use
the same name for their files. To avoid this problem, each user needs a private
directory.
Names chosen by one user do not interfere with names chosen by a different user.
The root directory is the first level directory; user 1, user 2, and user 3 are user-level
directories, and A, B, C are files.
A two level directory eliminates name conflicts among users, but it is not
satisfactory for users with a large number of files. To avoid this, create sub-directories
and load the same type of files into a sub-directory; here, each user can
have as many directories as needed. This is the tree-structured directory.
A file can be located using two types of path:
1. Absolute path
2. Relative path
Absolute path : Begins at the root and follows a path down to the specified file,
giving the directory names on the path.
Relative path : A path from the current directory.
4. Acyclic graph directory
When multiple users are working on a project, the project files can be stored in a
common sub-directory shared by the multiple users. This type of directory is called an
acyclic graph directory. The common directory is declared a shared
directory. The graph contains no cycles; with shared files, changes made by one
user are made visible to the other users. A file may now have multiple absolute paths.
When a shared directory/file is deleted, all pointers to the directory/file also have to be
removed.
FILE SYSTEM STRUCTURE
The file system is organized in layers, with application programs at the top, I/O control below,
and the devices at the bottom.
The File Organization Module knows about files and their logical blocks and
physical blocks. By knowing the type of file allocation used and the location of
the file, the file organization module can translate logical block addresses to physical
addresses for the basic file system to transfer. Each file's logical blocks are
numbered from 0 to n. The physical blocks containing the data usually do not
match the logical numbers, so a translation is needed to locate each block.
The Logical File System manages all file-system structure except the actual data
(the contents of files). It maintains file structure via file control blocks. A file control
block (an inode in UNIX file systems) contains information about the file: ownership,
permissions, and the location of the file contents.
Overview:
A Boot Control Block (per volume) can contain information needed by the system
to boot an OS from that volume. If the disk does not contain an OS, this block can
be empty.
A Volume Control Block (per volume) contains volume (or partition) details, such
as the number of blocks in the partition, the size of the blocks, a free-block count and
free-block pointers, and a free-FCB count and FCB pointers.
A Typical File Control Block
A Directory Structure (per file system) is used to organize the files. A PER-FILE
FCB contains many details about the file.
After a file has been created, it can be used for I/O; first, it must be opened. The open()
call passes a file name to the logical file system. The open() system call first
searches the system-wide open-file table to see if the file is already in use by another
process. If it is, a per-process open-file table entry is created pointing to the existing
system-wide open-file table entry. If the file is not already open, the directory structure is
searched for the given file name. Once the file is found, its FCB is copied into the system-
wide open-file table in memory. This table not only stores the FCB but also tracks
the number of processes that have the file open.
Next, an entry is made in the per-process open-file table, with a pointer to the
entry in the system-wide open-file table and some other fields. These fields
include a pointer to the current location in the file (for the next read/write operation)
and the access mode in which the file is open. The open() call returns a pointer to
the appropriate entry in the per-process file-system table, and all file operations are
performed via this pointer. When a process closes the file, the per-process table
entry is removed and the system-wide entry's open count is decremented. When all
users that have opened the file close it, any updated metadata is copied back to the
disk-based directory structure and the system-wide open-file table entry is removed.
The system-wide open-file table contains a copy of the FCB of each open
file, plus other information. The per-process open-file table contains a pointer
to the appropriate entry in the system-wide open-file
table, plus other information.
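A toy sketch of the two tables described above follows; the FCB fields and the open_file/close_file helpers are illustrative names, not an actual OS API.

```python
# Toy model of the system-wide and per-process open-file tables.

system_wide = {}     # filename -> {"fcb": ..., "open_count": int}
per_process = {}     # pid -> {fd -> {"sys_entry": filename, "pos": int, "mode": str}}

def open_file(pid, filename, mode="r"):
    entry = system_wide.setdefault(
        filename, {"fcb": {"owner": pid, "perms": "rw"}, "open_count": 0})
    entry["open_count"] += 1                      # track how many processes have it open
    table = per_process.setdefault(pid, {})
    fd = len(table)                               # per-process entry points at the system-wide entry
    table[fd] = {"sys_entry": filename, "pos": 0, "mode": mode}
    return fd                                     # all later operations go through this handle

def close_file(pid, fd):
    filename = per_process[pid].pop(fd)["sys_entry"]
    system_wide[filename]["open_count"] -= 1
    if system_wide[filename]["open_count"] == 0:  # last closer: entry removed
        del system_wide[filename]

fd = open_file(100, "notes.txt")
close_file(100, fd)
print(system_wide)   # -> {} once every process has closed the file
```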