UNIT IV
Virtual Memory – Demand Paging – Process creation – Page Replacement – Allocation of frames –
Thrashing. File Concept: Access Methods – Directory Structure – File System Mounting – File Sharing –
Protection
7. Consider a logical address space of eight pages of 1024 words each, mapped onto a physical
memory of 32 frames. A. How many bits are there in the logical address? B. How many bits are
there in the physical address?
Answer:
A. Logical address: 8 pages × 1024 words = 8192 = 2^13 addressable words, so 13 bits.
B. Physical address: 32 frames × 1024 words = 32768 = 2^15 addressable words, so 15 bits.
8. What is the basic approach of page replacement?
If no frame is free, find one that is not currently being used and free it.
A frame can be freed by writing its contents to swap space, and changing the page table to
indicate that the page is no longer in memory. Now the freed frame can be used to hold the page for
which the process faulted.
9. What are the various page replacement algorithms used for page replacement?
• FIFO page replacement
• Optimal page replacement
• LRU page replacement
• LRU approximation page replacement
• Counting based page replacement
• Page buffering algorithm.
• Create a file
• Delete a file
• Rename a file
• List directory
• Traverse the file system
28. What are the most common schemes for defining the logical structure of a directory?
The most common schemes for defining the logical structure of a directory are:
• Single-Level Directory
• Two-level Directory
• Tree-Structured Directories
• Acyclic-Graph Directories
• General Graph Directory
32. Give any two criteria to choose a file organization? (APR ‘12)
1. Fast access to a single record or a collection of related records.
2. Easy addition, updating, and removal of records without disrupting other records.
3. Storage efficiency.
Disadvantages:
Restricts user cooperation.
No logical grouping capability (other than by user).
11 MARKS
1. Explain demand paging in detail? (11)
Demand paging:
As there is much less physical memory than virtual memory the operating system must be
careful that it does not use the physical memory inefficiently. One way to save physical memory is to
only load virtual pages that are currently being used by the executing program. For example, a database
program may be run to query a database. In this case, the entire database need not be loaded into
memory, only the data records being examined. Also, if the database query is a search, it does not make
sense to load the code from the database program that deals with adding new records. This technique
of only loading virtual pages into memory as they are accessed is known as
demand paging.
Transfer of a paged memory to contiguous disk space
Demand paging is similar to a paging system with swapping.
Processes reside on secondary storage; when we want to execute a process, we swap it into
memory.
Rather than swapping the entire process into memory, we can use a lazy swapper.
A lazy swapper never swaps a page into memory unless that page will be needed. A swapper that
deals with pages is called a pager.
When a process is to be swapped in, instead of swapping in the whole process, the pager brings only
the necessary pages into memory.
Thus it avoids reading into memory pages that will not be used anyway, decreasing the swap
time and the amount of physical memory needed.
With this scheme we need some hardware support to distinguish between those pages that are
in memory and those pages that are on disk.
The valid–invalid bit scheme can be used for this purpose.
When the bit is set to valid, it indicates that the associated page is both legal and in memory.
If the bit is set to invalid, the page either is not in the process's logical address space or is valid but
currently on disk.
Page table when some pages are not in main memory:
With each page table entry a valid–invalid bit is associated
(v = in memory, i = not in memory).
Initially, the valid–invalid bit is set to i on all entries.
During address translation, if the valid–invalid bit in the page table entry is i, a page fault occurs.
Example of a page table snapshot:
If there is a reference to a page, the first reference to that page will trap to the operating system: this is a page fault.
1. The operating system looks at another table to decide:
• Invalid reference: abort the process.
• Valid reference, but the page is just not in memory: continue with the steps below.
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
Page fault
Memory access for a legal page not in memory causes a page fault trap.
The OS then needs to bring the page into memory.
Steps of bringing a page into memory
1. Find a free frame
2. Read the desired page from disk into the free frame
3. Update the page table: set the valid bit to v.
4. Restart the interrupted instruction
Pure demand paging
Start executing a process with no pages in memory. Bring a page into memory only when needed.
Steps in handling a page fault:
We check an internal table for this process to determine whether the reference was a valid or an
invalid memory access.
If the reference was invalid, we terminate the process. If the reference was valid but we have not
yet brought that page in, we now page it in.
We find a free frame from the free-frame list.
We schedule a disk operation to read the desired page into the newly allocated frame.
When the disk read is complete, we modify the internal table kept with the process and the page
table to indicate that the page is now in memory.
We restart the instruction that was interrupted by the illegal address trap.
The process now can access the page as though it had always been in the memory.
Restarting an instruction after a page fault can be difficult in some cases, for example:
a block move instruction, where part of the move may already have been performed
an instruction that uses auto increment/decrement addressing, where a location may already have
been modified before the fault. (A small simulation of the fault-handling steps is sketched below.)
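The fault-handling steps above can be illustrated with a small user-space simulation in C. This is only a sketch: the names PageTableEntry, handle_page_fault, and translate are invented for the example and are not real kernel interfaces, and a real OS would read the page contents from the backing store rather than print a message.

    /* Minimal user-space sketch of demand paging with a valid-invalid bit. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PAGES 8

    typedef struct {
        int  frame;     /* frame number if resident, -1 otherwise */
        bool valid;     /* true = v (in memory), false = i (not in memory) */
    } PageTableEntry;

    static PageTableEntry page_table[NUM_PAGES];
    static int next_free_frame = 0;

    /* Service a simulated page fault: get a free frame, "read" the page
     * from disk, update the page table, and set the valid bit to v. */
    static void handle_page_fault(int page)
    {
        int frame = next_free_frame++;               /* 1. find a free frame       */
        printf("page fault: loading page %d into frame %d\n",
               page, frame);                          /* 2. read the page from disk */
        page_table[page].frame = frame;               /* 3. reset the page table    */
        page_table[page].valid = true;                /* 4. set the valid bit = v   */
    }                                                 /* 5. restart the instruction */

    static int translate(int page)
    {
        if (!page_table[page].valid)                  /* i bit -> trap to the OS */
            handle_page_fault(page);
        return page_table[page].frame;
    }

    int main(void)
    {
        for (int p = 0; p < NUM_PAGES; p++)
            page_table[p] = (PageTableEntry){ .frame = -1, .valid = false };

        translate(3);   /* first reference: page fault, page brought in */
        translate(3);   /* second reference: bit is v, no fault         */
        return 0;
    }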
2. Explain page replacement in detail? (11) (APR’15, NOV ’15, NOV ’18)
Page replacement
When a fault occurs, the OS loads the faulted page from disk into a page frame of memory. At some
point, the process has used all the page frames it is allowed to use. When this happens, the OS must
replace a page for each page faulted in. That is, it must select a page to throw out of primary memory to
make room. How it does this is determined by the page replacement algorithm. The goal of the
replacement algorithm is to reduce the fault rate by selecting the best victim page to remove.
If total memory requirements exceed the physical memory, then it may be necessary to replace
pages from memory to free frames for new pages.
Page replacement algorithms:
1. FIFO Page Replacement
2. Optimal Page Replacement
3. LRU Page Replacement
4. LRU Approximation Page Replacement
a. Additional Reference Bits Algorithm
b. Second Chance Algorithm:
c. Enhanced Second Chance Algorithm
5. Counting Based Page Replacement
6. Page buffering algorithm
To illustrate the page replacement algorithms, we shall use the reference string
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
FIFO Page Replacement:
A FIFO replacement algorithm associates with each page the time when that page was brought
into memory.
When a page must be replaced, the oldest page is chosen.
We can create a FIFO queue to hold all pages in memory; we replace the page at the head of the
queue.
When the page is brought into the memory it is inserted at the tail of the queue.
For the given reference string the 3 frames are initially empty.
The first 3 references (7, 0, 1) cause page faults and are brought into these empty frames.
The next reference (2) replaces page 7 because page 7 was brought in first.
The process continues as shown below.
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame contents at each of the 15 page faults:
Frame 1: 7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
Frame 2:   0 0 0 3 3 3 2 2 2 1 1 1 0 0
Frame 3:     1 1 1 0 0 0 3 3 3 2 2 2 1
The FIFO page replacement algorithm is easy to implement, but its performance is not always good.
For this reference string it yields 15 page faults.
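The fault count can be verified with a short simulation. The sketch below is illustrative C code (not from the original notes): it replays the same reference string against 3 frames, using a pointer to the oldest frame as the FIFO queue head, and prints 15 faults.

    /* Count FIFO page faults for the reference string with 3 frames. */
    #include <stdio.h>

    int main(void)
    {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof ref / sizeof ref[0];
        int frames[3] = {-1, -1, -1};
        int oldest = 0;                 /* head of the FIFO queue */
        int faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < 3; f++)
                if (frames[f] == ref[i]) { hit = 1; break; }
            if (!hit) {
                frames[oldest] = ref[i];        /* replace the oldest page */
                oldest = (oldest + 1) % 3;
                faults++;
            }
        }
        printf("FIFO page faults: %d\n", faults);   /* prints 15 */
        return 0;
    }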
Optimal Page Replacement:
An optimal page replacement algorithm has the lowest page-fault rate of all algorithms.
Such an algorithm exists and is called OPT or MIN.
It simply replaces the page that will not be used for the longest period of time.
This algorithm yields 9 page faults.
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame contents at each of the 9 page faults:
Frame 1: 7 7 7 2 2 2 2 2 7
Frame 2:   0 0 0 0 4 0 0 0
Frame 3:     1 1 3 3 3 1 1
The reference to page 2 replaces page 7 because page 7 will not be used until reference 18, whereas
page 0 will be used at 5 and page 1 at 14.
No replacement algorithm can process this string in 3 frames with fewer than 9 faults.
But this algorithm is difficult to implement because it requires future knowledge of the reference
string, so it is mainly used in comparison studies.
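Because OPT needs to know the future of the reference string, it is normally simulated offline over a recorded trace, which is easy to do. The sketch below is illustrative C code only: on each fault it evicts the page whose next use lies farthest in the future (or that is never used again), reporting 9 faults for this string with 3 frames.

    /* Count OPT (farthest-future-use) page faults for the reference string. */
    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof ref / sizeof ref[0];
        int frames[NFRAMES] = {-1, -1, -1};
        int faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < NFRAMES; f++)
                if (frames[f] == ref[i]) { hit = 1; break; }
            if (hit) continue;
            faults++;

            int victim = 0, farthest = -1;
            for (int f = 0; f < NFRAMES; f++) {
                if (frames[f] == -1) { victim = f; break; }   /* use an empty frame first */
                int next = n;                                 /* n means "never used again" */
                for (int j = i + 1; j < n; j++)
                    if (ref[j] == frames[f]) { next = j; break; }
                if (next > farthest) { farthest = next; victim = f; }
            }
            frames[victim] = ref[i];
        }
        printf("OPT page faults: %d\n", faults);   /* prints 9 */
        return 0;
    }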
LRU Page Replacement:
LRU stands for least recently used
LRU replacement associates with each page the time of that page’s last use.
When a page must be replaced LRU chooses a page that has not been used for the longest period
of time.
This strategy can be thought of as the optimal algorithm looking backward in time rather than forward.
The LRU algorithm produces 12 page faults. When the reference to page 4 occurs, LRU sees that page 2
was used least recently of the pages in memory, so it replaces page 2. This process continues for the
entire reference string.
LRU with 12 faults is better than FIFO with 15 faults.
The major problem with this algorithm is how to implement LRU replacement; it requires
substantial hardware assistance.
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame contents at each of the 12 page faults:
Frame 1: 7 7 7 2 2 4 4 4 0 1 1 1
Frame 2:   0 0 0 0 0 0 3 3 3 0 0
Frame 3:     1 1 3 3 2 2 2 2 2 7
1. Counters:
In the simplest case, we associate with each page-table entry a time-of-use field and add to the
CPU a logical clock or counter. The clock is incremented for every memory reference. We replace the
page with the smallest time value. This scheme requires a search of the page table to find the LRU page,
and a write to the time-of-use field for each memory access. The times must also be maintained when
page tables are changed.
2. Stack:
Another approach to implementing LRU replacement is to keep a stack of page numbers.
Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most
recently used page is always at the top of the stack and the least recently used page is always at the
bottom.
Because entries must be removed from the middle of the stack, the stack is best implemented as a
doubly linked list with head and tail pointers. Removing a page and putting it on the top of the stack
then requires changing six pointers at worst. Each update is a little more expensive, but there is no
search for a replacement; the tail pointer points to the bottom of the stack, which is the LRU page.
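Either implementation can be imitated in a short simulation. The sketch below is illustrative C code using the counter (time-of-use) approach: every reference stamps the resident page with a logical clock value, and on a fault the page with the smallest stamp is evicted. For the reference string used above with 3 frames it reports 12 faults.

    /* Count LRU page faults using time-of-use counters. */
    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof ref / sizeof ref[0];
        int frames[NFRAMES]   = {-1, -1, -1};
        int last_use[NFRAMES] = { 0,  0,  0};
        int faults = 0;

        for (int clock = 0; clock < n; clock++) {
            int page = ref[clock], hit = -1;
            for (int f = 0; f < NFRAMES; f++)
                if (frames[f] == page) { hit = f; break; }

            if (hit >= 0) {
                last_use[hit] = clock;          /* record time of last use */
                continue;
            }
            faults++;

            int victim = 0;                     /* empty frame or smallest time stamp */
            for (int f = 1; f < NFRAMES; f++)
                if (frames[f] == -1 || last_use[f] < last_use[victim])
                    victim = f;
            frames[victim] = page;
            last_use[victim] = clock;
        }
        printf("LRU page faults: %d\n", faults);   /* prints 12 */
        return 0;
    }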
LRU Approximation Page Replacement:
Additional Reference Bits Algorithm:
At regular intervals, a timer interrupt transfers control to the operating system. The operating
system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other
bits right by 1 bit and discarding the low-order bit.
These 8-bit shift registers contain the history of page use for the last eight time periods. If the
shift register contains 00000000, for example, then the page has not been used for eight time periods;
a page that is used at least once in each period has a shift register value of 11111111.
The number of bits of history can be varied of course and is selected to make the updating as fast
as possible. In the extreme case, the number can be reduced to zero, leaving only the reference bit itself.
This algorithm is called the second chance page replacement algorithm.
Second Chance Algorithm:
The basic algorithm of second chance replacement is a FIFO replacement algorithm. When a page has
been selected, however, we inspect its reference bit. If the value is 0, we proceed to replace this page,
but if the reference bit is set to 1, we give the page a second chance and move on to select the next FIFO
page. One way to implement the second chance algorithm is as a circular queue. A pointer indicates
which page is to be replaced next.
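One way to picture the circular queue is the small sketch below. It is illustrative C only: the Frame structure, the sample reference-bit values, and the choose_victim helper are invented for the example; in a real system the reference bits are set by the hardware on each memory access.

    /* Second-chance (clock) victim selection over a circular set of frames. */
    #include <stdio.h>

    #define NFRAMES 4

    typedef struct {
        int page;      /* page held in this frame */
        int ref_bit;   /* reference bit (1 = recently used) */
    } Frame;

    static Frame frames[NFRAMES];
    static int hand = 0;           /* clock hand: next frame to inspect */

    /* Skip frames whose reference bit is 1, clearing the bit as we go. */
    static int choose_victim(void)
    {
        for (;;) {
            if (frames[hand].ref_bit == 0) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].ref_bit = 0;          /* give the page a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        for (int i = 0; i < NFRAMES; i++)
            frames[i] = (Frame){ .page = 10 + i, .ref_bit = i % 2 };  /* sample state */

        int v = choose_victim();
        printf("victim: frame %d (page %d)\n", v, frames[v].page);
        return 0;
    }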
Enhanced Second Chance Algorithm:
We can enhance the second chance algorithm by considering the reference bit and the modify bit as an
ordered pair. With these two bits, we have the following four possible classes:
1. (0, 0) neither recently used nor modified: the best page to replace.
2. (0, 1) not recently used but modified: not quite as good, because the page will need to be written
out before replacement.
3. (1, 0) recently used but clean: probably will be used again soon.
4. (1, 1) recently used and modified: probably will be used again soon, and the page will need to be
written out to disk before it can be replaced.
The major difference between this algorithm and the simpler clock algorithm is that here we give
preference to those pages that have been modified to reduce the number of I/Os required.
Counting Based Page Replacement:
We keep a counter of the number of references that have been made to each page and develop the
following two schemes.
1. LFU
2. MFU
1. Least Frequently Used (LFU):
The least frequently used (LFU) page replacement algorithm requires that the page with the
smallest count be replaced. The reason for this selection is that an actively used page should have a
large reference count.
A problem arises, however, when a page is used heavily during the initial phase of a process but then is
never used again. Since it was used heavily, it has a large count and remains in memory even though it
is no longer needed.
2. Most Frequently Used (MFU):
The most frequently used (MFU) page replacement algorithm is based on the argument that the
page with the smallest count was probably just brought in and has yet to be used.
Page Buffering Algorithms:
a. A system commonly has a pool of free frames.
b. In this algorithm, when a page fault occurs, a victim page is chosen as before. However, the desired
page is read into a free frame from the pool before the victim is written out.
c. This procedure allows the process to restart as soon as possible, without waiting for the victim
page to be written out.
d. When the victim page is later written out then its frame is added to the free frame pool.
4. What are files and explain the access methods for files? (APR ’14, NOV ‘15)
A file is an abstract data type. The operating system can provide system calls to create, write,
read, reposition, delete and truncate files.
A file is a named collection of related information that is recorded on secondary storage.
A file contains either programs or data.
A file is a sequence of bits, bytes, or lines, as defined by the file's creator and user.
A file has a certain ‘structure’ based on its type.
The operating system must perform specific actions for each of the six basic file operations listed below.
Text file: sequence of characters organized into lines (and possibly pages).
Source file: sequence of subroutines and functions organized as executable statements.
Object file: sequence of bytes organized into blocks understandable by the system's linker.
Executable file: series of code sections that the loader can bring into memory and execute.
File attributes:
Name: the only information kept in human-readable form.
Identifier: a unique tag that identifies the file within the file system.
Type: needed for systems that support different file types.
Location: a pointer to the file's location on the device.
Size: the current file size.
Protection: controls who can do reading, writing, and executing.
Time, date, and user identification: data for protection, security, and usage monitoring.
File operations:
1. Creating a file
2. Writing a file
3. Reading a file
4. Repositioning within a file
5. Deleting a file
6. Truncating a file
Creating a file:
Two steps are necessary to create a file
space in the file system must be found for the file
an entry for the new file must be made in the directory
The directory entry records the name of the file and its location in the file system, and possibly
other information.
Writing a file:
To write a file, we make a system call specifying both the name of the file and the information to
be written to the file.
The system searches the directory to find the location of the file.
The system must keep a write pointer to the location in the file where the next write is to take
place.
The write pointer must be updated whenever a write occurs.
Reading a file:
To read from a file, we use a system call that specifies the name of the file and where(in
memory) the next block of the file should be put.
The directory is searched for the associated directory entry, and the system needs to keep a read
pointer to the location in the file where the next read is to take place.
Once the read has taken place, the read pointer is updated.
The current operation location is kept as a per-process current-file-position-pointer.
Direct access:
Direct access is also called as relative access.
A file is made up of fixed length logical records that allow programs to read and write records
rapidly.
It is based on a disk model of a file, since disks allow random access to any file block.
In direct access, file is viewed as numbered sequence of blocks or records
In direct access, there is no restriction for reading and writing.
It is of great use for immediate access to large amounts of information.
In this method, the file operations must be modified to include the block number as a parameter.
Thus, we have ‘read n’, where n is the block number, rather than ‘read next’ (a small POSIX sketch of
this idea follows below).
The block number provided by the user to the operating system is normally a relative block number,
which is an index relative to the beginning of the file.
Not all operating systems support both sequential and direct access for files.
Some systems require that a file be defined as sequential or direct access when it is created, so that
the file can be accessed only in a manner consistent with its declaration.
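As a concrete illustration, direct access can be approximated from a POSIX user program by seeking to the byte offset of block n before reading. The sketch below is only an example: the file name data.db, the 512-byte block size, and the block number are made up, but open, lseek, and read are standard POSIX calls.

    /* "Read n": fetch relative block n of a file by seeking to n * BLOCK_SIZE. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 512

    int main(void)
    {
        char buf[BLOCK_SIZE];
        int n = 3;                               /* relative block number */
        int fd = open("data.db", O_RDONLY);      /* hypothetical file     */
        if (fd < 0) { perror("open"); return 1; }

        lseek(fd, (off_t)n * BLOCK_SIZE, SEEK_SET);   /* position at block n */
        ssize_t got = read(fd, buf, BLOCK_SIZE);
        printf("read %zd bytes from block %d\n", got, n);

        close(fd);
        return 0;
    }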
Other access methods:
It can be built on top of direct access method and involve the construction of an index for the file.
The index contains pointers to various blocks. To find the desired record in the file, we first search the
index and then use the pointers to access the file directly.
Although the two-level directory structure solves the name-collision problem, it still has disadvantages.
It effectively isolates one user from another.
Isolation is an advantage when the users are completely independent, but it is
a disadvantage when the users want to cooperate on some task and to access one another’s files.
Some systems simply do not allow local user files to be accessed by other users.
This method is the one most used in UNIX and MS-DOS.
We resolve the link by using that path name to locate the real file.
Links are easily identified by their format in the directory entry and are effectively named
indirect pointers.
The operating system ignores these links when traversing directory trees, to preserve the acyclic
structure of the system.
An acyclic-graph directory structure is more flexible than a simple tree structure, but it is also
more complex.
Several problems must be considered carefully. A file may now have multiple absolute path
names. Consequently, distinct file names may refer to the same file.
This situation is similar to the aliasing problem for programming languages.
8. Write notes about the protection strategies provided for files . (APR’15)
When information is stored in a computer system, we want to keep it safe from physical damage
(the issue of reliability) and improper access (the issue of protection). Reliability is generally provided
by duplicate copies of files. Many computers have systems programs that automatically (or through
computer-operator intervention) copy disk files to tape at regular intervals (once per day or week or
month) to maintain a copy should a file system be accidentally destroyed. File systems can be damaged
by hardware problems (such as errors in reading or writing), power surges or failures, head crashes,
dirt, temperature extremes, and vandalism. Files may be deleted accidentally. Bugs in the file-system
software can also cause file contents to be lost. Protection can be provided in many ways.
For a small single-user system, protection may be provided by physically removing the floppy disks and locking
them in a desk drawer or file cabinet. In a multiuser system, however, other mechanisms are needed.
Types of Access
The need to protect files is a direct result of the ability to access files. Systems that do not permit
access to the files of other users do not need protection. Thus, we could provide complete protection by
prohibiting access. Alternatively, we could provide free access with no protection. Both approaches are
too extreme for general use. What is needed is controlled access. Protection mechanisms provide controlled access by
limiting the types of file access that can be made. Access is permitted or denied depending on several
factors, one of which is the type of access requested. Several different types of operations may be
controlled:
Read. Read from the file.
Write. Write or rewrite the file.
Execute. Load the file into memory and execute it.
The most common recent approach is to combine access-control lists with the more general (and
easier to implement) owner, group, and universe access-control scheme. For example, Solaris 2.6 and
beyond use the three categories of access by default but allow access-control lists to be added to
specific files and directories when more fine-grained access control is desired.
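A small example of the owner/group/universe scheme on a POSIX system is the chmod call sketched below, which sets a file's mode to rw-r----- (owner read/write, group read, others nothing); the file name is made up for the illustration.

    /* Set owner/group/universe permission bits on a file. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        if (chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP) != 0) {
            perror("chmod");
            return 1;
        }
        printf("mode of report.txt set to rw-r----- (0640)\n");
        return 0;
    }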
Other Protection Approaches
Another approach to the protection problem is to associate a password with each file. Just as
access to the computer system is often controlled by a password, access to each file can be controlled in
the same way. If the passwords are chosen randomly and changed often, this scheme may be effective in
limiting access to a file.
File Sharing – Multiple Users
On distributed systems, files may be shared across a network; the Network File System (NFS) is a
common distributed file-sharing method. User IDs identify users, allowing permissions and protections
to be per user. Group IDs allow users to be in groups, permitting group access rights.
File Sharing – Remote File Systems
Remote file systems use networking to allow file-system access between systems:
• manually, via programs like FTP
• automatically and seamlessly, using distributed file systems
• semi-automatically, via the World Wide Web.
The client–server model allows clients to mount remote file systems from servers, and a server can
serve multiple clients. Identification of the client and of the user on the client can be insecure or
complicated. NFS is the standard UNIX client–server file-sharing protocol; CIFS is the standard
Windows protocol. Standard operating-system file calls are translated into remote calls. Distributed
information systems (distributed naming services) such as LDAP, DNS, and NIS implement unified
access to the information needed for remote computing.
File Sharing – Failure Modes
Remote file systems add new failure modes due to network failure or server failure. Recovery
from failure can involve state information about the status of each remote request. Stateless protocols
such as NFS include all information in each request, allowing easy recovery but less security.
File Sharing – Consistency Semantics
Consistency semantics specify how multiple users are to access a shared file simultaneously. They are
similar to process-synchronization algorithms but tend to be less complex, due to disk I/O and network
latency (for remote file systems). The Andrew File System (AFS) implements complex remote
file-sharing semantics. The UNIX file system (UFS) implements UNIX semantics: writes to an open file
are visible immediately to other users of the same open file, and the file pointer can be shared to allow
multiple users to read and write concurrently. AFS has session semantics: writes are only visible to
sessions starting after the file is closed.
Semantics of File Sharing
UNIX semantics, used in centralized systems: a read that follows two writes in quick
succession sees the result of the last write. The semantics of file sharing become an issue in distributed
file systems. With a single file server and no client caching, UNIX semantics are easy to implement.
Client file caching improves performance by decreasing demand at the server, but updates to the
cached file are not seen by other clients.
Session semantics (relaxed semantics): changes to an open file are visible only to the process that
modified the file. When the file is closed, the changes become visible to other processes, and the closed
file is sent back to the server. If two or more clients are caching and modifying a file, the final result
depends on who closes last; an arbitrary rule is used to decide who wins. With session semantics,
file-pointer sharing is not possible when a process and its children run on different machines.
- Subsequent reads and writes to the file are handled as routine memory accesses.
- Closing the file results in all the memory-mapped data being written back to disk and removed
from the virtual memory of the process.
Eg: The Solaris 2 operating system uses this technique.
12. Explain how many frames can be allocated to processes?
ALLOCATION OF FRAMES:
- The operating system allocates all its buffer and table space from the free – frame list.
- When this space is not in use by the operating system, it can be used to support user paging.
- Three free frames can be kept reserved on the free-frame list at all times, so that when a page fault
occurs there will be a free frame available.
- A different problem arises when demand paging is combined with multiprogramming, since
multiprogramming puts two (or more) processes in memory at the same time.
Minimum number of frames:
- As the number of frames allocated to each process decreases, the page-fault rate increases,
slowing process execution.
- A minimum number of frames per process is defined by the instruction-set architecture.
- The maximum number is defined by the amount of available physical memory.
Allocation algorithms:
- Equal allocation: split m frames among n processes so that each gets an equal share of m/n frames.
- Any leftover frames can be kept as a free-frame buffer pool.
- Proportional allocation: allocate available memory to each process according to its size (a small
worked sketch follows this list).
- ai = (si / S) * m, where
- ai = number of frames allocated to process i
- si = size of process i
- S = total size of all processes (the sum of the si)
- m = number of available frames
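The worked sketch below applies the proportional-allocation formula in C; the process sizes (10 and 127 pages) and the 62 available frames are sample values chosen for illustration, giving roughly 4 frames for the small process and 57 for the large one.

    /* Proportional frame allocation: a_i = (s_i / S) * m. */
    #include <stdio.h>

    int main(void)
    {
        int size[] = {10, 127};                  /* s_i: process sizes in pages */
        int nproc  = sizeof size / sizeof size[0];
        int m = 62;                              /* available frames */

        int S = 0;
        for (int i = 0; i < nproc; i++)
            S += size[i];                        /* S = sum of all s_i */

        for (int i = 0; i < nproc; i++) {
            int ai = (int)((double)size[i] / S * m);
            printf("process %d: about %d frames\n", i, ai);   /* ~4 and ~57 */
        }
        return 0;
    }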
Global versus local allocation :
Global replacement:
- Allows a process to select a replacement frame from the set of all frames, even if that frame is
currently allocated to some other process.
- One process can take a frame from another.
- One problem is that a process cannot control its own page fault rate.
- Results in greater system throughput; it is the most common method.
Local replacement:
- Requires that each process select from only its own set of allocated frames.
- The number of frames allocated to a process does not change.
- The set of pages in memory for a process is affected by the paging behavior of only that process.
13. Explain about file system mounting
FILE SYSTEM MOUNTING:
- Just as a file must be opened before it is used, a file system must be mounted before it can be made
available to processes on the system.
- The directory structure can be built out of multiple partitions which must be mounted to make
them available within the file system name space.
PROCEDURE:
- The OS is given the name of the device and the location within the file structure at which to attach
the file system (the mount point); a small sketch using the Linux mount(2) call follows this list.
- A mount point is an empty directory
- A system may allow the same file system to be mounted repeatedly at different mount points, or it
may allow only one mount per file system.
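As an illustration of this procedure on Linux, the sketch below calls the mount(2) system call with a device name, a mount point, and a file-system type. All three values are made up for the example, the call is Linux-specific, and it must be run with root privileges.

    /* Attach the file system on a device to an (empty) mount-point directory. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/sdb1", "/mnt/data", "ext4", 0, NULL) != 0) {
            perror("mount");
            return 1;
        }
        printf("/dev/sdb1 mounted at /mnt/data\n");
        return 0;
    }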
11 MARKS
1.What are files and explain the access methods for files? (APR ‘14) (NOV ’15) (Ref.Pg.No.20 Qn.No.4)
2. Write short notes on single and two-level Directory? (APR ‘11) (Ref.Pg.No.23 Qn.No.5)
3. Describe about Acyclic Graph Directory (NOV 13) (Ref.Pg.No.24 Qn.No.6)
4. Explain Tree structured directories? (APR ‘14) (Ref.Pg.No.25 Qn.No.7)
5. Explain Thrashing? (APR ‘11) (APR ‘14) (NOV ’15) (Ref.Pg.No.19 Qn.No.3)
6. Explain in detail about file sharing (APR ‘12) (Ref.Pg.No.28 Qn.No.10)
7. Consider the following page reference string
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
How many page faults would occur for the following replacement algorithms, assuming one, two, three,
four, five, six, or seven frames? (APR ‘11) (Ref.Pg.No.28 Qn.No.9)
8. Explain page replacement in detail? (APR’15) (NOV ’15, NOV ’18) (Ref.Pg.No.15 Qn.No.2)
9. Write notes about the protection strategies provided for files. (APR’15) (Ref.Pg.No.26 Qn.No.8)