OS - Unit 3
PART A
Frame contents for the page-replacement trace:
Frame 1:  2  2  2  2  6  6  6  6  10 10 10
Frame 2:  3  3  3  3  7  7  7  7  11 11
Frame 3:  4  4  4  4  8  8  8  8  12
Page Fault = 12
6. Will the optimal page replacement algorithm suffer from Belady's anomaly? Justify your answer.
No, the optimal page replacement algorithm does not suffer from Belady's anomaly. Belady's anomaly means that increasing the number of page frames can increase the number of page faults; with the optimal algorithm this cannot happen, because the set of pages kept in memory with n frames is always a subset of the set kept with n + 1 frames.
7. Why are page sizes always powers of 2?
Paging is implemented by breaking up an address into a page number and a page offset.
It is most efficient to split the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset.
Because each bit position represents a power of 2, splitting an address between bit positions
results in a page size that is a power of 2.
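Because the page size is a power of 2, this split is just a shift and a mask rather than a division. A minimal sketch in Python (the 4 KB page size and the sample address are assumptions for illustration, not values from the notes):

PAGE_SIZE = 4096                            # assumed page size: 4 KB = 2**12
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12 offset bits

def split(logical_address):
    # Split a logical address into (page number, offset) with bit operations.
    page_number = logical_address >> OFFSET_BITS    # the X page bits
    offset = logical_address & (PAGE_SIZE - 1)      # the Y offset bits
    return page_number, offset

print(split(20000))   # (4, 3616), since 20000 = 4 * 4096 + 3616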
8. Name two differences between logical and physical addresses.
Def:
A logical address is generated by the CPU and is translated into a physical address
by the memory management unit (MMU). Therefore, physical addresses are
generated by the MMU.
Difference:
A physical address refers to an actual location in main memory, while a logical address is the address generated in the process's virtual address space and must be translated before memory is accessed.
(Rough analogies: a physical address is like a MAC address and a logical address like an IP address; or a physical address is like a person and a logical address like the person's name.)
When the number of page frames allocated to a process is too small, the number of page faults becomes very high and the system spends most of its time paging. This high paging activity is known as thrashing. Thrashing decreases CPU utilization, and the operating system may respond by increasing the degree of multiprogramming, which makes it worse.
The time taken by the dispatcher to stop one process and start another running is known as dispatch latency.
To enable segmentation you need to set up a table that describes each segment - a segment
descriptor table. In x86, there are two types of descriptor tables:
Global Descriptor Table (GDT)
Local Descriptor Tables (LDT).
An LDT is set up per process; each process can have its own LDT describing its private segments.
The GDT is shared by everyone - it's global.
13. What is the difference between simple paging and virtual memory paging?
In simple paging, all the pages of a process are loaded into main memory for execution.
In virtual memory paging, not all pages need to be loaded at once; pages are brought into main memory only when they are needed (demand paging).
The working set is the set of pages referenced in the most recent Δ page references.
Disadvantages
The user program is limited to the size of the available main memory.
Overlays
The idea of overlays is to keep in memory only those instructions and data that are needed at any given time.
When a process tries to access a page that has not been brought into memory, a page fault occurs.
The page table contains the page number and the corresponding frame number.
This frame number is combined with the page offset to form the physical memory address that is sent to the memory unit.
24. What are the common strategies to select a free hole from a set of available holes?
First-fit, best-fit, and worst-fit are the common strategies used to select a free hole from the set of available holes.
Instead of swapping the entire process into main memory, a lazy swapper can be used. A lazy swapper never swaps a page into memory unless that page will be needed.
Solutions to external fragmentation:
1. Coalescing
2. Compaction
Global replacement – each process selects a replacement frame from the set
of all frames; one process can take a frame from another.
Local replacement–each process selects from only its own set of allocated
frames.
Example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
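Running FIFO replacement over this reference string also illustrates Belady's anomaly mentioned in question 6: with 3 frames there are 9 page faults, but with 4 frames there are 10. A minimal simulation sketch (the function name is illustrative):

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    # Count page faults under FIFO replacement with the given number of frames.
    frames = deque()            # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == num_frames:   # no free frame: evict the oldest page
                frames.popleft()
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))   # 9 page faults
print(fifo_page_faults(refs, 4))   # 10 page faults - more frames, yet more faults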
Paging:
Transparent to programmer (system allocates memory)
No separate protection
No separate compiling
No shared code
Segmentation:
Involves programmer (allocates memory to specific function inside code)
Separate protection
Separate compiling
Shared code supported
If page faults and swapping happen very frequently, the operating system has to spend more time swapping pages than executing the process. This condition is called thrashing: a process is thrashing if it spends more time paging than executing.
Consider the following segment table:

Segment   Base    Limit
0         219     600
1         2300    14
2         90      100
3         1327    580
4         1952    96

What are the physical addresses for the logical addresses (0, 430) and (1, 10)?
(0, 430): offset 430 is within limit 600, so physical address = 219 + 430 = 649.
(1, 10): offset 10 is within limit 14, so physical address = 2300 + 10 = 2310.
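The same translation can be written as a short routine that checks the offset against the segment limit before adding the base. A minimal sketch using the segment table above (the function name and the error handling are illustrative assumptions):

# segment number -> (base, limit), taken from the table above
segment_table = {0: (219, 600), 1: (2300, 14), 2: (90, 100), 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    # Translate a (segment, offset) logical address into a physical address.
    base, limit = segment_table[segment]
    if offset >= limit:     # the offset must lie within the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(0, 430))   # 219 + 430 = 649
print(translate(1, 10))    # 2300 + 10 = 2310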
PART B
1. Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
2. Load time: Must generate relocatable code if the memory location is not known at compile time.
3. Execution time: Binding is delayed until run time; needs hardware support for address maps (e.g., base and limit registers).
Address Binding
Logical and physical addresses are the same in the compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.
The user program deals with logical addresses; it never sees the real
physical addresses
Dynamic Loading
A routine is not loaded until it is called; this gives better memory-space utilization, since an unused routine is never loaded.
Dynamic Linking
Linking is postponed until execution time; this is particularly useful for libraries.
A small piece of code called a stub is used to locate the appropriate memory-resident library routine.
Overlays:
The idea is to keep in memory only those instructions and data that are needed at any given time.
Swapping
The purpose of swapping in an operating system is to bring data that resides on the hard disk into RAM so that application programs can use it; swapping is used only when the data is not already present in RAM. Swapping consists of two operations: swap-in and swap-out.
Swap-out is a method of removing a process from RAM and adding it to the hard disk.
Swap-in is a method of removing a program from a hard disk and putting it back into the
main memory or RAM.
Fig.3.3. Swapping
Example: Suppose the user process's size is 2048 KB and swapping is done to a standard hard disk with a data transfer rate of 1 Mbps. Let us calculate how long it will take to transfer the process from main memory to secondary memory.
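A rough calculation (treating the quoted rate as about 1 MB per second, a common simplification in such examples): transfer time = process size / transfer rate = 2048 KB / 1024 KB per second = 2 seconds, i.e. about 2000 milliseconds for a single swap-out or swap-in. If the 1 Mbps figure is read literally as one megabit per second, the same transfer would take roughly 16 seconds.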
Disadvantages of Swapping
1. If the computer system loses power, the user may lose all information related to the
program in case of substantial swapping activity.
2. If the swapping algorithm is not good, it can increase the number of page faults and decrease overall processing performance.
Most systems allow a program to allocate more memory to its address space during execution; data allocated in the heap segment of a program is an example of such allocated memory. What is required to support dynamic memory allocation in the following schemes?
i) Contiguous memory allocation
Memory Protection:
Protection is provided with a relocation (base) register and a limit register; every address generated by the CPU is checked against these registers before it reaches memory.
Memory Allocation
Each process is contained in a single contiguous section of memory. There are two
methods namely:
Fixed–Partition Method:
Divide memory into fixed size partitions, where each partition has exactly one
process.
Variable-partition Method:
Divide memory into variable size partitions, depending upon the size of the
incoming process.
Hole-selection strategies:
First-fit: Allocate the first hole that is big enough; searching can stop as soon as a large-enough hole is found.
Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search the entire list.
Solutions to external fragmentation:
1. Coalescing: Merge adjacent holes to form a single larger hole in main memory.
2. Compaction: Move all processes towards one end of memory and all holes towards the other end, producing one large hole of available memory.
Paging
It avoids the considerable problem of fitting variable-sized memory chunks onto the backing store.
Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
Page offset (d) – combined with base address to define the physical address
i.e., Physical address = base address +offset
Paging Hardware
Translation Look-aside Buffer (TLB)
A special, small, fast-lookup hardware cache (associative memory). It is faster than a page-table lookup in main memory, but more expensive.
When a logical address is generated by CPU, its page number is presented to TLB.
TLB hit: If the page number is found, its frame number is immediately
available & is used to access memory
TLB miss: If the page number is not in the TLB, a memory reference to the
page table must be made.
Hit ratio: Percentage of times that a particular page is found in the TLB.
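The hit ratio lets us estimate the effective access time. As a worked example with assumed timings (not taken from the notes): suppose one memory access takes 100 ns and the hit ratio is 80%. A TLB hit needs a single memory access (100 ns), while a TLB miss needs an extra access to read the page table (200 ns in total). Effective access time = 0.80 × 100 + 0.20 × 200 = 120 ns.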
valid (v) - indicates that the associated page is in the process logical address
space, and is thus a legal page
invalid(i) - indicates that the page is not in the process logical address space
Most systems allow a program to allocate more memory to its address space during execution; data allocated in the heap segment of a program is an example of such allocated memory. What is required to support dynamic memory allocation in the following schemes?
ii) Pure segmentation
Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments.
A segment is a logical unit such as: Main program, Procedure, Function, Method, Object,
Local variables, global variables, Common block, Stack, Symbol table, arrays
User's View of a Program
EXAMPLE:
Sharing of Segments
The maximum number of segments per process is 16 K (16,384), and each segment can be as large as 4 gigabytes.
The logical-address space of a process is divided into two partitions.
The first partition consists of up to 8 K segments that are private to that process.
The second partition consists of up to 8 K segments that are shared among all the processes.
Information about the first partition is kept in the local descriptor table (LDT); information about the second partition is kept in the global descriptor table (GDT).
Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a particular segment, including the base location and length of the segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit number with the following fields:
s (segment number, 13 bits) | g (GDT/LDT indicator, 1 bit) | p (protection, 2 bits)
Here s designates the segment number, g indicates whether the segment is in the GDT or LDT, and p deals with protection.
The offset is a 32-bit number specifying the location of the byte.
The base and limit information about the segment are used to generate a linear-address.
First, the limit is used to check for address validity.
If the address is not valid, a memory fault is generated, resulting in a trap to the operating
system.
If it is valid, then the value of the offset is added to the value of the base, resulting in a 32-bit
linear address. This address is then translated into a physical address.
The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits.
Since we page the page table, the page number is further divided into a 10-bit page directory
pointer and a 10-bit page table pointer.
The linear address thus has the following form:
p1 (page directory index, 10 bits) | p2 (page table index, 10 bits) | d (offset, 12 bits)
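The 10 + 10 + 12 split of a 32-bit linear address can be reproduced with shifts and masks. A minimal sketch (the sample address is an arbitrary assumption):

def split_linear(addr):
    # Split a 32-bit linear address into (page-directory index, page-table index, offset).
    d  = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF     # next 10 bits: page-table index
    p1 = (addr >> 22) & 0x3FF     # top 10 bits: page-directory index
    return p1, p2, d

print(split_linear(0x00403A10))   # (1, 3, 2576), where 2576 = 0xA10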
8. Explain any two structures of the page table with neat diagrams.
1. Hierarchical Paging
Break up the page table into smaller pieces, because a very large page table cannot easily be kept contiguous in memory or searched quickly. In two-level paging the page number is itself paged, so the logical address has the form:
p1 (outer page-table index) | p2 (inner page-table index) | d (offset)
2. Hashed Page Table
Each entry in the hash table contains a linked list of elements that hash to the same location.
Working Procedure:
The virtual page number in the virtual address is hashed into the hash
table.
Virtual page number is compared to field(a) in the 1st element in the linked list.
If there is a match, the corresponding page frame (field (b)) is used to form the
desired physical address.
If there is no match, subsequent entries in the linked list are searched for
a matching virtual page number.
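The working procedure maps naturally onto a hash table whose buckets are chains of (virtual page number, page frame) pairs. A minimal sketch (the table size and the sample entries are assumptions):

TABLE_SIZE = 16
hash_table = [[] for _ in range(TABLE_SIZE)]   # each slot holds a chain of entries

def insert(vpn, frame):
    hash_table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    # Walk the chain at the hashed slot until the virtual page number matches.
    for entry_vpn, frame in hash_table[vpn % TABLE_SIZE]:
        if entry_vpn == vpn:    # field (a) matches
            return frame        # field (b): the page frame
    raise KeyError("page fault: virtual page not in the hashed page table")

insert(5, 42)
insert(21, 7)        # 21 % 16 == 5, so it chains onto the same slot as page 5
print(lookup(21))    # 7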
3. Inverted Page Table
It has one entry for each real page (frame) of memory, and each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page. So there is only one page table in the system.
Demerit: It increases the amount of time needed to search the table when a page reference occurs.
Shared code
One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).
Shared code must appear in same location in the logical address space of
all processes
EXAMPLE:
IA-32 Architecture
The CPU generates logical addresses, which are given to the segmentation
unit.
The segmentation unit produces a linear address for each logical address.
The linear address is then given to the paging unit, which in turn generates
the physical address in main memory.
Thus, the segmentation and paging units form the equivalent of the memory-management unit (MMU).
IA-32 Segmentation
Information about the first partition is kept in the local descriptor table (LDT);
Information about the second partition is kept in the global descriptor table
(GDT).
Each entry in the LDT and GDT consists of an 8-byte segment descriptor
with detailed information about a particular segment, including the base
location and limit of that segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit
number:
The machine has six segment registers, allowing six segments to be addressed at any one time by a process. It also has six 8-byte microprogram registers to hold the corresponding descriptors from either the LDT or GDT.
The base and limit information about the segment in question is used to
generate a linear address.
If it is valid, then the value of the offset is added to the value of the base,
resulting in a 32-bit linear address.
IA-32 Paging
For 4-KB pages, IA-32 uses a two-level paging scheme in which the 32-bit linear address is divided into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit page offset.
The Page directory entry points to an inner page table that is indexed by the contents
of the innermost 10 bits in the linear address.
To improve the efficiency of physical memory use, IA-32 page tables can be swapped to
disk.
In this case, an invalid bit is used in the page directory entry to indicate
whether the table to which the entry is pointing is in memory or on disk.
Physical Address Extension (PAE) also increased the page-directory and page-table entries from 32 to 64 bits in size, which allowed the base address of page tables and page frames to extend from 20 to 24 bits.
Virtual Memory
Advantages:
Allows programs to be larger than physical memory.
Demand paging
Demand segmentation
Demand Paging
Lazy Swapper - Never swaps a page into memory unless that page will be
needed.
Advantages
Faster response
More users
Valid-Invalid bit
Fig.3.25 Page table when some pages are not in main memory
fork() creates a copy of the parent's address space for the child, duplicating the pages belonging to the parent.
Since many child processes invoke the exec() system call immediately after creation, copying the parent's address space may be unnecessary.
Instead, we can use a technique known as copy-on-write, which works by allowing the
parent and child processes initially to share the same pages. These shared pages are
marked as copy-on-write pages, meaning that if either process writes to a shared page, a
copy of the shared page is created.
The figure shows the contents of physical memory before and after process 1 modifies page C.
When the copy-on-write technique is used, only the pages that are modified by
either process are copied;
All unmodified pages can be shared by the parent and child processes.
Pages that cannot be modified (pages containing executable code) can be shared
by the parent and child. Copy-on-write is a common technique used by several
operating systems.
These free pages are typically allocated when the stack or heap for a process must
expand or when there are copy-on-write pages to be managed.
vfork() (for virtual memory fork)—that operates differently from fork() with copy-
on-write. With vfork(), the parent process is suspended, and the child process
uses the address space of the parent.
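The fork-then-exec pattern that makes copy-on-write pay off looks roughly like this on a Unix system. This is only a sketch of the usage pattern (the exec'ed program is an arbitrary choice); the page-level copy-on-write itself is done transparently by the kernel:

import os, sys

pid = os.fork()                     # parent and child initially share pages (copy-on-write)
if pid == 0:
    # Child: it touches almost none of the parent's pages before exec(),
    # so with copy-on-write almost nothing is actually copied.
    os.execvp("ls", ["ls", "-l"])   # replaces the child's address space entirely
else:
    os.waitpid(pid, 0)              # parent waits for the child to finish
    print("child finished", file=sys.stderr)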
11. Under what circumstances do page faults occur? Describe the actions taken by
the operating system when a page fault occurs.
Page Fault
b. If the reference is valid but the page has not yet been brought into main memory, we page it in.
4. Reset the page table to indicate that the page is now in memory.
When the OS sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for that page.
After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory.
1. Trap to the OS
4. Check whether the reference was legal and find the location of page on disk.
10. Reset the page table to indicate that the page is now in memory.
Page Replacement
If no frames are free, we could find one that is not currently being used
& free it.
We can free a frame by writing its contents to swap space and changing the page table to indicate that the page is no longer in memory.
Then we can use that freed frame to hold the page for which the process
faulted.
Write the victim page to the disk, change the page & frame tables
accordingly.
3. Read the desired page into the (new) free frame. Update the page and frame tables.
Note:
If no frames are free, two page transfers are required & this situation effectively
doubles the page- fault service time.
Modify (dirty) bit: Set when a page is modified. If the victim page has not been modified, it need not be written back to disk, which reduces the page-fault service time.
Thrashing
If page faults and swapping happen very frequently, the operating system has to spend more time swapping pages than executing processes. This state is termed thrashing. Because of thrashing, CPU utilization is reduced.
Example
If any process does not have the number of frames that it needs to support pages in
active use then it will quickly page fault.
And at this point, the process must replace some pages. As all the pages of the process
are actively in use, it must replace a page that will be needed again right away.
Consequently, the process will quickly fault again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity by a process is
called thrashing.
During thrashing, the CPU spends less time on actual productive work and more time swapping.
Fig.3.30 Thrashing
Causes of Thrashing
If a process does not have "enough" pages, the page-fault rate is very high. This leads to low CPU utilization, and the operating system may respond by further increasing the degree of multiprogramming.
If global replacement is used then as processes enter the main memory they
tend to steal frames belonging to other processes.
Eventually all processes will not have enough frames and hence the page
fault rate becomes very high.
Effect of Thrashing
When thrashing starts, the operating system tries to apply either the global page-replacement algorithm or the local page-replacement algorithm.
With global page replacement, a faulting process can take a frame from any process, so whenever thrashing occurs it keeps pulling in more frames. As a result, no process ends up with enough frames, and the thrashing only increases; global replacement is therefore not suitable once thrashing has started.
Unlike global replacement, local page replacement selects victims only from the faulting process's own frames, so one thrashing process cannot steal frames from the others, and thrashing is more likely to be contained. Local replacement has disadvantages of its own, however (a thrashing process still slows the whole system down by monopolizing the paging device), so it only limits the problem rather than solving it.
The working set is the set of pages referenced in the most recent Δ page references.
D = Σ WSSi, where WSSi is the working-set size of process i and D is the total demand for frames; if D exceeds the number of available frames, thrashing will occur.
The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it is a rather clumsy way to control thrashing. The page-fault frequency (PFF) strategy is a more direct approach.
The main problem is how to prevent thrashing. Since thrashing is characterized by a high page-fault rate, we want to control the page-fault rate.
When the Page fault is too high, then we know that the process needs more frames.
Conversely, if the page fault-rate is too low then the process may have too many frames.
We can establish upper and lower bounds on the desired page-fault rate. If the actual page-fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls below the lower limit, we remove a frame from the process.
Thus with this, we can directly measure and control the page fault rate in order to
prevent thrashing.
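The page-fault-frequency policy can be expressed as a very small control rule: measure each process's fault rate and move frames accordingly. A minimal sketch (the bounds, the process record, and the function name are illustrative assumptions):

UPPER = 0.10   # assumed upper bound on the page-fault rate (faults per reference)
LOWER = 0.02   # assumed lower bound

def adjust_frames(process):
    # Give a frame to a heavily faulting process, or reclaim one from an idle process.
    rate = process["faults"] / max(process["references"], 1)
    if rate > UPPER:
        process["frames"] += 1      # the process needs more frames
    elif rate < LOWER and process["frames"] > 1:
        process["frames"] -= 1      # the process has more frames than it needs
    return process["frames"]

p = {"faults": 30, "references": 200, "frames": 4}
print(adjust_frames(p))   # fault rate 0.15 > UPPER, so the process gets a fifth frame -> 5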
14. What are the advantage and disadvantage of contiguous and non-contiguous
memory allocation?
Contiguous allocation is simple, but it suffers from external fragmentation; paging and segmentation provide non-contiguous allocation.
Advantages of paging
Multiprogramming is supported
Disadvantages of paging:
Some memory space stays unused when the available blocks are not sufficient to hold the address space of a job, so the job cannot run.
Advantages of segmentation:
No internal fragmentation; segments correspond to the programmer's logical units, which makes sharing and protection easier.
Disadvantages of segmentation:
Main memory will always limit the size of segmentation, that is,
segmentation is bound by the size limit of memory
15. Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?
First-fit: 212 KB goes into the 500 KB partition (leaving 288 KB); 417 KB goes into the 600 KB partition (leaving 183 KB); 112 KB goes into the 288 KB leftover; 426 KB must wait.
Best-fit: 212 KB goes into the 300 KB partition; 417 KB into the 500 KB partition; 112 KB into the 200 KB partition; 426 KB into the 600 KB partition.
Worst-fit: 212 KB goes into the 600 KB partition (leaving 388 KB); 417 KB into the 500 KB partition; 112 KB into the 388 KB leftover; 426 KB must wait.
In this example, best-fit makes the most efficient use of memory, since it is the only algorithm that places all four processes.
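The placements above can be checked with a short simulation of the three strategies over a list of holes, splitting a hole when a process is placed in it (the function and variable names are illustrative):

def allocate(holes, processes, choose):
    # Place each process in the hole picked by `choose`; -1 means the process must wait.
    placements = []
    for size in processes:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placements.append(-1)       # no hole is large enough
            continue
        i = choose(candidates, holes)
        placements.append(holes[i])     # record the size of the hole that was chosen
        holes[i] -= size                # the remainder stays available as a smaller hole
    return placements

first_fit = lambda c, h: c[0]
best_fit  = lambda c, h: min(c, key=lambda i: h[i])
worst_fit = lambda c, h: max(c, key=lambda i: h[i])

procs = [212, 417, 112, 426]
print(allocate([100, 500, 200, 300, 600], procs, first_fit))   # [500, 600, 288, -1]
print(allocate([100, 500, 200, 300, 600], procs, best_fit))    # [300, 500, 200, 600]
print(allocate([100, 500, 200, 300, 600], procs, worst_fit))   # [600, 500, 388, -1]

The output matches the placements listed above: only best-fit places all four processes.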