
os-Unit 3

Unit 3 of the Operating Systems course at Mailam Engineering College covers memory management concepts including external fragmentation, paging, segmentation, and virtual memory. It discusses various algorithms for page replacement, the significance of address binding, and the implications of thrashing. Additionally, it explains memory management techniques such as swapping, dynamic loading, and overlays, along with the differences between logical and physical addresses.


CS3451 –Operating Systems Unit 3 Mailam Engineering College

Prepared By: Mrs.G.Vasanthi,AP/IT Page 1



UNIT III MEMORY MANAGEMENT 10


Main Memory - Swapping - Contiguous Memory Allocation – Paging - Structure of the Page Table
- Segmentation, Segmentation with paging; Virtual Memory - Demand Paging – Copy on Write -
Page Replacement - Allocation of Frames –Thrashing.

PART A

1. Define external fragmentation.


 External fragmentation occurs when there is enough total free space in memory to
satisfy a process's memory request, but the request cannot be satisfied because the
available memory is non-contiguous; it is scattered across many small holes.
2. What is the counting based page replacement algorithm?
 Counting-based page replacement keeps a counter of the number of references that
have been made to each page.
 LFU (least frequently used) algorithm: replaces the page with the smallest count.
 MFU (most frequently used) algorithm: replaces the page with the largest count,
based on the argument that the page with the smallest count was probably just
brought in and has yet to be used.
3. Under what circumstances would a user be better off using a time sharing system
rather than a PC or a single user workstation?
 When there are few other users, the task is large, and the hardware is fast, time-
sharing makes sense.
 The full power of the system can be brought to bear on the user’s problem. The
problem can be solved faster than on a personal computer.
 Another case occurs when lots of other users need resources at the same time.
 A personal computer is best when the job is small enough to be executed reasonably
on it and when performance is sufficient to execute the program to the user’s
satisfaction.
4. State the effect of thrashing in an operating system.
 Thrashing causes CPU utilization to drop sharply, because the system spends most of
its time servicing page faults rather than executing processes.
 When thrashing starts, the operating system tries to recover by applying either the
global page replacement algorithm or the local page replacement algorithm.
5. Consider the following page reference string
1,2,3,4,5,6,7,8,9,10,11,12
How many page faults would occur for the FIFO page replacement algorithm, and what
is the page fault ratio? Assume there are four page frames.
Frame 1: 1 1 1 1 5 5 5 5 9 9 9 9


Frame 2:   2 2 2 2 6 6 6 6 10 10 10
Frame 3:     3 3 3 3 7 7 7 7 11 11
Frame 4:       4 4 4 4 8 8 8 8 12
Page Faults = 12; Page fault ratio = 12/12 = 100% (all 12 referenced pages are
distinct, so every reference causes a fault).
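The count can be verified with a short FIFO simulation (a sketch: frames held in a queue, oldest page evicted first):

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Simulate FIFO page replacement and count page faults."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
faults = fifo_page_faults(refs, 4)
print(faults, f"{faults / len(refs):.0%}")  # 12 faults, 100% fault ratio
```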
6. Will the optimal page replacement algorithm suffer from Belady's anomaly? Justify
your answer.
 No, the optimal page replacement algorithm does not suffer from Belady's anomaly.
Belady's anomaly means that increasing the number of page frames can increase the
number of page faults. With the optimal algorithm, adding frames can never increase
the number of page faults.
7. Why are page sizes always powers of 2?
 Paging is implemented by breaking up an address into a page and offset number.
 It is most efficient to break the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset.
 Because each bit position represents a power of 2, splitting an address between bits
results in a page size that is a power of 2.
8. Name two differences between logical and physical addresses.
Def:
 A logical address is generated by the CPU and is translated into a physical address
by the memory management unit (MMU). Therefore, physical addresses are
generated by the MMU.
Differences:
 A physical address refers to a location in main memory; a logical address refers
to a location in the process's virtual address space.
 The user program deals only with logical addresses; it never sees the physical
addresses.

9. What do you mean by thrashing?

 Thrashing is a state of very high paging activity: the number of page faults is high
even though many page frames have been allocated.
 A thrashing process spends more time paging than executing, so CPU utilization
drops sharply; the OS, seeing low CPU utilization, may increase the degree of
multiprogramming, which makes the thrashing worse.

10. Define the term 'Dispatch Latency'.

 The time taken by the dispatcher to stop one process and start another running is


known as dispatch latency.

11. Mention the significance of LDT and GDT in segmentation.

To enable segmentation you need to set up a table that describes each segment - a segment
descriptor table. In x86, there are two types of descriptor tables:
 Global Descriptor Table (GDT)
 Local Descriptor Tables (LDT).
An LDT describes segments private to a single process; each process can have its own
LDT.
The GDT is shared by all processes - it is global.

12. What are overlays?

To enable a process to be larger than the amount of memory allocated to it,
overlays are used. The idea of overlays is to keep in memory only those
instructions and data that are needed at a given time.

13. What is the difference between simple paging and virtual memory paging?

In simple paging, all the pages of a process are loaded into main memory for execution.
In virtual memory paging, not all pages need be loaded at once; pages are brought into
main memory only when they are referenced (demand paging).

14. What is the principle of locality?

 Working-Set Strategy is based on the assumption of the model of locality.

 Locality is defined as the set of pages actively used together.

 Working set is the set of pages in the most recent ∆ page references

 ∆ is the working set window.

 if ∆ too small, it will not encompass entire locality

 if ∆ too large, it will encompass several localities

15. What are the disadvantages of single contiguous memory allocation?

 For single contiguous allocation, no special hardware is required.

 Only a simple hardware protection mechanism is required to ensure that user
programs do not accidentally tamper with the operating system.

Disadvantages


 Memory is not fully utilized,

 Processor (CPU) is also not fully utilized,

 User Program is being limited to the size available in the main memory.

16. Compare swapping and overlays.

Swapping

 A process can be swapped temporarily out of memory to a backing store
(SWAP OUT) and then brought back into memory for continued execution
(SWAP IN).

Overlays

 Enable a process larger than the amount of memory allocated to it.

 At a given time, the needed instructions & data are to be kept within a
memory.

17. What is the advantage of demand paging?

 Never brings a page into memory until it is required.

 We could start a process with no pages in memory.

18. Define TLB.

 TLB - Translation Look aside Buffer


 It is a fast lookup hardware cache.
 It contains the recently or frequently used page table entries.
 It has two parts: Key (tag) & Value.
 The associative lookup hardware is expensive.

19. What is address binding?

Binding logical address with physical address is called as address binding.

20. What do you mean by page fault?

A page fault occurs when a process tries to access a page that has not been
brought into main memory.

21. Why is paging used?

 Paging avoids the considerable problem of fitting the varying sized


memory chunks onto the backing store.

22. What is Segmentation?

 Segmentation is a memory-management scheme that supports the user view of memory.

 A program is a collection of segments.

 A segment is a logical unit such as a main program, procedure, function,
local variables, or global variables.

23. What is the purpose of paging the page table?

 When the page table itself is large, it is paged as well (hierarchical paging), so
that it need not occupy one large contiguous block of memory.
 The outer page table locates the page of the page table, which gives the frame
number; this frame number is combined with the page offset to define the physical
memory address that is sent to the memory unit.

24. What are the common strategies to select a free hole from a set of
available holes?

The most common strategies are

a. First fit- allocate the first hole

b. Best fit- allocate the smallest hole

c. Worst fit- allocate the largest hole

25. Define lazy swapper.

 Instead of swapping an entire process into main memory, a lazy swapper is used.
 A lazy swapper never swaps a page into memory unless that page will be
needed.

26. Define effective access time.

 Let p be the probability of a page fault (0 ≤ p ≤ 1). The value of p is expected to
be close to 0; that is, there will be only a few page faults.

Effective access time = (1 − p) × ma + p × page fault time

 where ma is the memory access time and p is the page-fault probability.
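As a worked example with assumed timings (ma = 200 ns, page-fault service time = 8 ms, p = 0.001 — values chosen only for illustration, not from the text):

```python
def effective_access_time(p, ma_ns, fault_time_ns):
    """EAT = (1 - p) * ma + p * page-fault service time, all in nanoseconds."""
    return (1 - p) * ma_ns + p * fault_time_ns

# Assumed values: 200 ns memory access, 8 ms = 8,000,000 ns fault service time.
eat = effective_access_time(p=0.001, ma_ns=200, fault_time_ns=8_000_000)
print(eat)  # 8199.8 ns: even a 0.1% fault rate slows memory access ~40x
```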

27. What are the major problems to implement demand paging?


The two major problems in implementing demand paging are developing

a. Frame allocation algorithm

b. Page replacement algorithm

28. What is Internal Fragmentation?

Allocated memory may be slightly larger than requested memory.

29. Differentiate frames from Pages.

 Divide logical memory into blocks of same size called “pages”.

 Divide physical memory into fixed-sized blocks called “frames”

30. What is External Fragmentation? How can it be solved?

 External fragmentation takes place when enough total memory space
exists to satisfy a request, but it is not contiguous: storage is
fragmented into a large number of small holes scattered throughout
main memory.

Solutions:

1. Coalescing

2. Compaction

31. Differentiate between global and local page replacement algorithms.

 Global replacement – each process selects a replacement frame from the set
of all frames; one process can take a frame from another.

 Local replacement–each process selects from only its own set of allocated
frames.

32. What is meant by Belady's anomaly?

 For some page replacement algorithms (such as FIFO), the number of page faults
may increase as the number of available frames increases. This is called Belady's
anomaly.

Example:

Reference string:1,2,3,4,1,2,5,1,2,3,4,5

 If the number of available frames = 3, then the number of page faults = 9


 If the number of available frames = 4, then the number of page faults = 10
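Both fault counts can be reproduced with a short FIFO simulation (a sketch; reference string as given above):

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9
print(fifo_page_faults(refs, 4))  # 10 — more frames, yet more faults
```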

33. Differentiate between page and segment.

Paging:
 Transparent to programmer (system allocates memory)
 No separate protection
 No separate compiling
 No shared code
Segmentation:
 Involves programmer (allocates memory to specific function inside code)
 Separate protection
 Separate compiling
 Shared code

34. Differentiate internal and external fragmentation.

External Fragmentation – This takes place when enough total memory
space exists to satisfy a request, but it is not contiguous: storage is
fragmented into a large number of small holes scattered throughout
main memory.

Internal Fragmentation – Allocated memory may be slightly larger than
the requested memory.

35. Define swapping.

 Swapping is a memory management scheme in which any process can be
temporarily swapped from main memory to secondary memory so that the
main memory can be made available for other processes.

 It is used to improve main memory utilization. In secondary memory, the
place where the swapped-out process is stored is called swap space.

36. What is thrashing?


 If the page fault and swapping happens very frequently at a higher rate, then
the operating system has to spend more time swapping these pages. This


state in the operating system is termed thrashing.

37. When does thrashing occur?

 Thrashing occurs when a process does not have enough frames for the pages it is
actively using, so it continuously faults, replacing pages that it will need again
immediately.

38. What is demand paging?

 Demand paging is a technique in which a page is brought into the
main memory only when it is needed or demanded by the CPU.

39. Consider the following segment table.

Segment   Base   Length
   0       219    600
   1      2300     14
   2        90    100
   3      1327    580
   4      1952     96

What are the physical addresses for the logical addresses (0,430), (1,10), (2,500),
(3,400) and (4,112)?

(0,430): 219 + 430 = 649

(1,10): 2300 + 10 = 2310

(2,500): illegal reference (offset 500 > limit 100); trap to operating system

(3,400): 1327 + 400 = 1727

(4,112): illegal reference (offset 112 > limit 96); trap to operating system
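The limit-check-then-add rule behind these answers can be sketched as:

```python
# Segment table from the question: segment -> (base, length).
SEGMENTS = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
            3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    """Return the physical address, or None for an illegal reference (trap)."""
    base, length = SEGMENTS[segment]
    if offset >= length:
        return None                     # offset beyond segment limit: trap to OS
    return base + offset

for seg, off in [(0, 430), (1, 10), (2, 500), (3, 400), (4, 112)]:
    phys = translate(seg, off)
    print((seg, off), "->", phys if phys is not None else "trap")
```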


PART B

1. Explain in detail about memory management background.

Memory Management: Background

1. Compile time: Must generate absolute code if the memory location is known in
advance.

2. Load time: Must generate relocatable code if the memory location is not known
at compile time.

3. Execution time: Binding is delayed until run time; needs hardware support for
address maps (e.g., base and limit registers).

Address Binding

• Address binding: the mapping of instructions and data from one address space to
another, i.e., binding logical addresses to physical addresses in memory.

Multistep Processing of a User Program

Fig.3.1 Processing of a User Program

Logical vs. Physical Address Space

 Logical address – generated by the CPU; also referred to as "virtual address"

 Physical address – An address seen by the memory unit.


 Logical and physical addresses are the same at compile time and load time.

 Logical (virtual) and physical addresses differ at execution time.

Memory-Management Unit (MMU)

 It is a hardware device that maps Logical address to physical address.

 In this scheme, the relocation register's value is added to every logical
address generated by a user process.

 The user program deals with logical addresses; it never sees the real
physical addresses

Dynamic relocation using relocation register

Fig.3.2 Dynamic relocation using relocation register
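The relocation scheme amounts to one addition per memory reference; a minimal sketch (the relocation value 14000 and the logical address 346 are illustrative, not from the text):

```python
# The MMU adds the relocation (base) register to every logical address
# the CPU generates; the user program never sees the physical address.
RELOCATION_REGISTER = 14000   # assumed base value for this sketch

def mmu_translate(logical_address):
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))  # 14346: the program itself only ever sees 346
```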

Dynamic Loading

 The routine is not loaded until it is called.

 Better memory-space utilization; unused routine is never loaded

 No special support from the operating system

Dynamic Linking

 Linking postponed until execution time & is particularly useful for libraries

 Small piece of code called stub, used to locate the appropriate memory-resident
library routine or function.


Overlays:

 Enable a process larger than the amount of memory allocated to it.

 At a given time, the needed instructions & data are to be kept within a memory.

2. Explain in detail about swapping.

Swapping

 Swapping is a memory management scheme in which any process can be
temporarily swapped from main memory to secondary memory so that the main
memory can be made available for other processes. It is used to improve main
memory utilization. In secondary memory, the place where the swapped-out
process is stored is called swap space.

 The purpose of the swapping in operating system is to access the data present
in the hard disk and bring it to RAM so that the application programs can use
it. The thing to remember is that swapping is used only when data is not
present in RAM.

 Although the process of swapping affects the performance of the system, it
helps to run larger processes and more than one process. This is the reason
why swapping is also sometimes referred to as memory compaction.

 Swapping is divided into two operations: swap-in and swap-out.

 Swap-out is a method of removing a process from RAM and adding it to the hard disk.
 Swap-in is a method of removing a program from a hard disk and putting it back into the
main memory or RAM.


Fig.3.3. Swapping

Example: Suppose the user process's size is 2048 KB and the backing store is a standard
hard disk with a transfer rate of 1 MB per second (1024 KB/s). The time to transfer the
process between main memory and secondary memory is:

User process size = 2048 KB
Data transfer rate = 1024 KB/s
Time = process size / transfer rate
     = 2048 / 1024
     = 2 seconds
     = 2000 milliseconds
Counting both the swap-out and the swap-in, the process takes 4000 milliseconds.
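The arithmetic can be checked directly:

```python
def swap_time_ms(process_kb, rate_kb_per_s):
    """Transfer time for one direction of a swap, in milliseconds."""
    return process_kb / rate_kb_per_s * 1000

one_way = swap_time_ms(2048, 1024)
print(one_way, 2 * one_way)  # 2000.0 ms one way; 4000.0 ms for swap-out + swap-in
```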
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes
do not have to wait very long before they are executed.
4. It improves the main memory utilization.

Disadvantages of Swapping
1. If the computer system loses power, the user may lose all information related to the
program in case of substantial swapping activity.
2. If the swapping algorithm is not good, it can increase the number of page faults
and decrease the overall processing performance.


3. Explain about contiguous memory allocation with neat diagram.

Most systems allow programs to allocate more memory to their address space during
execution. Data allocated in the heap segment of a program is an example of such
allocated memory. What is required to support dynamic memory allocation in the
following scheme?
i) Contiguous memory allocation

Write a brief note on contiguous memory allocation.

Contiguous memory allocation

Memory Protection:

1. Protecting the OS from user process.

2. Protecting user processes from one another.

o Protection is provided by the relocation-register and limit-register scheme.

o The relocation register contains the value of the smallest physical address,
i.e., the base value.

o The limit register contains the range of logical addresses – each logical address
must be less than the limit register.

H/W address protection with base and limit registers

Fig.3.4 H/W address protection


Memory Allocation

Each process is contained in a single contiguous section of memory. There are two
methods namely:

 Fixed – Partition Method

 Variable – Partition Method

Fixed–Partition Method:

 Divide memory into fixed size partitions, where each partition has exactly one
process.

 The drawback is that memory space unused within a partition is wasted
(internal fragmentation, e.g., when process size < partition size).

Variable-partition Method:

 Divide memory into variable size partitions, depending upon the size of the
incoming process.

 When a process terminates, the partition becomes available for another


process.

Dynamic Storage-Allocation Problem:

How to satisfy a request of size n from a list of free holes?

Solution:

 First-fit: Allocate the first hole that is big enough.

 Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless it is ordered by size. Produces the smallest leftover hole.

 Worst-fit: Allocate the largest hole; must also search entire list.
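The three strategies differ only in which hole they pick; a sketch over an assumed list of hole sizes (the sizes and the request below are illustrative):

```python
def first_fit(holes, n):
    """Return the first hole large enough for a request of size n."""
    return next((h for h in holes if h >= n), None)

def best_fit(holes, n):
    """Return the smallest hole that is big enough."""
    fits = [h for h in holes if h >= n]
    return min(fits, default=None)

def worst_fit(holes, n):
    """Return the largest hole."""
    fits = [h for h in holes if h >= n]
    return max(fits, default=None)

holes = [100, 500, 200, 300, 600]   # assumed free-hole sizes, in KB
print(first_fit(holes, 212))  # 500: first hole scanned that fits
print(best_fit(holes, 212))   # 300: smallest adequate hole
print(worst_fit(holes, 212))  # 600: largest hole
```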

Problem for above solutions:

 Internal Fragmentation – Allocated memory may be slightly larger than
the requested memory.

 External Fragmentation – This takes place when enough total memory
space exists to satisfy a request, but it is not contiguous: storage is
fragmented into a large number of small holes scattered throughout the


main memory.

Solutions for external fragmentation:

1. Coalescing: Merge the adjacent holes together.

2. Compaction: Move all processes towards one end of memory, hole towards
other end of memory, producing one large hole of available memory.

4. Explain in detail about Paging.

With a neat diagram Discuss about a mechanism of paging scheme.

With a neat diagram explain the concept of paging in memory management.


Key Points

 Introduction – pages, frames

 Paging hardware – Address translation scheme

 Paging model of logical Vs physical memory

Paging

 It is a memory-management scheme that permits the physical address
space of a process to be non-contiguous.

 It avoids the considerable problem of fitting the varying size memory chunks on
to the backing store.

(i) Basic Method:

 Divide logical memory into blocks of same size called “pages”.

 Divide physical memory into fixed-sized blocks called “frames”

Address Translation Scheme – Logical Address is divided into

Page number (p) – used as an index into a page table which contains base
address of each page in physical memory

Page offset (d) – combined with the frame's base address to define the physical
address sent to the memory unit, i.e., physical address = (frame number × page size) + offset


Paging Hardware

Fig.3.5 Paging model of logical and physical memory

Fig.3.6 Paging example for a 32-byte memory with 4-byte pages

Page size = 4 bytes

Physical memory size = 32 bytes, i.e., 4 × 8 = 32, so 8 frames

Logical address 0 maps to physical address 20 i.e ((5X4)+0)


Where Frame no=5, Page size=4, Offset=0

Fig.3.7 Paging Example
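The example can be replayed in code. Only the page 0 → frame 5 mapping is given in the text; the remaining page-table entries below are assumed from the standard 32-byte example:

```python
PAGE_SIZE = 4
# page -> frame; page 0 -> frame 5 is from the text, the rest are assumed.
PAGE_TABLE = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical_address):
    """Split the address into page number and offset, then map to a frame."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = PAGE_TABLE[page]
    return frame * PAGE_SIZE + offset

print(translate(0))  # 20 = (5 * 4) + 0, as in the example
print(translate(3))  # 23: same page 0, offset 3
```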

5. Why are translation look-aside buffers important? Explain the details
stored in a TLB table entry.

Key Points

 Introduction

 Paging Hardware with TLB – TLB Hit, TLB miss

 Memory Protection – Valid, Invalid

TLB (Translation Look-aside Buffer)

 It is a fast lookup hardware cache.

 It contains the recently or frequently used page table entries.

 It has two parts: key (tag) & value.

 The associative lookup hardware is expensive.


Paging Hardware with TLB

Fig.3.8 Paging H/W withTLB

When a logical address is generated by CPU, its page number is presented to TLB.

 TLB hit: If the page number is found, its frame number is immediately
available & is used to access memory

 TLB miss: If the page number is not in the TLB, a memory reference to the
page table must be made.

 Hit ratio: Percentage of times that a particular page is found in the TLB.
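The cost of TLB misses can be quantified with an effective-access-time formula (the hit ratio and timings below are assumed for illustration, not from the text):

```python
def tlb_effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """On a hit: TLB lookup + one memory access.
       On a miss: TLB lookup + page-table access + memory access."""
    hit_cost = tlb_ns + mem_ns
    miss_cost = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# Assumed: 80% hit ratio, 20 ns TLB lookup, 100 ns memory access.
print(tlb_effective_access_time(0.80, 20, 100))  # 140.0 ns
```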

Memory Protection

 Memory protection implemented by associating protection bit with each frame

 Valid-invalid bit attached to each entry in the page table:

 valid (v) - indicates that the associated page is in the process logical address
space, and is thus a legal page

 invalid(i) - indicates that the page is not in the process logical address space


Fig.3.9 Paging valid - Invalid Bit

6. With necessary diagram explain the concept of segmentation.

Most systems allow programs to allocate more memory to their address space during
execution. Data allocated in the heap segment of a program is an example of such
allocated memory. What is required to support dynamic memory allocation in the
following scheme?
Pure segmentation
Segmentation
 Memory-management scheme that supports user view of memory
 A program is a collection of segments.
 A segment is a logical unit such as: Main program, Procedure, Function, Method, Object,
Local variables, global variables, Common block, Stack, Symbol table, arrays
User's View of a Program


Fig.3.10 Logical Address Space for segmentation


Segmentation Hardware
 Logical address consists of a two-tuple: <segment-number, offset>
 Segment table – maps two-dimensional user-defined addresses into one-dimensional
physical addresses; each table entry has:
Base – contains the starting physical address where the segment resides in memory
Limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table's location in memory
 Segment-table length register (STLR) indicates the number of segments used by a program
Sharing
 shared segments
 same segment number
Allocation
 first fit/best fit
 external fragmentation
Protection: with each entry in the segment table, associate:
 A validation bit (validation bit = 0 ⇒ illegal segment)
 Protection bits (e.g., read/write/execute); code sharing occurs at the segment level
 Since segments vary in length, memory allocation is a dynamic storage-allocation problem
Address Translation scheme


Fig.3.11 Segmentation Hardware

EXAMPLE:

Fig.3.12 Segmentation Example


Sharing of Segments

Fig.3.13 Sharing of Segments


An advantage of segmentation is the sharing of code or data.
 Each process has a segment table associated with it, which the dispatcher uses to define
the hardware segment table when this process is given the CPU.
 Segments are shared when entries in the segment tables of two different processes point
to the same physical location.

7. With a neat sketch, explain about segmentation and paging.


Compare paging with segmentation in terms of the amount of memory required by
the address translation structures in order to convert virtual addresses to physical
addresses.
 The Intel 386 uses segmentation with paging for memory management.


 The maximum number of segments per process is 16 K (16,384), and each segment can
be as large as 4 gigabytes.
 The local-address space of a process is divided into two partitions.
 The first partition consists of up to 8 K segments that are private to that process.
 The second partition consists of up to 8 K segments that are shared among all the
processes.
 Information about the first partition is kept in the local descriptor table (LDT),
information about the second partition is kept in the global descriptor table (GDT).
 Each entry in the LDT and GDT consist of 8 bytes, with detailed information about a
particular segment including the base location and length of the segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit number:

    s (13 bits) | g (1 bit) | p (2 bits)

 where s designates the segment number, g indicates whether the segment is in the GDT or
LDT, and p deals with protection.
 The offset is a 32-bit number specifying the location of the byte.
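The three selector fields can be unpacked with shifts and masks (the sample selector value is illustrative, not from the text):

```python
def split_selector(selector):
    """16-bit selector laid out as s (13 bits) | g (1 bit) | p (2 bits)."""
    s = selector >> 3           # segment number (high 13 bits)
    g = (selector >> 2) & 0x1   # 0 = GDT, 1 = LDT
    p = selector & 0x3          # protection (privilege) bits
    return s, g, p

print(split_selector(0x2B))  # (5, 0, 3): segment 5 in the GDT, privilege level 3
```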
 The base and limit information about the segment are used to generate a linear-address.
 First, the limit is used to check for address validity.
 If the address is not valid, a memory fault is generated, resulting in a trap to the operating
system.
 If it is valid, then the value of the offset is added to the value of the base, resulting in a 32-bit
linear address. This address is then translated into a physical address.
 The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits.
 Since we page the page table, the page number is further divided into a 10-bit page directory
pointer and a 10-bit page table pointer.
 The linear address is thus divided as follows:

    p1 (10 bits) | p2 (10 bits) | d (12 bits)


Fig.3.14 Segmentation with Paging


 To improve the efficiency of physical memory use, Intel 386 page tables can be
swapped to disk.
 In this case, an invalid bit is used in the page directory entry to indicate whether the table
to which the entry is pointing is in memory or on disk.
 If the table is on disk, the operating system can use the other 31 bits to specify the disk
location of the table; the table then can be brought into memory on demand.

8. Explain any two structures of the page table with neat diagrams.

Structures of the Page Table

1. Hierarchical Paging

 Break the page table up into smaller pieces, because if the page table is too
large it is quite difficult to store it contiguously and to search it.

Example: Two-Level Paging


    p1 (10 bits) | p2 (10 bits) | d (12 bits)

Fig. 3.15 Hierarchical Paging

Address-Translation Scheme for hierarchical paging

Address-translation scheme for a two-level 32-bit paging architecture

Fig.3.16 Address Translation Scheme

 It requires more memory accesses as the number of levels is increased.
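Assuming the common 10/10/12 split for a 32-bit address with 4 KB pages, extracting the three fields is a matter of shifts and masks (the sample address is illustrative):

```python
def split_two_level(addr):
    """32-bit logical address: p1 (10 bits) | p2 (10 bits) | d (12 bits)."""
    p1 = addr >> 22            # index into the outer page table
    p2 = (addr >> 12) & 0x3FF  # index into the page of the page table
    d = addr & 0xFFF           # offset within the page
    return p1, p2, d

print(split_two_level(0x00403004))  # (1, 3, 4)
```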

2. Hashed Page Tables

 Each entry in hash table contains a linked list of elements that hash to the
same location.


 Each entry consists of;

(a) Virtual page numbers

(b) Value of mapped page frame.

(c) Pointer to the next element in the linked list.

Working Procedure:

 The virtual page number in the virtual address is hashed into the hash
table.

 The virtual page number is compared to field (a) in the first element of the linked list.

 If there is a match, the corresponding page frame (field (b)) is used to form the
desired physical address.

 If there is no match, subsequent entries in the linked list are searched for
a matching virtual page number.

Fig.3.17 Hashed Paging
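The working procedure above amounts to hashing plus a chained search; a minimal sketch, with Python lists standing in for the linked lists of the figure (bucket count and mappings are illustrative):

```python
NUM_BUCKETS = 16

# Each bucket holds a chain of (virtual page number, frame) pairs.
hash_table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    hash_table[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    """Hash the VPN, then walk the chain for a matching entry."""
    for entry_vpn, frame in hash_table[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:    # match: use this page frame
            return frame
    return None                 # no match: page not mapped

insert(5, 42)
insert(21, 7)        # 21 % 16 == 5: collides with vpn 5, chained behind it
print(lookup(21))    # 7
print(lookup(37))    # None (hashes to the same bucket, but no entry for 37)
```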

3. Inverted Page Table

It has one entry for each real page (frame) of memory & each entry consists of the
virtual address of the page stored in that real memory location, with information about
the process that owns that page. So, only one page table is in the system.


Fig.3.18 Inverted Paging

When a memory reference occurs, part of the virtual address, consisting of

<Process-id, Page-no> is presented to the memory sub-system.

 Then the inverted page table is searched for match:

o If a match is found, then the physical address is generated.

o If no match is found, then an illegal address access has been


attempted.

 Merit: Reduces the amount of memory needed to store page tables.

 Demerit: Increases the amount of time needed to search the table when a page
reference occurs.
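A minimal sketch of the lookup, where the index of the matching entry is the frame number (table contents, page size, and process ids are illustrative):

```python
# One entry per physical frame: (process id, virtual page number).
inverted_table = [("P1", 0), ("P2", 0), ("P1", 3), (None, None)]

PAGE_SIZE = 4096

def translate(pid, vpn, offset):
    """Search the inverted table; the matching index IS the frame number."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, vpn):
            return frame * PAGE_SIZE + offset
    raise MemoryError("illegal address access")  # no match: trap

print(translate("P1", 3, 100))  # frame 2 -> 2 * 4096 + 100 = 8292
```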

4. Shared Pages

 One advantage of paging is the possibility of sharing common code.

Shared code

 One copy of read-only (reentrant) code is shared among processes (e.g., text
editors, compilers, window systems).

 Shared code must appear in same location in the logical address space of
all processes

EXAMPLE:


Fig.3.19 Shared Paging

9. Explain in detail 32 and 64 bit architecture examples.

 The IA-32 architecture supports both paging and segmentation.

IA-32 Architecture

 Memory management in IA-32 systems is divided into two components,
segmentation and paging, and works as follows:

 The CPU generates logical addresses, which are given to the segmentation
unit.

 The segmentation unit produces a linear address for each logical address.

 The linear address is then given to the paging unit, which in turn generates
the physical address in main memory.

 Thus, the segmentation and paging units form the equivalent of the memory-
management unit(MMU).


Fig.3.20 Logical to physical address Translation

IA-32 Segmentation

 The IA-32 architecture allows a segment to be as large as 4 GB, and the
maximum number of segments per process is 16 K (16,384).

 The logical address space of a process is divided into two partitions.

 The first partition consists of up to 8K segments that are private to that


process.

 The second partition consists of up to 8K segments that are shared among


all the processes.

 Information about the first partition is kept in the local descriptor table (LDT);

 Information about the second partition is kept in the global descriptor table
(GDT).

 Each entry in the LDT and GDT consists of an 8-byte segment descriptor
with detailed information about a particular segment, including the base
location and limit of that segment.

 The logical address is a pair (selector, offset), where the selector is a 16-bit
number:

 In which s designates the segment number (13 bits), g indicates whether the
segment is in the GDT or LDT (1 bit), and p deals with protection (2 bits). The offset is a
32-bit number specifying the location of the byte within the segment.

 The machine has six segment registers, allowing six segments to be addressed
at any one time by a process. It also has six 8-byte microprogram registers to
hold the corresponding descriptors from either the LDT or GDT.


 The base and limit information about the segment in question is used to
generate a linear address.

 First, the limit is used to check for address validity.

 If the address is not valid, a memory fault is generated, resulting in a trap


to the operating system.

Fig.3.21 IA32 Segmentation

 If it is valid, then the value of the offset is added to the value of the base,
resulting in a 32-bit linear address.
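The selector decoding and limit check above can be sketched as a small simulation (the descriptor base/limit values are invented for illustration; the bit positions follow the 13/1/2 layout described in the text):

```python
# Sketch of IA-32 logical-to-linear translation.
def decode_selector(selector):
    s = selector >> 3          # 13-bit segment number
    g = (selector >> 2) & 0x1  # 1 bit: GDT or LDT
    p = selector & 0x3         # 2 protection bits
    return s, g, p

def to_linear(base, limit, offset):
    # The limit is checked first; an invalid offset traps to the OS.
    if offset > limit:
        raise MemoryError("memory fault: offset exceeds segment limit")
    return base + offset       # 32-bit linear address

selector = (5 << 3) | (1 << 2) | 2
print(decode_selector(selector))              # -> (5, 1, 2)
print(hex(to_linear(0x10000, 0xFFF, 0x234)))  # -> 0x10234
```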

IA-32 Paging

 The IA-32 architecture allows a page size of either 4 KB or 4 MB.

 For 4-KB pages, IA-32 uses a two-level paging scheme in which the 32-bit
linear address is divided into a 10-bit page-directory index, a 10-bit page-table
index, and a 12-bit page offset.

 The Page directory entry points to an inner page table that is indexed by the contents
of the innermost 10 bits in the linear address.
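The 10/10/12 split of the linear address can be sketched directly with bit operations (the example address is arbitrary):

```python
# Split a 32-bit linear address into page-directory index (10 bits),
# page-table index (10 bits), and page offset (12 bits).
def split_linear(addr):
    dir_index = (addr >> 22) & 0x3FF
    table_index = (addr >> 12) & 0x3FF
    offset = addr & 0xFFF
    return dir_index, table_index, offset

addr = (3 << 22) | (7 << 12) | 0x1A4
print(split_linear(addr))  # -> (3, 7, 420)
```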

 To improve the efficiency of physical memory use, IA-32 page tables can be swapped to
disk.


Fig.3.22 Page Address Extension

 In this case, an invalid bit is used in the page directory entry to indicate
whether the table to which the entry is pointing is in memory or on disk.

 Page Address Extension (PAE) also increased the page-directory and page-table
entries from 32 to 64 bits in size, which allowed the base address of page
tables and page frames to extend from 20 to 24 bits.

9. Explain the concept of demand paging. How can demand paging be


implemented with virtual memory?

Virtual Memory

It is a technique that allows the execution of processes that may not be


completely in main memory.

Advantages:

 Allows the program that can be larger than the physical memory.

 Separation of user logical memory from physical memory

 Allows processes to easily share files & address space.

 Allows for more efficient for process creation.

Virtual memory can be implemented using,

 Demand paging


 Demand segmentation

Virtual Memory that is Larger than Physical Memory

Fig.3.23 Virtual memory and physical memory

Demand Paging

 It is similar to a paging system with swapping.

 Demand Paging - Bring a page into memory only when it is needed.

 Traditionally, to execute a process, the entire process was swapped into memory.

 Lazy Swapper - Never swaps a page into memory unless that page will be
needed.

Advantages

 Less I/O needed

 Less memory needed

 Faster response

 More users


Transfer of a paged memory to contiguous disk space

Fig.3.24 Transfer of a paged memory to contiguous disk space

Valid-Invalid bit

 Valid - the associated page is legal and is in memory.

 Invalid - the page either is not in the logical address space, or is valid but is
currently on the disk.

Page table when some pages are not in main memory

Fig.3.25 Page table when some pages are not in main memory


10. Explain in detail about Copy on Write.

 fork() creates a copy of the parent’s address space for the child, duplicating the pages
belonging to the parent.

 Because many child processes invoke the exec() system call immediately after creation, the
copying of the parent’s address space may be unnecessary.

 Instead, we can use a technique known as copy-on-write, which works by allowing the
parent and child processes initially to share the same pages. These shared pages are
marked as copy-on-write pages, meaning that if either process writes to a shared page, a
copy of the shared page is created.

 Fig. show the contents of the physical memory before and after process 1 modifies page
C.

Fig.3.26 Before Process 1 modifies page C

Fig.3.27 After Process 1 modifies Page C


 When the copy-on-write technique is used, only the pages that are modified by
either process are copied;

 All unmodified pages can be shared by the parent and child processes.

 Pages that cannot be modified (pages containing executable code) can be shared
by the parent and child. Copy-on-write is a common technique used by several
operating systems.

 When it is determined that a page is going to be duplicated using copy-on-write, it
is important to note the location from which the free page will be allocated. Many
operating systems provide a pool of free pages for such requests.

 These free pages are typically allocated when the stack or heap for a process must
expand or when there are copy-on-write pages to be managed.

 Operating systems typically allocate these pages using a technique known as


zero-fill-on-demand. Zero-fill-on-demand pages have been zeroed-out before being
allocated, thus erasing the previous contents.

 vfork() (for virtual memory fork)—that operates differently from fork() with copy-
on-write. With vfork(), the parent process is suspended, and the child process
uses the address space of the parent.
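The share-then-copy behaviour can be modelled with a toy page table (this simulates the idea only; the frames, page names, and helper function are inventions, not real kernel structures):

```python
# Toy copy-on-write: parent and child share frames until one writes.
frames = {0: "A", 1: "B", 2: "C"}          # physical memory contents
parent = {"pages": {0: 0, 1: 1, 2: 2}}     # page -> frame mapping
child = {"pages": dict(parent["pages"])}   # fork(): share all frames

def write(proc, other, page, value):
    frame = proc["pages"][page]
    if frame == other["pages"].get(page):  # still shared -> copy first
        new_frame = max(frames) + 1
        frames[new_frame] = frames[frame]  # duplicate the shared page
        proc["pages"][page] = new_frame
        frame = new_frame
    frames[frame] = value                  # now safe to modify

write(parent, child, 2, "C'")              # process 1 modifies page C
print(parent["pages"][2], child["pages"][2])  # -> 3 2 (mappings now differ)
print(frames[child["pages"][2]])              # -> C (child still sees old C)
```

Note that pages 0 and 1 remain shared after the write; only the modified page was copied, which is the whole point of the technique.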

11. Under what circumstances do page faults occur? Describe the actions taken by
the operating system when a page fault occurs.

Page Fault

 Access to a page marked invalid causes a page fault trap.

Steps in Handling a Page Fault


Fig.3.28 Steps in Handling a Page Fault

1. Determine whether the reference is a valid or invalid memory access

a. If the reference is invalid then terminate the process.

b. If the reference is valid but the page has not yet been brought into main
memory, it must be paged in.

2. Find a free frame.

3. Read the desired page into the newly allocated frame.

4. Reset the page table to indicate that the page is now in memory.

5. Restart the instruction that was interrupted.
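The five steps above can be sketched with a page table carrying a valid bit (a simulation; the frame numbers and free list are assumptions, and the disk read in step 3 is omitted):

```python
# Demand paging: each page-table entry has a valid bit; an access to an
# invalid-but-legal page faults and loads the page into a free frame.
page_table = {0: {"valid": True, "frame": 1},
              1: {"valid": False, "frame": None}}  # legal, but on disk
free_frames = [5, 6]

def access(page):
    entry = page_table.get(page)
    if entry is None:                       # step 1a: invalid reference
        raise MemoryError("invalid reference: terminate process")
    if not entry["valid"]:                  # page fault trap
        frame = free_frames.pop(0)          # step 2: find a free frame
        # step 3: read the desired page from disk into `frame` (omitted)
        entry["frame"], entry["valid"] = frame, True  # step 4: reset table
        # step 5: restart the interrupted instruction
    return entry["frame"]

print(access(1))  # faults, loads into frame 5 -> 5
print(access(1))  # now resident, no fault   -> 5
```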

Pure demand paging

 Never bring a page into memory until it is required.

 We could start a process with no pages in memory.

 When the OS sets the instruction pointer to the 1st instruction of the
process, which is on the non-memory resident page, then the process
immediately faults for the page.

 After this page is bought into the memory, the process continue to execute,
faulting as necessary until every page that it needs is in memory.

Performance of demand paging


 Let p be the probability of a page fault (0 ≤ p ≤ 1)

 Effective Access Time (EAT)

EAT = (1 – p) x ma + p x page fault time

where ma is the memory-access time and p is the probability of a page fault (0 ≤ p ≤ 1).
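With an assumed memory-access time of 200 ns and a page-fault service time of 8 ms (these numbers are a common textbook illustration, not from the notes), the formula gives:

```python
# Effective access time: EAT = (1 - p) * ma + p * fault_time (all in ns).
def eat(p, ma_ns=200, fault_ns=8_000_000):
    return (1 - p) * ma_ns + p * fault_ns

print(eat(0.001))  # one fault per 1000 accesses -> 8199.8 ns
print(eat(0.0))    # no faults -> 200.0 ns
```

Even a fault rate of 1 in 1000 slows the effective access time by a factor of about 40, which is why keeping the fault rate low matters so much.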

A page fault causes the following sequence to occur:

1. Trap to the OS

2. Save the user registers and process state.

3. Determine that the interrupt was a page fault.

4. Check whether the reference was legal and find the location of page on disk.

5. Read the page from disk to free frame.

i. Wait in a queue until read request is serviced.

ii. Wait for seek time and latency time.

iii. Transfer the page from disk to free frame.

6. While waiting, allocate CPU to some other user.

7. Interrupt from disk.

8. Save registers and process state for other users.

9. Determine that the interrupt was from disk.

10. Reset the page table to indicate that the page is now in memory.

11. Wait for CPU to be allocated to this process again.

12. Restart the instruction that was interrupted.

11. Explain in detail about Page replacement.

Page Replacement

 If no frames are free, we could find one that is not currently being used
& free it.

 We can free a frame by writing its contents to swap space & changing


the page table to indicate that the page is no longer in memory.

 Then we can use that freed frame to hold the page for which the process
faulted.

Basic Page Replacement

1. Find the location of the desired page on disk

2. Find a free frame

 If there is a free frame, then use it.

 If there is no free frame, use a page replacement algorithm to select a victim


frame

 Write the victim page to the disk, change the page & frame tables
accordingly.

3. Read the desired page into the (new) free frame. Update the page and frame tables.

4. Restart the Process.

Fig.3.29 Page Replacement

Note:

If no frames are free, two page transfers are required & this situation effectively
doubles the page-fault service time.

Modify bit:


 It indicates that any word or byte in the page is modified.

 When we select a page for replacement, we examine its modify bit.

12. Discuss about any three page replacement algorithms in detail.

Page Replacement Algorithms


a. FIFO Page Replacement
b. Optimal Page Replacement
c. LRU Page Replacement
(a) FIFO page replacement algorithm
 Replace the oldest page.
 This algorithm associates with each page the time when that page was brought into memory.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No. of available frames = 3 (3 pages can be in memory at a time per process)

No. of page faults = 15


Drawback:
 FIFO page replacement algorithms performance is not always good.
 To illustrate this, consider the following example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 If no. of available frames = 3, then the no. of page faults = 9
 If no. of available frames = 4, then the no. of page faults = 10
 Here the number of page faults increases when the number of frames increases. This is
called Belady's Anomaly.
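Both examples above can be checked with a short FIFO simulation (an illustrative sketch, not part of the original notes):

```python
from collections import deque

# FIFO page replacement: evict the page that has been resident longest.
def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(fifo_faults(refs, 4))  # -> 10 (Belady's anomaly: more frames, more faults)
```

The same function gives 15 faults for the earlier reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, matching the worked example.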
(b) Optimal page replacement algorithm
 Replace the page that will not be used for the longest period of time.
Example:


Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1


No. of available frames = 3

No. of page faults = 9


Drawback:
It is difficult to implement as it requires future knowledge of the reference string.
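Although impractical for a real OS, the optimal policy is easy to simulate offline, since the whole reference string is known (an illustrative sketch):

```python
# Optimal replacement: evict the resident page whose next use is
# farthest in the future (or that is never used again).
def optimal_faults(refs, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))  # victim
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # -> 9, matching the example above
```

In practice optimal replacement serves as a yardstick: other algorithms are judged by how close their fault counts come to this lower bound.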

(c) LRU (Least Recently Used) page replacement algorithm


 Replace the page that has not been used for the longest period of time.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

No. of page faults = 12
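LRU can be simulated by time-stamping every reference and evicting the page with the oldest stamp (an illustrative sketch):

```python
# LRU replacement: evict the page whose last use is furthest in the past.
def lru_faults(refs, num_frames):
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = i  # every reference refreshes recency
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # -> 12, matching the example above
```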

13. What is meant by thrashing? Discuss in detail.

Thrashing

 If page faults and swapping happen very frequently, then the operating
system has to spend more time swapping pages. This state of the operating
system is termed thrashing. Because of thrashing, the CPU utilization is
reduced.

Example


 If any process does not have the number of frames that it needs to support pages in
active use then it will quickly page fault.

 And at this point, the process must replace some pages. As all the pages of the process
are actively in use, it must replace a page that will be needed again right away.

 Consequently, the process will quickly fault again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity by a process is
called thrashing.

 During thrashing, the CPU spends less time on actual productive work and
more time swapping.

Fig.3.30 Thrashing

Causes of Thrashing

 If a process does not have "enough" pages, the page-fault rate is very high.
This leads to:

o Low CPU utilization

o Operating system thinks that it needs to increase the degree of


multiprogramming

o another process is added to the system

 When the CPU utilization is low, the OS increases the degree of


multiprogramming.

 If global replacement is used then as processes enter the main memory they
tend to steal frames belonging to other processes.


 Eventually all processes will not have enough frames and hence the page
fault rate becomes very high.

 Thus swapping in and swapping out of pages only takes place.

 This is the cause of thrashing.

Effect of Thrashing

 At the time, when thrashing starts then the operating system tries to apply either
the Global page replacement Algorithm or the Local page replacement algorithm.

Global Page Replacement

 Global page replacement can bring in any page; whenever thrashing is found, it
tries to bring in more pages. Because of this, no process can get enough frames, and
as a result the thrashing increases more and more. Thus the global page replacement
algorithm is not suitable when thrashing happens.

Local Page Replacement

 Unlike global page replacement, local page replacement selects only pages that
belong to that process. Due to this, there is a chance of a reduction in thrashing.
However, local page replacement has disadvantages of its own, so it is simply an
alternative to global page replacement.

Techniques used to handle the thrashing

 Local page replacement is better than global page replacement, but it has many
disadvantages too, so it is not always advisable.

 To limit thrashing, we can use a local replacement algorithm.

 To prevent thrashing, there are two methods, namely:

[1] Working Set Strategy

[2] Page Fault Frequency

1. Working Set Strategy

 It is based on the assumption of the model of locality.

 Locality is defined as the set of pages actively used together.


 Working set is the set of pages in the most recent ∆ page references

 ∆ is the working set window.

 if ∆ too small, it will not encompass entire locality

 if ∆ too large, it will encompass several localities

 if ∆ = ∞, it will encompass the entire program

D = Σ WSSi

 Where WSSi is the working set size for process i.

 D is the total demand for frames.

 If D > m (the total number of available frames), thrashing will occur.
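The working-set computation and the thrashing test can be sketched as follows (the reference string and working-set sizes are invented for illustration):

```python
# Working set: the distinct pages among the most recent `delta`
# references ending at time t.
def working_set(refs, t, delta):
    return set(refs[max(0, t - delta + 1): t + 1])

# Thrashing occurs when total demand D = sum of WSS_i exceeds m frames.
def thrashing(wss_sizes, m):
    return sum(wss_sizes) > m

refs = [1, 2, 1, 3, 4, 4, 3, 4, 4, 4]
print(working_set(refs, t=9, delta=4))  # -> {3, 4}, so WSS = 2
print(thrashing([4, 6, 3], m=10))       # D = 13 > 10 -> True
```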

2. Page Fault Frequency

Fig.3.31 Page fault frequency

o If the actual page-fault rate is too low, the process loses a frame.

o If the actual page-fault rate is too high, the process gains a frame.

 The working-set model is successful, and knowledge of the working set can be
useful for prepaging, but it is a clumsy approach to avoiding thrashing. Page
Fault Frequency (PFF) is another technique used to avoid thrashing, and it is a
more direct approach.

 The main problem is how to prevent thrashing. Thrashing has a high page-fault
rate, and we want to control the page-fault rate.

 When the page-fault rate is too high, we know that the process needs more frames.
Conversely, if the page-fault rate is too low, the process may have too many frames.

 We can establish upper and lower bounds on the desired page-fault rate. If the
actual page-fault rate exceeds the upper limit, we allocate the process another
frame. If the page-fault rate falls below the lower limit, we remove a frame
from the process.

 Thus with this, we can directly measure and control the page fault rate in order to
prevent thrashing.

14. What are the advantage and disadvantage of contiguous and non-contiguous
memory allocation?

The advantage of contiguous memory allocation is

1. It supports fast sequential and direct access

2. It provides a good performance

3. The number of disk seek required is minimal

The disadvantage of contiguous memory allocation is

 fragmentation

Non-contiguous memory allocation offers the following advantages over

contiguous memory allocation:

 Allows code and data to be shared among processes.

 External fragmentation is nonexistent with non-contiguous memory allocation.

 Virtual memory is strongly supported in non-contiguous memory allocation.

Non-contiguous memory allocation methods include Paging and Segmentation.

Advantages of paging

 Paging Eliminates Fragmentation

 Multiprogramming is supported

 Overheads that come with compaction during relocation are eliminated

Disadvantages of paging:

 Paging increases the price of computer hardware, as page addresses are


mapped to hardware

 Memory is forced to store variables like page tables

 Some memory space stays unused when available blocks are not
sufficient for address space for jobs to run

Advantages of segmentation:

 Internal fragmentation is eliminated in segmentation memory allocation

 Segmentation fully supports virtual memory

 Dynamic memory segment growth is fully supported

 Segmentation supports Dynamic Linking

 Segmentation allows the user to view memory in a logical sense.

Disadvantages of segmentation:

 Main memory will always limit the size of segmentation, that is,
segmentation is bound by the size limit of memory

 It is difficult to manage segments on secondary storage

 Segmentation is slower than paging.

 Segmentation falls victim to external fragmentation even though it
eliminates internal fragmentation.


15. Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 600Kb (in
order), how would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb (in order)? Which
algorithm makes the most efficient use of memory?

First-fit:

212K is put in 500K partition

417K is put in 600K partition

112K is put in 288K partition (new partition 288K = 500K - 212K)

426K must wait

Best-fit:

212K is put in 300K partition

417K is put in 500K partition

112K is put in 200K partition

426K is put in 600K partition

Worst-fit:

212K is put in 600K partition

417K is put in 500K partition

112K is put in 388K partition (new partition 388K = 600K - 212K)

426K must wait

In this example, best-fit turns out to be the best.
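All three placements can be reproduced with a short simulation (an illustrative sketch; partition indices 0–4 correspond to the 100K, 500K, 200K, 300K, and 600K partitions in order):

```python
# Simulate first-, best-, and worst-fit placement of processes into
# fixed partitions; returns the chosen partition index (None = must wait).
def allocate(partitions, procs, strategy):
    free = list(partitions)              # remaining hole size per partition
    placements = []
    for size in procs:
        holes = [i for i, h in enumerate(free) if h >= size]
        if not holes:
            placements.append(None)      # no hole large enough: process waits
            continue
        if strategy == "first":
            i = holes[0]                 # first hole that fits
        elif strategy == "best":
            i = min(holes, key=lambda j: free[j])  # tightest fit
        else:
            i = max(holes, key=lambda j: free[j])  # largest hole (worst-fit)
        placements.append(i)
        free[i] -= size                  # leftover becomes a smaller hole
    return placements

parts = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(allocate(parts, procs, "first"))  # -> [1, 4, 1, None]
print(allocate(parts, procs, "best"))   # -> [3, 1, 2, 4]
print(allocate(parts, procs, "worst"))  # -> [4, 1, 4, None]
```

Only best-fit places all four processes, confirming the conclusion above.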
