Unit-3 Os Notes
MEMORY MANAGEMENT
1. MAIN MEMORY
Dynamic Loading
Dynamic Linking
Linking is postponed until execution time and is particularly useful for libraries.
A small piece of code called a stub is used to locate the appropriate memory-resident
library routine.
The stub replaces itself with the address of the routine, and executes the routine.
The operating system must check whether the routine is in the process's memory address space.
Shared libraries: Programs linked before the new library was installed will
continue using the older library
Overlays:
Enable a process to be larger than the amount of memory allocated to it.
At any given time, only the needed instructions and data are kept in memory.
Swapping
A process can be swapped temporarily out of memory to a backing store (SWAP
OUT)and then brought back into memory for continued execution (SWAP IN).
Backing store – fast disk large enough to accommodate copies of all memory
images for all users & it must provide direct access to these memory images
Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed
Transfer time :
Major part of swap time is transfer time
Total transfer time is directly proportional to the amount of memory swapped.
Example: Let us assume the user process is of size 1MB & the backing store
is a standard hard disk with a transfer rate of 5MB per second.
Then the transfer time is 1MB / 5MB per second = 1/5 second = 200 milliseconds.
3. PAGING
It is a memory management scheme that permits the physical address space of a
process to be noncontiguous.
It avoids the considerable problem of fitting the varying size memory chunks on to
the backing store.
3.1 Basic Method:
o Divide logical memory into blocks of the same size called "pages".
o Divide physical memory into fixed-sized blocks called "frames".
o Page size is a power of 2, between 512 bytes and 16MB.
Allocation
o When a process arrives into the system, its size (expressed in pages) is
examined.
o Each page of the process needs one frame. Thus, if the process requires 'n' pages, at
least 'n' frames must be available in memory.
o If 'n' frames are available, they are allocated to this arriving process.
o The 1st page of the process is loaded into one of the allocated frames & the
frame number is put into the page table.
o Repeat the above step for the next pages & so on.
Frame table: It is used to determine which frames are allocated, which frames are
available, how many total frames are there, and so on.(ie) It contains all the information
about the frames in the physical memory.
o When a logical address is generated by the CPU, its page number is presented to the TLB.
o TLB hit: If the page number is found, its frame number is immediately
available & is used to access memory
o TLB miss: If the page number is not in the TLB, a memory reference to the page
table must be made.
o Hit ratio: Percentage of times that a particular page number is found in the TLB.
For example, a hit ratio of 80% means that the desired page
number is found in the TLB 80% of the time.
o Effective Access Time:
Assume hit ratio is 80%.
If it takes 20ns to search TLB & 100ns to access memory, then the
memory access takes 120ns(TLB hit)
If we fail to find page no. in TLB (20ns), then we must 1st access
memory for page table (100ns) & then access the desired byte in
memory (100ns).
Therefore Total = 20 + 100 + 100
= 220 ns (TLB miss).
Then Effective Access Time (EAT) = 0.80 x 120 + 0.20 x 220
= 140 ns.
(iii) Protection
o Memory protection implemented by associating protection bit with each
frame
o Valid-invalid bit attached to each entry in the page table:
"valid (v)" indicates that the associated page is in the process's logical
address space, and is thus a legal page
"invalid (i)" indicates that the page is not in the process's logical address
space
a) Hierarchical Paging
o Break up the page table into smaller pieces, because if the page table is too
large it is quite difficult to search for a page number.
Example: “Two-Level Paging “
Address-Translation Scheme (two-level 32-bit paging architecture)
It requires more number of memory accesses, when the number of levels is increased.
o Each entry in hash table contains a linked list of elements that hash to the
same location.
o Each entry consists of: (1) the virtual page number, (2) the value of the mapped
page frame, and (3) a pointer to the next element in the linked list.
Clustered page table: It is a variation of hashed page table & is similar to hashed page
table except that each entry in the hash table refers to several pages rather than a single
page.
o In the worst case, a process would need n pages plus one byte. It would be allocated
n+1 frames, resulting in an internal fragmentation of almost an entire frame.
Example:
Page size = 2048 bytes
Process size= 72766 bytes
Process needs 35 pages plus 1086 bytes.
It is allocated 36 frames resulting in an internal fragmentation of 962 bytes.
5. SEGMENTATION
o Memory-management scheme that supports user view of memory
o A program is a collection of segments. A segment is a logical unit such as: Main
program, Procedure, Function, Method, Object, Local variables, global variables,
Common block, Stack, Symbol table, arrays
Sharing of Segments
o Each entry in the LDT and GDT consists of 8 bytes, with detailed information about
a particular segment, including the base location and length of the segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit
number:
s (13 bits) | g (1 bit) | p (2 bits)
in which s designates the segment number, g indicates whether the segment is in the
GDT or LDT, and p deals with protection.
The 32-bit linear address is then divided into a page directory number (10 bits), a
page table number (10 bits), and a page offset (12 bits).
o To improve the efficiency of physical memory use, Intel 386 page tables can be
swapped to disk. In this case, an invalid bit is used in the page directory entry to
indicate whether the table to which the entry is pointing is in memory or on disk.
o If the table is on disk, the operating system can use the other 31 bits to specify the
disk location of the table; the table then can be brought into memory on demand.
6. VIRTUAL MEMORY
o It is a technique that allows the execution of processes that may not be completely
in main memory.
o Advantages:
Allows a program to be larger than the physical memory.
Separation of user logical memory from physical memory
Allows processes to easily share files & address space.
Allows for more efficient process creation.
o Virtual memory can be implemented using
Demand paging
Demand segmentation
Virtual Memory That is Larger than Physical Memory
6.1 Demand Paging
o It is similar to a paging system with swapping.
o Demand Paging - Bring a page into memory only when it is needed
o To execute a process, we could swap that entire process into memory. Rather than
swapping the entire process into memory, however, we use a "Lazy Swapper".
o Lazy Swapper - Never swaps a page into memory unless that page will be
needed.
o Advantages
Less I/O needed
Less memory needed
Faster response
More users
Valid-Invalid bit
o A valid-invalid bit is associated with each page table entry.
o Valid: the associated page is in memory.
o Invalid: the page is either an invalid page (not in the logical address space)
or a valid page that is currently on the disk.
Page Fault
o Access to a page marked invalid causes a page fault trap.
Process Creation
o Virtual memory enhances the performance of creating and running processes:
- Copy-on-Write
- Memory-Mapped Files
a) Copy-on-Write
o fork() creates a child process as a duplicate of the parent process. Traditionally it
worked by creating a copy of the parent's address space for the child, duplicating the
pages belonging to the parent.
o Copy-on-Write (COW) allows both parent and child processes to initially share the
same pages in memory. These shared pages are marked as Copy-on-Write pages,
meaning that if either process modifies a shared page, a copy of the shared page is
created.
o vfork():
With this the parent process is suspended & the child process uses the address
space of the parent.
Because vfork() does not use Copy-on-Write, if the child process changes any
pages of the parent's address space, the altered pages will be visible to the parent
once it resumes.
Therefore, vfork() must be used with caution, ensuring that the child process does
not modify the address space of the parent.
o If no frames are free, we could find one that is not currently being used & free it.
o We can free a frame by writing its contents to swap space & changing the page
table to indicate that the page is no longer in memory.
o Then we can use that freed frame to hold the page for which the process faulted.
If no frames are free, two page transfers are required, and this situation effectively
doubles the page-fault service time.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No.of available frames = 3 (3 pages can be in memory at a time per process)
Drawback:
o It is difficult to implement as it requires future knowledge of the reference string.
1. Counters
Every page table entry has a time-of-use field and a clock or counter is
associated with the CPU.
The counter or clock is incremented for every memory reference.
Each time a page is referenced, copy the counter into the time-of-use
field.
When a page needs to be replaced, replace the page with the smallest
counter value.
2. Stack
Keep a stack of page numbers
Whenever a page is referenced, remove the page from the stack and put
it on top of the stack.
When a page needs to be replaced, replace the page that is at the bottom
of the stack.(LRU page)
1. (0,0) – neither recently used nor modified – best page to replace
2. (0,1) – not recently used but modified – the page has to be written out before
replacement
3. (1,0) – recently used but not modified – the page may be used again
4. (1,1) – recently used and modified – the page may be used again and has
to be written to disk before replacement
o Equal allocation
Split m frames equally among n processes, giving each process m/n frames.
Example: If there are 5 processes and 100 frames, give each process 20
frames.
o Proportional allocation
Allocate according to the size of process
Let si be the size of process i.
Let m be the total no. of frames
Then S = ∑ si
ai = (si / S) x m
where ai is the no. of frames allocated to process i.
Global vs. Local Replacement
o Global replacement – each process selects a replacement frame from the set of all
frames; one process can take a frame from another.
o Local replacement – each process selects from only its own set of allocated
frames.
6.4 Thrashing
o High paging activity is called thrashing.
o If a process does not have "enough" pages, the page-fault rate is very high. This
leads to:
low CPU utilization
operating system thinks that it needs to increase the degree of
multiprogramming
another process is added to the system
o When the CPU utilization is low, the OS increases the degree of
multiprogramming.
o If global replacement is used then as processes enter the main memory they tend to
steal frames belonging to other processes.
o Eventually all processes will not have enough frames and hence the page fault rate
becomes very high.
o Thus only swapping pages in and out takes place.
o This is the cause of thrashing.
1. Working-Set Strategy
o It is based on the assumption of the model of locality.
o Locality is defined as the set of pages actively used together.
o Working set: the set of pages in the most recent Δ page references.
o Δ is the working-set window.
if Δ is too small, it will not encompass the entire locality
if Δ is too large, it will encompass several localities
if Δ = ∞, it will encompass the entire program
o D = ∑ WSSi
where WSSi is the working-set size for process i and
D is the total demand for frames.
o If D > m (the total number of available frames), thrashing will occur.
2. Page-Fault Frequency Scheme
Other Issues
o Prepaging
To reduce the large number of page faults that occurs at process startup
Prepage all or some of the pages a process will need, before they are
referenced
But if prepaged pages are unused, I/O and memory are wasted
o Page Size
Page size selection must take into consideration:
o fragmentation
o table size
o I/O overhead
o locality
o TLB Reach
TLB Reach - the amount of memory accessible from the TLB
TLB Reach = (TLB Size) X (Page Size)
Ideally, the working set of each process is stored in the TLB; otherwise there
is a high degree of page faults.
Increase the Page Size: this may lead to an increase in fragmentation, as not
all applications require a large page size.
Provide Multiple Page Sizes: this allows applications that require larger page
sizes the opportunity to use them without an increase in fragmentation.
o I/O interlock
Pages must sometimes be locked into memory.
Consider I/O: pages that are used for copying a file from a device must be
locked from being selected for eviction by a page replacement algorithm.
7. KERNEL MEMORY ALLOCATION
Easy access to an abundance of memory is a luxury the kernel is not afforded, but a
little understanding of the issues can go a long way toward making the process relatively
painless.
A General-Purpose Allocator
The general interface for allocating memory inside of the kernel is kmalloc():
#include <linux/slab.h>
void * kmalloc(size_t size, int flags);
example:
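A minimal usage sketch, assuming a kernel build environment; GFP_KERNEL is the standard type flag for allocations that may sleep, and struct dog is a hypothetical structure used only for illustration:

```c
#include <linux/slab.h>

struct dog {
    int tail_length;    /* hypothetical structure for illustration */
};

struct dog *p;

p = kmalloc(sizeof(struct dog), GFP_KERNEL);
if (!p) {
    /* handle allocation failure */
}
/* ... use p ... */
kfree(p);
```

Note that kmalloc() can fail, so the return value must always be checked before use, and every successful allocation must eventually be released with kfree().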
Flags
The flags field controls the behavior of memory allocation.
We can divide flags into three groups: action modifiers, zone modifiers and types.
Action modifiers tell the kernel how to allocate memory.
They specify, for example, whether the kernel can sleep (that is, whether the call
to kmalloc() can block) in order to satisfy the allocation. Zone modifiers, on the
other hand, tell the kernel from where the request should be satisfied.
For example, some requests may need to be satisfied from memory that hardware
can access through direct memory access (DMA).
Finally, type flags specify a type of allocation. They group together relevant action
and zone modifiers into a single mnemonic.
In general, instead of specifying multiple action and zone modifiers, you specify a
single type flag.
8. OS Examples
Windows XP
Windows XP implements virtual memory using demand paging with clustering.
Clustering handles page faults by bringing in not only the faulting page but also
several pages following the faulting page.
When a process is first created, it is assigned a working-set minimum and
maximum.
The working-set minimum is the minimum number of pages the process is
guaranteed to have in memory.
If sufficient memory is available, a process may be assigned as many pages as its
working-set maximum.
For most applications, the values of working-set minimum and working-set
maximum are 50 and 345 pages, respectively.
The virtual memory manager maintains a list of free page frames. Associated with
this list is a threshold value that is used to indicate whether sufficient free memory
is available.
If a page fault occurs for a process that is below its working-set maximum, the
virtual memory manager allocates a page from this list of free pages.
If a process is at its working-set maximum and it incurs a page fault, it must select
a page for replacement using a local page-replacement policy.
When the amount of free memory falls below the threshold, the virtual memory
manager uses a tactic known as automatic working-set trimming to restore the
value above the threshold. Automatic working-set trimming works by evaluating
the number of pages allocated to processes.
If a process has been allocated more pages than its working-set minimum, the
virtual memory manager removes pages until the process reaches its working-set
minimum.
A process that is at its working-set minimum may be allocated pages from the
free-page frame list once sufficient free memory is available.
The algorithm used to determine which page to remove from a working set
depends on the type of processor.
On single-processor 80x86 systems, Windows XP uses a variation of the clock
algorithm discussed in Section 9.4.5.2
On Alpha and multiprocessor x86 systems, clearing the reference bit may require
invalidating the entry in the translation look-aside buffer on other processors.
Solaris
In Solaris, when a thread incurs a page fault, the kernel assigns a page to the
faulting thread from the list of free pages it maintains.
Therefore, it is imperative that the kernel keep a sufficient amount of free memory
available.
Associated with this list of free pages is a parameter—lotsfree—that represents a
threshold to begin paging.
The lotsfree parameter is typically set to 1/64 the size of the physical memory.
Four times per second, the kernel checks whether the amount of free memory is
less than lotsfree.
If the number of free pages falls below lotsfree, a process known as the pageout
starts up.
The pageout process works as follows: The front hand of the clock scans all pages
in memory, setting the reference bit to 0.
Later, the back hand of the clock examines the reference bit for the pages in
memory, appending those pages whose bit is still set to 0 to the free list and
writing to disk their contents if modified.
Solaris maintains a cache list of pages that have been "freed" but have not yet been
overwritten. The free list contains frames that have invalid contents.
Pages can be reclaimed from the cache list if they are accessed before being
moved to the free list.