UNIT 4 OS
Program: B.Tech
Course Code: BCS401
Course Name: Operating System
Department of Applied Computational Science & Engg.
Memory Management
The Bare Machine and Resident Monitor are not part of the operating system itself; they are early approaches to running programs and managing memory.
Bare Machine: programs run directly on the hardware, in machine language, with no operating system in between.
Resident Monitor: a small program that resides permanently in main memory and transfers control from one job to the next. It has four parts:
• Control Language Interpreter reads and carries out the instructions of a job, from one level to the next.
• Loader loads all the necessary system and application programs into the main memory.
• Device Driver manages the input-output devices connected to the system; it is the interface between the devices and the rest of the system.
• Interrupt Processing Program handles the interrupts that occur while a job is running.
Memory management
• Memory management is the functionality of an operating system which
handles or manages primary memory and moves processes back and
forth between main memory and disk during execution.
• Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free.
Single contiguous memory management is the simplest scheme: apart from the area reserved for the operating system, the main memory is allocated as one contiguous block to a single process at a time.
• Advantages of Single contiguous memory management schemes:
• Simple to implement.
• Easy to manage and design.
• In a single contiguous memory management scheme, once a process is loaded, it is given the full processor time; no other process will interrupt it.
• Disadvantages of Single contiguous memory management schemes:
• Wastage of memory space, since the process is unlikely to use all the available memory.
• The CPU remains idle, waiting for the disk to load the binary image into the main memory.
• The program cannot be executed if it is too large to fit the entire available main memory space.
Multiple Partitioning:
The single contiguous memory management scheme is inefficient, as it limits the computer to executing only one program at a time, resulting in wastage of memory space and CPU time. The problem of inefficient CPU use can be overcome with multiprogramming, which allows more than one program to run concurrently. To switch between two processes, the operating system needs to load both processes into the main memory, so it must divide the available main memory into multiple parts. Thus multiple processes can reside in the main memory simultaneously.
The multiple partitioning schemes can be of two types:
• Fixed Partitioning
• Dynamic Partitioning
Fixed Partitioning
• The main memory is divided into several fixed-sized partitions in a
fixed partition memory management scheme or static partitioning.
These partitions can be of the same size or different sizes. Each
partition can hold a single process. The number of partitions
determines the degree of multiprogramming, i.e., the maximum
number of processes in memory. These partitions are made at the
time of system generation and remain fixed after that.
• Advantages of Fixed Partitioning memory management schemes:
• Simple to implement.
• Easy to manage and design.
• Disadvantages of Fixed Partitioning memory management
schemes:
• This scheme suffers from internal fragmentation.
• The number of partitions is specified at the time of system
generation.
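As a sketch (the partition sizes, process names, and process sizes below are hypothetical), a first-fit placement into fixed partitions makes the internal fragmentation visible:

    # Sketch of fixed (static) partitioning with first-fit placement.
    # Partition and process sizes are illustrative, not from the slides.
    partitions = [{"size": 100, "proc": None},   # sizes in KB
                  {"size": 200, "proc": None},
                  {"size": 300, "proc": None}]

    def load(name, size_kb):
        """Place a process in the first free partition that can hold it."""
        for p in partitions:
            if p["proc"] is None and p["size"] >= size_kb:
                p["proc"] = name
                waste = p["size"] - size_kb      # internal fragmentation
                print(f"{name} ({size_kb} KB) -> {p['size']} KB partition, "
                      f"{waste} KB wasted internally")
                return True
        print(f"{name} ({size_kb} KB) cannot be loaded (no partition fits)")
        return False

    load("P1", 90)    # fits in the 100 KB partition, 10 KB wasted
    load("P2", 150)   # fits in the 200 KB partition, 50 KB wasted
    load("P3", 350)   # too large for any partition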
Dynamic Partitioning
The dynamic partitioning was designed to overcome the problems of a
fixed partitioning scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing. Requesting processes are allocated memory until the entire
physical memory is exhausted or the remaining space is insufficient to
hold the requesting process. In this scheme the partitions used are of
variable size, and the number of partitions is not defined at the system
generation time.
Advantages of Dynamic Partitioning memory management schemes:
• Simple to implement.
• Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:
• This scheme suffers from external fragmentation.
• Memory allocation and deallocation are more complex, since the number and size of partitions change at run time.
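A minimal sketch of dynamic partitioning with a first-fit free-hole list (the memory size and requests are illustrative); note how the leftover hole produces external fragmentation:

    # Sketch of dynamic (variable) partitioning over a free-hole list.
    free_holes = [(0, 1000)]          # list of (start, size) holes, in KB

    def allocate(size_kb):
        """First-fit: carve the request out of the first hole big enough."""
        for i, (start, size) in enumerate(free_holes):
            if size >= size_kb:
                # Shrink the hole; the partition is exactly the requested size.
                free_holes[i] = (start + size_kb, size - size_kb)
                if free_holes[i][1] == 0:
                    free_holes.pop(i)
                return start
        return None                    # no single hole large enough

    print(allocate(300))   # -> 0
    print(allocate(500))   # -> 300
    print(allocate(300))   # -> None: only 200 KB remain (external fragmentation)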
When the scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Every address generated by the CPU is checked against these two registers, so the operating system and the programs and data of other users are protected from being altered by the running process.
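A minimal sketch of this check, assuming illustrative register values:

    # Sketch of relocation- and limit-register address protection.
    RELOCATION = 14000   # base: where the process starts in physical memory
    LIMIT = 3000         # size of the process's logical address space

    def translate(logical_addr):
        """Every CPU-generated address is checked before reaching memory."""
        if logical_addr >= LIMIT:
            raise MemoryError("trap: addressing error (beyond limit register)")
        return logical_addr + RELOCATION   # relocated physical address

    print(translate(100))    # -> 14100
    print(translate(2999))   # -> 16999
    # translate(3500) would trap: the process cannot touch other memory.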
– Memory Protection using Rings: In computer science, the ordered protection domains are called Protection Rings.
– This method improves fault tolerance and provides security. The rings are arranged in a hierarchy from most privileged to least privileged. In a single-level sharing OS, every segment has a protection ring for the read, write, and execute operations of the process.
– If a process running in a higher ring number tries to use a segment that requires a lower ring number, a fault is generated.
– However, there are safe methods for calling procedures that run in a lower ring number and then returning to the higher ring number.
Capability-based addressing: It is a method of memory protection that is not seen in modern commercial computers. Here, pointers (objects containing a memory address) are replaced by capability objects that can only be created with protected instructions and may only be used by the kernel, or by another process that is authorized to use them. This gives the advantage of preventing unauthorized processes from creating additional separate address spaces in memory.
Memory Protection using masks: Masks are used to protect memory when paging is organized. In this method, page numbers are assigned to each program before execution and are reserved for the placement of its directives. The pages allocated to the program are then placed under the control of the operating system in the form of a mask code (an n-bit binary code), which is formed for every running program and determined by the number of pages allocated to it.
• Memory Protection using Segmentation: It is a method of dividing the system memory into different segments. On the x86 architecture, data structures such as the local descriptor table and the global descriptor table are used to protect memory.
• Memory Protection using Simulated segmentation: With this technique, a simulator monitors the program and interprets the machine code instructions of the system architecture. The simulator can thereby protect memory by applying a segmentation scheme and validating the target address of every instruction in real time.
• Memory Protection using Dynamic tainting: Dynamic tainting is a technique that marks and tracks certain data in a program at runtime, protecting the process from illegal memory accesses. In the tainting technique, we taint a program to mark two kinds of data, i.e., memory in the data space and the pointers.
EXAMPLE: The main memory is divided into a group of frames. The four processes in the system, P1, P2, P3, and P4, are each 4 KB in size.
SEGMENTATION
• A process is divided into Segments
• The details about each segment are stored in a table called a
segment table.
• The segment table mainly contains two pieces of information about each segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.
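A minimal sketch of segment-table translation, using illustrative base/limit values; a logical address is a (segment number, offset) pair:

    # Sketch of segmentation: logical address = (segment, offset).
    # Segment table entries (base, limit) are hypothetical.
    segment_table = {0: (1400, 1000),   # segment 0: base 1400, limit 1000
                     1: (6300, 400),
                     2: (4300, 1100)}

    def translate(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError("trap: offset beyond segment limit")
        return base + offset

    print(translate(2, 53))    # -> 4353
    print(translate(1, 399))   # -> 6699
    # translate(0, 1222) would trap: offset exceeds segment 0's limit.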
Fig: Page replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a
victim frame.
c. Write the victim frame to the disk; change the page and frame tables
accordingly.
3. Read the desired page into the newly freed frame; change the page and
frame tables.
4. Continue the user process from the point where the page fault occurred.
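A runnable sketch of these steps; the in-memory frame list, page table, and trivial victim choice are hypothetical stand-ins, not a real implementation:

    # Sketch of the page-fault service routine described above.
    frames = [None, None, None]      # physical frames; None means free
    page_table = {}                  # resident pages: page -> frame index

    def handle_page_fault(page):
        if None in frames:                       # step 2a: use a free frame
            frame = frames.index(None)
        else:                                    # step 2b: choose a victim
            frame = 0                            # placeholder policy
            victim = frames[frame]
            print(f"  writing victim page {victim} back to disk")   # step 2c
            del page_table[victim]
        frames[frame] = page                     # step 3: read page into frame
        page_table[page] = frame
        print(f"page {page} loaded into frame {frame}; process restarted")  # step 4

    for p in ["A", "B", "C", "D"]:
        handle_page_fault(p)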
Optimal Page Replacement:
Step 1: Starting with the first page, we check whether the page given in the reference string is already present in the frames. If yes, it is a PAGE HIT; otherwise it is a PAGE FAULT/PAGE MISS, and we add the page to the frames accordingly.
Step 2: Once the frames are full, on each fault we look forward in the reference string and find the page in the frames that is demanded last (farthest in the future) by the CPU. That page is replaced in the frame.
Step 3: If a page in the frames does not appear again in the reference string (its demand is not in the future), it can be replaced straight away.
Step 4: Similarly, we follow the above steps until all the pages in the reference string have been processed.
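A minimal simulation of this forward-looking (Optimal) policy, run on the classical textbook reference string with three frames:

    # Sketch of Optimal page replacement on an illustrative reference string.
    def optimal(ref_string, n_frames):
        frames, faults = [], 0
        for i, page in enumerate(ref_string):
            if page in frames:                       # page hit
                continue
            faults += 1                              # page fault / miss
            if len(frames) < n_frames:
                frames.append(page)
            else:
                # Evict the page used farthest in the future (or never again).
                future = ref_string[i + 1:]
                victim = max(frames, key=lambda p:
                             future.index(p) if p in future else len(future) + 1)
                frames[frames.index(victim)] = page
        return faults

    print(optimal([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))
    # -> 9 page faults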
LRU (Least Recently Used) Page Replacement:
Step 1: Starting with the first page, we check whether the page given in the reference string is already present in the frames. If yes, it is a PAGE HIT; otherwise it is a PAGE FAULT/PAGE MISS, and we add the page to the frames accordingly.
Step 2: Once the frames are full, on each fault we look backward in the reference string and find the page in the frames that was demanded earliest (least recently) by the CPU. That page is replaced in the frame.
Step 3: If a page in the frames has no demand in the past portion of the reference string, it can be replaced straight away.
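The same reference string run through this backward-looking (LRU) policy, for comparison with the optimal result above:

    # Sketch of LRU page replacement on an illustrative reference string.
    def lru(ref_string, n_frames):
        frames, faults = [], 0
        for page in ref_string:
            if page in frames:
                frames.remove(page)      # refresh: move to most-recent position
                frames.append(page)
                continue
            faults += 1                  # page fault / miss
            if len(frames) == n_frames:
                frames.pop(0)            # evict the least recently used page
            frames.append(page)
        return faults

    print(lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))
    # -> 12 page faults for the same string (optimal achieved 9)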
Thrashing:
❑ If the number of frames allocated to a low-priority process falls below the
minimum number required by the computer architecture, we must suspend
that process’s execution. We should then page out its remaining pages,
freeing all its allocated frames. This provision introduces a swap-in, swap-
out level of intermediate CPU scheduling.
❑ If the process does not have the number of frames it needs to support pages
in active use, it will quickly page-fault. At this point, it must replace some
page. However, since all its pages are in active use, it must replace a page
that will be needed again right away. Consequently, it quickly faults again,
and again, and again, replacing pages that it must bring back in immediately.
This high paging activity is called thrashing. A process is thrashing if it
is spending more time paging than executing.
❑ As the degree of multiprogramming increases, CPU utilization also increases,
although more slowly, until a maximum is reached. If the degree of
multiprogramming is increased even further, thrashing sets in, and CPU
utilization drops sharply. At this point, to increase CPU utilization and stop
thrashing, we must decrease the degree of multiprogramming.
Fig: Thrashing
CAUSE OF THRASHING:
• If CPU utilization is too low, we increase the degree of
multiprogramming by introducing a new process to the system. A
global page-replacement algorithm is used; it replaces pages without
regard to the process to which they belong. Now suppose that a
process enters a new phase in its execution and needs more frames.
It starts faulting and taking frames away from other processes.
These processes need those pages, however, and so they also fault,
taking frames from other processes. These faulting processes must
use the paging device to swap pages in and out. As they queue up
for the paging device, the ready queue empties.
• As processes wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases
the degree of multiprogramming as a result. The new process tries
to get started by taking frames from running processes, causing
more page faults and a longer queue for the paging device.
• As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more. Thrashing has occurred, and system throughput plunges. The page-fault rate increases tremendously. As a result, the effective memory-access time increases. No work is getting done, because the processes are spending all their time paging.
Allocation of Frames
• Each process needs a minimum number of pages
• Example: IBM 370 – 6 pages to handle SS MOVE instruction:
– instruction is 6 bytes, might span 2 pages
– 2 pages to handle from
– 2 pages to handle to
• Two major allocation schemes
– fixed allocation
– priority allocation
Fixed Allocation
• Equal allocation – For example, if there are 100 frames and 5 processes,
give each process 20 frames.
• Proportional allocation – Allocate according to the size of process
Priority Allocation
• Use a proportional allocation scheme using priorities rather than size
• If process Pi generates a page fault,
– select for replacement one of its frames
– select for replacement a frame from a process with lower priority
number
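A sketch of proportional allocation, a_i = (s_i / S) × m: process i of size s_i out of total size S gets that fraction of the m available frames. The case of 62 frames shared by processes of 10 and 127 pages is illustrative:

    # Sketch of proportional frame allocation; sizes are illustrative.
    def proportional(sizes, m):
        S = sum(sizes)
        return [s * m // S for s in sizes]    # a_i = (s_i / S) * m, rounded down

    # 62 frames shared by a 10-page process and a 127-page process:
    print(proportional([10, 127], 62))   # -> [4, 57]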
• Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in the cache, the CPU then goes to the main memory.
• The cache is the fastest component in the memory hierarchy and
approaches the speed of CPU components.
• Cache memory is organised as distinct sets of blocks, where each set contains a small fixed number of blocks.
• Whenever an access is made to cache, the cache controller does not
search the entire cache in order to look for a match. Rather, the
controller maps the address to a particular set of the cache and
therefore searches only the set for a match. This kind of cache
organisation is called set associative because the cache is divided
into distinct sets of blocks.
• A block of words, one of which was just accessed, is then transferred from main memory to cache memory. The block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.
• The performance of the cache memory is frequently measured in
terms of a quantity called hit ratio.
• When the CPU refers to memory and finds the word in cache, it is
said to produce a hit.
• If the word is not found in the cache, it is in main memory and it
counts as a miss.
• The hit ratio is the number of hits divided by the total number of CPU references to memory (hits plus misses).
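For example, with hypothetical counts of 950 hits in 1000 references:

    # Hit ratio = hits / (hits + misses); counts here are illustrative.
    hits, misses = 950, 50
    hit_ratio = hits / (hits + misses)
    print(hit_ratio)   # -> 0.95, i.e. 95% of references are served by the cache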
Cache Mapping:
Cache mapping defines how a block from the main memory is mapped to the
cache memory in case of a cache miss. There are three different types of
mapping used for the purpose of cache memory which are as follows:
1. Direct mapping: In direct mapping, the cache consists of normal high-speed random-access memory. Each location in the cache holds data at a specific cache address, and this address is given by the lower significant bits of the main memory address. This enables the block to be selected directly from the lower significant bits of the memory address. The remaining higher significant bits of the address are stored in the cache with the data, to complete the identification of the cached data.
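A sketch of this address split for a hypothetical direct-mapped cache with 256 lines; the low bits select the line and the remaining high bits form the stored tag:

    # Sketch of direct-mapped address splitting; geometry is hypothetical.
    CACHE_LINES = 256                 # 2^8 lines -> 8 index bits
    cache = {}                        # index -> (tag, data)

    def split(address):
        index = address % CACHE_LINES          # lower significant bits
        tag = address // CACHE_LINES           # remaining higher bits
        return tag, index

    def read(address, memory):
        tag, index = split(address)
        if index in cache and cache[index][0] == tag:
            return cache[index][1]              # hit: stored tag matches
        data = memory[address]                  # miss: go to main memory
        cache[index] = (tag, data)              # old block at this index is evicted
        return data

    memory = {a: f"word@{a}" for a in range(1024)}
    print(read(5, memory))      # miss, then cached at index 5
    print(read(5, memory))      # hit
    print(read(261, memory))    # same index (261 % 256 = 5), new tag: evicts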
2. Associative Mapping: In a fully associative cache memory, each location in the cache stores both the memory address and the data. Whenever data is requested, the incoming memory address is simultaneously compared with all stored addresses using the internal logic of the associative memory. If a match is found, the corresponding data is read out; otherwise, the main memory is accessed. This method is known as the fully associative mapping approach because the cached data is related to the main memory by storing both the memory address and the data in the cache.
3. Set Associative Mapping: In set associative mapping, the cache is divided into sets of blocks. The number of blocks in a set is known as the associativity or set size. Each block in each set has a stored tag which, together with the index, completely identifies the block. Thus, set associative mapping allows a limited number of blocks with the same index but different tags.
In this type of cache, the following steps are used to access the
data from a cache:
• The index of the address from the processor is used to access the set.
• Then the comparators are used to compare all tags of the selected
set with the incoming tag.
• If a match is found, the corresponding location is accessed.
• If no match is found, an access is made to the main memory.
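A sketch of these steps for a hypothetical 2-way set-associative cache with 4 sets:

    # Sketch of 2-way set-associative lookup; geometry is hypothetical.
    N_SETS = 4                                   # index = address % N_SETS
    cache = [[] for _ in range(N_SETS)]          # each set holds up to 2 (tag, data)

    def read(address, memory):
        index, tag = address % N_SETS, address // N_SETS
        ways = cache[index]                      # step 1: index selects the set
        for stored_tag, data in ways:            # step 2: compare all tags in set
            if stored_tag == tag:
                return data                      # step 3: match found
        data = memory[address]                   # step 4: miss -> main memory
        if len(ways) == 2:
            ways.pop(0)                          # evict one block (FIFO here)
        ways.append((tag, data))
        return data

    memory = {a: f"word@{a}" for a in range(64)}
    print(read(5, memory))    # miss
    print(read(9, memory))    # same set (index 1), different tag: both coexist
    print(read(5, memory))    # hit: the 2-way set still holds both blocks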
Locality of reference
Locality of Reference refers to the tendency of a computer program to access instructions whose addresses are near one another. The property of locality of reference is mainly shown by loops and subroutine calls in a program.
▪ In the case of loops, the central processing unit repeatedly refers to the set of instructions that constitute the loop.
▪ In the case of subroutine calls, the same set of instructions is fetched from memory every time the subroutine is invoked.
▪ References to data items also become localized, meaning the same data item is referenced again and again.
❑ Temporal Locality: Temporal locality means the data or instruction that is currently being fetched may be needed again soon. We should therefore keep it in the cache memory, so that we avoid searching main memory again for the same item.
❑ Spatial Locality: Spatial locality means an instruction or data item near the current memory location being fetched may be needed in the near future. This is slightly different from temporal locality: here we are talking about nearby memory locations, while in temporal locality we were talking about the actual memory location that was fetched.
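A small illustration of both kinds of locality in ordinary code:

    # Illustration of locality of reference.
    data = list(range(1000))

    total = 0
    for x in data:        # spatial locality: consecutive elements sit adjacent
        total += x        # temporal locality: 'total' (and the loop code itself)
                          # is referenced again and again on every iteration
    print(total)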