UNIT 4 OS

Department of Applied Computational Science & Engg.
Program: B.Tech
Course Code: BCS401
Course Name: Operating System

Memory Management
The Bare Machine and the Resident Monitor are concepts that predate the
operating system proper.

Bare Machine:

• A bare machine is the raw hardware, used to execute programs on the
processor without an operating system.
• Before operating systems were developed, instructions were executed
directly on the hardware, with no intervening software layer.
• A bare machine accepts instructions only in machine language.


Resident Monitor:

• The Resident Monitor is code that runs on a bare machine.
• It works like a primitive operating system: it controls instruction
execution and performs all necessary housekeeping functions. It also acts
as a job sequencer, ordering jobs and sending them to the processor.
• After scheduling the jobs, the Resident Monitor loads the programs one
by one into main memory according to their sequence.
• Because of this, there is no gap between one program's execution and
the next.


Resident Monitor

• Control Language Interpreter: reads and carries out instructions,
passing control from one job step to the next.
• Loader: loads all the necessary system and application programs into
main memory.
• Device Driver: manages the input/output devices connected to the
system; it is the interface between the devices and the rest of the
system.
• Interrupt Processing Program: handles interrupts as they occur.

Memory management
• Memory management is the functionality of an operating system which
handles or manages primary memory and moves processes back and
forth between main memory and disk during execution.
• Memory management keeps track of each and every memory location,
regardless of whether it is allocated to some process or free.

• It checks how much memory is to be allocated to each process.
• It decides which process will get memory at what time.
• It tracks whenever some memory gets freed or unallocated and
updates the status accordingly.


Why Memory Management is Required?

• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.


Logical and Physical Address Space

Logical Address Space:


• An address generated by the CPU is known as a “Logical Address”. It is also
known as a Virtual address. Logical address space can be defined as the size
of the process. A logical address can be changed.
Physical Address Space:
• An address seen by the memory unit (i.e., the one loaded into the memory
address register) is commonly known as a "Physical Address", or Real
address. The set of all physical addresses corresponding to the logical
addresses is known as the physical address space. The run-time mapping
from virtual to physical addresses is done by a hardware device, the
Memory Management Unit (MMU). A physical address always remains
constant.


Static and Dynamic Loading


Loading a process into the main memory is done by a loader.
There are two different types of loading :
• Static Loading: the entire program is loaded into memory at a fixed
address before execution begins. It requires more memory space.
• Dynamic Loading: if the entire program and all of its data must be in
physical memory, the size of a process is limited to the size of physical
memory. To improve memory utilization, dynamic loading is used: a
routine is not loaded until it is called. All routines reside on disk in a
relocatable load format. One advantage of dynamic loading is that a
routine that is never used is never loaded, which helps when a large
amount of code is needed only occasionally.


Static and Dynamic Linking

To perform a linking task a linker is used. A linker is a program that takes


one or more object files generated by a compiler and combines them into a
single executable file.
• Static Linking: In static linking, the linker combines all necessary program
modules into a single executable program. So there is no runtime
dependency. Some operating systems support only static linking, in which
system language libraries are treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to
dynamic loading. In dynamic linking, “Stub” is included for each
appropriate library routine reference. A stub is a small piece of code. When
the stub is executed, it checks whether the needed routine is already in
memory or not. If not available then the program loads the routine into
memory.
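The stub idea above can be sketched in Python. This is a hypothetical illustration, not a real linker: `importlib` stands in for the loader, and `math.sqrt` stands in for a shared-library routine.

```python
import importlib

class Stub:
    """A stand-in for a library routine reference (illustrative only).

    On the first call it "loads" the real routine (here: resolves an
    attribute from a module), caches it, and forwards all later calls,
    mimicking how a dynamic-linking stub replaces itself.
    """
    def __init__(self, module_name, func_name):
        self.module_name = module_name
        self.func_name = func_name
        self._routine = None              # routine not yet in memory

    def __call__(self, *args, **kwargs):
        if self._routine is None:         # needed routine not loaded: load it
            module = importlib.import_module(self.module_name)
            self._routine = getattr(module, self.func_name)
        return self._routine(*args, **kwargs)

sqrt_stub = Stub("math", "sqrt")
print(sqrt_stub(16.0))   # first call triggers the load -> 4.0
```

After the first call, the stub holds a direct reference, so later calls skip the load check's slow path.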

Memory Management Techniques:


Contiguous memory management schemes:

In a Contiguous memory management scheme, each program occupies a


single contiguous block of storage locations, i.e., a set of memory locations
with consecutive addresses.
Single contiguous memory management schemes:

The Single contiguous memory management scheme is the simplest


memory management scheme used in the earliest generation of computer
systems. In this scheme, the main memory is divided into two contiguous
areas or partitions. The operating system resides permanently in one
partition, generally in lower memory, and the user process is loaded
into the other partition.

CONTINUED…
• Advantages of Single contiguous memory management
schemes:
• Simple to implement.
• Easy to manage and design.
• In a Single contiguous memory management scheme, once a
process is loaded, it is given the processor's full time, and no
other process will interrupt it.
• Disadvantages of Single contiguous memory management
schemes:
• Wastage of memory space due to unused memory as the
process is unlikely to use all the available memory space.
• The CPU remains idle, waiting for the disk to load the binary
image into the main memory.
• A program cannot be executed if it is too large to fit in the entire
available main memory space.

Multiple Partitioning:
The single Contiguous memory management scheme is inefficient as it
limits computers to execute only one program at a time resulting in
wastage in memory space and CPU time. The problem of inefficient
CPU use can be overcome using multiprogramming that allows more
than one program to run concurrently. To switch between two
processes, the operating systems need to load both processes into the
main memory. The operating system needs to divide the available main
memory into multiple parts to load multiple processes into the main
memory. Thus multiple processes can reside in the main memory
simultaneously.
The multiple partitioning schemes can be of two types:
• Fixed Partitioning
• Dynamic Partitioning


Fixed Partitioning
• The main memory is divided into several fixed-sized partitions in a
fixed partition memory management scheme or static partitioning.
These partitions can be of the same size or different sizes. Each
partition can hold a single process. The number of partitions
determines the degree of multiprogramming, i.e., the maximum
number of processes in memory. These partitions are made at the
time of system generation and remain fixed after that.
• Advantages of Fixed Partitioning memory management schemes:
• Simple to implement.
• Easy to manage and design.
• Disadvantages of Fixed Partitioning memory management
schemes:
• This scheme suffers from internal fragmentation.
• The number of partitions is specified at the time of system
generation.
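The internal fragmentation mentioned above can be quantified directly. The partition and process sizes below are made-up illustrative numbers; each process occupies one fixed partition, and the unused space inside each partition is wasted.

```python
# Partition sizes fixed at system generation (illustrative values, in KB).
partitions = [100, 200, 300, 400]
# One process loaded per partition (illustrative sizes, in KB).
processes  = [90, 150, 290, 310]

# Internal fragmentation: memory inside a partition that its process
# does not use and that no other process can claim.
internal_frag = sum(part - proc for part, proc in zip(partitions, processes))
print(internal_frag)   # 10 + 50 + 10 + 90 = 160 KB wasted
```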

Dynamic Partitioning
The dynamic partitioning was designed to overcome the problems of a
fixed partitioning scheme. In a dynamic partitioning scheme, each process
occupies only as much memory as they require when loaded for
processing. Requested processes are allocated memory until the entire
physical memory is exhausted or the remaining space is insufficient to
hold the requesting process. In this scheme the partitions used are of
variable size, and the number of partitions is not defined at the system
generation time.
Advantages of Dynamic Partitioning memory management schemes:
• Simple to implement.
• Easy to manage and design.
Disadvantages of Dynamic Partitioning memory management schemes:
• This scheme suffers from external fragmentation.
• Allocation and de-allocation are more complex, and compaction may be
needed to merge scattered free holes.
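Dynamic partitioning needs a placement policy to decide which free hole receives a request; first fit is one common choice (the policy name and the hole list below are illustrative, not taken from the text above).

```python
def first_fit(holes, request):
    """Allocate `request` KB from the first hole large enough (sketch).

    holes: list of (start, size) free blocks, kept in address order.
    Returns the start address of the allocation, shrinking or removing
    the chosen hole, or None if no hole fits -- which is external
    fragmentation in action: total free space may exceed the request
    even though no single hole does.
    """
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)                          # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None

holes = [(0, 100), (300, 250), (700, 50)]
print(first_fit(holes, 120))   # 300: first hole with size >= 120
print(holes)                   # [(0, 100), (420, 130), (700, 50)]
```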

Memory Protection in Operating Systems


In memory protection, we have to protect the operating system from user
processes, which can be done by using a relocation register together with a
limit register. The relocation register holds the value of the smallest physical
address, whereas the limit register holds the range of logical addresses.
Each logical address must be less than the value in the limit register.
The memory management unit is used to translate the logical address with the
value in the relocation register dynamically after which the translated (or
mapped) address is then sent to memory.
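The check-then-relocate step can be sketched as follows. The register values are assumptions here (they follow the classic textbook example), and a real MMU does this in hardware, not code.

```python
def translate(logical, relocation, limit):
    """Sketch of the MMU check described above.

    Every CPU-generated (logical) address must be less than the limit
    register; a valid address is relocated by adding the relocation
    register before being sent to memory. An out-of-range address traps.
    """
    if logical >= limit:
        raise MemoryError("trap: address beyond limit register")
    return logical + relocation

# Register values assumed for illustration.
print(translate(346, relocation=14000, limit=12000))   # 14346
```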


In the diagram above, when the scheduler selects a process for execution,
the dispatcher loads the relocation and limit registers with the correct
values as part of the context switch. Because every address generated by
the CPU is checked against these two registers, the operating system,
other programs, and user data are protected from being altered by the
running process.


Need of Memory protection:


Memory protection prevents a process from accessing unallocated memory:
it stops software from seizing an excessive amount of memory, which could
damage other software currently in use or cause loss of saved data.
Memory protection also helps in detecting malicious or harmful
applications that might otherwise damage the processes of the operating
system.


Methods of memory protection schemes

Memory Protection using Keys: protection keys are found in most
modern computers that use paged memory organization and
dynamic allocation among concurrently running programs. The
keys are special codes used to verify that a program accesses only
the blocks of memory cells assigned to it. This method lets the
system impose page-based protections without any modification
of the page tables.

Continued…..
– Memory Protection using Rings: In CS, the domains related to
ordered protection are called Protection Rings.
– This method helps in improving fault tolerance and provides security.
These rings are arranged in a hierarchy from most privileged to least
privileged. In the single-level sharing OS, each and every segment
has a protection ring for the process of reading, writing, and
executing operations of the process.
– If a process uses a ring number higher (less privileged) than the ring
number of a segment, a fault is raised.
– However, there are safe mechanisms for calling procedures that run in
a lower (more privileged) ring and then returning to the higher ring.


Continued…..
Capability-based addressing: a method of protecting memory that is rarely
seen in modern commercial computers. Here, pointers (objects consisting of a
memory address) are replaced by capability objects that can only be created
with protected instructions and may only be executed by the kernel, or by a
process authorized to execute them. This gives the advantage of keeping
unauthorized processes from creating additional separate address spaces in
memory.
Memory Protection using masks: masks are used to protect memory in a
paged organization. Before execution, page numbers are assigned to each
program and reserved for the placement of its directives. The pages
allocated to a program are registered with the operating system in the
form of a mask code (an n-bit binary code), formed for every working
program and determined by the number of its pages.


CONTINUED…
• Memory Protection using Segmentation: It is a method of dividing the
system memory into different segments. The data structures of x86
architecture of OS like local descriptor table and global descriptor table are
used in the protection of memory.
• Memory Protection using Simulated segmentation: with this technique, a
simulator monitors the program by interpreting the machine code
instructions of the system architecture. The simulator protects memory by
applying a segmentation scheme and validating the target address of every
instruction in real time.
• Memory Protection using Dynamic tainting: Dynamic tainting is a technique
that consists of marking and tracking certain data in a program at runtime as
it protects the process from illegal memory accesses. In tainting technique,
we taint a program to mark two kinds of data i.e., memory in the data space
and the pointers.

PAGING
▪ Paging is a memory management technique in which memory is divided
into fixed-size pages.
▪ It permits the physical address space of a process to be non-contiguous.
▪ It allows faster access to data.
▪ A process is composed of fixed-size blocks called pages.
▪ The main idea behind paging is to divide each process into pages.
▪ Each page of the process is stored in one of the frames of memory. The
pages may be stored at different, non-contiguous locations in memory.

▪ Physical memory is divided into fixed-sized blocks called frames.
▪ Logical memory is divided into blocks of the same size called pages.
▪ To run a program of size n pages, the OS needs to find n free frames and
load the program.
▪ When a process requests memory, the operating system allocates
one or more page frames to the process and maps the process's
logical pages to the physical page frames.
▪ The mapping between logical pages and physical page frames is
maintained by the page table.
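The page-table lookup described above can be sketched as follows. The page size and the page-table contents are illustrative assumptions.

```python
PAGE_SIZE = 1024                       # bytes per page/frame (assumed)
page_table = {0: 5, 1: 2, 2: 7}        # hypothetical page -> frame mapping

def logical_to_physical(addr):
    # Split the logical address into page number p and offset d,
    # then look up the frame for p in the page table.
    page, offset = divmod(addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(logical_to_physical(1050))   # page 1, offset 26 -> frame 2 -> 2074
```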

EXAMPLE: The main memory will be divided into a group of frames. The
four processes in the system are P1, P2, P3, and P4. They are each 4 KB
in size as shown in figure:
SEGMENTATION
• A process is divided into Segments
• The details about each segment are stored in a table called a
segment table.
• Segment table contains mainly two information about segment:
1. Base: It is the base address of the segment
2. Limit: It is the length of the segment.

• CPU generates a logical address which contains two parts:


1. Segment Number
2. Offset

Program Name: B.TECH Program Code: BCS-401


The address generated by the CPU is divided into:
• Segment number (s): Number of bits required to represent
the segment.
• Segment offset (d): Number of bits required to represent the
size of the segment.
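The base/limit translation described above can be sketched as follows. The segment-table values are illustrative only; a real system performs this check in hardware.

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:            # offset d must lie within the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset           # physical address = base + offset

print(seg_translate(2, 53))   # 4300 + 53 = 4353
```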

Types of Segmentation in Operating System
• Virtual Memory Segmentation: Each process is divided into a
number of segments, but the segmentation is not done all at once.
This segmentation may or may not take place at the run time of the
program.
• Simple Segmentation: Each process is divided into a number of
segments, all of which are loaded into memory at run time, though
not necessarily contiguously.
• Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
VIRTUAL MEMORY CONCEPT
• Virtual memory is a storage allocation scheme.
• Its size is limited by the addressing scheme of the computer system.
• It is a technique implemented using both software and hardware.
• It maps the memory addresses used by a program, called virtual
addresses, into physical addresses in computer memory.
• The user can load processes bigger than the available main memory,
under the illusion that enough memory is available to load the process.
• Instead of loading one big process into main memory, the operating
system loads different parts of more than one process into main
memory.

• Advantages of Virtual Memory
1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.

• Disadvantages of Virtual Memory
1. The system becomes slower since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for other use.

DEMAND PAGING
• It can be described as a memory management technique
• It is used to improve memory usage and system performance.
• used in virtual memory systems where pages enter main memory
only when requested or needed by the CPU.
• operating system loads only the necessary pages of a program
into memory at runtime, instead of loading the entire program
into memory at the start.
• In demand paging, the pages of a process which are least used,
get stored in the secondary memory.
• A page is copied to the main memory when its demand is made or
page fault occurs.

• Demand paging solves the problem of limited memory by swapping
pages in only on demand. This is also known as lazy swapping: a page is
never swapped into memory unless it is needed.
• A swapper that deals with the individual pages of a process is
referred to as a pager.

Advantages of Demand Paging
• With the help of Demand Paging, memory is utilized efficiently.
• Demand paging avoids External Fragmentation.
• Less Input/Output is needed for Demand Paging.
• This process is not constrained by the size of physical memory.
• Portions of the process that are never called are never loaded.

Disadvantages of Demand paging


• There is an increase in overheads due to interrupts and page tables.
• Memory access time in demand paging is longer.

Performance of Demand Paging
• It is often measured in terms of the effective access time
• Effective access time is the amount of time it takes to access
memory, if the cost of page faults are amortized over all
memory accesses.
• In some sense it is an average or expected access time.

ea = (1 - p) * ma + p * pft

ea  = effective access time
ma  = physical memory (core) access time
pft = page fault time
p   = probability of a page fault occurring
(1 - p) = probability of accessing memory in an available frame

The page fault time is the sum of the additional overheads
associated with accessing a page in the backing store.
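The formula can be evaluated directly. The memory access time and page-fault service time below are assumed values for illustration; note how even a tiny fault probability dominates the effective access time.

```python
def effective_access_time(p, ma, pft):
    # ea = (1 - p) * ma + p * pft, all times in the same unit
    return (1 - p) * ma + p * pft

ma  = 200            # ns, memory access time (assumed)
pft = 8_000_000      # ns, page fault service time (8 ms, assumed)

# With a fault probability of 0.1%, ea = 0.999*200 + 0.001*8e6 = 8199.8 ns,
# roughly 40x slower than a plain memory access.
print(effective_access_time(0.001, ma, pft))
```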


PAGE REPLACEMENT ALGORITHMS


In a paging-based memory management system within an
operating system, a strategy to select which memory page to evict
is crucial when introducing a new page. This selection process is
called a page replacement algorithm. Page replacement is
mandatory when the system encounters a page fault and there
are no available page frames in memory to accommodate the
new page. However, if the evicted page is needed soon after its
removal, this leads to another page fault. Therefore, it's critical to
choose for replacement a page that is unlikely to be needed
shortly.
Page replacement takes the following approach. If no frame is
free, we find one that is not currently being used and free it. We
can free a frame by writing its contents to swap space and
changing the page table (and all other tables) to indicate that the
page is no longer in memory. We can then use the freed frame to
hold the page for which the process faulted.

Fig: Page replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a
victim frame.
c. Write the victim frame to the disk; change the page and frame tables
accordingly.
3. Read the desired page into the newly freed frame; change the page and
frame tables.
4. Continue the user process from where the page fault occurred.

1) FIFO PAGE REPLACEMENT:


❖ The simplest page replacement algorithm is a first-in, first-out
(FIFO) algorithm. A FIFO replacement algorithm associates with
each page the time when that page was brought into the
memory. When a page must be replaced, the oldest page is
chosen. We can create a FIFO queue to hold all pages in memory.

❖ FIFO page-replacement algorithm is easy to understand and


program.
❖ Belady's anomaly may occur for some reference sequences, e.g.
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: the number of faults with four frames
(ten) is greater than the number of faults with three frames (nine).
Note: There are many different page replacement algorithms.
We evaluate an algorithm by running it on a particular string of
memory references and computing the number of page faults.
This string of memory references is called a reference string.

We use the reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Fig: Graph of page faults versus number of frames.

Step 1: Starting with the first page, we check whether that page of the
reference string is already present in the frames. If yes, that is a PAGE
HIT; otherwise it is a PAGE FAULT/PAGE MISS, and we add the page to the
frames accordingly.
Step 2: Similarly, we check subsequent pages against the frames.
Step 3: As soon as the frames are full, we empty the frame that was filled
first and fill it with the new page.
Step 4: Whenever an already-entered page is found in a frame, it is a PAGE
HIT and we keep that page as it is.
Step 5: We follow the above steps until all pages of the reference string
have been processed.

Hit Ratio  = No. of hits / No. of references
Miss Ratio = No. of misses / No. of references
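The FIFO steps above, including Belady's anomaly on the sequence 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, can be sketched as a short simulation (illustrative code, not part of any OS):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement (sketch)."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:               # page fault
            faults += 1
            if len(frames) == nframes:       # no free frame: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)               # remember arrival order
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: Belady's anomaly
```

Adding a frame increases the fault count on this string, exactly as the slide states.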

2) Optimal Page replacement:


❖ An optimal page-replacement algorithm has the lowest page-
fault rate of all algorithms and will never suffer from Belady’s
anomaly. An optimal page-replacement algorithm exists, and
has been called OPT or MIN.
❖ It is simply “Replace the page that will not be used for the
longest period of time.”
❖ The optimal page-replacement algorithm is difficult to
implement because it requires future knowledge of the
reference string.


We use the reference string:


7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1,
7, 0, 1

Step 1: Starting with the first page, we will check if that particular page given in
the reference string is already present in the frames. If yes, then that is called
PAGE HIT, otherwise that is a PAGE FAULT/PAGE MISS and accordingly we will
add the pages in the frames.
Step 2: As soon as the frames are full, we look forward in the reference
string from the current page and find which page currently in the frames
is demanded last (farthest in the future) by the CPU. That page is
replaced.
Step 3: If a page in the frames never appears again in the reference
string (no future demand), we can replace it straightaway.
Step 4: Similarly, we follow the above steps until all pages of the
reference string have been processed.
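The "replace the page used farthest in the future" rule can be sketched as follows (an illustrative simulation; it needs the whole reference string up front, which is exactly why OPT is impractical on a real system):

```python
def opt_faults(refs, nframes):
    """Count page faults under optimal (OPT/MIN) replacement (sketch)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # page hit
        faults += 1
        if len(frames) < nframes:
            frames.append(page)               # free frame available
        else:
            def next_use(p):
                # Index of p's next use after position i; never used -> inf.
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            victim = max(frames, key=next_use)  # farthest future use
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9 faults with three frames
```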

3) Least recently used:


• The FIFO algorithm uses the time when a page was brought into
memory; the OPT algorithm uses the time when a page is to be used.
In LRU, we replace the page that has not been used for the longest
period of time.
• LRU replacement associates with each page the time of that
page's last use. When a page must be replaced, LRU chooses
that page that has not been used for the longest period of time.

4) MOST RECENTLY USED:


❖ MRU replace the page that is most recently used in the past.

We use the reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1,
7, 0, 1

Step 1: Starting with the first page, we will check if that particular page given in
the reference string is already present in the frames. If yes, then that is called
PAGE HIT, otherwise that is a PAGE FAULT/PAGE MISS and accordingly we will
add the pages in the frames.
Step 2: As soon as the frames are full, we look backward in the reference
string from the current page and find which page currently in the frames
was demanded earliest (least recently) by the CPU. That page is replaced.
Step 3: If a page in the frames has no earlier demand in the reference
string, we can replace it straightaway.
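The backward-looking rule above is LRU replacement; it can be sketched with an ordered dictionary whose key order tracks recency (illustrative code only):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement (sketch)."""
    frames, faults = OrderedDict(), 0        # keys ordered oldest -> newest
    for page in refs:
        if page in frames:
            frames.move_to_end(page)         # hit: mark most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)   # evict least recently used
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults with three frames
```

On this reference string LRU falls between OPT (9 faults) and FIFO (15 faults), which matches its reputation as a good practical approximation of OPT.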

Thrashing:
❑ If the number of frames allocated to a low-priority process falls below the
minimum number required by the computer architecture, we must suspend
that process’s execution. We should then page out its remaining pages,
freeing all its allocated frames. This provision introduces a swap-in, swap-
out level of intermediate CPU scheduling.
❑ If the process does not have the number of frames it needs to support pages
in active use, it will quickly page-fault. At this point, it must replace some
page. However, since all its pages are in active use, it must replace a page
that will be needed again right away. Consequently, it quickly faults again,
and again, and again, replacing pages that it must bring back in immediately.
This high paging activity is called thrashing. A process is thrashing if it
is spending more time paging than executing.
❑ As the degree of multiprogramming increases, CPU utilization also increases,
although more slowly, until a maximum is reached. If the degree of
multiprogramming is increased even further, thrashing sets in, and CPU
utilization drops sharply. At this point, to increase CPU utilization and stop
thrashing, we must decrease the degree of multiprogramming.

Fig: Thrashing

This phenomenon is illustrated in the figure above, in which CPU
utilization is plotted against the degree of multiprogramming.

CAUSE OF THRASHING:
• If CPU utilization is too low, we increase the degree of
multiprogramming by introducing a new process to the system. A
global page-replacement algorithm is used; it replaces pages without
regard to the process to which they belong. Now suppose that a
process enters a new phase in its execution and needs more frames.
It starts faulting and taking frames away from other processes.
These processes need those pages, however, and so they also fault,
taking frames from other processes. These faulting processes must
use the paging device to swap pages in and out. As they queue up
for the paging device, the ready queue empties.
• As processes wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases
the degree of multiprogramming as a result. The new process tries
to get started by taking frames from running processes, causing
more page faults and a longer queue for the paging device.
• As a result, CPU utilization drops even further, and the CPU
scheduler tries to increase the degree of multiprogramming even
more. Thrashing has occurred, and system throughput plunges. The
page-fault rate increases tremendously. As a result, the effective
memory-access time increases. No work is getting done, because the
processes are spending all their time paging.


How to Eliminate Thrashing?


Thrashing degrades system performance and increases disk wear, so it is
worth taking steps to avoid it. The following methods help resolve thrashing:
• Adjust the swap file size: an incorrectly configured swap file can
itself cause disk thrashing.
• Increase the amount of RAM: since insufficient memory causes
thrashing, one solution is to add more RAM. With more memory, the
computer can handle its workload without paging excessively.
Generally, this is the best long-term solution.
• Decrease the number of applications running on the computer: too
many background applications consume a large share of system
resources, and the little that remains can lead to thrashing.
Closing some applications releases resources and helps avoid
thrashing to some extent.
• Replace programs: replace memory-heavy programs with equivalents
that use less memory.

Allocation of Frames
• Each process needs a minimum number of frames
• Example: IBM 370 – 6 pages to handle SS MOVE instruction:
– instruction is 6 bytes, might span 2 pages
– 2 pages to handle from
– 2 pages to handle to
• Two major allocation schemes
– fixed allocation
– priority allocation


Fixed Allocation
• Equal allocation – For example, if there are 100 frames and 5 processes,
give each process 20 frames.
• Proportional allocation – Allocate frames in proportion to the size of each process
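Both schemes can be sketched in a few lines of Python. The process sizes and frame count below are hypothetical examples, and the function name is my own; integer division may leave a few frames unassigned, which a real allocator would distribute.

```python
def proportional_allocation(sizes, m):
    """Allocate m frames among processes in proportion to their sizes.
    sizes: pages needed by each process; m: total available frames."""
    S = sum(sizes)
    # Each process i gets roughly (s_i / S) * m frames.
    return [s * m // S for s in sizes]

# Example: 62 frames shared by processes of 10 and 127 pages.
print(proportional_allocation([10, 127], 62))             # -> [4, 57]
# Equal allocation is the special case where all sizes are equal:
print(proportional_allocation([20, 20, 20, 20, 20], 100))  # -> [20, 20, 20, 20, 20]
```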

Priority Allocation
• Use a proportional allocation scheme using priorities rather than size
• If process Pi generates a page fault,
– select for replacement one of its frames
– select for replacement a frame from a process with lower priority
number


Global vs. Local Allocation


• Global replacement – process selects a replacement frame from
the set of all frames; one process can take a frame from another
• Local replacement – each process selects from only its own set of
allocated frames


Techniques to Prevent Thrashing:


1) Working-Set Model:
• Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 references
• WSSi (working set of process Pi) =
total number of pages referenced in the most recent Δ references (varies with time)
– if Δ is too small, it will not encompass the entire locality
– if Δ is too large, it will encompass several localities
– if Δ = ∞ ⇒ it will encompass the entire program
• D = Σ WSSi ≡ total demand for frames
• If D > m ⇒ thrashing
• Policy: if D > m, then suspend one of the processes
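The working-set computation above can be sketched directly. The reference string, window size, and function name below are illustrative, not from the slides:

```python
def working_set(refs, t, delta):
    """Set of pages referenced in the most recent delta references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
print(working_set(refs, 9, 4))   # the last 4 references touch pages 1, 5, and 7

# D = sum of working-set sizes; suspend a process if D exceeds m (total frames).
per_process = [[1, 2, 1], [3, 4, 3, 4]]
D = sum(len(working_set(r, len(r) - 1, 10)) for r in per_process)
print(D > 3)   # D = 2 + 2 = 4 > m = 3 frames -> True
```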


Keeping Track of the Working Set
• Approximate with interval timer + a reference bit
• Example: Δ = 10,000
– Timer interrupts after every 5,000 time units
– Keep 2 history bits in memory for each page
– Whenever the timer interrupts, copy each reference bit into the
history bits and then clear all reference bits to 0
– If one of the bits in memory = 1 ⇒ the page is in the working set
• Why is this not completely accurate? We cannot tell exactly when
within a 5,000-unit interval a reference occurred.
• Improvement: use 10 history bits and interrupt every 1,000 time units
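A minimal sketch of this reference-bit approximation, using two history bits per page as in the example above (the class and method names are my own):

```python
class WSApprox:
    """Approximate the working set with a timer interrupt plus reference bits."""
    def __init__(self):
        self.ref = {}    # hardware reference bit per page
        self.hist = {}   # 2 history bits per page, newest first

    def access(self, page):
        self.ref[page] = 1               # hardware sets the bit on every reference
        self.hist.setdefault(page, [0, 0])

    def timer_interrupt(self):
        # On each interrupt: shift the reference bit into the history, then clear it.
        for page in self.hist:
            self.hist[page] = [self.ref.get(page, 0)] + self.hist[page][:1]
            self.ref[page] = 0

    def in_working_set(self, page):
        # A page is in the (approximate) working set if any of its bits is still 1.
        return self.ref.get(page, 0) == 1 or 1 in self.hist.get(page, [0, 0])

ws = WSApprox()
ws.access('A')
ws.timer_interrupt()
print(ws.in_working_set('A'))   # True: referenced within the last intervals
ws.timer_interrupt(); ws.timer_interrupt()
print(ws.in_working_set('A'))   # False: the reference has aged out
```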

2) Page-Fault Frequency Scheme:


• Establish “acceptable” page-fault rate
– If actual rate too low, process loses frame
– If actual rate too high, process gains frame
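The control policy above can be sketched as a small adjustment function. The threshold values here are illustrative assumptions, not from the slides:

```python
def pff_adjust(frames, fault_rate, lower=0.02, upper=0.10):
    """Page-fault-frequency control: add a frame when the fault rate exceeds
    the upper bound, reclaim one when it falls below the lower bound."""
    if fault_rate > upper:
        return frames + 1          # too many faults: process gains a frame
    if fault_rate < lower:
        return max(1, frames - 1)  # very few faults: process can lose a frame
    return frames                  # within the acceptable band: no change

print(pff_adjust(10, 0.20))  # -> 11
print(pff_adjust(10, 0.01))  # -> 9
print(pff_adjust(10, 0.05))  # -> 10
```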


Cache memory organisation:


Cache Memory: Cache is built from static random-access memory (SRAM).
It is the memory nearest to the CPU, and all recently used instructions
are stored in it. Data is transferred in the form of words between the
cache memory and the CPU.

• Whenever the CPU needs to access memory, it first checks the cache.
If the data is not found in the cache, the CPU accesses main
memory instead.
• The cache is the fastest component in the memory hierarchy and
approaches the speed of CPU components.
• Cache memory is organised as distinct set of blocks where each set
contains a small fixed number of blocks.
• Whenever an access is made to cache, the cache controller does not
search the entire cache in order to look for a match. Rather, the
controller maps the address to a particular set of the cache and
therefore searches only the set for a match. This kind of cache
organisation is called set associative because the cache is divided
into distinct sets of blocks.

• The block of words containing the one just accessed is then
transferred from main memory to cache memory. The block size may vary
from one word (the one just accessed) to about 16 words adjacent to
the one just accessed.
• The performance of the cache memory is frequently measured in
terms of a quantity called hit ratio.
• When the CPU refers to memory and finds the word in cache, it is
said to produce a hit.
• If the word is not found in the cache, it is in main memory and it
counts as a miss.
• The ratio of the number of hits divided by the total CPU references
to memory (hits plus misses) is the hit ratio.
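The hit ratio, and its effect on average access time, can be computed directly. The 10 ns cache latency and 100 ns main-memory latency below are assumed for illustration, as is the simple model that a miss costs one main-memory access:

```python
def hit_ratio(hits, misses):
    """Fraction of CPU memory references satisfied by the cache."""
    return hits / (hits + misses)

def avg_access_time(h, t_cache=10, t_main=100):
    """Average access time in ns, modeling a miss as a main-memory access."""
    return h * t_cache + (1 - h) * t_main

h = hit_ratio(900, 100)
print(h)                    # -> 0.9
print(avg_access_time(h))   # about 19 ns: 0.9*10 + 0.1*100
```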


Cache Mapping:
Cache mapping defines how a block from the main memory is mapped to the
cache memory in case of a cache miss. There are three different types of
mapping used for the purpose of cache memory which are as follows:
1. Direct mapping: In direct mapping, the cache consists of normal high-
speed random-access memory. Each location in the cache holds data from
a specific main-memory address. The cache location is given by the
lower significant bits of the main memory address, which allows the
block to be selected directly from those bits. The remaining higher
significant bits of the address are stored in the cache with the data
to complete the identification of the cached data.

Fig. Direct Mapping
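The index/tag split described above can be sketched as follows; the 128-line cache geometry and the address are assumed examples, and the function name is my own:

```python
def direct_map(addr, num_lines):
    """Split a main-memory (block) address for a direct-mapped cache:
    the lower bits select the cache line, the upper bits form the stored tag.
    num_lines should be a power of two."""
    index = addr % num_lines   # lower significant bits -> cache line
    tag = addr // num_lines    # higher significant bits -> stored with the data
    return tag, index

# Example: 128-line cache, address 0x1234
tag, index = direct_map(0x1234, 128)
print(hex(tag), hex(index))   # -> 0x24 0x34
# A lookup hits only if the tag stored in line `index` equals `tag`.
```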

2. Associative Mapping: In a fully associative cache, each location
stores both the memory address and the data. Whenever data is requested,
the incoming memory address is simultaneously compared with all stored
addresses using the internal logic of the associative memory. If a
match is found, the corresponding data is read out; otherwise, main
memory is accessed. This is known as the fully associative mapping
approach because cached data is related to main memory by storing both
the memory address and the data in the cache.

3. Set Associative Mapping: In set associative mapping a cache is
divided into a set of blocks. The number of blocks in a set is known as
associativity or set size. Each block in each set has a stored tag.
This tag, together with the index, completely identifies the block.
Thus, set-associative mapping allows a limited number of blocks with
the same index and different tags.
In this type of cache, the following steps are used to access the
data from a cache:
• The index of the address from the processor is used to access the set.
• Then the comparators are used to compare all tags of the selected
set with the incoming tag.
• If a match is found, the corresponding location is accessed.
• If no match is found, an access is made to the main memory.
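The lookup steps above can be sketched as a tiny k-way set-associative cache. The class is my own simplification: it caches whole values rather than word blocks, and uses FIFO eviction where real caches typically use LRU or similar:

```python
class SetAssociativeCache:
    """Minimal k-way set-associative lookup sketch."""
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]   # each set holds (tag, data) pairs

    def access(self, addr, memory):
        index = addr % self.num_sets        # index bits select the set
        tag = addr // self.num_sets         # remaining bits form the tag
        for stored_tag, data in self.sets[index]:   # compare all tags in the set
            if stored_tag == tag:
                return data, True           # hit: matching tag found in the set
        data = memory[addr]                 # miss: fetch from main memory
        if len(self.sets[index]) >= self.ways:
            self.sets[index].pop(0)         # set full: evict oldest block (FIFO)
        self.sets[index].append((tag, data))
        return data, False

mem = {i: i * 10 for i in range(32)}
cache = SetAssociativeCache(num_sets=4, ways=2)
print(cache.access(5, mem))   # (50, False) -- first access misses
print(cache.access(5, mem))   # (50, True)  -- second access hits
```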


Locality of reference
Locality of reference refers to the tendency of a program to access
instructions whose addresses are near one another. This property is
shown mainly by loops and subroutine calls in a program.
▪ In loops, the central processing unit repeatedly refers to the set
of instructions that constitute the loop.
▪ In subroutine calls, the same set of instructions is fetched from
memory on every call.
▪ References to data items also get localized: the same data item is
referenced again and again.

❑ Temporal Locality – Temporal locality means the data or instruction
being fetched now may be needed again soon, so we store it in cache
memory to avoid searching main memory for the same data again.
❑ Spatial Locality – Spatial locality means an instruction or data item
located near the current memory location being fetched may be needed
in the near future. This differs slightly from temporal locality:
spatial locality concerns nearby memory locations, while temporal
locality concerns the same memory location being fetched again.
