
What is a Process? (Unit 2)
A process is an instance of a program in execution. When we run a program, it does not execute directly; the system follows a series of steps to execute it, and this sequence of execution steps is known as a process. A process can create other processes to perform multiple tasks at a time; the created processes are known as child processes, and the creating process is known as the parent process. Each process has its own memory space and does not share it with other processes, and it is known as an active entity.

What is a Thread?
A thread is a subset of a process and is also known as a lightweight process. A process can have more than one thread, and these threads are scheduled independently by the scheduler. All the threads within one process are related to each other: they share common information such as the data segment, code segment, and open files, but each thread has its own registers, stack, and program counter.

Operations on Processes
1) Creation: Constructing a new process for execution, triggered by the system, a user, or another process.
2) Blocking: The process enters a waiting state when it requests I/O, allowing the OS to switch to another process.
3) Preemption: The process is interrupted when its time slice ends or a higher-priority process arrives, and it moves to the ready state.
4) Termination: Ending a process and releasing its resources after it completes or is forcefully stopped.

What is Deadlock? (Unit 2)
Deadlock is a situation in computing where two or more processes are unable to proceed because each is waiting for the other to release resources. Key concepts include mutual exclusion, resource holding, circular wait, and no preemption. Consider two trains coming toward each other on a single track: once they are in front of each other, neither can move. This is a practical analogy for deadlock.
In operating systems, a deadlock arises when two or more processes each hold some resources and wait for resources held by the other(s). For example, Process 1 holds Resource 1 and waits for Resource 2, which is held by Process 2, while Process 2 waits for Resource 1.

Necessary Conditions for Deadlock in OS
Deadlock can arise only if the following four conditions hold simultaneously:
● Mutual Exclusion: At least one resource is non-shareable (only one process can use it at a time).
● Hold and Wait: A process holds at least one resource while waiting for additional resources.
● No Preemption: A resource cannot be taken away from a process; the process must release it voluntarily.
● Circular Wait: A set of processes wait for each other in a circular chain.

Memory Management (Unit 3)
Memory management is the process of controlling and coordinating a computer's main memory. It ensures that blocks of memory space are properly managed and allocated so that the operating system (OS), applications, and other running processes have the memory they need to carry out their operations.

Swapping in Operating Systems
Swapping is a memory management scheme in which any process can be temporarily moved from main memory to secondary memory so that main memory can be made available for other processes. It is used to improve main-memory utilization. The place in secondary memory where a swapped-out process is stored is called swap space. The purpose of swapping is to move process data between the disk and RAM so that application programs can use it when needed.

Memory Allocation Strategies in Memory Management
Memory management in an operating system is responsible for allocating and deallocating memory resources efficiently for the various processes. Several strategies are employed to allocate memory to processes and manage the system's memory; these can be broadly classified into contiguous, non-contiguous, and dynamic memory allocation approaches.

Logical Address (Virtual Address)
A logical address (also known as a virtual address) is the address generated by the CPU during the execution of a program. These addresses are part of the program's virtual address space, the range of addresses that the operating system assigns to a process.

Physical Address (Real Address)
A physical address is the actual address in physical memory (RAM). This is the address used by the memory hardware to access the data in memory.

What is Inter-process Communication?
Inter-process communication (IPC) is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transfer of data from one process to another.
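As a concrete sketch of IPC (not from the source; the child program and message are made up), the parent process below sends text to a child Python process over a pipe and reads the transformed result back:

```python
import subprocess
import sys

# Spawn a child process whose stdin and stdout are pipes, a classic
# OS-level IPC mechanism. The child upper-cases whatever it receives.
child_code = "import sys; sys.stdout.write(sys.stdin.read().upper())"
result = subprocess.run(
    [sys.executable, "-c", child_code],
    input="hello from the parent",
    capture_output=True,
    text=True,
)
print(result.stdout)  # HELLO FROM THE PARENT
```

Pipes are only one mechanism; shared memory, message queues, and sockets are common alternatives.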
Types of IPC
Common IPC mechanisms include pipes, message queues, shared memory, semaphores, signals, and sockets.

Process States
During its lifetime, a process moves through states such as new, ready, running, waiting, and terminated.

What is Deadlock Detection?
Deadlock detection is a process in which the system checks whether any set of processes is stuck waiting for each other indefinitely, preventing them from moving forward. In simple words, deadlock detection is the process of finding out whether any processes are stuck in a circular wait. Common techniques include:
● Resource Allocation Graph
● Banker's Algorithm

Deadlock Recovery
Once a deadlock is detected, the system must recover by breaking the deadlock and allowing processes to continue.
Recovery techniques:
1) Process Termination: Terminate one or more processes to break the circular wait. Options:
● Terminate all processes involved in the deadlock.
● Terminate processes one by one until the deadlock is resolved.
2) Resource Preemption: Take resources away from some processes and assign them to others to break the deadlock. Considerations:
● Rollback: Roll the preempted process back to a safe state.
● Priority: Choose the process to preempt based on priority.

What is Paging?
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving pages from secondary storage into main memory is known as paging. The basic idea is to divide each process into pages and to split main memory into frames; this permits the physical address space of a process to be non-contiguous. Physical memory is divided into fixed-size blocks called page frames, and the process's logical address space is divided into fixed-size blocks called pages of the same size as the frames. When a process requests memory, the operating system allocates one or more page frames to the process and maps the process's logical pages to the physical page frames.
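The page-number/offset split used in paging can be sketched as follows; this is a minimal illustration assuming a 4 KB page size and an invented page table:

```python
PAGE_SIZE = 4096  # bytes per page (and per frame)

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(logical_address: int) -> int:
    """Translate a logical address to a physical address via the page table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Address 4100 lies in page 1 at offset 4; page 1 is mapped to frame 9.
print(translate(4100))  # 9*4096 + 4 = 36868
```

Real MMUs do this split with bit masking in hardware; the arithmetic is the same.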
How Demand Paging Works
Demand paging is a memory management technique in which pages are loaded into memory only when they are needed. Instead of loading the entire program into memory at once, the operating system loads pages on demand as the program accesses specific memory locations. This approach minimizes the initial loading time and conserves memory.

Inverted Page Tables
Inverted page tables are an alternative to the traditional page tables used for virtual-to-physical address translation. Unlike conventional page tables, where each process has its own page table, an inverted page table is a single global table that maps each physical frame to its corresponding virtual page.

What is a PCB?
A Process Control Block (PCB) is a data structure used by the operating system to store information about each process in the system. It is essential for the management and scheduling of processes.
Components of a Process Control Block (PCB):
1) Process ID (PID), 2) Process State, 3) Program Counter (PC), 4) CPU Registers, 5) Memory Management Information, 6) Scheduling Information, 7) I/O Status Information, 8) Accounting Information, 9) Parent Process ID (PPID).

What is Process Synchronization?
Process synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve race conditions and other synchronization issues in a concurrent system. The main objective is to ensure that multiple processes access shared resources without interfering with each other and to prevent inconsistent data due to concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are used. In a multi-process system, synchronization is necessary to ensure data consistency and integrity and to avoid the risk of deadlocks and other synchronization problems; it plays a crucial role in the correct and efficient functioning of modern operating systems.

Deadlock Avoidance
● Banker's Algorithm: Ensures resources are allocated only if the system remains in a safe state after each allocation. Example: checking whether granting a resource to process P1 will leave the system in a safe state.
● Safe State: A state in which all processes can eventually finish. Example: all processes can complete without causing deadlock.
● Unsafe State: A state in which deadlock may occur but is not guaranteed. Example: processes waiting for each other's resources could cause a deadlock.
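A sketch of the safety check at the heart of the Banker's Algorithm; the `available`, `allocation`, and `need` matrices below are invented example data, not from the source:

```python
def is_safe(available, allocation, need):
    """Return True if some ordering lets every process finish (safe state)."""
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and return its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical system: 2 resource types, 3 processes.
available  = [3, 3]
allocation = [[0, 1], [2, 0], [3, 0]]
need       = [[5, 3], [1, 2], [0, 1]]
print(is_safe(available, allocation, need))  # True: P1, P2, P0 can finish in turn
```

The OS runs this check before granting a request; if granting would make the state unsafe, the request is deferred.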
Hashed Page Table (Unit 3)
A hashed page table is a type of page table structure used in paging systems to manage large virtual address spaces efficiently. Instead of a linear or multilevel page table, a hash function is used to map virtual page numbers to physical frames in memory.

What is a System Structure for an Operating System? (Unit 1)
A system structure for an operating system is like a blueprint of how the OS is organized and how its different parts interact with each other. Because operating systems have complex structures, we want a structure that is easy to understand so that we can adapt the operating system to meet our specific needs. Just as we break larger problems into smaller, more manageable subproblems, building an operating system in pieces is simpler. The strategy for integrating the different components within the kernel can be thought of as the operating system's structure.

Monolithic Structure
The core of an operating system is called the kernel. In a monolithic structure, the kernel is a single program that provides all fundamental services to the other system components.

What is the Critical Section Problem? (Unit 2)
The critical section problem occurs when multiple processes or threads try to access shared resources simultaneously, which can cause data inconsistency or race conditions. The critical section is the part of a process's code in which shared resources (such as memory, files, or databases) are accessed. Only one process should be allowed in the critical section at a time to avoid conflicts.

The Producer-Consumer Problem
The producer-consumer problem (or bounded-buffer problem) describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue. The producer produces an item and puts it into the buffer; if the buffer is full, the producer must wait for an empty slot. The consumer consumes an item from the buffer; if the buffer is empty, the consumer must wait for an item.
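A minimal bounded-buffer sketch using two counting semaphores and a lock; the buffer capacity and item count are assumptions for illustration:

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
mutex = threading.Lock()               # protects the buffer itself
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
consumed = []

def producer():
    for item in range(5):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # announce a filled slot

def consumer():
    for _ in range(5):
        full.acquire()                 # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # announce a free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The two semaphores make the waiting described above automatic: the producer blocks on `empty` when the buffer is full, and the consumer blocks on `full` when it is empty.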
Hierarchical Page Table
When the CPU accesses a page of a process, that page must be in main memory, and the page table of the same process must also be stored in main memory. If the size of the page table is larger than the frame size of main memory, the page table is broken down into multiple levels so that each piece fits within a frame; this multilevel structure is a hierarchical page table.

Micro-kernel Structure
A micro-kernel framework strips the kernel of all non-essential components, which are instead implemented as system programs and user-level applications; systems developed this way are called micro-kernels. The micro-kernel structure minimizes the functions handled by the kernel, delegating most services to user level. The micro-kernel itself handles only core functions, such as memory management, process scheduling, and basic communication between processes.

What is a Mutex Lock?
A mutex lock (short for mutual exclusion lock) is a synchronization mechanism used to prevent multiple processes or threads from accessing shared resources (critical sections) simultaneously, ensuring mutual exclusion.
Key features of mutex locks:
● Mutual Exclusion: Ensures that only one process can access the critical section at a time.
● Lock and Unlock Mechanism:
● Lock: A process must acquire the lock before entering the critical section.
● Unlock: Once the process finishes its task, it releases the lock so that other processes can enter.
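A minimal sketch of the lock/unlock pattern, assuming Python's `threading.Lock` as the mutex:

```python
import threading

counter = 0
lock = threading.Lock()  # the mutex

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # acquire before entering the critical section;
            counter += 1  # only one thread updates the counter at a time.
                          # The lock is released automatically on block exit.

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost
```

Without the lock, the unprotected read-modify-write of `counter` could interleave across threads and lose updates.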
Modular Structure
The modular structure is often considered the best approach for an OS. It involves designing a modular kernel: the kernel contains only a set of core components, and other services are added as dynamically loadable modules, either at boot time or at runtime. It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than a layered structure because any module can call any other module.

Segmentation in OS (Operating System)
In operating systems, segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process. The details of each segment are stored in a table called a segment table, which is itself stored in one (or more) of the segments.
The segment table contains two main pieces of information about each segment:
● Base: the base address of the segment.
● Limit: the length of the segment.
Why is segmentation required? Paging divides all processes into fixed-size pages and is closer to the operating system's view of memory than to the user's: related parts of a program, such as a single function, may be split across pages. Segmentation instead divides memory along the program's logical units.

Semaphores
A semaphore is a process synchronization tool used to manage access to shared resources by multiple processes in a concurrent system. It helps prevent race conditions and ensures mutual exclusion.
Types of semaphores:
● Binary Semaphore: Can take only two values, 0 (locked) and 1 (unlocked); used for mutual exclusion.
● Counting Semaphore: Can take any non-negative integer value; used to manage access to multiple instances of a shared resource.
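A sketch of a counting semaphore guarding a resource with three instances; the thread count and simulated work are invented for illustration:

```python
import threading
import time

pool = threading.Semaphore(3)  # at most 3 concurrent users of the resource
active = 0
peak = 0
state_lock = threading.Lock()  # protects the bookkeeping counters

def use_resource():
    global active, peak
    with pool:                 # blocks while 3 other threads hold the resource
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)       # simulate using the resource
        with state_lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True: the semaphore capped concurrency at 3
```

A binary semaphore is the same construct initialized to 1, which is why it can stand in for a mutex.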
Monitor
A monitor is a high-level synchronization construct that provides a safe way to manage shared resources between multiple processes or threads. It ensures that only one process can execute a critical section at any given time, preventing race conditions and ensuring data consistency.
Key features of monitors:
● Mutual Exclusion: Only one process can access the shared resource at a time.
● Encapsulation: Monitors combine shared variables and synchronization mechanisms inside a single object.
● Condition Variables: Monitors use condition variables to handle waiting and signaling.

monitor BankAccount {
    int balance = 0;
    condition fundsAvailable;

    procedure deposit(int amount) {
        balance = balance + amount;
        signal(fundsAvailable);    // wake a withdrawer waiting for funds
    }

    procedure withdraw(int amount) {
        while (balance < amount) {
            wait(fundsAvailable);  // block until a deposit arrives
        }
        balance = balance - amount;
    }
}

What is Virtual Memory?
Virtual memory is a memory management technique used by operating systems to give applications the appearance of a large, continuous block of memory, even if the physical memory (RAM) is limited. It allows the system to compensate for physical memory shortages, enabling larger applications to run on systems with less RAM. A memory hierarchy, consisting of a computer system's memory and a disk, enables a process to operate with only some portions of its address space in memory. A virtual memory is what its name indicates: an illusion of a memory that is larger than the real memory. The software component of virtual memory is called the virtual memory manager. The basis of virtual memory is the non-contiguous memory allocation model; the virtual memory manager removes some components from memory to make room for other components.

System Calls
System calls are interfaces provided by the operating system that allow user-level applications to interact with low-level hardware components and make use of the services provided by the kernel, the core component and heart of an operating system that manages all the hardware and the services provided by the OS. System calls are essential for every process to interact with the kernel and properly use its services, and they are the only way to switch from user mode to kernel mode.
Types of System Calls
Services provided by an OS are typically related to operations that a user program can perform, such as creation, termination, forking, moving, and communication. Similar types of operations are grouped into a single category; system calls are commonly classified into process control, file management, device management, information maintenance, and communication calls.
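A small illustration, assuming a POSIX-style environment: each `os` call below is a thin library wrapper over a kernel system call, and the file name is a made-up example:

```python
import os

pid = os.getpid()  # process-control system call (getpid)

# File-management system calls: open, write, close, unlink.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"written via the write() system call\n")
os.close(fd)
os.remove("demo.txt")  # clean up the example file

print(pid > 0)  # True: every process has a positive ID
```

From the program's point of view these look like ordinary function calls, but each one traps into the kernel and back, which is the user-mode to kernel-mode switch described above.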
What is Fragmentation?
Fragmentation in memory management refers to the inefficient use of memory that results in wasted memory space. It occurs when memory is allocated and deallocated dynamically, leading to gaps in memory that cannot be used effectively.
1) Internal Fragmentation: Occurs when allocated memory is slightly larger than the requested memory, resulting in unused space within the allocated block. Cause: fixed-size memory allocation.
2) External Fragmentation: Occurs when free memory is divided into small, non-contiguous blocks scattered throughout memory. Even though the total free memory may be enough to satisfy a process's request, the process cannot be allocated memory because the free blocks are not contiguous. Cause: dynamic allocation and deallocation of memory over time.

CPU Scheduling
CPU scheduling is the process of determining which process in the ready queue should be allocated the CPU next. It helps achieve maximum CPU utilization, throughput, and response time, ensuring fair allocation of CPU time to all processes.
CPU scheduling criteria:
● CPU Utilization: keeping the CPU as busy as possible.
● Throughput: the number of processes completed per unit time.
● Turnaround Time: the total time taken by a process from submission to completion.
● Waiting Time: the time a process spends waiting in the ready queue.
● Response Time: the time from submission to the first response of a process.
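The waiting-time and turnaround-time criteria can be computed for a simple first-come-first-served (FCFS) example; the process names and burst times below are invented:

```python
# Three processes arrive at time 0 and run to completion in order (FCFS).
burst_times = {"P1": 24, "P2": 3, "P3": 3}

waiting = {}
turnaround = {}
clock = 0
for name, burst in burst_times.items():
    waiting[name] = clock     # time spent in the ready queue before running
    clock += burst
    turnaround[name] = clock  # submission (t = 0) to completion

avg_waiting = sum(waiting.values()) / len(waiting)
print(avg_waiting)  # (0 + 24 + 27) / 3 = 17.0
```

Note how the long first burst inflates everyone's waiting time; running the short jobs first (shortest-job-first order) would lower the average, which is why scheduling policy matters.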