Unit 2 Operating System
BCA 301
Syllabus
Processes: Process Concept, Process Scheduling,
Operation on Processes
CPU Scheduling: Basic Concepts, Scheduling
Criteria, Scheduling Algorithms
Process Synchronization: Background, The Critical-
Section Problem, Semaphores solution to critical
section problem
Process related commands in Linux: ps, top, pstree,
nice, renice and system calls
Processes
A process is basically a program in execution. The
execution of a process must progress in a sequential
fashion.
A process is defined as an entity which represents the
basic unit of work to be implemented in the system.
In simple terms, we write our computer programs in a text
file, and when we execute the program it becomes a
process which performs all the tasks mentioned in the
program.
When a program is loaded into the memory and it becomes
a process, it can be divided into four sections ─ stack, heap,
text and data.
Processes
1. Stack
The stack contains temporary data such as method/function parameters,
return addresses and local variables. Space on the stack is reserved for
local variables when they are declared.
2. Heap
This is memory dynamically allocated to the process during its run time.
The heap is used for dynamic memory allocation and is managed via
calls to new, delete, malloc, free, etc.
3. Data
The data section is made up of the global and static variables, allocated
and initialized prior to executing main.
4. Text
The text section is made up of the compiled program code, read in from
non-volatile storage when the program is launched. This includes the
current activity, represented by the value of the program counter and
the contents of the processor's registers.
Program
A program is a piece of code which may be a single
line or millions of lines. A computer program is usually
written by a computer programmer in a
programming language.
A computer program is a collection of instructions that
performs a specific task when executed by a computer.
When we compare a program with a process, we can
conclude that a process is a dynamic instance of a
computer program.
A part of a computer program that performs a well-
defined task is known as an algorithm. A collection of
computer programs, libraries and related data are
referred to as software.
Comparison Chart of Process and Program
A program is a passive entity: a set of instructions stored on disk. A
process is an active entity: a program in execution. A process can create
a new sub-process and wait for its termination; a program cannot.
Comparison of Schedulers
Long-term scheduler: selects processes from the job pool and loads them
into memory for execution.
Short-term scheduler: selects from among the processes which are ready
to execute.
Medium-term scheduler: can re-introduce a swapped-out process into
memory so that its execution can be continued.
What is Context Switch?
Switching the CPU to another process requires saving the
state of the old process and loading the saved state for the
new process. This task is known as a Context Switch.
The context of a process is represented in the Process
Control Block (PCB) of a process; it includes the value of
the CPU registers, the process state and memory-
management information. When a context switch occurs,
the Kernel saves the context of the old process in its PCB
and loads the saved context of the new process scheduled
to run.
Context switch time is pure overhead, because
the system does no useful work while switching. Its
speed varies from machine to machine, depending on the
memory speed, the number of registers that must be
copied, and the existence of special instructions (such as a
single instruction to load or store all registers). Typical
speeds range from 1 to 1000 microseconds.
Operations on Processes
The processes in the system can execute concurrently and they must be created and
deleted dynamically.
1. Creation
Once the process is created, it will be ready and come into the ready
queue (main memory) and will be ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating
system chooses one process and starts executing it. Selecting the
process to be executed next is known as scheduling.
3. Execution
Once a process is scheduled, the processor starts executing it. A
process may enter the blocked or wait state during execution; in that
case the processor starts executing other processes.
4. Deletion/killing
Once a process has finished its work, the OS kills it. The context of
the process (its PCB) is deleted and the process is terminated by the
operating system.
System Call
When a program in user mode requires access to RAM or a hardware
resource, it must ask the kernel to provide access to that resource. This is
done via something called a system call.
When a program makes a system call, the mode is switched from user mode
to kernel mode; this transition is called a mode switch.
The kernel then provides the resource which the program requested, after
which another mode switch returns the CPU from kernel mode back to user
mode.
Generally, system calls are made by user-level programs in situations
such as: creating, opening, reading and writing files; creating and
managing processes; requesting access to hardware devices; and
communicating with other processes.
The process which calls fork() is the parent process, and the newly
created process is called the child process. The child is an exact
copy of the parent: the parent's process state, i.e., its address
space, variables, open files, etc., is copied into the child process.
The parent and child therefore have identical but physically separate
address spaces, so a change of values in the parent does not affect
the child, and vice versa.
Process Creation
Let's look at an example:
// example.c
#include <stdio.h>
#include <unistd.h>  // for fork()

int main()
{
    int val;
    val = fork();        // line A
    printf("%d\n", val); // line B
    return 0;
}
When the above example code is executed, when line A is executed, a child process is
created. Now both processes start execution from line B. To differentiate between the
child process and the parent process, we need to look at the value returned by the
fork() call.
The difference is that, in the parent process, fork() returns a value which represents
the process ID of the child process. But in the child process, fork() returns the value
0.
This means that according to the above program, the output of parent process will be
the process ID of the child process and the output of the child process will be 0.
Process Creation
Exec()
The exec() family of system calls is also used to run programs, but
there is one big difference between fork() and exec().
The fork() call creates a new process while preserving the parent
process.
An exec() call does not create a new process: it replaces the address
space, text segment, data segment, etc. of the calling process with
those of a new program.
Process Termination
By making the exit() system call, typically passing an integer status
value, a process may request its own termination. This value is passed
along to the parent if it is doing a wait(), and is typically zero on
successful completion and some nonzero value in the event of a problem.
Shared Memory
Message passing
Interprocess Communication through Shared memory-
Bounded Buffer/Producer Consumer Problem
There are two processes: a Producer and a Consumer. The Producer
produces some item and the Consumer consumes that item. The two
processes share a common space or memory location, known as a buffer,
where the item produced by the Producer is stored and from which the
Consumer takes it when needed.
// Producer
while (1) {
    while (((in + 1) % buffer_size) == out)
        ;                          // buffer full: busy-wait
    buffer[in] = nextProduced;
    in = (in + 1) % buffer_size;
}

// Consumer
while (1) {
    while (in == out)
        ;                          // buffer empty: busy-wait
    nextConsumed = buffer[out];
    out = (out + 1) % buffer_size;
}
Message Passing Method
In this method, processes communicate with each other without using
any shared memory. This is the best approach for interprocess
communication when processes are distributed, since processes at
different locations can communicate through a network. If two
processes p1 and p2 want to communicate with each other, they proceed
as follows:
send(A, message)
receive(A, message)
where A is the id of a mailbox.
CPU scheduling
In uniprogramming systems like MS-DOS, when a process waits for any
I/O operation to be done, the CPU remains idle. This is an overhead,
since it wastes time and can cause starvation. In multiprogramming
systems, however, the CPU does not remain idle while a process waits:
it starts executing other processes. The operating system must decide
which process the CPU is given next.
2. Burst Time
The total amount of CPU time required to execute the whole process is
called the burst time. This does not include waiting time. It is
difficult to know the execution time of a process before it runs, so
scheduling algorithms that depend on exact burst times cannot be
implemented precisely in practice.
3. Completion Time
The Time at which the process enters into the completion
state or the time at which the process completes its
execution, is called completion time.
Various Times related to the Process
4. Turnaround Time
The total amount of time spent by the process from its arrival to its
completion is called turnaround time:
TAT = Completion Time - Arrival Time
5. Waiting Time
The total amount of time for which the process waits for the CPU to be
assigned is called waiting time:
WT = Turnaround Time - Burst Time
6. Response Time
The difference between the arrival time and the time at which the
process first gets the CPU is called response time:
RT = First Response - AT
CPU Scheduling Criteria
There are many different criteria to consider when choosing
the "best" scheduling algorithm:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU
should be kept working most of the time (ideally 100% of the time).
In a real system, CPU utilization should range from 40% (lightly
loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, i.e., the
total amount of work done in a unit of time. This may range from
10/second to 1/hour depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e.
The interval from time of submission of the process to the time of
completion of the process(Wall clock time).
CPU Scheduling Criteria
Waiting Time
The sum of the periods a process spends waiting in the ready queue to
get control of the CPU.
Load Average
It is the average number of processes residing in the ready
queue waiting for their turn to get into the CPU.
Response Time
Amount of time it takes from when a request was submitted
until the first response is produced. Remember, it is the
time till the first response and not the completion of process
execution(final response).
Scheduling Algorithms
The Purpose of a Scheduling algorithm: it should be simple and easy
to implement.
First Come, First Serve
Problems with FCFS Scheduling
It is a non-preemptive algorithm, which means process priority does
not matter.
If a long, low-priority process is executing (for example, a daily
routine backup) and a high-priority process suddenly arrives (for
example, an interrupt needed to avoid a system crash), the
high-priority process must wait; in such a case the system can crash
simply because of improper process scheduling.
The average waiting time is not optimal.
Resources cannot be utilized in parallel, which leads to the convoy
effect and hence poor resource (CPU, I/O, etc.) utilization.
Because the algorithm is non-preemptive, very long waits resembling
starvation may occur.
What is Convoy Effect?
In FCFS, if a long CPU-bound process reaches the CPU first, all the
shorter processes queued behind it must wait, like a convoy of cars
stuck behind a slow vehicle; this is called the convoy effect.
[Example waiting times from the slide's Gantt chart:
P0: 9 - 0 = 9, P1: 6 - 1 = 5, P2: 14 - 2 = 12]
Each process in the ready queue is assigned the CPU for one time
quantum. If the process completes within that quantum, it terminates;
otherwise it is preempted and goes back to the ready queue to wait
for its next turn.
Round Robin scheduling algorithm
Advantages
// Producer
while (1) {
    while (counter == buffer_size)
        ;                       // buffer full: busy-wait
    buffer[in] = nextProduced;
    in = (in + 1) % buffer_size;
    counter++;
}

// Consumer
while (1) {
    while (counter == 0)
        ;                       // buffer empty: busy-wait
    nextConsumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
}
Suppose the initial value of the shared variable counter is 5 and the
producer and consumer execute the statements counter++ and counter--
concurrently. Because each statement compiles to a separate load,
modify and store, the interleaved execution may leave counter at
4, 5 or 6.
Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its
critical section at a given point of time.
Progress
If no process is in its critical section and one or more processes
want to enter their critical sections, then one of those processes
must be allowed to enter; the decision cannot be postponed
indefinitely.
Bounded Waiting
After a process makes a request for getting into its critical section, there
is a limit for how many other processes can get into their critical
section, before this process's request is granted. So after the limit is
reached, system must grant the process permission to get into its
critical section.
Peterson’s Solution
Peterson’s Solution is a classical software based solution
to the critical section problem.
In Peterson’s solution, we have two shared variables: a boolean array
flag[2], where flag[i] is true when process i wants to enter its
critical section, and an integer turn, which indicates whose turn it
is to enter.