Os Module2 by Divya Miss
OPERATING SYSTEMS Module2_Part1
Process:
The most central concept of any operating system is the process. A process is a program in execution; process execution must progress in sequential fashion: there is no parallel execution of the instructions of a single process.
A process includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
The program is only part of a process. A process also includes the process stack, which contains temporary data (such as method parameters, return addresses, and local variables), and a data section, which contains global variables.
Process Management
Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
new: the process is being created.
running: instructions are being executed.
waiting: the process is waiting for some event to occur (such as an I/O completion).
ready: the process is waiting to be assigned to a processor.
terminated: the process has finished execution.
Each process is represented in the operating system by a process control block (PCB)—
also called a task control block.
It contains many pieces of information associated with a specific process, including these:
Process state – running, waiting, etc.
Program counter – indicates the address of the next instruction to be executed
CPU registers – include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information.
Along with the program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterwards
Process control block
Scheduling queues: As processes enter the system, they are put into a job queue. This queue consists of all processes in the system.
The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution and is given the CPU.
Queuing diagram
A common representation of process scheduling is a queuing diagram.
Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
Types of schedulers
There are 3 types of schedulers mainly used:
1. Long term scheduler: selects processes from the disk and loads them into memory for execution. It controls the degree of multiprogramming, i.e. the number of processes in memory. It executes less frequently than the other schedulers, so the long term scheduler needs to be invoked only when a process leaves the system.
Most processes are either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time in I/O operations than in computation; a CPU-bound process is one that spends more of its time doing computations than I/O operations. It is important that the long term scheduler selects a good mix of I/O-bound and CPU-bound processes.
2. Short term scheduler (CPU scheduler): selects from among the processes that are ready to execute and allocates the CPU to one of them. It runs very frequently, so it must be fast.
3. Medium term scheduler: swaps processes out of memory and later swaps them back in, temporarily reducing the degree of multiprogramming.
The processes in most systems can execute concurrently, and they may be created and
deleted dynamically.
Thus, these systems must provide a mechanism for process creation and termination.
Process Creation
A process may create several new processes, via a create-process system call,
during the course of execution. The creating process is called a parent process,
and the new processes are called the children of that process, forming a tree of processes
Most operating systems identify processes according to a unique process identifier
(pid), which is typically an integer number.
Resource sharing options
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution options
Parent and children execute concurrently
Parent waits until children terminate
Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call. At that point, the process may
return a status value (typically an integer) to its parent process (via the wait() system call).
All the resources of the process—including physical and virtual memory, open files, and I/O buffers—are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such
as these:
The child has exceeded its usage of some of the resources that it has been allocated.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
Cascading termination
Some systems do not allow a child to exist if its parent has terminated. In such
systems, if a process terminates (either normally or abnormally), then all its children
must also be terminated.
This phenomenon, referred to as cascading termination, is normally initiated
by the operating system.
OPERATING SYSTEMS Module2_Part3
Textbook: Operating Systems Concepts by Silberschatz
Interprocess communication
Message passing:
- is useful for exchanging smaller amounts of data.
- is easier to implement than shared memory for intercomputer communication.
- is typically implemented using system calls and thus requires the more time-consuming task of kernel intervention.
Shared memory:
- allows maximum speed and convenience of communication.
- is faster than message passing; system calls are required only to establish shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
Shared memory systems
Interprocess communication using shared memory requires communicating processes to establish a region of shared memory.
Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space.
In a shared memory system, cooperating processes can exchange information by reading and writing data in the shared areas.
The form of the data and the location are determined by these processes and are not under the operating system's control.
The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
let's consider the producer-consumer problem, which is a common paradigm for
cooperating processes. A producer process produces information that is consumed by a
consumer process.
One solution to the producer–consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer.
This buffer will reside in a region of memory that is shared by the producer and
consumer processes.
A producer can produce one item while the consumer is consuming another item.
The producer and consumer must be synchronized, so that the consumer does not try
to consume an item that has not yet been produced.
The producer should produce data only when the buffer is not full.
An unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new items.
A bounded buffer assumes a fixed buffer size: the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
Bounded buffer: interprocess communication using shared memory.
The following variables reside in a region of memory shared by the
producer and consumer processes:
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical pointers: in and out.
in points to the next free position in the buffer;
out points to the first full position in the buffer.
The buffer is empty when in == out;
the buffer is full when ((in + 1) % BUFFER_SIZE) == out.
Producer consumer problem using
shared memory
The producer process has a local variable next_produced in which the new item to be produced is stored.
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
The consumer process has a local variable next_consumed in which the item to be consumed is stored.
item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next_consumed */
}
In indirect communication, the messages are sent to and received from mailboxes, or ports.
A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
Each mailbox has a unique identification number.
A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox.
The send() and receive() primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
Synchronous or asynchronous communication
Producer
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
Consumer
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next_consumed */
}
Buffering: whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways: zero capacity, bounded capacity, and unbounded capacity.
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU-scheduling algorithms. Some of them are:
First-Come, First-Served Scheduling
Shortest-Job-First Scheduling
shortest-remaining-time-first
Priority Scheduling
Round-Robin Scheduling
First-Come, First-Served Scheduling
The Gantt chart for the above schedule is shown below. (A Gantt chart is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes.)
If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:
[Gantt chart with time markers 0, 3, 9, 13, 15]
Example
For the processes listed, what is the average turnaround time?
[Gantt chart with time markers 0, 3, 9, 13, 15]
Turnaround time = completion time - arrival time
Shortest job first scheduling algorithm
SJF is an optimal algorithm because it decreases the wait times for short processes much more than it increases the wait times for long processes. It gives minimum average turnaround time.
Consider the case of 4 jobs, with run times of a, b, c, and d respectively. The first job finishes at time a, the second job finishes at time a+b, and so on.
The average turnaround time = (4a + 3b + 2c + d)/4. It is clear that a contributes more to the average than the other times, so it should be the shortest job, with b next, then c, and so on. So we can say that SJF is optimal.
SJF
As an example of SJF scheduling, consider the following set of processes, with the
length of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, we would schedule these processes according to the following Gantt chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24).
For the processes listed, draw a Gantt chart illustrating their execution (processes A-D, with the arrival and burst times used in the round-robin example later).
[Gantt chart with time markers 3, 9, 11, 15]
Process A starts executing: it is the only choice at time 0. At time 3, B is the only choice. At time 9, B completes, and process D runs because D is shorter than process C.
For the processes listed, what is the average turnaround time?
[Gantt chart with time markers 0, 3, 9, 11, 15]
Turnaround time = completion time - arrival time
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
fork();
printf("Hello world!\n");
return 0;
}
Output
Hello world!
Hello world!
fork() system call
Ex2.c
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
printf("We are in ex2.c\n");
printf("Pid of ex2.c=%d\n", getpid());
return 0;
}
Compile these two programs
gcc ex1.c -o ex1
gcc ex2.c -o ex2
Run the first program ./ex1
Pid of ex1.c=5962
We are in ex2.c
Pid of ex2.c=5962
Wait system call
A call to wait() blocks the calling process until one of its child processes exits or a signal is received. After the child process terminates, the parent continues its execution after the wait() system call instruction.
First, a version without wait(), so the parent does not wait for its child and their outputs may interleave:
#include <unistd.h>
#include <sys/types.h>
#include <stdio.h>
int main()
{
pid_t q;
q = fork();
if (q == 0) { // child
printf("I am a child having Id %d\n", getpid());
printf("My parent's id is %d\n", getppid());
}
else { // parent
printf("My child's id is %d\n", q);
printf("I am parent having id %d\n", getpid());
}
printf("Common\n");
return 0;
}
Output may be
My child’s id is 188
I am a child having Id 188
I am parent having id 157
My parent’s id is 157
Common
Common
With wait() added, the parent blocks until the child terminates, so the child's lines always appear before the parent's:
#include <unistd.h>
#include <sys/types.h>
#include <stdio.h>
#include <sys/wait.h>
int main()
{
pid_t q;
q = fork();
if (q == 0) { // child
printf("I am a child having Id %d\n", getpid());
printf("My parent's id is %d\n", getppid());
}
else { // parent
wait(NULL);
printf("My child's id is %d\n", q);
printf("I am parent having id %d\n", getpid());
}
printf("Common\n");
return 0;
}
Exit system call
exit() deletes all buffers and closes all open files before ending the program.
// C program to illustrate exit() function.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
printf("START");
exit(EXIT_SUCCESS);
/* never reached: exit() ends the program above */
printf("END");
}
Output
START
OPERATING SYSTEMS Module2_Part7
Textbook: Operating Systems Concepts by Silberschatz
Scheduling algorithms
Shortest remaining time first scheduling algorithm
The SJF algorithm can be either preemptive or nonpreemptive.
The choice arises when a new process arrives at the ready queue while a previous process
is still executing.
The next CPU burst of the newly arrived process may be shorter than what is left of the
currently executing process.
A preemptive SJF algorithm will preempt the currently executing process, and allow the shorter
process to run whereas a nonpreemptive SJF algorithm will allow the currently running
process to finish its CPU burst.
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling.
SRTF
As an example, consider the following four processes, with the length of the CPU
burst given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26).
Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this example is [(10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)]/4 = 26/4 = 6.5 milliseconds.
Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
For the processes listed, draw a Gantt chart illustrating their execution (processes A-D, with the arrival and burst times used in the round-robin example later).
[Gantt chart with time markers 0, 3, 4, 8, 10, 15]
Process A starts executing at time 0 and keeps running when process B arrives, because its remaining time is less. At time 3, process B is the only process in the queue. At time 4.001, process C arrives and starts running, because its remaining time (4) is less than B's remaining time (4.999). At time 6.001, process C keeps running, because its remaining time is not greater than D's remaining time (2). When process C terminates at time 8, process D runs, since its remaining time is less than that of process B. Then process B runs.
For the processes listed, what is the average turnaround time?
[Gantt chart with time markers 0, 3, 4, 8, 10, 15]
Turnaround time = completion time - arrival time
Priority scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS order.
[Gantt chart with time markers 0, 5, 7, 11, 15]
Waiting time = turnaround time - execution time
[Gantt chart with time markers 0, 2, 4, 7, 11, 15]
Round robin scheduling
Each process gets a small unit of CPU time (a time quantum); when its quantum expires, the process is preempted and added to the tail of the ready queue.
[Ready-queue diagram: B at the front of the queue]
When B completes its time slice, it is put back at the tail of the queue and F will run. When F completes, F is put back at the tail of the queue. Then D runs, and so on.
The average waiting time under the RR policy is often long. Consider the following set
of processes that arrive at time 0, with the length of the CPU burst given in
milliseconds:
Example with 3 processes
Process Burst Time
P1 24
P2 3
P3 3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time quantum expires.
The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum.
Let's calculate the average waiting time for the above schedule. P1 waits for 6 milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66 milliseconds.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time units until its next time quantum. For example, with five processes and a time quantum of 20 milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.
The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. Setting it too short causes too many process switches and lowers CPU efficiency, so the time quantum must be chosen in between.
For the processes listed, draw a Gantt chart illustrating their execution (quantum = 2).
process Arrival time Processing time
A 0.000 3
B 1.001 6
C 4.001 4
D 6.001 2
[Gantt chart with time markers 0, 2, 4, 5, 7, 9, 11, 13, 15]
When process A's first time quantum expires, process B runs. At time 4, process A restarts and process B returns to the ready queue. At time 4.001, process C enters the ready queue after B; at time 6.001, process D enters the ready queue after C. When A finishes at time 5, B runs again; starting at time 7, processes C, D, B, and C run in sequence.
For the processes listed, what is the average turnaround time?
[Gantt chart with time markers 0, 2, ..., 15]
Remember that here A runs twice: at time 4, A runs again. B does not arrive until 1.001, so A is chosen first at time 0.
Although the time quantum should be large compared with the context-switch time, it should not be too large. q is usually 10 to 100 milliseconds, while a context switch typically takes less than 10 microseconds.