Process in Operating System(Unit 2)
A process is a program in execution, and it forms the basis of all computation. A
process is not the same as the program code; it is much more than that. A process is an 'active' entity, as
opposed to the program, which is considered a 'passive' entity. Attributes held by a
process include its hardware state, memory, CPU registers, etc.
The Text section is made up of the compiled program code, read in from non-volatile
storage when the program is launched.
The Data section is made up of the global and static variables, allocated and
initialized prior to executing main().
The Heap is used for dynamic memory allocation, and is managed via calls to
new, delete, malloc, free, etc.
The Stack is used for local variables; space on the stack is reserved for local variables
when they are declared.
There is a Process Control Block for each process, enclosing all the information about the
process. It is also known as the task control block. It is a data structure which typically contains the
process state, the program counter, the CPU registers, CPU-scheduling information,
memory-management information, accounting information, and I/O status information.
Process vs Program
A process is a running instance of the code, whereas a program is the executable code itself.
Process Scheduling
When there are two or more runnable processes, the operating system decides
which one to run first; this decision is referred to as Process Scheduling.
The act of determining which process is in the ready state, and should be moved to the
running state is known as Process Scheduling.
The prime aim of the process scheduling system is to keep the CPU busy at all times and to
deliver the minimum response time for all programs. To achieve this, the scheduler must apply
appropriate rules for swapping processes in and out of the CPU.
Non Pre-emptive Scheduling: When the currently executing process gives up the
CPU voluntarily.
Pre-emptive Scheduling: When the operating system decides to favour another
process, pre-empting the currently executing process.
All processes, upon entering into the system, are stored in the Job Queue.
Processes in the Ready state are placed in the Ready Queue.
Processes waiting for a device to become available are placed in Device Queues.
Each I/O device has its own device queue.
A new process is initially put in the Ready queue. It waits in the ready queue until it is
selected for execution (or dispatched). Once the process is assigned to the CPU and is
executing, one of the following events can occur:
The process could issue an I/O request, and then be placed in the I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.
Types of Schedulers
Long Term Scheduler
The long term scheduler, also known as the job scheduler, runs less frequently. It decides which programs are
admitted into the job queue. From the job queue, the job scheduler selects processes and loads
them into memory for execution. The primary aim of the job scheduler is to maintain a good
degree of multiprogramming. An optimal degree of multiprogramming means the average
rate of process creation is equal to the average departure rate of processes from the execution
memory.
Short Term Scheduler
The short term scheduler, also known as the CPU scheduler, runs very frequently. The primary aim of this
scheduler is to enhance CPU performance and increase the process execution rate.
Medium Term Scheduler
The medium term scheduler removes processes from memory (and from active contention for the
CPU), and thus reduces the degree of multiprogramming. At some later time, the process can
be reintroduced into memory and its execution can be continued where it left off. This
scheme is called swapping: the process is swapped out, and is later swapped in, by the
medium term scheduler.
Swapping may be necessary to improve the process mix, or because a change in memory
requirements has overcommitted the available memory, requiring memory to be freed up.
Context Switch
1. Switching the CPU to another process requires saving the state of the old process and
loading the saved state of the new process. This task is known as a context switch.
2. The context of a process is represented in the Process Control Block (PCB) of the
process; it includes the value of the CPU registers, the process state, and memory-
management information.
3. Context switch time is pure overhead, because the system does no useful work
while switching. Its speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and the existence of
special instructions (such as a single instruction to load or store all registers). Typical
speeds range from 1 to 1000 microseconds.
Operations on Process
Below we discuss the two major operations: Process Creation and Process
Termination.
Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other
processes. The process that creates another process is termed the parent,
while the created sub-process is termed its child.
Each process is given an integer identifier, termed the process identifier, or PID. The parent's
PID (PPID) is also stored for each process.
There are two options for the parent process after creating the child:
Wait for the child process to terminate before proceeding. The parent process makes a
wait() system call, for either a specific child process or for any child
process, which causes the parent process to block until the wait() returns.
Run concurrently with the child, continuing to execute without waiting.
There are also two possibilities in terms of the address space of the new process:
The child process is a duplicate of the parent process (it has the same program and data as the parent).
The child process has a new program loaded into it.
Process Termination
By making the exit() system call, typically returning an int, processes may request their own
termination. This int is passed along to the parent if it is doing a wait(), and is typically zero
on successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:
The inability of the system to deliver necessary system resources.
In response to a KILL command or other user intervention.
The parent terminating, in which case the child may also be terminated (cascading termination).
The process exceeding its allotted resources.
Cooperating processes
Cooperating processes are those that can affect or are affected by other processes running on
the system. Cooperating processes may share data with each other.
There may be many reasons for requiring cooperating processes. Some of these are
given as follows:
Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These subtasks
can be completed by different cooperating processes, leading to faster and more
efficient completion of the required tasks.
Information Sharing
Several processes may be interested in the same piece of information, such as a
shared file. Cooperating processes allow such information to be accessed concurrently.
Convenience
There are many tasks that a user needs to do such as compiling, printing, editing etc.
It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating processes. This
speeds up computation, as the task can be executed faster. However, this is
only possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages. Details
about these are given as follows:
Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as memory,
variables, files, databases, etc. A critical section is used to preserve data integrity, and writes are
made mutually exclusive to prevent inconsistent data.
Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This may lead to
deadlock if each process is waiting for a message from the other before performing an operation.
Starvation is also possible if a process never receives a message.
For example, two processes P1 and P2 can cooperate with each other by using messages to
communicate.