Operating System UNIT-1
We need a system which can act as an intermediary and manage all the processes and resources
present in the system.
An Operating System can be defined as an interface between user and hardware. It is responsible
for the execution of all the processes, Resource Allocation, CPU management, File Management and
many other tasks.
The purpose of an operating system is to provide an environment in which a user can execute
programs in convenient and efficient manner.
Structure of a Computer System
In the 1970s, Batch processing was very popular. In this technique, similar types of jobs were
batched together and executed one after another. People typically shared a single computer, which was
called a mainframe.
In Batch operating system, access is given to more than one person; they submit their respective
jobs to the system for the execution.
The system put all of the jobs in a queue on the basis of first come first serve and then executes the
jobs one by one. The users collect their respective output when all the jobs get executed.
The purpose of this operating system was mainly to transfer control from one job to another as soon
as the job was completed. It contained a small set of programs called the resident monitor that
always resided in one part of the main memory. The remaining part is used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time
between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high,
then the other four jobs will never be executed, or they will have to wait for a very long time. Hence
the other processes get starved.
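The starvation above can be made concrete with a small sketch. The burst times below are hypothetical; under first come, first served, each job's waiting time is the sum of the burst times of all jobs ahead of it.

```python
# Hypothetical burst times (time units): J1 is very long, the rest are short.
bursts = {"J1": 100, "J2": 2, "J3": 2, "J4": 2, "J5": 2}

waiting = {}
elapsed = 0
for job, burst in bursts.items():  # first come, first served order
    waiting[job] = elapsed          # a job waits for everything before it
    elapsed += burst

print(waiting)  # J2..J5 each wait at least 100 units because of J1
```

J1 waits 0 units, but J2 already waits 100 units and J5 waits 106, illustrating how one long job delays all later ones.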
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires the
input of two numbers from the console, then it will never get it in the batch processing scenario
since the user is not present at the time of execution.
Multiprogramming is an extension of batch processing in which the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, the CPU can start executing
other processes. Therefore, multiprogramming improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput increases as the CPU always has some program to execute.
o Response time can also be reduced.
Advantages of Multiprocessing OS
o Increased reliability: Due to the multiprocessing system, processing tasks can be distributed
among several processors. This increases reliability: if one processor fails, the task can be
given to another processor for completion.
o Increased throughput: As the number of processors increases, more work can be done in less time.
Disadvantages of Multiprocessing operating System
o The multiple processors are busy at the same time completing tasks in a multitasking
environment, so the CPU generates more heat.
o Such systems are more expensive to set up and maintain.
o When work is divided between clients and a server, the failure of any node in the system
affects the whole system.
o Security and performance are important issues, so trained administrators are required
for administration.
In Real-Time Systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise a huge loss occurs, or, even if the result is produced, it is
completely useless.
Real-Time systems find application in military systems: if you want to launch
a missile, it is supposed to hit its target with a certain precision.
Advantages of Real-time operating system:
o It is easy to lay out, develop and execute real-time applications under a real-time operating
system.
o A Real-time operating system allows maximum utilization of devices and systems.
In the Time Sharing operating system, computer resources are allocated in a time-dependent fashion
to several programs simultaneously. Thus it helps to provide a large number of users direct access
to the main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is
switched among multiple programs given by different users on a scheduled basis.
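The time-dependent switching described above can be sketched as a simple round-robin queue. The 2-unit quantum and burst times below are illustrative values, not taken from any particular system.

```python
from collections import deque

quantum = 2  # hypothetical time slice
ready = deque([("P1", 5), ("P2", 3), ("P3", 1)])  # (name, remaining burst)

order = []  # which process held the CPU in each slice
while ready:
    name, remaining = ready.popleft()
    order.append(name)
    remaining -= quantum
    if remaining > 0:
        ready.append((name, remaining))  # unfinished job goes to the back

print(order)
```

Each process gets the CPU for at most one quantum at a time, so all users see regular progress instead of waiting for one long job to finish.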
The Distributed Operating system is not installed on a single machine, it is divided into parts, and
these parts are loaded on different machines. A part of the distributed Operating system is installed
on each machine to make their communication possible. Distributed Operating systems are much
more complex, large, and sophisticated than Network operating systems because they also have to
take care of varying networking protocols.
Process Management in OS
A Program does nothing unless its instructions are executed by a CPU. A program in execution is
called a process. In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the
same time. Therefore, the operating system has to manage all the processes and the resources in a
convenient and efficient way.
Attributes of a process
The Attributes of the process are used by the Operating System to create the process control block
(PCB) for each of them. This is also called context of the process. Attributes which are stored in the
PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program counter
A program counter stores the address of the next instruction to be executed, i.e., the point at
which the process was suspended. The CPU uses this address when the execution of the process is resumed.
3. Process State
The Process, from its creation to the completion, goes through various states which are new, ready,
running and waiting. We will discuss about them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the processes gets
the CPU first. This is also stored on the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data generated during
the execution of the process.
6. List of Open Files
During execution, every process uses some files which need to be present in the main memory.
The OS maintains a list of open files in the PCB.
7. List of Open Devices
The OS also maintains a list of all open devices which are used during the execution of the process.
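The PCB attributes listed above can be sketched as a plain record. This is a minimal, illustrative layout; the field names are hypothetical and do not correspond to any real kernel's structure.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                     # unique process id
    program_counter: int                         # where execution resumes
    state: str                                   # e.g. "new", "ready", "running"
    priority: int                                # scheduling priority
    registers: dict = field(default_factory=dict)      # saved register contents
    open_files: list = field(default_factory=list)     # files opened by the process
    open_devices: list = field(default_factory=list)   # devices in use

pcb = PCB(pid=42, program_counter=0x400000, state="ready", priority=5)
print(pcb)
```

On a context switch, the OS would save the CPU state into such a record and later restore it, which is exactly why the PCB is called the context of the process.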
Process States
State Diagram
1. New
A program which is about to be picked up by the OS into the main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to
be assigned. The OS picks new processes from the secondary memory and puts them in the
main memory.
The processes which are ready for the execution and reside in the main memory are called ready
state processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling
algorithm. Hence, if we have only one CPU in our system, the number of running processes for a
particular time will always be one. If we have n processors in the system then we can have n
processes running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending upon
the scheduling algorithm or the intrinsic behaviour of the process.
When a process waits for a certain resource to be assigned or for input from the user, the
OS moves the process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. The context of the
process (Process Control Block) is deleted, and the process is terminated by the Operating
system.
6. Suspend ready
A process in the ready state which is moved to secondary memory from the main memory due to
lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher priority process arrives for execution, the OS has to
make room for it in the main memory by moving a lower priority process out to the
secondary memory. The suspend ready processes remain in the secondary memory until the
main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process
which is waiting for some resource in the main memory. Since it is already waiting for some
resource to become available, it is better if it waits in the secondary memory and makes room for
the higher priority process. These processes complete their execution once the main memory becomes
available and their wait is finished.
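The states above and the legal moves between them can be kept as a lookup table, so that an illegal transition can be rejected. The table below is a sketch of the model described in this section; the state names are taken from the text.

```python
# Legal transitions in the seven-state process model described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspend ready"},
    "running": {"block/wait", "ready", "terminated"},
    "block/wait": {"ready", "suspend wait"},
    "suspend ready": {"ready"},
    "suspend wait": {"suspend ready", "block/wait"},
    "terminated": set(),                 # no way out of termination
}

def move(state, target):
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = move("new", "ready")      # admitted to the ready queue
s = move(s, "running")        # dispatched by the scheduler
s = move(s, "block/wait")     # waiting for I/O
print(s)
```

A transition such as "terminated" back to "ready" would raise an error, matching the rule that a finished process cannot be resumed.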
Operations on the Process
1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory) and will
be ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the Operating system chooses one process
and starts executing it. Selecting the process which is to be executed next is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process may come
to the blocked or wait state during its execution; in that case the processor starts executing
other processes.
4. Deletion/killing
Once the purpose of the process is over, the OS kills the process. The context of the
process (PCB) is deleted and the process is terminated by the Operating system.
Operating system uses various schedulers for the process scheduling described below.
1. Long term scheduler
Long term scheduler is also known as the job scheduler. It chooses processes from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term
scheduler is to choose a perfect mix of IO bound and CPU bound processes among the jobs present
in the pool.
If the job scheduler chooses more IO bound processes, then all of the jobs may reside in the blocked
state most of the time and the CPU will remain idle most of the time. This will reduce the degree of
Multiprogramming. Therefore, the job of the long term scheduler is very critical and may affect the
system for a very long time.
2. Short term scheduler
Short term scheduler is also known as the CPU scheduler. It selects one of the jobs from the ready
queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be dispatched for execution. The
job of the short term scheduler can be very critical in the sense that if it selects a job whose CPU burst
time is very high, then all the jobs after that will have to wait in the ready queue for a very long time.
This problem is called starvation, which may arise if the short term scheduler makes a poor choice
while selecting the job.
3. Medium term scheduler
Medium term scheduler takes care of the swapped-out processes. It removes a process from the main
memory to make room for other processes; such processes are the swapped-out processes, and this
procedure is called swapping. The medium term scheduler is responsible for suspending and
resuming processes.
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of
processes in the ready queue.
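The division of labour between the three schedulers can be sketched with simple queues. The job names and the degree-of-multiprogramming limit below are illustrative; a real OS would base these decisions on memory pressure and the IO/CPU mix of the jobs.

```python
from collections import deque

job_queue = deque(["J1", "J2", "J3", "J4"])   # pool in secondary memory
ready_queue = deque()                          # maintained in main memory

# Long term scheduler: admit jobs, controlling the degree of multiprogramming.
DEGREE = 2                                     # hypothetical limit
while job_queue and len(ready_queue) < DEGREE:
    ready_queue.append(job_queue.popleft())

# Short term scheduler: dispatch one job from the ready queue to the CPU.
running = ready_queue.popleft()

# Medium term scheduler: swap out a ready job to free main memory.
swapped_out = ready_queue.popleft() if ready_queue else None

print(running, swapped_out, list(job_queue))
```

J1 ends up on the CPU, J2 is swapped out, and J3 and J4 remain in the pool waiting for the long term scheduler to admit them.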
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB
related to the process is also stored in the queue of the same state. If the Process is moved from one
state to another state then its PCB is also unlinked from the corresponding queue and added to the
other state queue in which the transition is made.
1. Job Queue
Initially, all the processes are stored in the job queue. It is maintained in the secondary memory.
The long term scheduler (job scheduler) picks some of the jobs and puts them in the primary
memory.
2. Ready Queue
Ready queue is maintained in the primary memory. The short term scheduler picks a job from the
ready queue and dispatches it to the CPU for execution.
3. Waiting Queue
When a process needs some IO operation to complete its execution, the OS changes the state
of the process from running to waiting. The context (PCB) associated with the process is stored in
the waiting queue and will be used by the processor when the process finishes its IO.
A thread is a single sequence stream within a process. Threads are also called lightweight processes
as they possess some of the properties of processes. Each thread belongs to exactly one process. In
an operating system that supports multithreading, a process can consist of many threads. But
threads can run truly in parallel only if there is more than one CPU; otherwise the threads have to
context switch on the single CPU.
What is Thread in Operating Systems?
In a process, a thread refers to a single sequential activity being executed. These activities are also
known as threads of execution or threads of control. Any operating system process can execute a
thread; in other words, a process can have multiple threads.
Why Do We Need Thread?
Threads run in parallel, improving the application performance. Each thread has its own
CPU state and stack, but all the threads of a process share its address space and environment.
Threads can share common data, so they do not need to use inter-process communication. Like
processes, threads also have states like ready, executing, blocked, etc.
Priority can be assigned to threads just like processes, and the highest priority thread is
scheduled first.
Each thread has its own Thread Control Block (TCB). Like a process, a context switch occurs for
a thread, and register contents are saved in the TCB. As threads share the same address space
and resources, synchronization is also required for the various activities of the threads.
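The shared address space and the need for synchronization can both be seen in a short sketch. Two threads of the same process increment one shared counter; the lock is what keeps the result consistent.

```python
import threading

counter = 0                     # shared data: both threads see the same variable
lock = threading.Lock()         # synchronization, as noted above

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread updates the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 20000 every run, thanks to the lock
```

Without the lock, the two read-modify-write sequences could interleave and updates would be lost, which is exactly the kind of hazard thread synchronization exists to prevent.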
Components of Threads
These are the basic components of a thread:
Stack Space
Register Set
Program Counter
Types of Thread in Operating System
Threads are of two types. These are described below.
User Level Thread
Kernel Level Thread
Inter Process Communication (IPC)
Processes can coordinate and interact with one another using a method called inter-process
communication (IPC) . Through facilitating process collaboration, it significantly contributes to
improving the effectiveness, modularity, and ease of software systems.
Types of Process
Independent process
Co-operating process
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those processes,
which are running independently, will execute very efficiently, in reality, there are many situations
when cooperative nature can be utilized for increasing computational speed, convenience, and
modularity. Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication between these
processes can be seen as a method of cooperation between them. Processes can communicate with
each other through both:
Approaches to Inter process Communication
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
Pipe:-
The pipe is a type of data channel that is unidirectional in nature, meaning data in this channel
can move in only a single direction at a time. Still, one can use two channels of
this type so that data can be sent and received between two processes. Typically, a pipe uses the
standard methods for input and output. Pipes are used in all types of POSIX systems and in
different versions of Windows operating systems as well.
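A minimal sketch of the unidirectional channel, using the operating system's pipe call via Python's `os` module. Here both ends live in one program for brevity; normally the two file descriptors would be split between a parent and a child process.

```python
import os

# One write end, one read end: data flows in a single direction.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)               # closing the write end lets the reader see EOF

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())
```

For two-way communication, two such pipes would be needed, one for each direction, which is the "two channels" arrangement mentioned above.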
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Shared memory is supported by almost all POSIX and Windows operating systems.
Message Queue:-
In general, several different processes are allowed to read and write data to the message queue.
The messages are stored in the queue until their recipients retrieve them. In short, the
message queue is very helpful for inter-process communication and is used by all operating systems.
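As an in-process analogy, Python's `queue.Queue` between two threads shows the same store-until-retrieved behaviour that an OS message queue gives two processes; a real cross-process version would use a kernel facility such as POSIX message queues instead.

```python
import queue
import threading

mq = queue.Queue()

def producer():
    for i in range(3):
        mq.put(f"msg-{i}")       # messages stay in the queue until read

t = threading.Thread(target=producer)
t.start()
t.join()

# Consumer side: messages are retrieved in the order they were sent.
received = [mq.get() for _ in range(3)]
print(received)
```

The sender never waits for the receiver to be ready at the same moment; the queue decouples the two, which is the main attraction of message queues.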
Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each other.
By using message passing, processes can communicate with each other without the use of
shared variables.
Usually, the inter-process communication mechanism provides two operations that are as follows:
o send (message)
o receive (message)
Direct Communication:-
In this type of communication, a link is created or established between two
communicating processes. However, between every pair of communicating processes, only one link can
exist.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox,
and each pair of processes may share several communication links. These shared links can be
unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated processes. It can also be considered
full-duplex, in that one process can communicate with another process and vice versa.
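A hedged, POSIX-only sketch of a named pipe (FIFO): it lives at a filesystem path, so two unrelated processes can find it by name. Here two threads of one program stand in for the two processes; the path is a throwaway temporary location.

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)                  # create the named pipe on disk

def writer():
    with open(path, "w") as w:   # blocks until a reader opens the FIFO
        w.write("via fifo")

t = threading.Thread(target=writer)
t.start()
with open(path) as r:            # blocks until the writer opens it
    text = r.read()
t.join()

os.remove(path)
print(text)
```

Because opening each end blocks until the other end is opened, the two sides rendezvous automatically; no prior relationship (such as parent and child) is needed, unlike an anonymous pipe.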
o Socket:-
It acts as a type of endpoint for sending or receiving data in a network. It works for data sent
between processes on the same computer or between different computers on the same
network. Hence, it is used by several types of operating systems.
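A minimal local sketch: `socket.socketpair()` yields two already-connected endpoints, as two processes on the same machine would share; for communication across a network, the usual `socket()`, `bind()` and `connect()` calls would be used instead.

```python
import socket

a, b = socket.socketpair()       # two connected endpoints

a.sendall(b"ping")               # one side sends ...
reply = b.recv(1024)             # ... the other receives
b.sendall(b"pong")               # and sockets are two-way, so it can answer
answer = a.recv(1024)

a.close()
b.close()
print(reply.decode(), answer.decode())
```

Note that, unlike a pipe, a single socket carries traffic in both directions.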
o File:-
A file is a type of data record or a document stored on the disk and can be acquired on demand by
the file server. Another most important thing is that several processes can access that file as
required or needed.
o Signal:-
As the name implies, signals are used in inter-process communication in a minimal way.
Typically, they are system messages sent by one process to another. Therefore, they
are not used for sending data but for sending remote commands between processes.
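A hedged, POSIX-only sketch showing why a signal carries a command rather than data: the receiver only learns "this signal arrived" and runs a handler. For simplicity the process here signals itself; `os.kill` with another PID would signal a different process.

```python
import os
import signal

got = []

def handler(signum, frame):
    got.append(signum)           # all the signal delivers is its own number

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # a process signalling itself

print(got == [signal.SIGUSR1])
```

Since no payload travels with the signal, it suits notifications ("stop", "reload", "child exited") rather than transferring data between processes.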
There are numerous reasons to use inter-process communication for sharing data. Some of the
most important reasons are given below: