
UNIT-1

Operating System Definition and Function


In a computer system (comprising hardware and software), the hardware can only understand
machine code (in the form of 0s and 1s), which makes no sense to a naive user.

We need a system which can act as an intermediary and manage all the processes and resources
present in the system.

An Operating System can be defined as an interface between user and hardware. It is responsible
for the execution of all the processes, Resource Allocation, CPU management, File Management and
many other tasks.

The purpose of an operating system is to provide an environment in which a user can execute
programs in a convenient and efficient manner.
Structure of a Computer System

Types of Operating Systems (OS)

An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.
Batch Operating System

In the 1970s, batch processing was very popular. In this technique, similar types of jobs were
batched together and executed as a group. Organizations typically had a single large computer,
called a mainframe.

In a batch operating system, access is given to more than one person; they submit their respective
jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then executes the
jobs one by one. The users collect their respective outputs once all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to another as soon
as the job was completed. It contained a small set of programs called the resident monitor that
always resided in one part of the main memory. The remaining part is used for servicing jobs.
Advantages of Batch OS

o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time
between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation.

For Example:

There are five jobs, J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high,
then the other four jobs will never be executed, or they will have to wait for a very long time. Hence,
the other processes starve.
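The effect of a long first job on first-come-first-served batch execution can be sketched in a few lines of Python (a simulation of our own for illustration, not any real OS interface):

```python
# Simulate first-come-first-served batch execution to show starvation:
# every job must wait for all jobs queued ahead of it.
def fcfs_waiting_times(burst_times):
    """Return the waiting time of each job under FCFS order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a job waits for everything queued before it
        elapsed += burst
    return waits

# J1 has a very large burst time; J2..J5 are short.
bursts = [1000, 2, 3, 1, 4]
print(fcfs_waiting_times(bursts))  # -> [0, 1000, 1002, 1005, 1006]
```

The short jobs J2 through J5 each wait more than 1000 time units because J1 arrived first, which is exactly the starvation described above.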

2. Not Interactive

Batch processing is not suitable for jobs that depend on the user's input. If a job requires the
input of two numbers from the console, it will never get them in a batch-processing scenario,
since no user is present at the time of execution.

Multiprogramming Operating System

Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.

In a multiprogramming environment, when a process performs its I/O, the CPU can start executing
other processes. Therefore, multiprogramming improves the efficiency of the system.
Advantages of Multiprogramming OS

o Throughput increases, as the CPU always has one program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS

o Multiprogramming systems provide an environment in which various system resources are
used efficiently, but they do not provide any user interaction with the computer system.
Multiprocessing Operating System
In multiprocessing, parallel computing is achieved. More than one processor is present in the
system, and together they can execute more than one process at the same time. This increases the
throughput of the system.
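As a rough sketch of this idea, Python's standard multiprocessing module can distribute independent tasks across several worker processes (the two-worker pool and the `square` task are our own illustrative choices):

```python
# Distribute independent tasks across worker processes with a Pool,
# illustrating how multiple processors raise throughput.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=2) as pool:           # two worker processes
        results = pool.map(square, range(6))  # tasks run in parallel
    print(results)  # -> [0, 1, 4, 9, 16, 25]
```

If one worker process crashes, the remaining workers can still take tasks, which mirrors the reliability advantage listed below.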

Advantages of Multiprocessing operating system:

o Increased reliability: Due to the multiprocessing system, processing tasks can be distributed
among several processors. This increases reliability as if one processor fails, the task can be
given to another processor for completion.
o Increased throughput: With several processors, more work can be done in less time.
Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it has to take care
of multiple CPUs simultaneously.
Multitasking Operating System

The multitasking operating system is a logical extension of a multiprogramming system that
enables multiple programs to run seemingly simultaneously. It allows a user to perform more than
one computer task at the same time.
Advantages of Multitasking operating system

o This operating system is more suited to supporting multiple users simultaneously.


o The multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system

o Multiple processes are kept busy at the same time to complete tasks in a multitasking
environment, so the CPU generates more heat.

Network Operating System


An Operating system, which includes software and associated protocols to communicate with other
computers via a network conveniently and cost-effectively, is called Network Operating System.

Advantages of Network Operating System

o In this type of operating system, network traffic reduces due to the division between clients
and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System

o In this type of operating system, the failure of any node in a system affects the whole
system.
o Security and performance are important issues. So trained network administrators are
required for network administration.

Real Time Operating System

In real-time systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or, even if the result is produced, it will be
completely useless.

Real-time systems are applied in cases such as military applications: if you want to launch
a missile, the missile must be launched with a certain precision.
Advantages of Real-time operating system:

o It is easy to lay out, develop, and execute real-time applications under a real-time operating
system.
o A real-time operating system achieves maximum utilization of devices and systems.

Disadvantages of Real-time operating system:

o Real-time operating systems are very costly to develop.


o Real-time operating systems are very complex and can consume critical CPU cycles.

Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent fashion
to several programs simultaneously. Thus it helps provide a large number of users direct access
to the main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is
switched among multiple programs given by different users on a scheduled basis.
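The switching described above can be sketched as a simple round-robin rotation with a fixed time quantum (a toy simulation of our own; real schedulers are far more elaborate):

```python
# Rotate the CPU among jobs in fixed time slices (round-robin),
# the core idea behind time-sharing.
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which (job, slice) pairs get the CPU."""
    queue = deque(bursts.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining - run > 0:              # unfinished jobs rejoin the queue
            queue.append((name, remaining - run))
    return schedule

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
# -> [('A', 2), ('B', 2), ('C', 2), ('A', 1), ('B', 2), ('B', 1)]
```

Every job gets the CPU within one rotation of the queue, which is why each user perceives direct access to the machine.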

A time-sharing operating system allows many users to be served simultaneously, so sophisticated


CPU scheduling schemes and Input/output management are required.

Time-sharing operating systems are very difficult and expensive to build.

Advantages of Time Sharing Operating System


o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle and response time.

Disadvantages of Time Sharing Operating System

o High data transmission rates are required in comparison to other methods.


o Security and integrity of user programs loaded in memory and data need to be maintained
as many users access the system at the same time.

Distributed Operating System

The Distributed Operating system is not installed on a single machine, it is divided into parts, and
these parts are loaded on different machines. A part of the distributed Operating system is installed
on each machine to make their communication possible. Distributed Operating systems are much
more complex, large, and sophisticated than Network operating systems because they also have to
take care of varying networking protocols.

Advantages of Distributed Operating System

o The distributed operating system provides sharing of resources.


o This type of system is fault-tolerant.

Process Management in OS

A program does nothing unless its instructions are executed by a CPU. A program in execution is
called a process. In order to accomplish its task, a process needs computer resources.
More than one process may exist in the system requiring the same resource at the same
time. Therefore, the operating system has to manage all the processes and resources in a
convenient and efficient way. Process management involves:

1. Scheduling processes and threads on the CPUs.


2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

Attributes of a process

The attributes of a process are used by the operating system to create the process control block
(PCB) for each process. This is also called the context of the process. The attributes stored in the
PCB are described below.

1. Process ID

When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.

2. Program counter

A program counter stores the address of the next instruction to be executed, recorded at the point
where the process was suspended. The CPU uses this address when execution of the process resumes.

3. Process State

A process, from its creation to its completion, goes through various states: new, ready,
running, and waiting. We will discuss them in detail later.

4. Priority

Every process has its own priority. The process with the highest priority among the processes gets
the CPU first. This is also stored on the process control block.

5. General Purpose Registers

Every process has its own set of registers which are used to hold the data which is generated during
the execution of the process.

6. List of open files

During execution, every process uses some files, which need to be present in main memory.
The OS maintains a list of open files in the PCB.

7. List of open devices

The OS also maintains a list of all open devices used during the execution of the process.
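The seven attributes above can be gathered into a single record; a minimal illustrative sketch (the field names are our own, not those of any real kernel) might look like this:

```python
# An illustrative process control block holding the attributes listed above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # 1. unique process ID
    program_counter: int = 0       # 2. address of the next instruction
    state: str = "new"             # 3. new / ready / running / waiting
    priority: int = 0              # 4. scheduling priority
    registers: dict = field(default_factory=dict)     # 5. saved register contents
    open_files: list = field(default_factory=list)    # 6. list of open files
    open_devices: list = field(default_factory=list)  # 7. list of open devices

pcb = PCB(pid=42, priority=5)
print(pcb.state)  # -> new
```

When the OS suspends a process, it saves the CPU registers and program counter into this record so that execution can resume later from the same point.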
Process States

State Diagram

1. New

A program that has been selected by the OS to be brought into main memory is called a new process.

2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to
be assigned. The OS picks new processes from secondary memory and puts them all in main
memory.
The processes which are ready for the execution and reside in the main memory are called ready
state processes. There can be many processes present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS depending upon the scheduling
algorithm. Hence, if we have only one CPU in our system, the number of running processes for a
particular time will always be one. If we have n processors in the system then we can have n
processes running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state depending upon
the scheduling algorithm or the intrinsic behaviour of the process.

When a process waits for a certain resource to be assigned, or for input from the user, the
OS moves this process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

When a process finishes its execution, it comes to the termination state. All the context of the
process (the process control block) is deleted, and the process is terminated by the operating
system.

6. Suspend ready

A process in the ready state, which is moved to secondary memory from the main memory due to
lack of the resources (mainly primary memory) is called in the suspend ready state.

If main memory is full and a higher-priority process comes for execution, the OS has to make
room for it in main memory by moving a lower-priority process out to secondary memory. The
suspend-ready processes remain in secondary memory until main memory becomes available.

7. Suspend wait

Instead of removing a process from the ready queue, it is better to remove a blocked process
that is waiting for some resource in main memory. Since it is already waiting for a resource to
become available, it may as well wait in secondary memory and make room for a higher-priority
process. These processes complete their execution once main memory becomes available and
their wait is finished.
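The transitions described in the seven states above can be summarized as a small table; the sketch below (state names are ours, simplified from the text) checks whether a given move is legal:

```python
# Encode the process-state transitions described above as a lookup table.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running", "suspend_ready"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready", "suspend_wait"},
    "suspend_ready": {"ready"},
    "suspend_wait": {"suspend_ready", "waiting"},
}

def can_move(src, dst):
    """Return True if the OS may move a process from state src to dst."""
    return dst in ALLOWED.get(src, set())

print(can_move("running", "waiting"))  # -> True
print(can_move("new", "running"))      # -> False (must pass through ready)
```

Note that a new process can never jump straight to running: it must first be admitted to the ready queue, exactly as the Ready section above describes.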

Operations on the Process

1. Creation

Once a process is created, it enters the ready queue (in main memory) and is ready for execution.
2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one process
and starts executing it. Selecting the process to be executed next is known as scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. A process may enter
the blocked or wait state during execution; in that case, the processor starts executing other
processes.

4. Deletion/killing

Once the purpose of the process is served, the OS kills the process. The context of the
process (PCB) is deleted, and the process is terminated by the operating system.

Process Scheduling in OS (Operating System)

The operating system uses various schedulers for process scheduling, as described below.

1. Long term scheduler

The long-term scheduler is also known as the job scheduler. It chooses processes from the pool
(secondary memory) and keeps them in the ready queue maintained in primary memory.

Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term
scheduler is to choose a perfect mix of IO bound and CPU bound processes among the jobs present
in the pool.

If the job scheduler chooses more IO bound processes then all of the jobs may reside in the blocked
state all the time and the CPU will remain idle most of the time. This will reduce the degree of
Multiprogramming. Therefore, the Job of long term scheduler is very critical and may affect the
system for a very long time.

2. Short term scheduler

The short-term scheduler is also known as the CPU scheduler. It selects one of the jobs from the
ready queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job is going to be dispatched for execution. The
job of the short-term scheduler can be very critical in the sense that if it selects a job whose CPU
burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.

This problem is called starvation which may arise if the short term scheduler makes some mistakes
while selecting the job.

3. Medium term scheduler


Medium term scheduler takes care of the swapped out processes.If the running state processes
needs some IO time for the completion then there is a need to change its state from running to
waiting.

Medium term scheduler is used for this purpose. It removes the process from the running state to
make room for the other processes. Such processes are the swapped out processes and this
procedure is called swapping. The medium term scheduler is responsible for suspending and
resuming the processes.

It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of
processes in the ready queue.

Process Queues

The Operating system manages various types of queues for each of the process states. The PCB
related to the process is also stored in the queue of the same state. If the Process is moved from one
state to another state then its PCB is also unlinked from the corresponding queue and added to the
other state queue in which the transition is made.

The following queues are maintained by the operating system.

1. Job Queue

In the beginning, all the processes are stored in the job queue, which is maintained in secondary
memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary
memory.

2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from the
ready queue and dispatches it to the CPU for execution.

3. Waiting Queue

When a process needs some I/O operation in order to complete its execution, the OS changes its
state from running to waiting. The context (PCB) associated with the process is stored in the
waiting queue, and the processor uses it when the process finishes its I/O.

Thread in Operating System



A thread is a single sequence stream within a process. Threads are also called lightweight processes,
as they possess some of the properties of processes. Each thread belongs to exactly one process. In
an operating system that supports multithreading, a process can consist of many threads. But
threads run truly in parallel only if there is more than one CPU; otherwise, two threads have to
context-switch on that single CPU.
What is Thread in Operating Systems?
In a process, a thread refers to a single sequential activity being executed. These activities are also
known as threads of execution or threads of control. Any operating system process can execute a
thread, so we can say that a process can have multiple threads.
Why Do We Need Thread?
 Threads run in parallel improving the application performance. Each such thread has its own
CPU state and stack, but they share the address space of the process and the environment.
 Threads can share common data so they do not need to use inter-process communication. Like
the processes, threads also have states like ready, executing, blocked, etc.
 Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.
 Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs for
the thread, and register contents are saved in (TCB). As threads share the same address space
and resources, synchronization is also required for the various activities of the thread.
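The last two points, shared address space plus the need for synchronization, can be demonstrated with Python's standard threading module (a minimal sketch; the counter and worker names are our own):

```python
# Two threads of one process share the same counter variable (same address
# space), so access to it must be synchronized with a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:            # synchronization over the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 20000
```

Without the lock, the two threads could interleave their read-modify-write steps and lose updates, which is precisely why thread synchronization is required.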
Components of Threads
These are the basic components of a thread:
 Stack Space
 Register Set
 Program Counter
Types of Thread in Operating System
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread

1. User Level Threads


A user-level thread is a type of thread that is not created using system calls. The kernel plays no
part in the management of user-level threads; they can be easily implemented by the user. To the
kernel, a process with user-level threads appears as, and is managed like, a single process. Let's
look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of the User-Level Thread is easier than Kernel Level Thread.
 Context Switch Time is less in User Level Thread.
 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple
representation.
Disadvantages of User-Level Threads
 There is a lack of coordination between Thread and Kernel.
 In case of a page fault, the whole process can be blocked.
2. Kernel Level Threads
A kernel-level thread is a type of thread that the operating system recognizes directly. The kernel
maintains a thread table where it keeps track of all threads in the system, and the operating system
kernel helps in managing the threads. Kernel threads have a somewhat longer context-switching
time.
Advantages of Kernel-Level Threads
 It has up-to-date information on all threads.
 Applications that block frequently are to be handled by kernel-level threads.
 Whenever any process requires more time to process, Kernel-Level Thread provides more time
to it.
Disadvantages of Kernel-Level threads
 Kernel-Level Thread is slower than User-Level Thread.
 Implementation of this type of thread is a little more complex than a user-level thread.
Difference Between Process and Thread
The primary difference is that threads within the same process run in a shared memory space, while
processes run in separate memory spaces. Threads are not independent of one another like
processes are, and as a result, threads share with other threads their code section, data section, and
OS resources (like open files and signals). But, like a process, a thread has its own program counter
(PC), register set, and stack space.
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such as the CPU, memory, and I/O
devices.
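The responsiveness benefit can be made concrete: overlapping two I/O-like waits with threads takes roughly as long as one wait, not the sum of both (a hedged timing sketch; `time.sleep` stands in for real I/O):

```python
# Overlap two I/O-like waits with threads: the elapsed time is close to one
# wait (~0.2s), not the serial total (0.4s).
import threading
import time

def io_task():
    time.sleep(0.2)   # stand-in for an I/O wait (disk, network, user input)

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(elapsed < 0.35)  # -> True: both waits ran concurrently
```

This is why browsers and word processors dedicate separate threads to input handling: a thread blocked on I/O does not freeze the rest of the application.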

Single Threaded vs Multi-threaded Process

Benefits of Thread in Operating System


 Responsiveness: If a process is divided into multiple threads, then when one thread completes its
execution, its output can be returned immediately.
 Faster context switch: Context switch time between threads is lower compared to the process
context switch. Process context switching requires more overhead from the CPU.
 Effective utilization of multiprocessor system: If we have multiple threads in a single process,
then we can schedule multiple threads on multiple processors. This will make process execution
faster.
 Resource sharing: Resources like code, data, and files can be shared among all threads within a
process. Note: Stacks and registers can’t be shared among the threads. Each thread has its own
stack and registers.
 Communication: Communication between multiple threads is easier, as the threads share a
common address space, while processes must follow specific communication techniques to
communicate with each other.
 Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time is
increased, thus increasing the throughput of the system.
Conclusion
Threads in operating systems are lightweight processes that improve application speed by executing
concurrently within the same process. They share the process’s address space and resources, which
allows for more efficient communication and resource utilisation. Threads are classified as either
user-level or kernel-level, with each having advantages and drawbacks. Multithreading enhances
system response time, context switching speed, resource sharing, and overall throughput. This
technique is critical for improving the speed and responsiveness of current computing systems.

Inter Process Communication (IPC)


Processes can coordinate and interact with one another using a method called inter-process
communication (IPC). By facilitating process collaboration, it significantly contributes to the
efficiency, modularity, and simplicity of software systems.
Types of Process
 Independent process
 Co-operating process
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those processes,
which are running independently, will execute very efficiently, in reality, there are many situations
when cooperative nature can be utilized for increasing computational speed, convenience, and
modularity. Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication between these
processes can be seen as a method of cooperation between them. Processes can communicate with
each other through two fundamental modes: shared memory and message passing.
Approaches to Inter process Communication

These are a few different approaches to inter-process communication:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a type of data channel that is unidirectional in nature, meaning that data in this type of
channel can move in only a single direction at a time. Still, one can use two channels of this type
so that data can be both sent and received between two processes. Typically, a pipe uses the
standard methods for input and output. Pipes are used in all types of POSIX systems and in
different versions of Windows operating systems as well.
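Python exposes the POSIX-style pipe directly, so the one-way channel can be sketched in a few lines (shown within a single process for brevity; normally the two ends belong to different processes):

```python
# Create a unidirectional pipe: one read end, one write end.
import os

read_fd, write_fd = os.pipe()        # the one-way channel
os.write(write_fd, b"hello via pipe")
os.close(write_fd)                   # closing the write end signals end-of-data
data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())  # -> hello via pipe
```

For two-way communication, two such pipes are used, one for each direction, matching the description above.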

Shared Memory:-

Shared memory can be referred to as a type of memory that can be used or accessed by multiple
processes simultaneously. It is primarily used so that processes can communicate with each other.
Shared memory is supported by almost all POSIX and Windows operating systems.
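One hedged way to sketch this in Python is `multiprocessing.Value`, which places a single integer in memory shared between a parent and a child process (the `increment` helper is our own illustration):

```python
# A parent and child process communicate through one shared integer.
from multiprocessing import Process, Value

def increment(shared):
    with shared.get_lock():      # synchronize access to the shared region
        shared.value += 1

if __name__ == "__main__":
    shared = Value("i", 41)      # 'i' = C int, initial value 41
    p = Process(target=increment, args=(shared,))
    p.start()
    p.join()
    print(shared.value)  # -> 42
```

The child's write is visible to the parent without any copying, which is what distinguishes shared memory from message-based approaches.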

Message Queue:-

In general, several different processes are allowed to read messages from and write messages to
the message queue. The messages are stored in the queue until their recipients retrieve them. In
short, the message queue is very helpful for inter-process communication and is used by all
operating systems.

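A minimal sketch of a message queue, using Python's `multiprocessing.Queue` (the producer function and message names are our own illustrative choices):

```python
# Messages stay in the queue until the recipient retrieves them, in FIFO order.
from multiprocessing import Process, Queue

def producer(q):
    q.put("job-1")
    q.put("job-2")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    print(q.get())  # -> job-1
    print(q.get())  # -> job-2
    p.join()
```

Unlike shared memory, the sender and receiver never touch the same variable: all data travels as discrete messages through the queue.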
Message Passing:-

Message passing is a mechanism that allows processes to synchronize and communicate with each
other. By using message passing, processes can communicate with each other without resorting
to shared variables.

Usually, the inter-process communication mechanism provides two operations that are as follows:

o send (message)
o receive (message)
Direct Communication:-

In this type of communication, a link is created or established between two communicating
processes. However, between every pair of communicating processes, only one link can exist.

Indirect Communication

Indirect communication can only be established when processes share a common mailbox. Each
pair of processes may share several communication links, and these links can be unidirectional
or bi-directional.

FIFO:-

A FIFO (named pipe) is a type of general communication between two unrelated processes. It can
also be considered full-duplex, meaning that one process can communicate with another process
and vice versa.

Some other different approaches

o Socket:-
It acts as a type of endpoint for receiving or sending data in a network. It works both for data sent
between processes on the same computer and for data sent between different computers on the
same network. Hence, it is used by several types of operating systems.
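A hedged sketch of the socket endpoint idea: `socket.socketpair` returns two already-connected endpoints, the same abstraction used whether the peers are on one machine or across a network (the "ping"/"pong" payloads are our own):

```python
# Two connected socket endpoints exchanging data in both directions.
import socket

a, b = socket.socketpair()      # a pair of connected endpoints
a.sendall(b"ping")
reply = b.recv(1024)
print(reply.decode())           # -> ping
b.sendall(b"pong")
answer = a.recv(1024)
print(answer.decode())          # -> pong
a.close()
b.close()
```

In real network use, the same `send`/`recv` interface applies; only the way the two endpoints are created and connected (bind, listen, connect) changes.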

o File:-
A file is a type of data record or a document stored on the disk and can be acquired on demand by
the file server. Another most important thing is that several processes can access that file as
required or needed.

o Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. Typically,
they are system messages sent by one process to another. Therefore, they are not used for sending
data but for sending remote commands between processes.

Why do we need inter-process communication?

There are numerous reasons to use inter-process communication for sharing the data. Here are
some of the most important reasons that are given below:

o It helps with modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes communicate with each other and synchronize their actions.
