
UNIT-1 NOTES

OPERATING SYSTEM
Introduction
An Operating System (OS) is an interface between a computer user and the computer hardware. An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
In other words, an operating system is software that enables applications to interact with a computer's hardware. The software that contains the core components of the operating system is called the kernel.
Some popular Operating Systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.
Goals of the Operating System
An Operating System has two kinds of goals: a primary goal and a secondary goal.
• Primary Goal: The primary goal of an Operating System is to provide a user-friendly and convenient environment. It is not compulsory to use an Operating System, but without one things become harder: the user would have to perform all the process scheduling themselves, and converting user code into machine code is also very difficult. So we use an Operating System to act as an intermediary between us and the hardware. All you need to do is give commands to the Operating System, and the Operating System will do the rest for you. The Operating System should therefore be convenient to use.
• Secondary Goal: The secondary goal of an Operating System is efficiency. The Operating System should manage all resources in such a way that they are fully utilised, and no resource is held idle while there is a pending request for it at that instant of time.

Architecture
A generic architecture of an Operating System can be pictured as layers: user applications at the top, the Operating System (with the kernel at its core) in the middle, and the hardware at the bottom.
Operating System Generations
Operating systems have been evolving over the years. We can categorise this evolution into different generations, briefly described below:
0th Generation
The term 0th generation is used to refer to the period of development of computing when
Charles Babbage invented the Analytical Engine and later John Atanasoff created a computer
in 1940. The hardware component technology of this period was electronic vacuum tubes.
There was no Operating System available for this generation of computers, and computer programs were written in machine language. Computers of this generation were inefficient and dependent on the varying competencies of the individual programmers who acted as operators.
First Generation (1951-1956)
The first generation marked the beginning of commercial computing including the introduction
of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.
System operation was performed with the help of expert operators, without the benefit of an operating system, for a time. Programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded. Later, mono-programmed operating systems were developed, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions. These systems still operated under the control of a human operator, who followed a number of steps to execute a program. Programming languages such as FORTRAN appeared in this period; FORTRAN was developed by John W. Backus in 1956.
Second Generation (1956-1964)
The second generation of computer hardware was most notably characterised by transistors
replacing vacuum tubes as the hardware component technology. The first operating system, GMOS, was developed by General Motors for the IBM 704 computer. GMOS was a single-stream batch processing system: it collected similar jobs into groups or batches, submitted to the machine on punched cards, and executed them one after another. After completing one job, the operating system cleaned up and then read and initiated the next job from the cards.
Researchers began to experiment with multiprogramming and multiprocessing in their
computing services called the time-sharing system. A noteworthy example is the Compatible
Time Sharing System (CTSS), developed at MIT during the early 1960s.
Third Generation (1964-1979)
The third generation officially began in April 1964 with IBM’s announcement of its
System/360 family of computers. Hardware technology began to use integrated circuits (ICs)
which yielded significant advantages in both speed and economy.
Operating system development continued with the introduction and widespread adoption of
multiprogramming. The idea of taking fuller advantage of the computer’s data channel I/O
capabilities continued to develop.
Another advance, which led to the development of personal computers in the fourth generation, was the emergence of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time, indeed, for the development of both computer hardware and the accompanying operating systems.
Fourth Generation (1979 – Present)
The fourth generation is characterised by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very large scale integration (VLSI). Many Operating Systems which we are using today, like Windows, Linux, and macOS, were developed in the fourth generation.

What is a kernel?
The kernel is the core of the operating system. It is a conduit between the applications and the hardware that does the actual data processing (CPU, disk, memory, etc.). As the operating system's core component, the kernel controls communication between user-level programmes and the hardware connected to the system. In other words, it is a platform that provides a certain collection of libraries and architecture for newly created applications and facilitates communication between them and the hardware.
Types of Kernels:
1. Monolithic Kernel
In a monolithic kernel, all operating system services run in kernel space. The parts of the system are interdependent. It is not easy to maintain and has many lines of code.
2. Micro Kernel
This type of kernel takes a simpler approach. The kernel itself provides only basic services such as thread scheduling and virtual memory, while most other services run in user space. With fewer services running in kernel space, it is more stable.
3. Hybrid Kernel
A hybrid kernel combines monolithic kernels and microkernels. It has the modularity and stability of a microkernel and the speed and design of a monolithic kernel.
4. Exo Kernel
An exokernel adheres to the end-to-end philosophy. It contains as few hardware abstractions as is practical. Applications are given physical resources to use directly.
5. Nano Kernel
This particular kernel lacks system services but provides hardware abstraction. The Micro Kernel and Nano Kernel are comparable, since the Micro Kernel likewise lacks most system services.

What is a Shell?
This is the user interface that interacts with the kernel, which in turn, interacts with the
underlying hardware. It is a command line interface (CLI) or a graphical user interface (GUI)
through which users can communicate with the computer and execute various commands and
programs. The shell interprets commands entered by the user and sends instructions to the OS
to perform tasks. It provides features like scripting, exploring and writing to a file system,
automation and process management.
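The read-interpret-execute cycle a shell performs can be sketched in a few lines. This is a hypothetical toy, not a real shell; it leans on Python's subprocess module to ask the OS to run each program:

```python
import shlex
import subprocess

def run_command(line):
    """Do what a simple shell does with one command line:
    split it into words, then ask the OS to execute the program."""
    args = shlex.split(line)
    if not args:
        return ""
    # subprocess.run asks the kernel to create a process for the
    # program (via fork/exec on Unix) and waits for it to finish
    result = subprocess.run(args, capture_output=True, text=True)
    return result.stdout

# A shell's main loop just repeats: prompt, read a line, interpret, execute
print(run_command("echo hello from the shell"), end="")  # prints: hello from the shell
```

A real shell adds pipes, redirection, job control, and built-in commands on top of this basic loop.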
Types of shell:
1. The Bourne Shell
The first shell, created by Steve Bourne at AT&T Bell Labs, is known as sh; its full path name is /bin/sh. For shell programming it is the favoured shell due to its speed and compactness. One flaw of the Bourne shell is that it lacks interactive capabilities, such as the ability to remember past commands (history). The Bourne shell also lacks built-in expression handling for math and logic.
2. The C Shell
The C shell (csh) incorporated interactive features like command history and aliases. It includes practical programming tools, such as built-in math and an expression syntax similar to C. The default prompt for non-root users is hostname%, while hostname# is the standard prompt for the root user.
3. The Korn Shell
The Korn shell is a superset of the Bourne shell: everything in the Bourne shell is supported, and scripts written for the Bourne shell execute under it. It has interactive features similar to the C shell's, and includes useful programming features such as built-in arithmetic, arrays, functions, and string manipulation tools reminiscent of those found in C. It outperforms the C shell in speed.
Difference between Linux and Windows

1. Linux is an open-source operating system, while Windows is not open source.
2. Linux is free of cost, while Windows is costly.
3. Linux file names are case-sensitive, while Windows file names are case-insensitive.
4. Linux uses a monolithic kernel, while Windows uses a hybrid kernel.
5. Linux is more efficient in comparison to Windows.
6. In Linux, the forward slash is used for separating directories, while in Windows the backslash is used.
7. Linux provides more security than Windows.
8. Linux is widely used in hacking-purpose-based systems, while Windows does not provide much efficiency for hacking.
9. Linux has 3 types of user accounts: (1) Regular, (2) Root, (3) Service. Windows has 4 types: (1) Administrator, (2) Standard, (3) Child, (4) Guest.
10. In Linux, the root user is the super user and has all administrative privileges. In Windows, the administrator user has all administrative privileges on the computer.
11. Because the Linux file naming convention is case-sensitive, sample and SAMPLE are two different files in Linux/Unix. In Windows, you cannot have two files with the same name in the same folder.

Functions of Operating System


The operating system provides various functionalities, which are discussed below:
1. Controlling System Performance
One of the most important functions of the operating system is monitoring the system's health. For this purpose, it observes the delay between a service being requested and the requested service being served. This delay is kept as small as possible to improve the overall performance of the system.
2. Memory Management
In a computer system we have primary memory, also called main memory, which is made up of a sequence of bytes. Each byte in main memory has a unique address, which helps the OS check whether it is occupied or not. For a program to execute, it must be in main memory. The operating system can easily check which bytes of main memory are empty and which are not, so it allocates main memory for program execution, and when the program completes or terminates, it deallocates that memory. The operating system also keeps a record of which bytes of memory are assigned to which program.
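The byte-level bookkeeping described above can be modelled with a toy first-fit allocator, assuming one free/used flag per byte (real operating systems use far more elaborate structures, such as page tables):

```python
FREE, USED = 0, 1

class MainMemory:
    """Toy model of the OS's view of main memory: one flag per byte."""
    def __init__(self, size):
        self.flags = [FREE] * size

    def allocate(self, n):
        """First fit: find n contiguous free bytes, mark them used,
        and return the base address (or None if no room is left)."""
        run = 0
        for addr in range(len(self.flags)):
            run = run + 1 if self.flags[addr] == FREE else 0
            if run == n:
                base = addr - n + 1
                for a in range(base, base + n):
                    self.flags[a] = USED
                return base
        return None

    def deallocate(self, base, n):
        """Called when a program terminates: release its bytes."""
        for a in range(base, base + n):
            self.flags[a] = FREE

mem = MainMemory(16)
a = mem.allocate(8)   # program A loaded at address 0
b = mem.allocate(8)   # program B loaded at address 8
mem.deallocate(a, 8)  # program A terminates
c = mem.allocate(4)   # new program reuses the freed space at address 0
```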
3. Device Management
The operating system manages the communication between the computer system and the peripheral devices connected to its I/O ports. Peripheral devices use their respective drivers to communicate with the system. The operating system determines which program or process gets access to which connection and device. It also makes sure that when a program is executed or terminated, the communication between the device and the computer system is stopped.
4. Process Management
A process is a program under execution. The operating system manages all the processes so that each process gets the CPU for a specific time to execute itself, and so that there is less waiting time for each process. This management is also called process scheduling.
5. Resource Allocation
The operating system manages all the resources of the computer system. It ensures proper use of all the resources by tracking which resource is used by which process and for how long.
6. Handling the I/O Operations
Input/output devices like mice, keyboards, and printers are external I/O devices, and each device has a different nature. The operating system appropriately manages the input and output operations of these devices.
7. Security
Computer security is a very important aspect of any operating system. The reliability of an operating system is determined in part by the security it provides. Modern operating systems use a firewall for security. A firewall is a security system that monitors activity happening on the computer and blocks that activity in case of any threat.
8. Coordination Between Other Software and the User
The operating system coordinates other software in the system, such as assemblers and compilers, so that users can access it efficiently.

Types of Operating Systems (OS)


Batch Operating System
In the 1970s, Batch processing was very popular. In this technique, similar types of jobs were
batched together and executed in time. People were used to having a single computer which
was called a mainframe.
In Batch operating system, access is given to more than one person; they submit their respective
jobs to the system for the execution.
The system put all of the jobs in a queue on the basis of first come first serve and then executes
the jobs one by one. The users collect their respective output when all the jobs get executed.

Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it reduces the CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept busy.
Each process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, the CPU can start executing other processes. Therefore, multiprogramming improves the efficiency of the system.

Advantages of Multiprogramming OS
o Throughput increases, as the CPU always has some program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems
resources are used efficiently, but they do not provide any user interaction with the
computer system.

Multiprocessing Operating System

In multiprocessing, parallel computing is achieved. More than one processor is present in the system, and they can execute more than one process at the same time. This increases the throughput of the system.
Advantages of Multiprocessing operating system:
o Increased reliability: In a multiprocessing system, processing tasks can be distributed among several processors. This increases reliability: if one processor fails, the task can be given to another processor for completion.
o Increased throughput: As the number of processors increases, more work can be done in less time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and sophisticated as it takes care of
multiple CPUs simultaneously.
Multitasking Operating System
The multitasking operating system is a logical extension of a multiprogramming system that enables the execution of multiple programs simultaneously. It allows a user to perform more than one computer task at the same time.
Advantages of Multitasking operating system
o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.
Disadvantages of Multitasking operating system
o The processor is kept busier completing multiple tasks at the same time in a multitasking environment, so the CPU generates more heat.
Network Operating System

An Operating system, which includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively, is called Network Operating
System.
Advantages of Network Operating System
o In this type of operating system, network traffic reduces due to the division between
clients and the server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system affects the whole
system.
o Security and performance are important issues. So trained network administrators are
required for network administration.

Real Time Operating System

In real-time systems, each job carries a certain deadline within which the job is supposed to be completed; otherwise there will be a huge loss, or, even if a result is produced, it will be completely useless.

Applications of real-time systems exist in the military: if you want to launch a missile, then the missile must be launched with a certain precision.
The various examples of Real-time operating systems are:
o MTS
o Lynx
o QNX
o VxWorks etc.
1. Hard Real-Time operating system:
In a Hard RTOS, all critical tasks must be completed within the specified time duration, i.e., within the given deadline. Not meeting a deadline would result in critical failures such as damage to equipment or even loss of human life.
For example, consider the airbags provided by carmakers in the steering wheel of the driver's seat. When the driver applies the brakes at a particular instant, the airbags inflate and prevent the driver's head from hitting the steering wheel. Had there been a delay of even milliseconds, it would have resulted in an accident.
Similarly, consider online stock-trading software. If someone wants to sell a particular share, the system must ensure that the command is performed within a given critical time. Otherwise, if the market falls abruptly, it may cause a huge loss to the trader.
2. Soft Real-Time operating system:
A Soft RTOS tolerates a few delays. In this kind of RTOS, there may be a deadline assigned to a particular job, but a delay of a small amount of time is acceptable. So deadlines are handled softly by this kind of RTOS.
For example, this type of system is used in online transaction systems and live stock-price quotation systems.
3. Firm Real-Time operating system:
A Firm RTOS also needs to observe deadlines. However, missing a deadline may not have a massive impact, though it can still cause undesired effects, such as a massive reduction in the quality of a product.
For example, this kind of system is used in various forms of multimedia applications.

Advantages of Real-time operating system:

o It is easy to design, develop, and execute real-time applications under a real-time operating system.
o A real-time operating system makes maximum utilization of devices and systems.
Disadvantages of Real-time operating system:
o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent fashion to several programs simultaneously. Thus it helps provide a large number of users direct access to the main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is switched among multiple programs given by different users on a scheduled basis.

A time-sharing operating system allows many users to be served simultaneously, so sophisticated CPU scheduling schemes and input/output management are required. Time-sharing operating systems are very difficult and expensive to build.
Advantages of Time Sharing Operating System
o The time-sharing operating system provides effective utilization and sharing of
resources.
o This system reduces CPU idle and response time.
Disadvantages of Time Sharing Operating System
o It requires very high data transmission rates in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be
maintained as many users access the system at the same time.
Distributed Operating System

The Distributed Operating system is not installed on a single machine, it is divided into parts,
and these parts are loaded on different machines. A part of the distributed Operating system is
installed on each machine to make their communication possible. Distributed Operating
systems are much more complex, large, and sophisticated than Network operating systems
because they also have to take care of varying networking protocols.

Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.
Disadvantages of Distributed Operating System
Protocol overhead can dominate computation cost.

Processes
A process is basically a program in execution. The execution of a process must progress in a sequential fashion. When a program is loaded into memory, it becomes a process. A process is an 'active' entity, as opposed to the program, which is considered a 'passive' entity. Attributes held by a process include hardware state, memory, CPU, etc.
States of a Process in Operating Systems
In an operating system, a process is a program that is being executed. During its execution, a
process goes through different states. Understanding these states helps us see how the operating
system manages processes, ensuring that the computer runs efficiently.
There is a minimum of five states. Even though a process is always in one of these states during execution, the names of the states are not standardized. Each process goes through several stages throughout its life cycle; we discuss the different states in detail below.

Process States in Operating System


The states of a process are as follows:
• New State: In this step, the process is about to be created but not yet created. It is the
program that is present in secondary memory that will be picked up by the OS to create
the process.
• Ready State: New -> Ready to run. After the creation of a process, the process enters
the ready state i.e. the process is loaded into the main memory. The process here is
ready to run and is waiting to get the CPU time for its execution. Processes that are
ready for execution by the CPU are maintained in a queue called a ready queue for
ready processes.
• Run State: The process is chosen from the ready queue by the OS for execution and
the instructions within the process are executed by any one of the available processors.
• Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region (the lock for which is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.
• Terminated or Completed State: The process is killed and its PCB is deleted. The resources allocated to the process are released or deallocated.
• Suspend Ready: Process that was initially in the ready state but was swapped out of
main memory(refer to Virtual Memory topic) and placed onto external storage by the
scheduler is said to be in suspend ready state. The process will transition back to a
ready state whenever the process is again brought onto the main memory.
• Suspend Wait or Suspend Blocked: Similar to suspend ready, but for a process that was performing an I/O operation when a lack of main memory caused it to be moved to secondary memory. When its work is finished, it may go to the suspend ready state.
• CPU and I/O Bound Processes: If the process is intensive in terms of CPU
operations, then it is called CPU bound process. Similarly, If the process is intensive
in terms of I/O operations then it is called I/O bound process.
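The transitions between these states can be sketched as a small state machine. The transition set below is an assumption drawn from the five-state model described above, not an exhaustive list (the suspend states are omitted for brevity):

```python
# Allowed process state transitions in the basic five-state model
TRANSITIONS = {
    ("New", "Ready"),          # admitted: loaded into main memory
    ("Ready", "Running"),      # dispatched by the short-term scheduler
    ("Running", "Ready"),      # preempted: time slice expired
    ("Running", "Blocked"),    # waits for I/O or an event
    ("Blocked", "Ready"),      # I/O completed
    ("Running", "Terminated"), # finished or killed
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "New"

    def move(self, new_state):
        """Refuse any transition the model does not allow."""
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A typical life cycle: admitted, runs, blocks on I/O, runs again, exits
p = Process(pid=1)
for s in ["Ready", "Running", "Blocked", "Ready", "Running", "Terminated"]:
    p.move(s)
```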

Categories of Scheduling
Scheduling falls into one of two categories:
• Non-Preemptive: In this case, a process’s resource cannot be taken before the process
has finished running. When a running process finishes and transitions to a waiting state,
resources are switched.
• Preemptive: In this case, the OS assigns resources to a process for a predetermined
period. The process switches from running state to ready state or from waiting state to
ready state during resource allocation. This switching happens because the CPU may
give other processes priority and substitute the currently active process for the higher
priority process.
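Preemptive scheduling can be illustrated with a sketch of round robin, one common preemptive policy. The burst times and quantum here are arbitrary example values:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Preemptive scheduling sketch: each process runs for at most
    `quantum` time units, then is preempted and moved to the back
    of the ready queue until its CPU burst is used up."""
    ready = deque((pid, burst) for pid, burst in enumerate(burst_times))
    order = []
    while ready:
        pid, remaining = ready.popleft()
        order.append(pid)                             # process gets the CPU
        if remaining > quantum:
            ready.append((pid, remaining - quantum))  # preempted, re-queued
    return order

# Three processes with CPU bursts of 5, 3 and 2 units, quantum of 2
print(round_robin([5, 3, 2], quantum=2))  # [0, 1, 2, 0, 1, 0]
```

Under a non-preemptive policy, by contrast, process 0 would hold the CPU for all 5 units before anyone else ran.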
Types of Process Schedulers

There are three types of process schedulers:


1. Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming,
i.e., the number of processes present in a ready state at any point in time. It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes are those that spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. Long-term schedulers operate at a high level and are typically used in batch-processing systems.

2. Short-Term or CPU Scheduler


It is responsible for selecting one process from the ready queue and scheduling it into the running state. Note: the short-term scheduler only selects the process to schedule; it does not itself load the process onto the CPU. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation due to processes with high burst times.
The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (ready to running state). Context switching is done by the dispatcher only. A dispatcher does the following:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance
between the I/O bound and the CPU bound. It reduces the degree of multiprogramming.
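As a minimal example of the short-term scheduler's job, here is a sketch of non-preemptive first-come, first-served (FCFS) scheduling, assuming for simplicity that all processes arrive at time 0:

```python
def fcfs(burst_times):
    """Non-preemptive FCFS: each process waits for all earlier ones
    to finish. Returns the waiting time of each process, assuming
    all processes arrive at time 0."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)  # time spent sitting in the ready queue
        clock += burst       # CPU runs the process to completion
    return waits

# Three processes with CPU bursts of 5, 3 and 2 time units
print(fcfs([5, 3, 2]))  # [0, 5, 8]
```

Note how the long first burst delays every later process; this is exactly the starvation risk the CPU scheduler has to guard against.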

Context Switching
Context switching is a mechanism to store and restore the state, or context, of a CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time. A context switcher makes it possible for multiple processes to share a single CPU using this method. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is saved into its process control block, and the state used to set the computer's registers, etc. for the process that will run next is loaded from that process's own PCB. After that, the second process can start executing.


Structure of the Process Control Block


A Process Control Block (PCB) is a data structure used by the operating system to manage information about a process. The PCB keeps track of many important pieces of information needed to manage processes efficiently, including the following key data items.
Process ID(PID): A distinct Process ID (PID) on the PCB serves as the process's identifier
within the operating system. The operating system uses this ID to keep track of, manage, and
differentiate among processes.
Process State: The state of the process, such as running, waiting, ready, or terminated, is
indicated. The operating system makes use of this data to schedule and manage operations.
Program Counter (PC): The PCB stores the program counter value, which indicates the address of the next instruction to be executed in the process. The program counter is saved in the PCB of the running process during context switches and then restored so that execution can continue where it left off.
CPU registers: Holds the current values of the CPU registers associated with the process. Examples include stack pointers, general-purpose registers, and program status flags. By saving and restoring register values, processes can continue operating uninterrupted across context switches.
Memory Management Information: Includes the process's memory allocation information,
such as the base and limit registers or page tables. This information allows the operating
system to manage the process's memory requirements appropriately.
Priority: Some operating systems assign a priority value to each process to decide the order in which processes receive CPU time. The PCB may have a priority field that determines the process's priority level, allowing the scheduler to distribute CPU resources appropriately.
Parent Process ID (PPID): The PID of the parent process that created the present process.
This data is important for process management and tracking process linkages, particularly in
scenarios requiring process hierarchy or process tree architectures.
I/O status: The PCB maintains information about I/O devices and data related to the process.
Open file descriptors, I/O buffers, and pending I/O requests are all included. Storing this
information enables the operating system to manage I/O operations and efficiently handle
input/output requests.
Accounting information: Keeps track of the process's resource utilization data, such as CPU
time, memory usage, and I/O activities. This data aids in performance evaluation and
resource allocation choices.
Inter-Process Communication (IPC) information: If a process communicates with other
processes, the PCB may contain fields or pointers to communication channels, message queues,
shared memory regions, or synchronization primitives. This allows processes to communicate
and share data successfully.
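The PCB fields above, together with the context-switch mechanism from the previous section, can be sketched in a few lines. The field names and the two-process setup are illustrative, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block with a subset of the fields described above."""
    pid: int
    ppid: int
    state: str = "Ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0
    open_files: list = field(default_factory=list)

def context_switch(cpu, old, new):
    """Save the running process's context into its PCB,
    then load the next process's context from its PCB."""
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "Ready"
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    new.state = "Running"

p1 = PCB(pid=1, ppid=0, program_counter=100)
p2 = PCB(pid=2, ppid=0, program_counter=200)
cpu = {"pc": 140, "regs": {"ax": 7}}   # p1 has been running for a while
context_switch(cpu, p1, p2)            # scheduler picks p2; p1's state is saved
```

A later switch back to p1 would restore pc = 140, so p1 resumes exactly where it left off.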

Interprocess Communication
Interprocess communication (IPC) is a mechanism that allows different processes of a computer system to share information. IPC lets different programs run in parallel, share data, and communicate with each other. It is important for two reasons: first, it speeds up the execution of tasks, and second, it ensures that the tasks run correctly and in the order in which they were issued.

Types of Processes:
Cooperating Process in the operating system is a process that gets affected by other processes
under execution or can affect any other process under execution. It shares data with other
processes in the system by directly sharing a logical space which is both code and data or by
sharing data through files or messages.
Whereas, an independent process in an operating system is one that does not affect or impact
any other process of the system. It does not share any data with other processes.
Why Interprocess Communication is Necessary
IPC lets different programs run in parallel, share data, and communicate with each other. It is
important for several reasons:
• It speeds up the execution of tasks.
• It ensures that tasks run correctly and in the intended order.
• It is essential for the efficient operation of an operating system.
• Operating systems use IPC to exchange data with the tools and components the system
uses to interact with the user, such as the keyboard, the mouse, and the graphical user
interface (GUI).
• IPC also lets the system run multiple programs at the same time. For example, the
system might use IPC to provide information to the windowing system about the status
of a window on the screen.
Advantages of Interprocess Communication
• Interprocess communication allows one application to manage another and enables
glitch-free data sharing.
• Interprocess communication helps send messages efficiently between processes.
• The program is easy to maintain and debug because it is divided into different sections
of code that work separately.
• Programmers can perform a variety of other tasks at the same time, including Editing,
listening to music, compiling, etc.
• Data can be shared between different programs at the same time.
• Tasks can be subdivided and run on special types of processors. You can then exchange
data via IPC.
Disadvantages of Interprocess Communication
• Processes that use the shared-memory model must ensure they are not writing to the
same memory location at the same time.
• The shared-memory model can cause problems such as synchronization and
protection issues that need to be addressed.
• IPC is slower than a direct function call.
Methods of Cooperating Process in OS
Cooperating processes in OS require a communication method that allows them to exchange
data and information.
There are two methods by which cooperating processes in OS can communicate:
• Cooperation by Memory Sharing
• Cooperation by Message Passing
Details about the methods are given below:
Cooperation by Sharing
Cooperating processes can communicate with each other through a shared resource such as
data, memory, variables, or files.
Processes then exchange information by reading and writing data in the shared region.
A critical section can be used to preserve data integrity and avoid data inconsistency.
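As a minimal sketch of memory sharing, the snippet below (using Python's multiprocessing module, POSIX assumed) has two processes increment a counter held in a shared memory region; the Value's built-in lock serves as the critical section:

```python
from multiprocessing import Process, Value

def add_many(shared, n):
    for _ in range(n):
        with shared.get_lock():   # enter the critical section
            shared.value += 1     # read-modify-write on the shared region

def run_demo(workers=2, n=1000):
    counter = Value("i", 0)       # an integer living in shared memory
    procs = [Process(target=add_many, args=(counter, n)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

print(run_demo())                 # 2000: the lock prevents lost updates
```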

Cooperation by Message Passing


Cooperating processes in OS can communicate with each other with the help of message
passing: the producer process sends a message and the consumer process receives it.
There is no shared memory; instead, the producer process first sends the message to the kernel,
and the kernel then delivers that message to the consumer process.
The kernel is known as the heart and core of an operating system. The kernel interacts with the
hardware to execute the processes given by user space; it works as a bridge between user
space and hardware. Functions of the kernel include process management, file
management, memory management, and I/O management.
If a consumer process waits for a message from another process before executing a particular
task, this may lead to deadlock; and if the consumer process never receives the message, it
may suffer starvation.
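The producer/consumer exchange described above can be sketched with a kernel-managed message queue; here Python's multiprocessing.Queue stands in for the kernel's message-passing facility (an illustrative sketch, not a specific OS API):

```python
from multiprocessing import Process, Queue

def producer(q):
    for item in ["msg-1", "msg-2", "msg-3"]:
        q.put(item)       # "send": the message travels via the kernel-managed queue
    q.put(None)           # sentinel telling the consumer to stop

def consume_all(q):
    received = []
    while True:
        item = q.get()    # "receive": blocks until a message arrives
        if item is None:
            break
        received.append(item)
    return received

q = Queue()
p = Process(target=producer, args=(q,))
p.start()
msgs = consume_all(q)
p.join()
print(msgs)               # ['msg-1', 'msg-2', 'msg-3']
```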

Need of Cooperating Processes in OS


For example, one process may write to a file that another process reads; in this way, each
process in the system can be affected by another.
The need for cooperating processes in OS can be divided into four types:
1. Information Sharing
2. Computation Speed
3. Convenience
4. Modularity

Dual Mode operations in OS


An error in one program can adversely affect many processes: it might modify the data of
another program or even affect the operating system itself. For example, if a process gets stuck
in an infinite loop, this could affect the correct operation of other processes. So, to ensure the
proper execution of the operating system, there are two modes of operation:
User mode –
When the computer system runs user applications, such as creating a text document or using
an application program, the system is in user mode. When the user application requests a
service from the operating system, or an interrupt or system call occurs, there is a transition
from user to kernel mode to fulfil the request.
Kernel Mode: When the system boots, the hardware starts in kernel mode, and once the
operating system is loaded, it starts user applications in user mode. To protect the hardware,
there are privileged instructions that execute only in kernel mode. If a user program attempts
to run a privileged instruction in user mode, the hardware treats the instruction as illegal and
traps to the OS. Some of the privileged operations are:
1. Handling interrupts
2. Switching from user mode to kernel mode
3. Input-output management

System Calls in Operating System (OS)


A system call is a method for a computer program to request a service from the kernel of
the operating system on which it is running. A system call is a method of interacting with the
operating system via programs. A system call is a request from computer software to an
operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user
programs. It acts as a link between the operating system and a process, allowing user-level
programs to request operating system services. The kernel system can only be accessed using
system calls. System calls are required for any programs that use resources.
Why do you need system calls in an Operating System?
There are various situations where system calls are required in the operating system. Some of
them are as follows:
1. A system call is required when the file system wants to create or delete a file.
2. Network connections require system calls for sending and receiving data packets.
3. Reading from or writing to a file requires a system call.
4. Accessing hardware devices, such as a printer or scanner, requires a system call.
5. System calls are used to create and manage new processes.
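As a tiny illustration, high-level languages expose thin wrappers around system calls; the two calls below each trap into the kernel (getpid() and getppid() on a POSIX system):

```python
import os

pid = os.getpid()    # wraps the getpid() system call: this process's ID
ppid = os.getppid()  # wraps the getppid() system call: the parent's ID
print(pid, ppid)
```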

Types of System Calls

Process Control
Process control system calls are used to direct processes. Examples include creating, loading,
executing, aborting, and terminating a process.
File Management
File management system calls are used to handle files. Examples include creating, deleting,
opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Examples include reading from
and writing to a device, getting device attributes, and releasing a device.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include
getting or setting system data and getting or setting the time or date.
Communication
Communication system calls are used for interprocess communication. Examples include
creating and deleting communication connections and sending and receiving messages.
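The communication category can be sketched with the classic pipe()/fork() pattern (POSIX assumed): the parent writes a message into the pipe and the child reads it back.

```python
import os

r, w = os.pipe()              # pipe() system call: read and write descriptors
pid = os.fork()               # fork() system call: duplicate this process
if pid == 0:                  # child: uses the read end only
    os.close(w)
    data = os.read(r, 100)    # receive the message
    os.close(r)
    os._exit(0 if data == b"hello" else 1)
else:                         # parent: uses the write end only
    os.close(r)
    os.write(w, b"hello")     # send the message
    os.close(w)
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print(exit_code)          # 0: the child received exactly b"hello"
```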
Examples of Windows and Unix system calls
There are various examples of Windows and Unix system calls, listed in the table below:

Category                  Windows                          Unix
Process Control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()
File Manipulation         CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()
Device Management         SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()
Information Maintenance   GetCurrentProcessID()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()
Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()
Protection                SetFileSecurity()                chmod()
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()
open()
The open() system call allows a process to access a file on a file system. It allocates resources
to the file and returns a handle (file descriptor) that the process can refer to. Depending on the
file system and its structure, a file may be opened by many processes at once or by a single
process only.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file must first be opened with open(); the file descriptor it returns identifies the file to be
read.
wait()
In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process creates a child process, the parent can suspend its
own execution until the child finishes by using the wait() system call. Once the child process
has completed its execution, control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way for
a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes create clones of themselves using the fork() system call. It is one of the most
common ways to create processes in operating systems. After fork(), the parent and the child
execute concurrently; if the parent needs the child's result, it calls wait() to suspend itself until
the child completes, at which point control returns to the parent process.
close()
It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file, and the buffers are flushed, the file information is altered,
and the file resources are de-allocated as a result.
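The open()/read()/write()/close() sequence can be sketched with Python's POSIX-style wrappers, which map directly onto these system calls (illustrative; the scratch-file path is invented for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # scratch file for the example

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open(): returns a file descriptor
os.write(fd, b"hello, syscall")                      # write(): descriptor + buffer
os.close(fd)                                         # close(): flush and release resources

fd = os.open(path, os.O_RDONLY)                      # reopen the file for reading
data = os.read(fd, 64)                               # read(): descriptor, max byte count
os.close(fd)
print(data)                                          # b'hello, syscall'
```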
exec()
This system call is invoked when an executable file replaces the earlier executable file in an
already executing process. No new process is built, so the old process identification stays, but
the new program replaces the process's data, heap, stack, and so on.
exit()
The exit() is a system call that is used to end program execution. This call indicates that the
thread execution is complete, which is especially useful in multi-threaded environments. The
operating system reclaims resources spent by the process following the use of the exit() system
function.
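The fork()/exec()/wait()/exit() lifecycle described above can be sketched as follows (POSIX assumed; the child's one-line program and its exit status 7 are arbitrary choices for the example):

```python
import os
import sys

pid = os.fork()                        # fork(): clone the calling process
if pid == 0:
    # exec(): same PID, but a brand-new program image (a tiny Python one-liner)
    os.execv(sys.executable, [sys.executable, "-c", "import sys; sys.exit(7)"])
else:
    _, status = os.waitpid(pid, 0)     # wait(): parent sleeps until the child exits
    code = os.WEXITSTATUS(status)      # the status the child passed to exit()
    print(code)                        # 7
```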

Process Synchronization
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization techniques
such as semaphores, monitors, and critical sections are used.
The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization. There are various synchronization mechanisms
that are used to synchronize the processes.
Race Condition
A race condition typically occurs when two or more threads read, write, and possibly make
decisions based on memory that they are accessing concurrently.
A race condition is a situation that may occur inside a critical section: the result of executing
multiple threads in the critical section differs according to the order in which the threads
execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent
race conditions.
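A minimal sketch of preventing a race with a lock: four threads each perform a read-modify-write on a shared counter, and the lock makes every update atomic (without it, concurrent updates could be lost):

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # acquire on entry, release on exit
            counter += 1  # critical section: read-modify-write on shared data

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)            # 400000 on every run: no update is lost
```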

Critical Section
The critical section is a code segment where shared variables can be accessed. Atomic
action is required in a critical section, i.e., only one process can execute in its critical section at
a time. All the other processes have to wait to execute in their critical sections.
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
In the structure above, the entry section handles entry into the critical section: it acquires
the resources needed for execution by the process. The exit section handles the exit from the
critical section: it releases the resources and also informs the other processes that the critical
section is free.
The critical section problem needs a solution to synchronise the different processes. The
solution to the critical section problem must satisfy the following conditions −
1. Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If
any other process requires the critical section, it must wait until it is free.
2. Progress
Progress means that if a process is not using the critical section, then it should not stop any
other process from accessing it. In other words, any process can enter a critical section if it is
free.
3. Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.
One common mechanism for satisfying these conditions is the semaphore. A semaphore is a
signalling mechanism: a thread that is waiting on a semaphore can be signalled by another
thread. This differs from a mutex, which can be released only by the thread that acquired it.

Semaphores
A semaphore is essentially an integer variable. A semaphore cannot be negative; its least value
is zero (0), and its maximum value can be arbitrarily large. Semaphores support two
operations, which together determine the value of the semaphore.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
Wait Semaphore Operation
The wait operation decides whether a process may enter the critical section or must wait for
its turn to execute. The wait operation goes by many different names:
1. Sleep operation
2. Down operation
3. Decrease operation
4. P function (the most common alias for the wait operation)
The wait operation works on the basis of the semaphore (or mutex) value.
Signal Semaphore Operation
The signal operation updates the value of the semaphore: the value is incremented when a
process leaves the critical section, so that waiting processes may enter.
The signal operation is also known as:
1. Wake-up operation
2. Up operation
3. Increase operation
4. V function (the most common alias for the signal operation)
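The two operations can be sketched with Python's threading.Semaphore, where acquire() plays the role of wait()/P and release() the role of signal()/V (an illustrative sketch; the thread names are invented):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: initial value 1
log = []

def worker(name):
    sem.acquire()              # wait()/P: decrement; block if the value is 0
    log.append(f"{name} enters")
    log.append(f"{name} leaves")
    sem.release()              # signal()/V: increment; wake one waiter

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)                     # each "enters" is immediately followed by its own "leaves"
```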

Types of Semaphores

There are two types of semaphores. They are:
1. Binary Semaphore
A binary semaphore can take only two values: 1 and 0.
If the value of the binary semaphore is 1, a process has the capability to enter the critical
section; if the value is 0, it does not.
2. Counting Semaphore
A counting semaphore takes values from two ranges: values greater than or equal to one, and
the value zero.
If the value of the counting semaphore is greater than or equal to 1, a process has the
capability to enter the critical section; if the value is 0, it does not.
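A counting semaphore initialised to 2 can be sketched as follows: at most two of the five threads occupy the guarded region at once (the sleep is only there to make the overlap visible; the counters are an instrumentation device for the example):

```python
import threading
import time

slots = threading.Semaphore(2)   # counting semaphore: up to 2 holders at once
active = 0
peak = 0
guard = threading.Lock()         # protects the two instrumentation counters

def worker():
    global active, peak
    with slots:                  # wait(): blocks once two workers are inside
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # simulate work inside the guarded region
        with guard:
            active -= 1          # signal() happens when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                      # never exceeds 2
```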
Advantages of a Semaphore
o Semaphores are machine independent, since their implementation lives in the
machine-independent code area of the microkernel.
o They strictly enforce mutual exclusion and let processes enter the critical section one at a
time (in the case of binary semaphores).
o With a blocking semaphore implementation, no processor time is wasted on busy
waiting to verify that a condition is met before allowing a process into the critical
section.
o Semaphores allow very good management of resources.
o They forbid multiple processes from entering the critical section at once; because
mutual exclusion is achieved this way, they are significantly more effective than some
other synchronization approaches.
Disadvantages of a Semaphore
o With semaphores, it is possible for high-priority processes to reach the critical section
before low-priority processes.
o Because semaphores are somewhat complex, the wait and signal operations must be
designed in a way that avoids deadlocks.
o Programming with semaphores is very challenging, and there is a danger that mutual
exclusion won't be achieved.
o The wait() and signal() operations must be carried out in the appropriate order to
prevent deadlocks.
Threads
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism; they represent a software approach to improving operating
system performance by reducing overhead, while behaving much like a classical process.
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, its own system registers which hold its current
working variables, and its own stack which contains the execution history.

Components of Thread
A thread has the following three components:
1. Program Counter
2. Register Set
3. Stack space
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
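A minimal sketch of these components in action: each thread below runs its own flow of control through the same program code while sharing the process's data (the results dictionary):

```python
import threading

results = {}

def square(n):
    results[n] = n * n     # every thread writes into the shared dictionary

threads = [threading.Thread(target=square, args=(i,)) for i in range(4)]
for t in threads:
    t.start()              # each thread begins its own flow of execution
for t in threads:
    t.join()               # wait for all threads to finish
print(results)
```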
Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.

User Level Threads

In this case, the kernel is not aware of the existence of threads. Thread management is done
entirely by a thread library in user space, which contains code for creating and destroying
threads, for passing messages and data between threads, for scheduling thread execution, and
for saving and restoring thread contexts. The application starts with a single thread.

Kernel Level Threads


In this case, thread management is done by the Kernel. There is no thread management code in
the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application
are supported within a single process.
The kernel maintains context information for the process as a whole and for individual threads
within the process. Scheduling by the kernel is done on a per-thread basis. The kernel performs
thread creation, scheduling, and management in kernel space. Kernel threads are generally
slower to create and manage than user threads.
Advantages
• The kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Why Multithreading?
In Multithreading, the idea is to divide a single process into multiple threads instead of creating
a whole new process. Multithreading is done to achieve parallelism and to improve the
performance of the applications as it is faster in many ways which were discussed above. The
other advantages of multithreading are mentioned below.
• Resource Sharing: Threads of a single process share the same resources such
as code, data/file.
• Responsiveness: Program responsiveness enables a program to run even if part of the
program is blocked or executing a lengthy operation. Thus, increasing the
responsiveness to the user.
• Economy: It is more economical to use threads as they share the resources of a single
process. On the other hand, creating processes is expensive.
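The resource-sharing point can be sketched directly: a thread's write to process data is visible to the rest of the process, while a child process works on its own copy (POSIX fork semantics assumed):

```python
import threading
from multiprocessing import Process

shared = []

def appender():
    shared.append("seen")

t = threading.Thread(target=appender)
t.start()
t.join()                   # the thread's write lands in this process's memory

p = Process(target=appender)
p.start()
p.join()                   # the child process modified only its own copy

print(shared)              # ['seen']: one entry from the thread, none from the process
```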
