Notes Unit-1 Os
OPERATING SYSTEM
Introduction
An Operating System (OS) is an interface between a computer user and computer hardware.
An operating system is a software which performs all the basic tasks like file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
An operating system is software that enables applications to interact with a computer's
hardware. The software that contains the core components of the operating system is called
the kernel.
Some popular Operating Systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today,
operating systems are found in almost every device: mobile phones, personal computers,
mainframe computers, automobiles, TVs, toys, etc.
Goals of the Operating System
An Operating System has two kinds of goals: a primary goal and a secondary goal.
• Primary Goal: The primary goal of an Operating System is to provide a user-friendly
and convenient environment. Using an Operating System is not compulsory, but without
one the user would have to perform all the process scheduling themselves, and converting
user code into machine code is also very difficult. So we use an Operating System to act
as an intermediary between us and the hardware: all you need to do is give commands to
the Operating System, and it will do the rest for you. The Operating System should
therefore be convenient to use.
• Secondary Goal: The secondary goal of an Operating System is efficiency. The
Operating System should manage all the resources in such a way that they are fully
utilised; no resource should sit idle while a request for it is pending.
Architecture
A generic architecture diagram of an Operating System shows the users at the top, then the
application programs, then the Operating System itself, and finally the hardware at the bottom:
each layer communicates only with the layers adjacent to it.
Operating System Generations
Operating systems have been evolving over the years. We can categorise this evolution into
different generations, as briefly described below:
0th Generation
The term 0th generation refers to the early period of computing, from Charles Babbage's
invention of the Analytical Engine to the electronic computer John Atanasoff created around
1940. The hardware component technology of this period was the electronic vacuum tube.
There was no Operating System available for the computers of this generation, and computer
programs were written in machine language. The computers in this generation were inefficient
and dependent on the varying competencies of individual programmers acting as operators.
First Generation (1951-1956)
The first generation marked the beginning of commercial computing including the introduction
of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.
System operation was performed by expert operators without the benefit of an operating
system for a time, though programs began to be written in higher-level, procedure-oriented
languages, which expanded the operator's routine. Later, mono-programmed operating systems
were developed, which eliminated some of the human intervention in running a job and
provided programmers with a number of desirable functions. These systems still operated
under the control of a human operator, who followed a number of steps to execute a program.
Programming languages like FORTRAN were developed by John W. Backus in 1956.
Second Generation (1956-1964)
The second generation of computer hardware was most notably characterised by transistors
replacing vacuum tubes as the hardware component technology. The first operating system,
GMOS, was developed by General Motors for IBM computers. GMOS was a single-stream
batch processing system: it collected similar jobs into groups or batches and submitted them
to the computer on punched cards so that all the jobs in a batch ran on the machine one after
another. The operating system cleaned up after completing one job and then read and initiated
the next job from the punched cards.
Researchers began to experiment with multiprogramming and multiprocessing in their
computing services called the time-sharing system. A noteworthy example is the Compatible
Time Sharing System (CTSS), developed at MIT during the early 1960s.
Third Generation (1964-1979)
The third generation officially began in April 1964 with IBM’s announcement of its
System/360 family of computers. Hardware technology began to use integrated circuits (ICs)
which yielded significant advantages in both speed and economy.
Operating system development continued with the introduction and widespread adoption of
multiprogramming. The idea of taking fuller advantage of the computer’s data channel I/O
capabilities continued to develop.
Another development, which led to the personal computers of the fourth generation, was the
minicomputer, beginning with the DEC PDP-1. The third generation was an exciting time,
indeed, for the development of both computer hardware and the accompanying operating
systems.
Fourth Generation (1979 – Present)
The fourth generation is characterised by the appearance of the personal computer and the
workstation. The component technology of the third generation was replaced by very large
scale integration (VLSI). Many Operating Systems which we use today, like Windows,
Linux, MacOS, etc., were developed in the fourth generation.
What is a kernel?
At their core, operating systems are built around the kernel. The kernel is the conduit between
applications and the hardware (CPU, disk, memory, etc.). As the operating system's core
component, the kernel controls communication between user-level programs and the hardware
connected to the system. In other words, it is a platform that provides a certain collection of
libraries and interfaces for newly created applications and facilitates communication between
them.
Types of Kernels:
1. Monolithic Kernel
In a monolithic kernel, all operating system services run in kernel space. The parts of the
system are interdependent. It is large, with many lines of code, and is not easy to maintain.
2. Micro Kernel
This type of kernel takes a minimal approach: only basic services such as thread scheduling
and virtual memory management run in kernel space. With fewer services running in kernel
space, it is more stable; the remaining services are moved out into user space.
3. The Hybrid Kernel
A hybrid kernel combines monolithic kernels and microkernels. It has the modularity and
stability of a microkernel together with the speed and simpler design of a monolithic kernel.
4. The Exo Kernel
An exokernel adheres to the end-to-end principle. It contains as few hardware abstractions as
is practical, and applications are given physical resources to use directly.
5. The Nano Kernel
This particular kernel lacks system services but provides hardware abstraction. The nano
kernel is comparable to the micro kernel in that both provide very few system services.
What is a Shell?
This is the user interface that interacts with the kernel, which in turn, interacts with the
underlying hardware. It is a command line interface (CLI) or a graphical user interface (GUI)
through which users can communicate with the computer and execute various commands and
programs. The shell interprets commands entered by the user and sends instructions to the OS
to perform tasks. It provides features like scripting, exploring and writing to a file system,
automation and process management.
Types of shell:
1. The Bourne Shell –
The first shell, created by Steve Bourne at AT&T Bell Labs, is known as sh. It is the favored
shell for shell programming due to its speed and compactness. One flaw of the Bourne shell
is that it lacks interactive capabilities, such as the ability to recall past commands (history).
The Bourne shell also lacks built-in expression handling for math and logic.
2. The C Shell –
The C shell (csh) incorporated interactive features like command history and aliases, and
practical programming tools, including built-in math and an expression syntax similar to C.
Its full path name is /bin/csh. The default prompt for non-root users is hostname%, and
hostname# is the standard prompt for the root user.
3. The Korn Shell
The Korn shell is a superset of the Bourne shell: everything in the Bourne shell is supported.
It has interactive features similar to the C shell's, and it includes useful programming features
such as built-in arithmetic, arrays, functions, and string-manipulation facilities reminiscent of
those found in C. It outperforms the C shell in speed, and scripts written for the Bourne shell
also execute under it.
Difference between Linux and Windows
S.NO  Linux                                          Windows
6.    A forward slash is used for separating         A back slash is used for separating
      the directories.                               the directories.
8.    Linux is widely used in hacking-purpose-       Windows does not provide much
      based systems.                                 efficiency in hacking.
11.   The Linux file naming convention is case-      In Windows, you cannot have 2 files
      sensitive; thus, sample and SAMPLE are 2       with the same name in the same
      different files in Linux/Unix.                 folder.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle
time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation: if one job in a batch takes a very long time, the jobs
queued behind it must wait for it to finish.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept busy.
Each process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, The CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.
Advantages of Multiprogramming OS
o Throughput of the system is increased, as the CPU almost always has a program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems
resources are used efficiently, but they do not provide any user interaction with the
computer system.
Multiprocessing Operating System
In Multiprocessing, parallel computing is achieved. More than one processor present in the
system can execute more than one process simultaneously, which increases the throughput
of the system.
Advantages of Multiprocessing operating system:
o Increased reliability: In a multiprocessing system, processing tasks can be distributed
among several processors. This increases reliability: if one processor fails, the task can
be given to another processor for completion.
o Increased throughput: With several processors, more work can be done in less time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and sophisticated as it takes care of
multiple CPUs simultaneously.
Multitasking Operating System
The multitasking operating system is a logical extension of a multiprogramming system that
enables multiple programs to run simultaneously. It allows a user to perform more than one
computer task at the same time.
Advantages of Multitasking operating system
o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.
Disadvantages of Multitasking operating system
o The processor is kept busier completing tasks in a multitasking environment, so the
CPU generates more heat.
Network Operating System
An Operating system, which includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively, is called Network Operating
System.
Advantages of Network Operating System
o In this type of operating system, network traffic reduces due to the division between
clients and the server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system affects the whole
system.
o Security and performance are important issues. So trained network administrators are
required for network administration.
Real-Time Operating System
In Real-Time Systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or even if the result is produced, it will be
completely useless.
Real-Time systems are applied, for example, in military settings: if a missile is to be dropped,
it must be dropped with a certain precision.
The various examples of Real-time operating systems are:
o MTS
o Lynx
o QNX
o VxWorks etc.
1. Hard Real-Time operating system:
In Hard RTOS, all critical tasks must be completed within the specified time duration, i.e.,
within the given deadline. Not meeting the deadline would result in critical failures such as
damage to equipment or even loss of human life.
For Example,
Let's take the example of the airbags provided by carmakers in the steering wheel of the
driver's seat. When the driver brakes hard at a particular instant, the airbags inflate and
prevent the driver's head from hitting the steering wheel. Had there been a delay of even
milliseconds, it would have resulted in an accident.
Similarly, consider online stock-trading software. If someone wants to sell a particular share,
the system must ensure that the command is performed within a given critical time; otherwise,
if the market falls abruptly, the trader may suffer a huge loss.
2. Soft Real-Time operating system:
A Soft RTOS tolerates a few delays from the Operating system. In this kind of RTOS, a
deadline is assigned to a particular job, but a delay of a small amount of time is acceptable.
So, deadlines are handled softly by this kind of RTOS.
For Example,
This type of system is used in Online Transaction systems and Livestock price quotation
Systems.
3. Firm Real-Time operating system:
A Firm RTOS also needs to observe deadlines. However, missing a deadline may not have a
massive effect, but it can cause undesired results, such as a massive reduction in the quality
of a product.
For Example, this system is used in various forms of Multimedia applications.
Time-Sharing Operating System
In the Time-Sharing operating system, computer resources are allocated in a time-dependent
fashion to several programs simultaneously. It thus gives a large number of users direct
access to the main computer. It is a logical extension of multiprogramming: in time-sharing,
the CPU is switched among multiple programs given by different users on a scheduled basis.
Distributed Operating System
The Distributed Operating system is not installed on a single machine; it is divided into parts,
and these parts are loaded onto different machines. A part of the distributed Operating system
is installed on each machine to make communication between them possible. Distributed
Operating systems are much more complex, large, and sophisticated than Network operating
systems because they also have to take care of varying networking protocols.
Processes
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion. When a program is loaded into memory, it becomes a process. A process
is an 'active' entity, as opposed to the program, which is considered a 'passive' entity.
Attributes held by the process include hardware state, memory, CPU time, etc.
States of a Process in Operating Systems
In an operating system, a process is a program that is being executed. During its execution, a
process goes through different states. Understanding these states helps us see how the operating
system manages processes, ensuring that the computer runs efficiently.
There are a minimum of five states. Although a process must be in one of these states during
execution, the names of the states are not standardized. Each process goes through several
stages throughout its life cycle, which we discuss in detail below.
Categories of Scheduling
Scheduling falls into one of two categories:
• Non-Preemptive: In this case, a process’s resource cannot be taken before the process
has finished running. When a running process finishes and transitions to a waiting state,
resources are switched.
• Preemptive: In this case, the OS assigns resources to a process for a predetermined
period. The process switches from running state to ready state or from waiting state to
ready state during resource allocation. This switching happens because the CPU may
give other processes priority and substitute the currently active process for the higher
priority process.
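The preemptive case above can be sketched as a toy round-robin simulation: each process gets a fixed time quantum of CPU, and if it is not finished, it is preempted and moved to the back of the ready queue. This is a simplified model written for these notes, not an actual scheduler.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes complete under round-robin.

    bursts: dict mapping process name -> remaining CPU time needed.
    quantum: CPU time each process gets before being preempted.
    """
    queue = deque(bursts.items())   # ready queue of (name, remaining) pairs
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            order.append(name)                      # process finishes
        else:
            queue.append((name, remaining - quantum))  # preempted, re-queued
    return order

print(round_robin({"P1": 4, "P2": 2, "P3": 6}, quantum=2))  # ['P2', 'P1', 'P3']
```

P2 finishes first because its whole burst fits in one quantum, while P1 and P3 are preempted and must wait for another turn.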
Context Switching
Context switching is the mechanism that stores and restores the state, or context, of a CPU in
the Process Control Block (PCB), so that a process's execution can be resumed from the same
point at a later time. Context switching makes it possible for multiple processes to share a
single CPU, and it is a required feature of any multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the
currently running process is saved into its process control block. The state used to set up the
computer (program counter, registers, etc.) for the process that will run next is then loaded
from that process's own PCB. After that, the second process can start executing.
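The save/restore step can be illustrated with a toy model, where the "CPU state" is just a dictionary and the PCB is another dictionary it is copied into; all names here are invented for illustration.

```python
# toy model of a context switch: CPU "registers" saved to / restored from a PCB

def save_context(cpu, pcb):
    pcb.update(cpu)        # copy the current CPU state into the process's PCB

def restore_context(cpu, pcb):
    cpu.update(pcb)        # reload the saved state so execution can resume

cpu = {"pc": 100, "registers": (1, 2)}   # state of process A, currently on the CPU
pcb_a = {}

save_context(cpu, pcb_a)                 # scheduler switches A out
cpu = {"pc": 500, "registers": (9, 9)}   # process B now owns the CPU
restore_context(cpu, pcb_a)              # later, A is switched back in
print(cpu["pc"])  # 100
```

After the restore, the CPU holds exactly the state process A had when it was switched out, so A continues from the same point.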
Interprocess Communication
Interprocess communication (IPC) is a mechanism that allows different processes of a
computer system to share information. IPC lets different programs run in parallel, share data,
and communicate with each other. It is important for two reasons: first, it speeds up the
execution of tasks, and second, it ensures that the tasks run correctly and in the order in which
they were issued.
Types of Processes:
A Cooperating Process in the operating system is a process that is affected by, or can affect,
other processes under execution. It shares data with other processes in the system either by
directly sharing a logical space (both code and data) or by sharing data through files or
messages.
Whereas, an independent process in an operating system is one that does not affect or impact
any other process of the system. It does not share any data with other processes.
Why Interprocess Communication is Necessary
IPC lets different programs run in parallel, share data, and communicate with each other. It’s
important for two reasons:
• It speeds up the execution of tasks.
• It ensures that the tasks run correctly and in the order in which they were issued.
• IPC is essential for the efficient operation of an operating system.
• Operating systems use IPC to exchange data with tools and components that the system
uses to interact with the user, such as the keyboard, the mouse, and the graphical user
interface (GUI).
• IPC also lets the system run multiple programs at the same time. For example, the
system might use IPC to provide information to the windowing system about the status
of a window on the screen.
Advantages of Interprocess Communication
• Interprocess communication allows one application to manage another and enables
glitch-free data sharing.
• Interprocess communication helps send messages efficiently between processes.
• The program is easy to maintain and debug because it is divided into different sections
of code that work separately.
• Programmers can perform a variety of other tasks at the same time, including Editing,
listening to music, compiling, etc.
• Data can be shared between different programs at the same time.
• Tasks can be subdivided and run on special types of processors. You can then exchange
data via IPC.
Disadvantages of Interprocess Communication
• Processes or programs that use the shared memory model must make sure that they are
not writing to the same memory locations.
• The shared storage model can cause problems such as storage synchronization and
protection that need to be addressed.
• It’s slower than a direct function call.
Methods of Cooperating Process in OS
Cooperating processes in OS require a communication method that allows the processes to
exchange data and information.
There are two methods by which the cooperating process in OS can communicate:
• Cooperation by Memory Sharing
• Cooperation by Message Passing
Details about the methods are given below:
Cooperation by Sharing
The cooperation processes in OS can communicate with each other using the shared resource
which includes data, memory, variables, files, etc.
Processes can then exchange the information by reading or writing data to the shared region.
We can use a critical section that provides data integrity and avoids data inconsistency.
Types of System Calls
Process Control
Process control system calls are used to direct processes. Examples include creating a
process, loading, aborting, ending, executing, and terminating a process.
File Management
File management system calls are used to handle files. Examples include creating, deleting,
opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Examples include reading
from and writing to a device, getting device attributes, releasing a device, etc.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include
getting or setting system data and getting or setting the time or date.
Communication
Communication system calls are used for interprocess communication. Examples include
creating and deleting communication connections and sending and receiving messages.
Examples of Windows and Unix system calls
There are various examples of Windows and Unix system calls. These are as listed below in
the table:
Category                  Windows                           Unix
Process Control           CreateProcess()                   fork()
                          ExitProcess()                     exit()
                          WaitForSingleObject()             wait()
File Manipulation         CreateFile()                      open()
                          ReadFile()                        read()
                          WriteFile()                       write()
                          CloseHandle()                     close()
Device Management         SetConsoleMode()                  ioctl()
                          ReadConsole()                     read()
                          WriteConsole()                    write()
Information Maintenance   GetCurrentProcessID()             getpid()
                          SetTimer()                        alarm()
                          Sleep()                           sleep()
Communication             CreatePipe()                      pipe()
                          CreateFileMapping()               shmget()
                          MapViewOfFile()                   mmap()
Protection                SetFileSecurity()                 chmod()
                          InitializeSecurityDescriptor()    umask()
                          SetSecurityDescriptorGroup()      chown()
open()
The open() system call allows a process to access a file on a file system. It allocates resources
to the file and provides a handle (file descriptor) that the process can refer to. A file may be
opened by many processes at once or restricted to a single process, depending on the file
system and its structure.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file to be read is identified by its file descriptor, obtained by opening the file
with open() before reading.
wait()
In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process creates a child process, the parent's execution can
be suspended until the child process finishes. The wait() system call is used to suspend the
parent process; once the child process has completed its execution, control is returned to the
parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way for
a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes create clones of themselves using the fork() system call; it is one of the most
common ways to create processes in operating systems. After fork(), the parent and child
execute concurrently. The parent can use wait() to suspend its own execution until the child
completes, at which point control returns to the parent process.
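The fork/wait pair can be sketched in Python, whose os module wraps the same POSIX system calls (so this runs only on Unix-like systems). The exit status 7 is an arbitrary value chosen for the example.

```python
import os

pid = os.fork()                        # clone the calling process
if pid == 0:
    # child branch: fork() returned 0
    os._exit(7)                        # child terminates with status 7
else:
    # parent branch: fork() returned the child's pid
    _, status = os.waitpid(pid, 0)     # suspend until the child finishes
    exit_code = os.WEXITSTATUS(status) # extract the child's exit status
    print(exit_code)  # 7
```

The single fork() call returns twice: once in the parent (with the child's pid) and once in the child (with 0), which is how the two branches tell themselves apart.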
close()
It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file, and the buffers are flushed, the file information is altered,
and the file resources are de-allocated as a result.
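The open/write/read/close lifecycle described above can be sketched with Python's os module, which exposes these calls directly on POSIX systems; the file name and contents are invented for the example.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "oswrite_demo.txt")

# open() with create/write flags returns a file descriptor (the "handle")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello, system calls")   # write(): fd, buffer of bytes
os.close(fd)                           # close(): flush and release resources

fd = os.open(path, os.O_RDONLY)        # reopen for reading
data = os.read(fd, 64)                 # read(): fd, max number of bytes
os.close(fd)
os.remove(path)                        # clean up the demo file
print(data)  # b'hello, system calls'
```

Note how read() takes exactly the three arguments listed earlier: a file descriptor, a destination for the data, and a byte count (here the count is the second argument and Python returns the buffer).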
exec()
This system call is invoked when an executable file replaces the earlier executable file in an
already executing process. No new process is built; the old process identification stays, but
the new program replaces the process's code, data, stack, heap, etc.
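A common pattern combines fork() and exec(): the parent forks a child, and the child replaces its own image with a new program. The sketch below (Python's os module, POSIX only) routes the child's output through a pipe so the parent can observe it; the use of echo and the word "replaced" are choices made for this example.

```python
import os

r, w = os.pipe()                 # pipe: parent reads from r, child writes to w
pid = os.fork()
if pid == 0:
    # child: redirect stdout into the pipe, then replace this process image
    os.close(r)
    os.dup2(w, 1)                        # fd 1 (stdout) now points at the pipe
    os.execvp("echo", ["echo", "replaced"])  # same pid, new program
    os._exit(1)                          # only reached if exec fails
else:
    os.close(w)
    output = os.read(r, 64).decode().strip()  # read what the new program printed
    os.close(r)
    os.waitpid(pid, 0)                   # reap the child
    print(output)  # replaced
```

After execvp() succeeds, nothing of the original child code remains; the pid is unchanged, but echo's code, data, stack, and heap occupy the process.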
exit()
The exit() system call is used to end program execution. In multi-threaded environments, this
call indicates that thread execution is complete. The operating system reclaims the resources
used by the process after the exit() system call.
Process Synchronization
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization techniques
such as semaphores, monitors, and critical sections are used.
The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization. There are various synchronization mechanisms
that are used to synchronize the processes.
Race Condition
A Race Condition typically occurs when two or more threads read, write, and possibly make
decisions based on memory that they are accessing concurrently.
A race condition is a situation that may occur inside a critical section. This happens when the
result of multiple thread execution in critical section differs according to the order in which the
threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent
race conditions.
Critical Section
The critical section is a code segment where shared variables can be accessed. Atomic action
is required in a critical section, i.e., only one process can execute in its critical section at a
time; all the other processes have to wait to execute their critical sections.
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
In the above structure, the entry section handles entry into the critical section: it acquires the
resources needed for execution by the process. The exit section handles the exit from the
critical section: it releases the resources and also informs the other processes that the critical
section is free.
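The entry/critical/exit structure above can be sketched in Python with a lock from the standard threading module; the counter and thread counts are invented for the example.

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10000):
        lock.acquire()      # entry section: wait until the lock is free
        counter += 1        # critical section: access the shared variable
        lock.release()      # exit section: let another thread enter
        # remainder section: non-shared work would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Because only one thread can hold the lock at a time, the 40000 increments never interleave mid-update and none are lost.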
The critical section problem needs a solution to synchronise the different processes. The
solution to the critical section problem must satisfy the following conditions −
1.Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If
any other processes require the critical section, they must wait until it is free.
2.Progress
Progress means that if a process is not using the critical section, then it should not stop any
other process from accessing it. In other words, any process can enter a critical section if it is
free.
3.Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.
Semaphores
A semaphore is a signalling mechanism: a thread that is waiting on a semaphore can be
signalled by another thread. This differs from a mutex, which can be signalled (released) only
by the thread that called the wait function. The semaphore itself is just a normal integer that
cannot be negative: the least value for a semaphore is zero (0), while the maximum value can
be anything. Semaphores usually have two operations, which together decide the value of the
semaphore.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
Wait Semaphore Operation
The Wait operation decides whether a process may enter the critical section or must wait for
execution. The wait operation has many different names:
1. Sleep Operation
2. Down Operation
3. Decrease Operation
4. P Function (most important alias name for wait operation)
The Wait Operation works on the basis of Semaphore or Mutex Value.
Signal Semaphore Operation
The Signal Semaphore Operation is used to update the value of Semaphore. The Semaphore
value is updated when the new processes are ready to enter the Critical Section.
The Signal Operation is also known as:
1. Wake up Operation
2. Up Operation
3. Increase Operation
4. V Function (most important alias name for signal operation)
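The Wait (P) and Signal (V) operations map directly onto acquire() and release() in Python's threading.Semaphore, as in this small producer/consumer sketch written for these notes.

```python
import threading

sem = threading.Semaphore(0)   # counting semaphore, initial value 0
items = []
results = []

def producer():
    items.append("data")       # make an item available
    sem.release()              # Signal / V: increment, wake a waiter

def consumer():
    sem.acquire()              # Wait / P: block until value > 0, then decrement
    results.append(items.pop())

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
print(results)  # ['data']
```

Even though the consumer may start first, its Wait operation blocks on the zero-valued semaphore until the producer's Signal, so the item is always produced before it is consumed.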
Types of Semaphores
There are mainly two types of semaphores: counting semaphores, whose integer value can
range over an unrestricted domain, and binary semaphores, whose value can be only 0 or 1.
Components of Thread
A thread has the following three components:
1. Program Counter
2. Register Set
3. Stack space
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing messages and data
between threads, for scheduling thread execution, and for saving and restoring thread contexts.
The application starts with a single thread.
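Thread creation and the single-starting-thread model can be sketched with Python's threading module (which, strictly speaking, creates kernel-level threads, so treat this as an illustration of the thread API rather than of user-level threads specifically); the task names are invented.

```python
import threading

def task(name, results):
    results.append(name + " finished")   # work done by each spawned thread

results = []
# the main (initial) thread spawns three more threads
threads = [threading.Thread(target=task, args=("T" + str(i), results))
           for i in range(3)]
for t in threads:
    t.start()    # begin concurrent execution
for t in threads:
    t.join()     # main thread waits for each to complete
```

All three threads share the process's memory (the results list), which is exactly the cheap intra-process communication the advantages list above refers to.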