
INTRODUCTION TO OPERATING SYSTEM


An operating system (OS) is system software that manages computer hardware
and software resources and provides common services for computer programs.
All computer programs, excluding firmware, require an operating system to
function.

An operating system is a program that acts as an interface between the user and
the computer hardware and controls the execution of all kinds of application
programs and supporting system software (i.e. utilities).

Basic importance of the operating system


1) The operating system behaves as a resource manager. It utilizes the computer in a
cost-effective manner. It keeps account of different jobs and the whereabouts
of their results and their locations in memory.
2) The operating system schedules jobs according to their priority, passing control
from one program to the next. The overall function of job control is
especially important when there are several users (a multi-user environment).
3) The operating system makes a communication link between the user and the system
and helps the user to run application programs properly and get the required output.
4) The operating system has the ability to fetch programs into memory when
required; the entire operating system does not have to be loaded into memory at the
same time, giving the user the space to work in the required package
more conveniently and easily.
5) The operating system helps the user in file management. Making directories and
saving files in them is a very important feature of the operating system for
organizing data according to the needs of the user.
6) Multiprogramming is a very important feature of the operating system. It
schedules and controls the running of several programs at once.
7) It provides program editors that help the user to modify and update
program lines.
8) Debugging aids provided by the operating system help the user to detect and
remove errors in programs.
9) The disk maintenance ability of the operating system checks the validity of data
stored on diskettes and other storage media and makes corrections to erroneous data.

Operating System Terminology


Processes: A process is an instance of a program running in a computer.
A program in execution is called a process. A process is not the same as a
program: a process is more than the program code (program code + data +
execution status).

Files: A collection of data or information that has a name, called the filename.
Almost all information stored in a computer must be in a file. There are many
different types of files: data files, text files, program files, directory files, and so
on. Different types of files store different types of information. For example,
program files store programs, whereas text files store text.
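The file operations described above can be sketched with Python's built-in file interface (the filename `notes.txt` is purely illustrative):

```python
# A minimal sketch of creating, writing, and reading back a text file.
# Under the hood, the operating system resolves the filename and performs
# the actual I/O on our behalf.

# Create a text file and write two lines into it.
with open("notes.txt", "w") as f:
    f.write("first line\n")
    f.write("second line\n")

# Read the file back by name; the OS locates the stored data for us.
with open("notes.txt") as f:
    lines = f.readlines()

print(lines)  # ['first line\n', 'second line\n']
```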

A system call is a way for programs to interact with the operating system. A
computer program makes a system call when it makes a request to the operating
system's kernel. System calls are used for hardware services, to create or execute
a process, and for communicating with kernel services, including application and
process scheduling.
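As a rough illustration, Python's `os` module exposes thin wrappers around several common system calls (the filename used below is an illustration, not part of any real system):

```python
import os

# Each call below is a thin wrapper around a kernel system call.
pid = os.getpid()   # ask the kernel for this process's ID
cwd = os.getcwd()   # ask the kernel for the current working directory

# Low-level file I/O maps directly onto the open/write/close system calls.
fd = os.open("syscall_demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
written = os.write(fd, b"written via a system call\n")
os.close(fd)

print(pid, cwd, written)
```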

Shell and Kernel


A shell is a software interface, often a command-line interface, that
enables the user to interact with the computer. Some examples of shells are
the MS-DOS Shell, command.com, csh, ksh, and sh. A kernel is the first
section of the operating system to load into memory. As the center of the
operating system, the kernel needs to be small, efficient, and loaded into a
protected area of memory so as not to be overwritten. It is
responsible for such things as disk drive management, interrupt handling, file
management, memory management, process management, etc.
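The relationship between a shell and the programs it runs can be demonstrated by invoking a shell from another program; this sketch assumes a POSIX `sh` is available on the system:

```python
import subprocess

# A shell is itself just a program: it reads a command line and asks the
# kernel to run the corresponding program. Here we start a shell ("sh"),
# hand it one command, and capture what it prints.
result = subprocess.run(
    ["sh", "-c", "echo hello from the shell"],
    capture_output=True,
    text=True,
)

print(result.stdout)
```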
Virtual Machines: A virtual machine (VM) is a software program or operating
system that not only exhibits the behavior of a separate computer, but is also
capable of performing tasks such as running applications and programs like a
separate computer. A virtual machine, usually known as a guest is created within
another computing environment referred as a "host." Multiple virtual machines
can exist within a single host at one time.

The History of Operating Systems


The first operating system was created by General Motors in 1956 to run a single
IBM mainframe computer. Other IBM mainframe owners followed suit and
created their own operating systems. As you can imagine, the earliest operating
systems varied wildly from one computer to the next, and while they did make it
easier to write programs, they did not allow programs to be used on more than
one mainframe without a complete rewrite.

In the 1960s, IBM was the first computer manufacturer to take on the task of
operating system development and began distributing operating systems with
their computers. However, IBM wasn't the only vendor creating operating
systems during this time. Control Data Corporation, Computer Sciences
Corporation, Burroughs Corporation, GE, Digital Equipment Corporation, and
Xerox all released mainframe operating systems in the 1960s as well.

In the late 1960s, the first version of the Unix operating system was developed.
Later rewritten in C, and freely available during its earliest years, Unix was easily
ported to new systems and rapidly achieved broad acceptance. Many modern
operating systems, including Apple OS X and all Linux flavors, trace their roots
back to Unix.

Microsoft Windows was developed in response to a request from IBM for an
operating system to run its range of personal computers. The first OS built by
Microsoft wasn't called Windows; it was called MS-DOS, and it was built in 1981
by purchasing the 86-DOS operating system from Seattle Computer Products and
modifying it to meet IBM's requirements. The name Windows was first used in
1985 when a graphical user interface was created and paired with MS-DOS.

Apple OS X, Microsoft Windows, and the various forms of Linux (including
Android) now command the vast majority of the modern operating system
market.

The First Generation (1940's to early 1950's)


When electronic computers were first introduced in the 1940's, they were
created without any operating systems. All programming was done in absolute
machine language, often by wiring up plug boards to control the machine's basic
functions. During this generation computers were generally used to solve simple
math calculations, so operating systems were not necessarily needed.

The Second Generation (1955-1965)


The first operating system was introduced in the early 1950's. It was called
GMOS and was created by General Motors for IBM's 701 machine.
Operating systems in the 1950's were called single-stream batch processing
systems because the data was submitted in groups. These new machines were
called mainframes, and they were used by professional operators in large
computer rooms. Since there was such a high price tag on these machines, only
government agencies or large corporations were able to afford them.
The Third Generation (1965-1980)
By the late 1960's, operating system designers were able to develop the technique
of multiprogramming, in which a computer could work on several jobs at
the same time. The introduction of multiprogramming was a
major step in the development of operating systems because it allowed a CPU to
be busy nearly 100 percent of the time that it was in operation. Another major
development during the third generation was the phenomenal growth of
minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K
of 18-bit words, but at $120,000 per machine (less than 5 percent of the price of a
7094), it sold like hotcakes. These minicomputers helped create a whole new
industry and the development of more PDPs. The PDPs helped lead to the
creation of the personal computers of the fourth generation.
The Fourth Generation (1980-Present Day)
The fourth generation of operating systems saw the creation of personal
computing. Although these computers were very similar to the minicomputers
developed in the third generation, personal computers cost a very small fraction
of what minicomputers cost. A personal computer was so affordable that it
became possible for a single individual to own one for personal use, while
minicomputers were still at such a high price that only corporations could
afford them. One of the major factors in the creation of personal
computing was the birth of Microsoft and the Windows operating system.
Microsoft was founded in 1975, when Paul Allen and Bill Gates
had a vision to take personal computing to the next level. They introduced
MS-DOS in 1981; although it was effective, it created much difficulty for people
who tried to understand its cryptic commands. Windows went on to become the
most widely used operating system in technology today with releases of Windows 95,
Windows 98, Windows XP (at the time the most used operating system),
and their newest operating system, Windows 7. Along with
Microsoft, Apple created the other major operating system of the 1980's. Steve
Jobs, co-founder of Apple, created the Apple Macintosh, which was a huge
success due to the fact that it was so user friendly. Windows development
throughout the later years was influenced by the Macintosh, and this created a
strong competition between the two companies. Today all of our electronic
devices run off of operating systems, from our computers and smartphones to
ATM machines and motor vehicles. And as technology advances, so do
operating systems.

Operating System Structure


An operating system might have any of several structures, and operating
systems can be classified into categories according to their structure.
Some of the main structures used in operating systems are:

1. Monolithic architecture of operating system

It is the oldest architecture used for developing operating systems: the entire
operating system resides in the kernel. A system call begins with event 1,
switching from user mode to kernel mode and transferring control to the operating
system. Many CPUs have two modes: kernel mode, for the
operating system, in which all instructions are allowed, and user mode, for user
programs, in which I/O devices and certain other instructions are not allowed. The
operating system then examines the parameters of the call to determine which
system call is to be carried out, shown as event 2. Next, the operating system
indexes into a table that contains the procedures that carry out the system calls;
this operation is shown as event 3. Finally, when the work has been
completed and the system call is finished, control is given back to user mode,
as shown in event 4.

2. Layered Architecture of operating system
The layered architecture of operating systems was developed in the 60's. In this
approach, the operating system is broken up into a number of layers. The bottom
layer (layer 0) is the hardware layer and the highest layer (layer n) is the user
interface layer, as shown in the figure.

The layers are selected such that each uses the functions and services of only
lower-level layers. The first layer can be debugged without any concern for the rest
of the system, since it uses only the basic hardware to implement its functions. Once
the first layer is debugged, its correct functioning can be assumed while the second
layer is debugged, and so on. If an error is found during the debugging of a particular
layer, the error must be on that layer, because the layers below it are already
debugged. Because of this, the design of the system is simplified when the operating
system is broken up into layers. The OS/2 operating system is an example of the
layered architecture; another example is the earlier version of Windows NT.
The main disadvantage of this architecture is that it requires an appropriate
definition of the various layers and careful planning of the proper placement of
each layer.

3. Virtual machine architecture of operating system

A virtual machine is an illusion of a real machine. It is created by a real machine's
operating system, which makes a single real machine appear to be several real
machines.

The best example of virtual machine architecture is the IBM 370 computer. In this
system each user can choose a different operating system. Actually, a virtual
machine system can run several operating systems at once, each of them on its own
virtual machine. Its multiprogramming shares the resources of a single machine in a
different manner.
The components of the virtual machine system are:
a) Control Program (CP): CP creates the environment in which virtual machines
can execute. It gives each user the facilities of a real machine, such as a processor,
storage, and I/O devices.
b) Conversational Monitor System (CMS): CMS is a system application that
supports program development. It contains an editor, language translators, and
various application packages.
c) Remote Spooling Communication System (RSCS): provides virtual machines
with the ability to transmit and receive files in a distributed system.
d) Interactive Problem Control System (IPCS): it is used to fix virtual
machine software problems.

4. Client/server architecture of operating system


A trend in modern operating systems is to move as much code as possible into the
higher levels and remove it from the operating system, minimizing the
work of the kernel. The basic approach is to implement most of the operating
system's functions in user processes. To request a service, such as reading a
particular file, the user sends a request to the server process; the server checks
whether the parameters are valid, after which the server does the work and
sends back the answer to the client. The client-server model works on a
request-response technique: the client sends a request to the server side in order
to perform a task, and the server, on completing that request, sends back a
response. The figure below shows the client-server architecture.

Client    Client    Process   Terminal   ...   File      Memory
process   process   server    server           server    server
----------------------------------------------------------------
                            Kernel

Fig: The client-server model

In this model, the main task of the kernel is to handle all the communication
between the clients and the servers, with the operating system split into a number
of parts, each of which handles only some specific task, e.g. the file server, process
server, terminal server, and memory server.

Another advantage of the client-server model is its adaptability for use in
distributed systems. When the client communicates with the server by sending it a
message, the client need not know whether the message is handled locally or
sent across a network to a server on a remote machine. In either case, from the
client's point of view the same thing happens: a request was sent and a reply came back.
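The request-response flow described above can be sketched with two threads and message queues standing in for the kernel's message passing; the file server and its "disk" contents here are purely illustrative:

```python
import queue
import threading

# Toy client-server sketch: the "kernel" role is played by two message
# queues; the file-server thread receives requests and sends back replies.

requests = queue.Queue()   # client -> server messages
replies = queue.Queue()    # server -> client messages

FILES = {"readme.txt": "hello"}  # the file server's pretend disk

def file_server():
    while True:
        req = requests.get()          # wait for the next request message
        if req is None:               # shutdown signal
            break
        op, name = req
        if op == "read":
            replies.put(FILES.get(name, "ERROR: no such file"))

server = threading.Thread(target=file_server)
server.start()

# Client side: send a request message and wait for the response. The
# client does not care whether the server is local or on a remote machine.
requests.put(("read", "readme.txt"))
answer = replies.get()

requests.put(None)                    # tell the server to shut down
server.join()

print(answer)  # hello
```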

Operating System ─ Types


Batch Operating System
The users of a batch operating system do not interact with the computer
directly. Each user prepares his job on an off-line device like punch cards and
submits it to the computer operator. To speed up processing, jobs with
similar needs are batched together and run as a group. The programmers
leave their programs with the operator and the operator then sorts the
programs with similar requirements into batches.

The problems with Batch Systems are as follows:


• Lack of interaction between the user and the job.
• The CPU is often idle, because the speed of the mechanical I/O devices is
slower than that of the CPU.
• It is difficult to provide the desired priority.

Time-sharing Operating Systems


Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time. Time-sharing or
multitasking is a logical extension of multiprogramming. Processor's time which
is shared among multiple users simultaneously is termed as time-sharing.

The main difference between multiprogrammed batch systems and time-sharing
systems is that in multiprogrammed batch systems, the objective
is to maximize processor use, whereas in time-sharing systems, the objective is
to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the
switches occur so frequently that the user can receive an immediate response.
For example, in transaction processing, the processor executes each user
program in a short burst or quantum of computation. That is, if n users are
present, then each user can get a time quantum. When the user submits a
command, the response time is a few seconds at most.
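The quantum idea above can be simulated in a few lines; the job names and durations below are invented for illustration:

```python
from collections import deque

# Minimal round-robin time-sharing simulation: each job gets a fixed
# quantum of CPU time per turn, then goes to the back of the queue, so
# short jobs finish quickly and every job makes steady progress.

QUANTUM = 2  # time units of CPU per turn
jobs = deque([("A", 5), ("B", 3), ("C", 1)])  # (name, remaining time)

finish_order = []
clock = 0
while jobs:
    name, remaining = jobs.popleft()
    run = min(QUANTUM, remaining)   # run for one quantum (or less)
    clock += run
    remaining -= run
    if remaining == 0:
        finish_order.append((name, clock))  # job done at this clock time
    else:
        jobs.append((name, remaining))      # back of the queue

print(finish_order)  # [('C', 5), ('B', 8), ('A', 9)]
```

Note that the short job C finishes at time 5 even though the longer job A was submitted first, which is exactly the response-time benefit time-sharing aims for.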

The operating system uses CPU scheduling and multiprogramming to provide
each user with a small portion of its time. Computer systems that were designed
primarily as batch systems have been modified to time-sharing systems.

Advantages of time-sharing operating systems are as follows:

• Provides the advantage of quick response
• Avoids duplication of software
• Reduces CPU idle time

Disadvantages of time-sharing operating systems are as follows:

• Problem of reliability
• Question of security and integrity of user programs and data
• Problem of data communication

Distributed Operating System


Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.

The processors communicate with one another through various communication
lines (such as high-speed buses or telephone lines). These are referred to as loosely
coupled systems or distributed systems. Processors in a distributed system may
vary in size and function. These processors are referred to as sites, nodes,
computers, and so on.

The advantages of distributed systems are as follows:


• With the resource sharing facility, a user at one site may be able to use
the resources available at another.
• Speeds up the exchange of data between sites via electronic mail.
• If one site fails in a distributed system, the remaining sites can
potentially continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.

Network Operating System


A network operating system runs on a server and provides the server the
capability to manage data, users, groups, security, applications, and other
networking functions. The primary purpose of the network operating system is to
allow shared file and printer access among multiple computers in a network,
typically a local area network (LAN) or a private network, or across other networks.

Examples of network operating systems include Microsoft Windows Server
2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, and BSD.

The advantages of network operating systems are as follows:
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated
into the system.
• Remote access to servers is possible from different locations and
types of systems.

The disadvantages of network operating systems are as follows:
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.

Real-Time Operating System


A real-time system is defined as a data processing system in which the time
interval required to process and respond to inputs is so small that it controls the
environment. The time taken by the system to respond to an input and display the
required updated information is termed the response time. In this method,
the response time is much shorter than in online processing.
Real-time systems are used when there are rigid time requirements on the
operation of a processor or the flow of data, and real-time systems can be used as
a control device in a dedicated application. A real-time operating system must
have well-defined, fixed time constraints, otherwise the system will fail. Examples
include scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.

There are two types of real-time operating systems.


1) Hard real-time systems: Hard real-time systems guarantee that critical
tasks complete on time. In hard real-time systems, secondary storage is limited
or missing and the data is stored in ROM. In these systems, virtual memory is
almost never found.
2) Soft real-time systems: Soft real-time systems are less restrictive. A
critical real-time task gets priority over other tasks and retains the priority until it
completes. Soft real-time systems have more limited utility than hard real-time
systems. Examples include multimedia, virtual reality, and advanced scientific
projects such as undersea exploration and planetary rovers.

Operating System ─ Properties


Batch Processing
Batch processing is a technique in which an Operating System collects the
programs and data together in a batch before processing starts. An operating
system does the following activities related to batch processing:

• The OS defines a job which has a predefined sequence of commands,
programs, and data as a single unit.
• The OS keeps a number of jobs in memory and executes them without any
manual intervention.
• Jobs are processed in the order of submission, i.e., in first-come,
first-served fashion.
• When a job completes its execution, its memory is released and the
output for the job gets copied into an output spool for later printing or
processing.
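The activities above can be sketched as a first-come-first-served batch run; the job names and durations are invented for illustration:

```python
# First-come-first-served batch processing: each job runs to completion
# in submission order, with no user interaction in between, and its
# output goes to a spool for later printing.

jobs = [("payroll", 3), ("report", 1), ("backup", 2)]  # (name, duration)

output_spool = []   # completed output waits here for later printing
clock = 0
for name, duration in jobs:             # strict submission order
    clock += duration                   # the job runs to completion
    output_spool.append((name, clock))  # record (job, completion time)

print(output_spool)  # [('payroll', 3), ('report', 4), ('backup', 6)]
```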

Advantages

• Batch processing shifts much of the work of the operator to the computer.
• Increased performance, as a new job gets started as soon as the previous
job is finished, without any manual intervention.

Disadvantages

• Difficult to debug programs.
• A job could enter an infinite loop.
• Due to the lack of a protection scheme, one batch job can affect other
pending jobs.

Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by
switching between them. Switches occur so frequently that the users may interact
with each program while it is running. An OS does the following activities
related to multitasking:

• The user gives instructions to the operating system or to a program
directly, and receives an immediate response.
• The OS handles multitasking in that it can perform multiple
operations / execute multiple programs at a time.
• Multitasking operating systems are also known as time-sharing
systems.
• These operating systems were developed to provide interactive use
of a computer system at a reasonable cost.
• A time-shared operating system uses the concepts of CPU scheduling
and multiprogramming to provide each user with a small portion of a
time-shared CPU.
• Each user has at least one separate program in memory.
• A program that is loaded into memory and is executing is commonly
referred to as a process.
• When a process executes, it typically executes for only a very short
time before it either finishes or needs to perform I/O.
• Since interactive I/O typically runs at slower speeds, it may take a
long time to complete. During this time, a CPU can be utilized by
another process.
• The operating system allows the users to share the computer
simultaneously. Since each action or command in a time-shared
system tends to be short, only a little CPU time is needed for each
user.
• As the system switches CPU rapidly from one user/program to the
next, each user is given the impression that he/she has his/her own
CPU, whereas actually one CPU is being shared among many users.

Multiprogramming
Sharing the processor, when two or more programs reside in memory at the same
time, is referred as multiprogramming. Multiprogramming assumes a single
shared processor. Multiprogramming increases CPU utilization by organizing
jobs so that the CPU always has one to execute
The following figure shows the memory layout for a multiprogramming system.

An OS does the following activities related to multiprogramming.
• The operating system keeps several jobs in memory at a time.
• This set of jobs is a subset of the jobs kept in the job pool.
• The operating system picks and begins to execute one of the jobs
in the memory.
• Multiprogramming operating systems monitor the state of all
active programs and system resources using memory management programs
to ensure that the CPU is never idle, unless there are no jobs to process.
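A rough sketch of this idea is below, where a job that reaches an I/O burst gives up the CPU to the next ready job. The job names and burst times are invented, and a real scheduler overlaps I/O with computation rather than merely skipping a turn:

```python
# Toy multiprogramming sketch: several jobs are kept in memory, and a job
# that reaches an I/O burst gives up the CPU so another ready job can run.

# Each job is a list of bursts: ("cpu", t) runs on the CPU, ("io", t) waits.
jobs = {
    "J1": [("cpu", 2), ("io", 3), ("cpu", 1)],
    "J2": [("cpu", 4)],
}

schedule = []          # (job, cpu time used), in the order the CPU ran them
ready = ["J1", "J2"]   # jobs resident in memory and ready to run
while ready:
    job = ready.pop(0)
    kind, t = jobs[job].pop(0)
    if kind == "cpu":
        schedule.append((job, t))
    # An "io" burst simply costs this job its turn: while its I/O is in
    # progress, the next ready job gets the CPU instead.
    if jobs[job]:
        ready.append(job)

print(schedule)  # [('J1', 2), ('J2', 4), ('J1', 1)]
```

While J1 waits on its I/O burst, J2 uses the CPU, which is exactly how multiprogramming keeps the CPU busy.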
Advantages

• High and efficient CPU utilization.
• The user feels that many programs are allotted the CPU almost
simultaneously.

Disadvantages

• CPU scheduling is required.
• To accommodate many jobs in memory, memory management is
required.

Interactivity
Interactivity refers to the ability of users to interact with a computer system. An
Operating system does the following activities related to interactivity:
• Provides the user an interface to interact with the system.
• Manages input devices to take inputs from the user. For example,
keyboard.
• Manages output devices to show outputs to the user. For example,
Monitor.
The response time of the OS needs to be short, since the user submits and waits
for the result.

Real-Time Systems
Real-time systems are usually dedicated, embedded systems. An operating
system does the following activities related to real-time system activity.
• In such systems, the operating system typically reads from and reacts to
sensor data.
• The operating system must guarantee response to events within fixed
periods of time to ensure correct performance.
Distributed Environment
A distributed environment refers to multiple independent CPUs or processors in a
computer system. An operating system does the following activities related to
distributed environment:
• The OS distributes computation logics among several physical
processors.
• The processors do not share memory or a clock. Instead, each processor
has its own local memory.
• The OS manages the communications between the processors. They
communicate with each other through various communication lines.
Spooling
Spooling is an acronym for simultaneous peripheral operations on line. Spooling
refers to putting data of various I/O jobs in a buffer. This buffer is a special area
in memory or hard disk which is accessible to I/O devices.

An operating system does the following activities related to spooling:
• Handles I/O device data spooling as devices have different data access
rates.
• Maintains the spooling buffer which provides a waiting station where
data can rest while the slower device catches up.
• Maintains parallel computation through the spooling process, as a
computer can perform I/O in a parallel fashion: it becomes possible for
the computer to read data from a tape, write data to disk, and write out to
a printer while it is doing its computing task.
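The buffer idea can be sketched as a queue between a fast producer (the CPU) and a slow consumer (the printer); the page names are illustrative:

```python
from collections import deque

# Spooling sketch: the CPU dumps print output into a buffer at full speed
# and moves on; the slow printer drains the buffer at its own pace later.

spool_buffer = deque()   # the "waiting station" in memory or on disk

# Fast side: as far as the CPU is concerned, printing is instant, because
# the output only has to reach the buffer.
for page in ["page-1", "page-2", "page-3"]:
    spool_buffer.append(page)

# Slow side: the printer catches up, taking pages in the order queued.
printed = []
while spool_buffer:
    printed.append(spool_buffer.popleft())

print(printed)  # ['page-1', 'page-2', 'page-3']
```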

Advantages

• The spooling operation uses a disk as a very large buffer.
• Spooling is capable of overlapping I/O operations for one job with processor
operations for another job.

Job control
Job control refers to the control of multiple tasks or jobs on a computer system,
ensuring that they each have access to adequate resources to perform correctly, that
competition for limited resources does not cause a deadlock where two or more jobs
are unable to complete, resolving such situations where they do occur, and terminating
jobs that, for any reason, are not performing as expected.

Job control language (JCL)


Short for Job Control Language, JCL is a scripting language that is used to
communicate with the operating system. Using JCL, a user can submit statements to
the operating system, which it then uses to execute a job. JCL also enables the user
to view the resources needed to run a job and minor control details. It is a language
used to construct statements that identify a particular job to be run and specify the
job's requirements to the operating system under which it will run: a programming
language used to specify the manner, timing, and other requirements of execution of
a task or set of tasks submitted for execution, especially in the background, on a
multitasking computer.

Command language
Sometimes referred to as a command script, a command language is a
language used for executing a series of commands or instructions that would
otherwise be executed at the prompt (the text or symbols used to represent the
system's readiness to accept the next command). A good example of a command
language is Microsoft Windows batch files (a batch file or batch job is a collection,
or list, of commands that are processed in sequence, often without requiring user
input or intervention). Although command languages are useful for executing a
series of commands, their functionality is limited to what is available at the
command line, which can make them easier to learn.
Advantages of command languages

• Very easy for all types of users to write.
• Do not require the files to be compiled.
• Easy to modify and make additional commands.
• Very small files.
• Do not require any additional programs or files that are not already found on the
operating system.

Disadvantages of command languages

• Can be limited when compared with other programming or scripting
languages.
• May not execute as fast as other languages or compiled programs.
• Some command languages offer little more than the commands already
available on the operating system being used.

PROCESS MANAGEMENT

Definition of a Process and terms


A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be
implemented in a system.

To put it in simple terms, we write our computer programs in a text file, and when
we execute a program, it becomes a process which performs all the tasks
mentioned in the program.

When a program is loaded into memory and becomes a process, it can be
divided into four sections ─ stack, heap, text, and data. The following table
describes a simplified layout of a process inside main memory:

S.N. Component & Description

1. Stack: The process stack contains temporary data such as
   method/function parameters, return addresses, and local variables.
2. Heap: This is memory that is dynamically allocated to a process during
   its runtime.
3. Text: This includes the current activity, represented by the value of the
   Program Counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.

The Process Model


Process models are processes of the same nature that are classified together into a
model. Thus, a process model is a description of a process at the type level. Since
the process model is at the type level, a process is an instantiation of it. The same
process model is used repeatedly for the development of many applications and
thus, has many instantiations. One possible use of a process model is to prescribe
how things must/should/could be done in contrast to the process itself which is
really what happens. A process model is roughly an anticipation of what the
process will look like. What the process shall be will be determined during actual
system development.

The goals of a process model are to be:

• Descriptive
  o Track what actually happens during a process.
  o Take the point of view of an external observer who looks at the way a
    process has been performed and determines the improvements that must be
    made to make it perform more effectively or efficiently.
• Prescriptive
  o Define the desired processes and how they should/could/might be performed.
  o Establish rules, guidelines, and behavior patterns which, if followed, would
    lead to the desired process performance. They can range from strict
    enforcement to flexible guidance.
• Explanatory
  o Provide explanations about the rationale of processes.
  o Explore and evaluate the several possible courses of action based on rational
    arguments.
  o Establish an explicit link between processes and the requirements that the
    model needs to fulfill.
  o Pre-define points at which data can be extracted for reporting purposes.

Process Levels
A process hierarchy is defined by its levels and the information given in these levels. It
is key to have a defined information base on each level (e.g. a process step is always
performed by a specific role instead of an abstract organizational unit); otherwise, the
process levels cannot be clearly distinguished.

Threads
Despite the fact that a thread must execute within a process, a process and its
associated threads are different concepts. Processes are used to group resources
together; threads are the entities scheduled for execution on the CPU.
A thread is a single sequential stream of execution within a process. Because
threads have some of the properties of processes, they are sometimes called
lightweight processes. Threads allow multiple streams of execution within a
process. In many respects, threads are a popular way to improve application
performance through parallelism. The CPU switches rapidly back and forth among
the threads, giving the illusion that the threads are running in parallel. Like a
traditional process (i.e., a process with one thread), a thread can be in any of
several states (Running, Blocked, Ready or Terminated). Each thread has its own
stack, because a thread will generally call different procedures and thus have a
different execution history. In an operating system that has a thread facility, the
basic unit of CPU utilization is a thread. A thread consists of a program counter
(PC), a register set, and a stack space. Threads are not independent of one another
the way processes are: a thread shares with its peer threads the code section, the
data section and the OS resources of its task, such as open files and signals.
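The ideas above — several threads sharing one process's data while each runs its own sequential stream on its own stack — can be sketched with Python's threading module. The worker function and the names used here are our own illustration, not part of any particular OS:

```python
# A minimal sketch: three threads share the process's data (the dict below)
# while each thread's local variables live on its own private stack.
import threading

results = {}          # shared data: visible to every thread in the process

def worker(name, n):
    # 'total' is a local variable on this thread's own stack
    total = sum(range(n))
    results[name] = total

threads = [threading.Thread(target=worker, args=(f"t{i}", 10)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for each thread to reach the Terminated state

print(results)        # all three threads wrote into the same shared dict
```

Each thread computed its result independently, yet all of them updated the single `results` dictionary — exactly the resource sharing that distinguishes threads from separate processes.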

Processes Vs Threads
As we mentioned earlier that in many respect threads operate in the same way as
that of processes. Some of the similarities and differences are:
Similarities
• Like processes, threads share the CPU and only one thread is active (running) at a
time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• And like processes, if one thread is blocked, another thread can run.

Differences
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. Note that processes
might or might not assist one another because processes may originate from
different users.

Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a print server.
2. Because threads can share common data, they do not need to use interprocess
communication.
3. By their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that:

1. They only need a stack and storage for registers; therefore, threads are cheap to
create.
2. Threads use very few resources of the operating system in which they are
working. That is, threads do not need a new address space, global data, program
code or operating system resources.
3. Context switching is fast when working with threads, because only the PC, SP
and registers have to be saved and/or restored.

But this cheapness does not come free - the biggest drawback is that there is no
protection between threads.

Thread levels (User-level & Kernel-level)


User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system
calls, so thread switching does not need to call the operating system or cause an
interrupt to the kernel. In fact, the kernel knows nothing about user-level threads
and manages them as if they were single-threaded processes.

Advantages:
The most obvious advantage of this technique is that a user-level threads package
can be implemented on an operating system that does not support threads. Some
other advantages are:
• User-level threads do not require modification to the operating system.
• Simple representation: each thread is represented simply by a PC, registers, a
stack and a small control block, all stored in the user process address space.
• Simple management: creating a thread, switching between threads and
synchronization between threads can all be done without intervention of the
kernel.
• Fast and efficient: thread switching is not much more expensive than a
procedure call.

Disadvantages:
• There is a lack of coordination between threads and the operating system kernel.
Therefore, the process as a whole gets one time slice irrespective of whether the
process has one thread or 1000 threads within it. It is up to each thread to
relinquish control to the other threads.
• User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Otherwise, the entire process will block in the kernel, even if there are runnable
threads left in the process. For example, if one thread causes a page fault, the
whole process blocks.

Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime
system is needed in this case. Instead of a thread table in each process, the kernel
has a thread table that keeps track of all threads in the system. In addition, the
kernel also maintains the traditional process table to keep track of processes. The
operating system kernel provides system calls to create and manage threads.

Advantages:
• Because the kernel has full knowledge of all threads, the scheduler may decide to
give more time to a process having a large number of threads than to a process
having a small number of threads.
• Kernel-level threads are especially good for applications that frequently block.

Disadvantages:

• Kernel-level threads are slow and inefficient. For instance, thread operations
can be hundreds of times slower than those of user-level threads.
• Since the kernel must manage and schedule threads as well as processes, it
requires a full thread control block (TCB) for each thread to maintain
information about threads. As a result, there is significant overhead and
increased kernel complexity.

Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility. Solaris is a good example of this combined approach. In a
combined system, multiple threads within the same application can run in
parallel on multiple processors, and a blocking system call need not block the
entire process. There are three types of multithreading models:

• Many-to-many relationship.
• Many-to-one relationship.
• One-to-one relationship.

Many-to-Many Model

The many-to-many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads.

The following diagram shows the many-to-many threading model, where six user-
level threads are multiplexed onto six kernel-level threads. In this model,
developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor machine. This model
provides the best level of concurrency, and when a thread performs a blocking
system call, the kernel can schedule another thread for execution.

Many-to-One Model

The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space by the thread library. When a thread
makes a blocking system call, the entire process is blocked. Only one thread can
access the kernel at a time, so multiple threads are unable to run in parallel on
multiprocessors.

If user-level thread libraries are implemented on an operating system that does
not support kernel threads, the many-to-one model is used.

One-to-One Model

There is a one-to-one relationship between each user-level thread and a kernel-
level thread. This model provides more concurrency than the many-to-one model.
It also allows another thread to run when a thread makes a blocking system call,
and it allows multiple threads to execute in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the
one-to-one relationship model.

Advantages of Threads over Multiple Processes
• Context switching: Threads are very inexpensive to create and destroy, and
they are inexpensive to represent. For example, they require space to store
the PC, the SP, and the general-purpose registers, but they do not require
their own copies of memory-management information, information about
open files, I/O devices in use, etc. With so little context, it is much faster to
switch between threads; in other words, a context switch using threads is
relatively cheap.
• Sharing: Threads allow the sharing of many resources that cannot be shared
between processes, for example the code section, the data section, and
operating system resources such as open files.

Disadvantages of Threads over Multiple Processes

• Blocking: The major disadvantage is that if the kernel is single-threaded, a
system call by one thread will block the whole process, and the CPU may be
idle during the blocking period.
• Security: Since there is extensive sharing among threads, there is a
potential security problem. It is quite possible that one thread overwrites
the stack of another thread (or damages shared data), although this is
unlikely since threads are meant to cooperate on a single task.

Application that Benefits from Threads

A proxy server satisfying requests for a number of computers on a LAN
would benefit from a multi-threaded process. In general, any program that has
to do more than one task at a time could benefit from multithreading. For
example, a program that reads input, processes it, and writes output could have
three threads, one for each task.

Application that cannot benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit
from threads, as each task would block until the previous one completes. For
example, a program that displays the time of day would not benefit from multiple
threads.

Resources used in Thread Creation and Process Creation

When a new thread is created it shares its code section, data section and
operating system resources like open files with other threads. But it is allocated
its own stack, register set and a program counter.

The creation of a new process differs from that of a thread mainly in the fact that
all the shared resources of a thread are needed explicitly for each process. So
though two processes may be running the same piece of code they need to have
their own copy of the code in the main memory to be able to run. Two processes
also do not share other resources with each other. This makes the creation of a
new process very costly compared to that of a new thread.

Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a
hardware clock generates interrupts periodically. This allows the operating
system to schedule all processes in main memory (using scheduling algorithm) to
run on the CPU at equal intervals. Each time a clock interrupt occurs, the
interrupt handler checks how much time the current running process has used. If
it has used up its entire time slice, then the CPU scheduling algorithm (in kernel)
picks a different process to run. Each switch of the CPU from one process to
another is called a context switch.

Major Steps of Context Switching


• The values of the CPU registers are saved in the process table of the process that
was running just before the clock interrupt occurred.
• The registers are loaded from the process picked by the CPU scheduler to run next.

In a multiprogrammed uniprocessor computing system, context switches occur
frequently enough that all processes appear to be running concurrently. If a
process has more than one thread, the Operating System can use the context
switching technique to schedule the threads so they appear to execute in parallel.
This is the case if threads are implemented at the kernel level. Threads can also
be implemented entirely at the user level in run-time libraries. Since in this case
no thread scheduling is provided by the Operating System, it is the responsibility
of the programmer to yield the CPU frequently enough in each thread so all
threads in the process can make progress.

Action of Kernel to Context Switch Among Threads


Threads share many resources with the other peer threads belonging to the same
process, so a context switch among threads of the same process is easy. It
involves switching the register set, the program counter and the stack, and is
relatively easy for the kernel to accomplish.

Action of kernel to Context Switch Among Processes


Context switches among processes are expensive. Before a process can be
switched its process control block (PCB) must be saved by the operating system.
The PCB consists of the following information:

• The process state.


• The program counter, PC.
• The values of the different registers.
• The CPU scheduling information for the process.
• Memory management information regarding the process.
• Possible accounting information for this process.
• I/O status information of the process.

When the PCB of the currently executing process is saved the operating system
loads the PCB of the next process that has to be run on CPU. This is a heavy task
and it takes a lot of time.

Process States Life Cycle


When a process executes, it passes through different states. These states may
differ between operating systems, and the names of these states are also not
standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description


1 Start: This is the initial state when a process is first started/created.
2 Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the operating system so that
they can run. A process may come into this state after the Start state, or while
running if it is interrupted by the scheduler to assign the CPU to some other process.
3 Running: Once the process has been assigned to a processor by the OS scheduler,
the process state is set to running and the processor executes its instructions.
4 Waiting: The process moves into the waiting state if it needs to wait for a resource,
such as waiting for user input, or waiting for a file to become available.
5 Terminated or Exit: Once the process finishes its execution, or is terminated by the
operating system, it is moved to the terminated state where it waits to be removed
from main memory.
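As an illustration only, the five-state life cycle can be encoded as a transition table. The dictionary of allowed transitions below is a hypothetical sketch derived from the table above, not part of any real operating system:

```python
# Hypothetical sketch: the five-state life cycle as a transition table.
ALLOWED = {
    "start":      {"ready"},                            # created -> admitted
    "ready":      {"running"},                          # dispatched by scheduler
    "running":    {"ready", "waiting", "terminated"},   # preempted / blocked / done
    "waiting":    {"ready"},                            # resource became available
    "terminated": set(),                                # no way out
}

def transition(state, new_state):
    """Move to new_state, rejecting transitions the table forbids."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# One legal path: admitted, dispatched, blocks on I/O, resumes, finishes.
s = "start"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    s = transition(s, nxt)
print(s)
```

Note that, per the table, a waiting process cannot jump straight back onto the CPU: it must pass through the ready state and be dispatched by the scheduler again.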

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System
for every process. The PCB is identified by an integer process ID (PID). A PCB
keeps all the information needed to keep track of a process as listed below in the
table −

S.N. Information & Description

1 Process State: The current state of the process, i.e., whether it is ready, running,
waiting, etc.
2 Process privileges: Required to allow/disallow access to system resources.
3 Process ID: Unique identification for each process in the operating system.
4 Pointer: A pointer to the parent process.
5 Program Counter: A pointer to the address of the next instruction to be executed
for this process.
6 CPU registers: The various CPU registers whose contents must be saved for the
process when it leaves the running state, so that it can resume execution later.
7 CPU Scheduling Information: Process priority and other scheduling information
required to schedule the process.
8 Memory management information: Includes the page table, memory limits and
segment table, depending on the memory scheme used by the operating system.
9 Accounting information: Includes the amount of CPU time used for process
execution, time limits, execution ID, etc.
10 IO status information: Includes the list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and
may contain different information in different operating systems. Here is a
simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
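A PCB can be sketched as a plain record. The Python dataclass below is purely illustrative — the field names are our own, keyed to the numbered table above:

```python
# Illustrative sketch only: a PCB modeled as a dataclass, one field per row
# of the table above (field names are our own choices, not an OS API).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int                                          # 3. Process ID
    state: str = "ready"                              # 1. Process state
    parent: Optional["PCB"] = None                    # 4. Pointer to parent
    program_counter: int = 0                          # 5. Next instruction address
    registers: dict = field(default_factory=dict)     # 6. Saved CPU registers
    priority: int = 0                                 # 7. Scheduling information
    memory_limits: tuple = (0, 0)                     # 8. Memory management info
    cpu_time_used: float = 0.0                        # 9. Accounting information
    open_devices: list = field(default_factory=list)  # 10. I/O status information

# A child process whose PCB points back at its parent's PCB.
child = PCB(pid=42, parent=PCB(pid=1, state="running"))
print(child.pid, child.state, child.parent.pid)
```

The parent pointer is what lets an OS walk the process hierarchy described earlier; everything else is bookkeeping the kernel saves and restores at each context switch.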

Inter-process communication
Inter-process communication (IPC) refers specifically to the mechanisms an operating
system provides to allow processes to manage shared data.

Race Conditions
In operating systems, processes that are working together may share some common
storage (main memory, files, etc.) that each process can read and write. When two
or more processes read or write shared data and the final result depends on
exactly who runs when, we have a race condition. Concurrently executing threads
that share data need to synchronize their operations and processing in order to
avoid race conditions on shared data. Only one 'customer' thread at a time should
be allowed to examine and update the shared variable. Race conditions are also
possible inside operating systems themselves. If the ready queue is implemented
as a linked list, and the ready queue is being manipulated during the handling of
an interrupt, then interrupts must be disabled to prevent another interrupt from
occurring before the first one completes. If interrupts are not disabled, the linked
list could become corrupt.
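The lost-update race described above can be replayed deterministically by hand-interleaving the read and write steps of two notional threads. This is a sketch of the interleaving, not real concurrency:

```python
# A deterministic replay of a lost-update race: two "threads" (A and B) each
# try to increment a shared counter, but B preempts A between A's read and
# A's write, so one increment disappears.
shared = 0

a_read = shared       # thread A reads 0 ...
b_read = shared       # ... B preempts A and also reads 0
shared = b_read + 1   # B writes back 1
shared = a_read + 1   # A resumes and writes back 1 - B's update is lost

print(shared)         # two increments happened, yet the counter is 1
```

With real threads, whether this interleaving occurs depends on exactly when the scheduler switches between them, which is precisely why race conditions are so hard to reproduce and debug.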

Critical Section
How to avoid race conditions?

The key to preventing trouble involving shared storage is to find some way to
prohibit more than one process from reading and writing the shared data
simultaneously. The part of the program where the shared memory is accessed is
called the critical section. To avoid race conditions and flawed results, one must
identify the code in the critical section of each thread. The characteristic
properties of code that forms a critical section are:

• Code that references one or more variables in a "read-update-write" fashion
while any of those variables is possibly being altered by another thread.
• Code that alters one or more variables that are possibly being referenced in
"read-update-write" fashion by another thread.
• Code that uses a data structure while any part of it is possibly being altered by
another thread.
• Code that alters any part of a data structure while it is possibly in use by
another thread.

Here, the important point is that when one process is executing shared modifiable
data in its critical section, no other process is allowed to execute in its own
critical section. Thus, the execution of critical sections by the processes is
mutually exclusive in time.

Mutual Exclusion
A way of making sure that if one process is using shared modifiable data, the
other processes will be excluded from doing the same thing.

Formally, while one process is accessing the shared variable, all other processes
desiring to do so at the same moment should be kept waiting; when that process
has finished with the shared variable, one of the waiting processes should be
allowed to proceed. In this fashion, each process accessing the shared data
(variables) excludes all others from doing so simultaneously. This is called
Mutual Exclusion.

Note that mutual exclusion needs to be enforced only when processes access
shared modifiable data - when processes are performing operations that do not
conflict with one another they should be allowed to proceed concurrently.
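A minimal sketch of mutual exclusion, assuming Python's `threading.Lock` as the locking primitive: the increment is the critical section, and holding the lock makes the read-update-write sequence effectively indivisible, so no updates are lost:

```python
# Mutual exclusion with a lock: only one thread at a time may execute the
# critical section (the read-update-write of 'counter').
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # enter critical section (mutual exclusion)
            counter += 1    # read-update-write is now indivisible
        # lock is released on leaving the 'with' block

threads = [threading.Thread(target=increment, args=(50000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 100000 - no lost updates
```

Without the `with lock:` line, two threads could interleave their read and write steps and some increments would be lost, exactly as in the race condition described earlier.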
Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical
sections simultaneously, we could avoid race conditions. We need four
conditions to hold to have a good solution for the critical section problem
(mutual exclusion).

• No two processes may be inside their critical sections at the same moment.
• No assumptions are made about the relative speeds of processes or the number of
CPUs.
• No process running outside its critical section should block other processes.
• No process should have to wait arbitrarily long to enter its critical section.

Proposals for Achieving Mutual Exclusion


The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and
a post-protocol (or exit protocol) to keep two or more threads from being in
their critical sections at the same time. Tanenbaum examines the following
proposals for solving the critical-section (mutual exclusion) problem.

Problem
When one process is updating shared modifiable data in its critical section, no
other process should be allowed to enter its critical section.

Proposal 1 - Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and
re-enables all interrupts just before leaving the critical section. With interrupts
turned off, the CPU cannot be switched to another process. Hence, no other
process will enter its critical section, and mutual exclusion is achieved.

Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an
operating system, but it is not appropriate as a general mutual exclusion
mechanism for user processes. The reason is that it is unwise to give user
processes the power to turn off interrupts.

Proposal 2 - Lock Variable (Software Solution)

In this solution, we consider a single, shared (lock) variable, initially 0. When a
process wants to enter its critical section, it first tests the lock. If the lock is 0, the
process sets it to 1 and then enters the critical section. If the lock is already 1, the
process just waits until the (lock) variable becomes 0. Thus, a 0 means that no
process is in its critical section, and a 1 means hold your horses - some process is
in its critical section.

Conclusion
The flaw in this proposal can be best explained by example. Suppose process A
sees that the lock is 0. Before it can set the lock to 1 another process B is
scheduled, runs, and sets the lock to 1.
When the process A runs again, it will also set the lock to 1, and two processes
will be in their critical section simultaneously.

Proposal 3 - Strict Alternation

In this proposed solution, an integer variable 'turn' keeps track of whose turn it is
to enter the critical section. Initially, process A inspects 'turn', finds it to be 0, and
enters its critical section. Process B also finds it to be 0 and sits in a loop
continually testing 'turn' to see when it becomes 1. Continuously testing a variable
while waiting for some value to appear is called busy waiting.

Conclusion
Taking turns is not a good idea when one of the processes is much slower than
the other. Suppose process 0 finishes its critical section quickly, so both
processes are now in their noncritical sections. Process 0 cannot enter again until
it is its turn once more, so a process outside its critical section is blocking another
process. This situation violates condition 3 above.

Using the system calls 'sleep' and 'wakeup'
Basically, what the solutions above do is this: when a process wants to enter its
critical section, it checks to see if entry is allowed. If it is not, the process goes
into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter.
This approach wastes CPU time.

Now we look at a pair of interprocess communication primitives: sleep and
wakeup.

o Sleep: a system call that causes the caller to block, that is, be suspended
until some other process wakes it up.
o Wakeup: a system call that wakes up a process.

Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory
address used to match up 'sleeps' and 'wakeups'.

The Bounded-Buffer Producer-Consumer Problem

The bounded-buffer producer-consumer problem assumes that there is a fixed
buffer size, i.e., a finite number of slots is available.

Statement
To suspend the producers when the buffer is full, to suspend the consumers when
the buffer is empty, and to make sure that only one process at a time manipulates
a buffer so there are no race conditions or lost updates.

As an example of how sleep and wakeup system calls are used, consider the
producer-consumer problem, also known as the bounded-buffer problem.

Two processes share a common, fixed-size (bounded) buffer. The producer puts
information into the buffer and the consumer takes information out.
Trouble arises when:
1. The producer wants to put new data in the buffer, but the buffer is already full.
Solution: the producer goes to sleep, to be awakened when the consumer has
removed some data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: the consumer goes to sleep until the producer puts some data in the buffer
and wakes the consumer up.
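The producer-consumer scheme above can be sketched with Python's `threading.Condition`, whose `wait()`/`notify()` calls play the role of sleep and wakeup. Because they are invoked while holding the condition's lock, the lost-wakeup problem does not arise here; the buffer size and item counts are illustrative:

```python
# Bounded-buffer producer-consumer using a Condition: wait() is "sleep",
# notify_all() is "wakeup". The condition's lock also gives mutual exclusion
# on the buffer itself.
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
cond = threading.Condition()
consumed = []

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == BUFFER_SIZE:   # buffer full:
                cond.wait()                     #   producer "sleeps"
            buffer.append(item)
            cond.notify_all()                   # "wakeup" a sleeping consumer

def consumer(n):
    for _ in range(n):
        with cond:
            while not buffer:                   # buffer empty:
                cond.wait()                     #   consumer "sleeps"
            consumed.append(buffer.popleft())
            cond.notify_all()                   # "wakeup" a sleeping producer

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)
```

Note the `while` loops around `wait()`: a woken thread re-checks the buffer state before proceeding, which is what keeps the shared buffer consistent even if several threads are woken at once.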

Conclusion
This approach also leads to the same race conditions we have seen in earlier
approaches. A race condition can occur because access to 'count' is
unconstrained. The essence of the problem is that a wakeup call sent to a process
that is not (yet) sleeping is lost.
Semaphore & Monitor

Definition
Semaphore
Being a process synchronization tool, a semaphore is an integer variable S.
This integer variable S is initialized to the number of resources present in the
system. The value of semaphore S can be modified only by two functions, wait()
and signal(), apart from initialization.

The wait() and signal() operations modify the value of the semaphore S
indivisibly, which means that when a process is modifying the value of the
semaphore, no other process can simultaneously modify it. Further, operating
systems distinguish two categories of semaphores: counting semaphores and
binary semaphores.

In a counting semaphore, the value of semaphore S is initialized to the number
of resources present in the system. Whenever a process wants to access a
shared resource, it performs a wait() operation on the semaphore, which
decrements the value of the semaphore by one. When it releases the shared
resource, it performs a signal() operation on the semaphore, which increments
the value of the semaphore by one. When the semaphore count reaches 0, all
resources are occupied by processes. If a process needs to use a resource
when the semaphore count is 0, it executes wait() and is blocked until a process
utilizing a shared resource releases it and the value of the semaphore becomes
greater than 0.

In a binary semaphore, the value of the semaphore ranges between 0 and 1. It is
similar to a mutex lock, but a mutex is a locking mechanism whereas a semaphore
is a signaling mechanism. In a binary semaphore, if a process wants to access the
resource, it performs a wait() operation on the semaphore and decrements its
value from 1 to 0. When the process releases the resource, it performs a signal()
operation on the semaphore and increments its value back to 1. If the value of the
semaphore is 0 and a process wants to access the resource, it performs wait() and
blocks itself until the process currently utilizing the resource releases it.
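As a sketch of a counting semaphore, assuming Python's `threading.Semaphore` (where `acquire()` plays wait() and `release()` plays signal()), the example below guards a pool of two resources and records the peak number of simultaneous users; the resource pool and thread counts are illustrative:

```python
# A counting semaphore guarding a pool of 2 identical resources: five threads
# contend for them, but at most 2 can hold a resource at the same time.
import threading

pool = threading.Semaphore(2)   # S initialized to the number of resources
in_use = 0
peak = 0
guard = threading.Lock()        # protects the two counters themselves

def use_resource():
    global in_use, peak
    pool.acquire()              # wait(): decrements S; blocks when S == 0
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    with guard:
        in_use -= 1
    pool.release()              # signal(): increments S; may wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds the 2 resources the semaphore protects
```

A binary semaphore is the special case where the semaphore is initialized to 1, making the example behave like the mutex lock described above.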

Definition of Monitor
To overcome the timing errors that occur while using semaphores for process
synchronization, researchers have introduced a high-level synchronization
construct, the monitor type. A monitor type is an abstract data type that is
used for process synchronization.

Being an abstract data type, a monitor type contains the shared data variables that
are to be shared by all the processes, and some programmer-defined operations
that allow processes to execute in mutual exclusion within the monitor. A
process cannot directly access the shared data variables in the monitor; it has to
access them through the procedures defined in the monitor, which allow only one
process to access the shared variables at a time.

The syntax of a monitor is as follows:

    monitor monitor_name
    {
        // shared variable declarations
        procedure P1 ( . . . ) {
        }
        procedure P2 ( . . . ) {
        }
        procedure Pn ( . . . ) {
        }
        initialization code ( . . . ) {
        }
    }

A monitor is a construct in which only one process is active at a time. If another
process tries to access a shared variable in the monitor, it gets blocked and is
lined up in a queue to get access to the shared data once the previously accessing
process releases it.

Condition variables were introduced as an additional synchronization
mechanism. A condition variable allows a process to wait inside the monitor,
and allows a waiting process to resume immediately when another process
releases the resources.

A condition variable can invoke only two operations, wait() and signal(). If a
process P invokes a wait() operation, it is suspended in the monitor until another
process Q invokes a signal() operation; i.e., a signal() operation invoked by one
process resumes a suspended process.
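A monitor can be approximated in Python by a class whose methods all acquire one private lock, with a `threading.Condition` supplying wait() and signal() (`notify()` in Python). The BoundedCounter class below is our own illustration of the pattern, not a standard construct:

```python
# A monitor-like class: the private lock makes every public method mutually
# exclusive, and the condition variable lets a caller wait inside the monitor.
import threading

class BoundedCounter:
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._count = 0
        self._limit = limit

    def increment(self):                 # monitor procedure
        with self._lock:                 # only one caller active at a time
            while self._count >= self._limit:
                self._not_full.wait()    # wait() inside the monitor
            self._count += 1

    def decrement(self):                 # monitor procedure
        with self._lock:
            self._count -= 1
            self._not_full.notify()      # signal(): resume a waiting caller

    def value(self):
        with self._lock:
            return self._count

m = BoundedCounter(limit=1)
m.increment()                            # counter reaches its limit
t = threading.Thread(target=m.increment) # this call must wait in the monitor
t.start()
m.decrement()                            # signal: the waiting thread resumes
t.join()
print(m.value())
```

Callers never touch `_count` directly; they go through the procedures, which is exactly the access discipline the monitor definition above requires.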

Key Differences Between Semaphore and Monitor

1. The basic difference between a semaphore and a monitor is that a semaphore is
an integer variable S which indicates the number of resources available in the
system, whereas a monitor is an abstract data type which allows only one
process to execute in the critical section at a time.
2. The value of a semaphore can be modified by the wait() and signal() operations
only. On the other hand, a monitor has shared variables and procedures, and the
shared variables can be accessed by processes only through those procedures.
3. With a semaphore, when a process wants to access shared resources, it
performs a wait() operation and locks the resources, and when it releases them,
it performs a signal() operation. In monitors, when a process needs to access
shared resources, it has to access them through the procedures in the monitor.
4. A monitor type has condition variables, which a semaphore does not have.

Process scheduling
Definition
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state
queue.

Note: Scheduling queues refers to queues of processes or devices. When a
process enters the system, it is put into the job queue, which consists of all
processes in the system. The operating system also maintains other queues,
such as device queues. A device queue holds the processes waiting for a
particular I/O device; each device has its own device queue.
This figure shows the queuing diagram of process scheduling.
- Queue is represented by rectangular box.
- The circles represent the resources that serve the queues.
- The arrows indicate the process flow in the system.

A newly arrived process is put in the ready queue, where it waits until the
CPU is allocated to it. Once the CPU is assigned to a process, the process
executes. While it is executing, any one of the following events can occur.
- The process could issue an I/O request and then it would be placed in an I/O
queue.
- The process could create new sub process and will wait for its termination.

- The process could be removed forcibly from the CPU as a result of an
interrupt and put back in the ready queue.

The Operating System maintains the following important process scheduling
queues:
• Job queue - This queue keeps all the processes in the system.
• Ready queue - This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in
this queue.
• Device queues - The processes which are blocked due to unavailability
of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.).
The OS scheduler determines how to move processes between the ready and
run queues; the run queue can have only one entry per processor core on the
system. In the queuing diagram above, it has been merged with the CPU.

Two-State Process Model


The two-state process model refers to the running and not-running states,
which are described below.

1. Running: When a new process is created, it enters the system in the
running state.
2. Not Running: Processes that are not running are kept in a queue, waiting
for their turn to execute. Each entry in the queue is a pointer to a
particular process, and the queue is implemented using a linked list. The
dispatcher works as follows: when a process is interrupted, it is transferred
to the waiting queue; if the process has completed or aborted, it is
discarded. In either case, the dispatcher then selects a process from the
queue to execute.
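As a rough sketch of the two-state model, the Not Running queue can be modelled with a FIFO deque; the Dispatcher class and the process names are illustrative, not from the text.

```python
from collections import deque

class Dispatcher:
    """Two-state model: one process runs, the rest wait in a FIFO queue."""

    def __init__(self, processes):
        self.not_running = deque(processes)  # queue of waiting processes
        self.running = None

    def dispatch(self):
        """Select the next process from the head of the queue."""
        self.running = self.not_running.popleft() if self.not_running else None
        return self.running

    def interrupt(self):
        """Interrupt the running process: move it to the back of the queue."""
        if self.running is not None:
            self.not_running.append(self.running)
            self.running = None
```

For example, with processes P1, P2, P3, dispatching runs P1; after an interrupt, P1 rejoins the back of the queue and P2 is dispatched next.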

Process scheduling and Job scheduling


Job scheduling is the process by which an operating system (OS) allocates
system resources to many different tasks. The system handles prioritized job
queues that are awaiting CPU time, and it must determine which job to take
from which queue and how much time to allocate to it. This type of scheduling
makes sure that all jobs are carried out fairly and on time.

Job scheduling is performed using job schedulers. Job schedulers are programs
that enable scheduling and, at times, track computer "batch" jobs, or units of
work such as the operation of a payroll program. Job schedulers can start and
control jobs automatically by running prepared job-control-language
statements or by means of similar communication with a human operator.
Generally, present-day job schedulers include a graphical user interface
(GUI) along with a single point of control.

Organizations wishing to automate unrelated IT workloads could also use more
sophisticated attributes from a job scheduler, for example:

• Real-time scheduling in accordance with external, unforeseen events
• Automated restart and recovery in case of failures
• Notifying the operations personnel
• Generating reports of incidents
• Audit trails meant for regulation compliance purposes

In-house developers can write these advanced capabilities; however, these are
usually offered by providers who are experts in systems-management software.

In scheduling, many different schemes are used to determine which specific job
to run. Some parameters that may be considered are as follows:

• Job priority
• Availability of computing resource
• License key if the job is utilizing a licensed software
• Execution time assigned to the user
• Number of parallel jobs permitted for a user
• Projected execution time
• Elapsed execution time
• Presence of peripheral devices
• Number of cases of prescribed events

What is the difference between job scheduling and process scheduling?


Jobs and processes are essentially the same thing, but the job scheduler and
the CPU scheduler are two different components: the job scheduler is also
called the long-term scheduler, and the CPU (process) scheduler is called the
short-term scheduler.

Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes
from the queue and loads them into memory for execution. Process loads
into the memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of


jobs, such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure
rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal;
time-sharing operating systems have no long-term scheduler. The long-term
scheduler is used when a process changes state from new to ready.

Short-Term Scheduler
It is also called as CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It is the change of
ready state to running


Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term
scheduler is in charge of handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion. In this
situation, to remove the process from memory and make space for other
processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Schedulers


1. The long-term scheduler is a job scheduler; the short-term scheduler is a
CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the
short-term scheduler is the fastest of the three; the medium-term scheduler's
speed lies in between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler provides less control over the degree of
multiprogramming; the medium-term scheduler reduces the degree of
multiprogramming.
4. The long-term scheduler is almost absent or minimal in a time-sharing
system; the short-term scheduler is also minimal in a time-sharing system;
the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them
into memory for execution; the short-term scheduler selects those processes
which are ready to execute; the medium-term scheduler can re-introduce a
process into memory so that its execution can be continued.
Context Switch
A context switch is the mechanism used to store and restore the state, or
context, of the CPU in a process control block so that process execution can
be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking
operating system. When the scheduler switches the CPU from executing one
process to executing another, the state of the currently running process is
stored in its process control block. After this, the state of the process to
run next is loaded from its own PCB and used to set the program counter,
registers, and so on. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory
state must be saved and restored. To reduce context-switching time, some
hardware systems employ two or more sets of processor registers. When a
process is switched out, the following information is stored for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
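The save/restore step can be sketched in Python with a dictionary standing in for the CPU's registers and program counter; the field names below are illustrative, not a real PCB layout.

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the CPU state into the old process's PCB, then load the new
    process's saved state -- a sketch with dicts standing in for hardware."""
    # Save phase: copy the current CPU context into the outgoing PCB
    old_pcb["program_counter"] = cpu["program_counter"]
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["state"] = "ready"
    # Restore phase: load the incoming process's saved context into the CPU
    cpu["program_counter"] = new_pcb["program_counter"]
    cpu["registers"] = dict(new_pcb["registers"])
    new_pcb["state"] = "running"
```

After the call, the CPU holds the new process's program counter and registers, while the old process's context sits in its PCB, ready to be resumed later from the same point.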

Process Scheduling Algorithms


A process scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms. There are six popular
process scheduling algorithms which we are going to discuss in this
chapter:
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state it
cannot be preempted until it completes its allotted time, whereas preemptive
scheduling is based on priority: the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready
state.

First Come, First Served (FCFS)


• Jobs are executed on first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance, as average wait time is high.

Wait time of each process is as follows (wait time = service start time -
arrival time):

Process    Wait Time
P0         0 - 0 = 0
P1         5 - 1 = 4
P2         8 - 2 = 6
P3         16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
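The FCFS figures above can be reproduced with a short calculation. The burst times used here (P0=5, P1=3, P2=8, P3=6) are not stated in the text but are implied by the wait-time table.

```python
def fcfs_wait_times(processes):
    """processes: list of (name, arrival, burst), assumed sorted by arrival.
    Returns {name: wait_time}, where wait = service start - arrival."""
    clock, waits = 0, {}
    for name, arrival, burst in processes:
        clock = max(clock, arrival)    # CPU may sit idle until the job arrives
        waits[name] = clock - arrival  # time spent waiting in the ready queue
        clock += burst                 # run to completion (non-preemptive)
    return waits

# Assumed arrival/burst times consistent with the table above
jobs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = fcfs_wait_times(jobs)
average = sum(waits.values()) / len(waits)  # 5.75, matching the table
```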

Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF.
• This is a non-preemptive scheduling algorithm.
• It is the best approach to minimize waiting time.
• It is easy to implement in batch systems where the required CPU time is
known in advance.
• It is impossible to implement in interactive systems where the required
CPU time is not known.
• The processor should know in advance how much time a process will take.

Wait time of each process is as follows (wait time = service start time -
arrival time):

Process    Wait Time
P0         3 - 0 = 3
P1         0 - 0 = 0
P2         16 - 2 = 14
P3         8 - 3 = 5

Average Wait Time: (3 + 0 + 14 + 5) / 4 = 5.50
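A non-preemptive SJN scheduler that respects arrival times can be sketched as below, using the same assumed burst times as in the FCFS example. Note that it yields an average wait of 5.25 rather than the table's 5.50, because the table's (missing) chart lets P1 start at time 0, before it has actually arrived.

```python
def sjn_wait_times(processes):
    """Non-preemptive shortest-job-next; processes: (name, arrival, burst).
    At each decision point, pick the shortest job among those arrived."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                 # CPU idle: jump ahead to next arrival
            clock = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        pending.remove((name, arrival, burst))
        waits[name] = clock - arrival
        clock += burst
    return waits

# Assumed arrival/burst times: P0..P3 arrive at 0..3 with bursts 5, 3, 8, 6
jobs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = sjn_wait_times(jobs)
```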

Priority Based Scheduling


• Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority
is executed first, and so on.
• Processes with the same priority are executed on a first-come,
first-served basis.
• Priority can be decided based on memory requirements, time requirements,
or any other resource requirement.

Wait time of each process is as follows (wait time = service start time -
arrival time):

Process    Wait Time
P0         9 - 0 = 9
P1         6 - 1 = 5
P2         14 - 2 = 12
P3         0 - 0 = 0

Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5
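The priority table above can be reproduced as follows, assuming all processes are already in the ready queue at time 0 (as the table's figures imply) and assuming the priorities implied by the start order (P3 highest, then P1, P0, P2; a lower number means a higher priority). The burst times are the same assumed values as before.

```python
def priority_wait_times(processes):
    """Non-preemptive priority scheduling with all processes assumed
    ready at time 0. processes: (name, arrival, burst, priority),
    where a lower priority number means a higher priority."""
    waits, clock = {}, 0
    for name, arrival, burst, _ in sorted(processes, key=lambda p: p[3]):
        waits[name] = clock - arrival  # wait = service start - arrival
        clock += burst                 # run to completion
    return waits

# Assumed priorities/bursts consistent with the table above
jobs = [("P0", 0, 5, 3), ("P1", 1, 3, 2), ("P2", 2, 8, 4), ("P3", 0, 6, 1)]
waits = priority_wait_times(jobs)
```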

Shortest Remaining Time


• Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
• The processor is allocated to the job closest to completion, but it can be
preempted by a newly ready job with a shorter time to completion.
• It is impossible to implement in interactive systems where the required
CPU time is not known.
• It is often used in batch environments where short jobs need to be given
preference.

Round Robin Scheduling


• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and
another process executes for its time period.
• Context switching is used to save the states of preempted processes.

Wait time of each process is as follows (wait time = total time spent in the
ready queue):

Process    Wait Time
P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
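A round-robin simulation with quantum 3 reproduces the table above, again using the assumed burst times P0=5, P1=3, P2=8, P3=6 and the common convention that a newly arriving process enters the ready queue ahead of a just-preempted one.

```python
from collections import deque

def round_robin_wait_times(processes, quantum):
    """processes: (name, arrival, burst). Returns {name: total wait time}."""
    arrivals = deque(sorted(processes, key=lambda p: p[1]))
    queue, clock = deque(), 0
    remaining = {name: burst for name, _, burst in processes}
    last_ready = {name: arr for name, arr, _ in processes}  # last time it became ready
    waits = {name: 0 for name, _, _ in processes}

    def admit(until):
        # move every process that has arrived by `until` into the ready queue
        while arrivals and arrivals[0][1] <= until:
            queue.append(arrivals.popleft()[0])

    admit(clock)
    while queue or arrivals:
        if not queue:                       # CPU idle: jump to next arrival
            clock = arrivals[0][1]
            admit(clock)
        name = queue.popleft()
        waits[name] += clock - last_ready[name]  # time just spent waiting
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        admit(clock)                 # arrivals during the slice queue up first
        if remaining[name] > 0:      # preempted: back of the ready queue
            last_ready[name] = clock
            queue.append(name)
    return waits

# Assumed arrival/burst times consistent with the table above
jobs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = round_robin_wait_times(jobs, quantum=3)
```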

Multiple-Level Queues Scheduling


Multiple-level queues are not an independent scheduling algorithm. They make
use of other existing algorithms to group and schedule jobs with common
characteristics.
• Multiple queues are maintained for processes with common
characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to the
queue.
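A minimal sketch of a multiple-level queue follows. Here the inter-queue policy is strict priority (system jobs always run before user jobs), which is one of several possible inter-queue policies; the queue and job names are illustrative.

```python
from collections import deque

class MultiLevelQueue:
    """Two fixed-priority queues: 'system' jobs always run before
    'user' jobs; each queue is FIFO internally."""

    def __init__(self):
        self.queues = {"system": deque(), "user": deque()}

    def add(self, job, level):
        """Place a job in the queue matching its characteristics."""
        self.queues[level].append(job)

    def next_job(self):
        """Scan queues from highest to lowest priority, FIFO within each."""
        for level in ("system", "user"):
            if self.queues[level]:
                return self.queues[level].popleft()
        return None  # nothing ready to run
```

Each queue could equally be given its own internal algorithm (round robin for user jobs, say), which is the point of the scheme: grouping jobs with common characteristics and scheduling each group on its own terms.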

Deadlock
Introduction
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not
available at that time, the process enters a wait state. It may happen that
waiting processes will never again change state, because the resources they
have requested are held by other waiting processes. This situation is called
deadlock.

If a process requests an instance of a resource type, the allocation of any
instance of that type will satisfy the request. If it does not, then the
instances are not identical, and the resource type classes have not been
defined properly.
A process must request a resource before using it, and must release the resource
after using it. A process may request as many resources as it requires to carry out
its designated task.
Under the normal mode of operation, a process may utilize a resource in only the
following sequence:
1. Request: If the request cannot be granted immediately, then the requesting
process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource

Deadlock Characterization
In deadlock, processes never finish executing and system resources are tied up,
preventing other jobs from ever starting.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable
mode; that is, only one process at a time can use the resource. If another
process requests that resource, the requesting process must be delayed until
the resource has been released.
2. Hold and wait: There must exist a process that is holding at least one
resource and is waiting to acquire additional resources that are currently being
held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can
be released only voluntarily by the process holding it, after that process
has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes
such that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, …., Pn-1 is waiting for a resource that is held by Pn,
and Pn is waiting for a resource that is held by P0.

Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called
a system resource-allocation graph. The set of vertices V is partitioned into
two different types of nodes: P = {P1, P2, …, Pn}, the set consisting of all
the active processes in the system; and R = {R1, R2, …, Rm}, the set
consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj, denoted Pi → Rj,
signifies that process Pi has requested an instance of resource type Rj and
is currently waiting for that resource. A directed edge from resource type Rj
to process Pi, denoted Rj → Pi, signifies that an instance of resource type
Rj has been allocated to process Pi. A directed edge Pi → Rj is called a
request edge; a directed edge Rj → Pi is called an assignment edge.
When process Pi requests an instance of resource type Rj, a request edge is
inserted in the resource-allocation graph. When this request can be
fulfilled, the request edge is instantaneously transformed into an assignment
edge. When the process no longer needs access to the resource, it releases
the resource, and as a result the assignment edge is deleted.

Given the definition of a resource-allocation graph, it can be shown that if
the graph contains no cycles, then no process in the system is deadlocked.
If, on the other hand, the graph contains a cycle, then a deadlock may exist.
If the cycle involves only a set of resource types, each of which has only a
single instance, then a deadlock has occurred: each process involved in the
cycle is deadlocked. In this case, a cycle in the graph is both a necessary
and a sufficient condition for the existence of deadlock. If each resource
type has several instances, however, a cycle does not necessarily imply a
deadlock.
A resource-allocation graph consists of a set of vertices V and a set of
edges E. V is partitioned into two types:
o P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge: a directed edge Pi → Rj
Assignment edge: a directed edge Rj → Pi

Fig. Resource Allocation Graph


If each resource type has several instances, then a cycle does not
necessarily imply that a deadlock has occurred. In this case, a cycle in the
graph is a necessary but not a sufficient condition for the existence of
deadlock.
Suppose that process P3 requests an instance of resource type R2. Since no
resource instance is currently available, a request edge P3 → R2 is added to
the graph. At this point, two minimal cycles exist in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Fig. Resource Allocation Graph with Deadlock


Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the
resource R3, which is held by process P3. Process P3, in turn, is waiting for
either process P1 or process P2 to release resource R2. In addition, process
P1 is waiting for process P2 to release resource R1.

Methods for Handling Deadlocks


There are three different methods for dealing with the deadlock problem:
• We can use a protocol to ensure that the system will never enter a
deadlock state.
• We can allow the system to enter a deadlock state and then recover.
• We can ignore the problem altogether and pretend that deadlocks never
occur in the system. This solution is the one used by most operating systems,
including UNIX.
Deadlock avoidance, on the other hand, requires that the operating system be
given in advance additional information concerning which resources a process
will request and use during its lifetime. With this additional knowledge, we
can decide for each request whether or not the process should wait. Each
request requires that the system consider the resources currently available,
the resources currently allocated to each process, and the future requests
and releases of each process, to decide whether the current request can be
satisfied or must be delayed. If a system employs neither a
deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock
situation may occur. If a system does not ensure that a deadlock will never
occur, and also does not provide a mechanism for deadlock detection and
recovery, then we may arrive at a situation where the system is in a deadlock
state yet has no way of recognizing what has happened.
1. Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By
ensuring that at least one of these conditions cannot hold, we can prevent
the occurrence of a deadlock.

a) Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For
example, a printer cannot be simultaneously shared by several processes.
Sharable resources, on the other hand, do not require mutually exclusive access,
and thus cannot be involved in a deadlock.
b) Hold and Wait
1. One protocol that can be used requires each process to request and be
allocated all its resources before it begins execution, so that whenever a
process requests a resource, it does not hold any other resources.
2. An alternative protocol allows a process to request resources only when
the process has none. A process may request some resources and use them;
before it can request any additional resources, however, it must release all
the resources that it is currently allocated.
There are two main disadvantages to these protocols. First, resource
utilization may be low, since many of the resources may be allocated but
unused for a long period. In the example given, for instance, we can release
the tape drive and disk file, and then again request the disk file and
printer, only if we can be sure that our data will remain on the disk file.
If we cannot be assured that they will, then we must request all resources at
the beginning for both protocols.

Second, starvation is possible: a process that needs several popular
resources may have to wait indefinitely, because at least one of the
resources it needs is always allocated to some other process.

c)No Preemption
If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources it is currently
holding are preempted; that is, these resources are implicitly released. The
preempted resources are added to the list of resources for which the process
is waiting. The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.

d) Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a
total ordering of all resource types, and to require that each process
requests resources in an increasing order of enumeration.

Let R = {R1, R2, ..., Rn} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two
resources and to determine whether one precedes another in our ordering.
Formally, we define a one-to-one function F: R → N, where N is the set of
natural numbers.
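The resource-ordering rule can be sketched with Python locks. The resource names and the ordering function F below are illustrative: any fixed assignment of unique numbers works, as long as every process acquires in increasing F-order.

```python
import threading

# Assumed ordering F: each resource gets a unique number, and every
# process must acquire resources in increasing order of F.
tape = threading.Lock()
disk = threading.Lock()
printer = threading.Lock()
ordering = {id(tape): 1, id(disk): 2, id(printer): 3}  # the function F

def acquire_in_order(*locks):
    """Acquire the given locks in increasing F-order, so no cycle of
    waiting processes can ever form."""
    for lock in sorted(locks, key=lambda l: ordering[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()
```

Two processes that each need the tape drive and the disk will both take the tape first, so neither can hold the disk while waiting for the tape, and the circular-wait condition cannot arise.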

2. Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by restraining how requests
can be made. The restraints ensure that at least one of the necessary
conditions for deadlock cannot occur and, hence, that deadlocks cannot hold.
Possible side effects of preventing deadlocks by this method, however, are
low device utilization and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional information
about how resources are to be requested. For example, in a system with one tape
drive and one printer, we might be told that process P will request first the tape
drive, and later the printer, before releasing both resources. Process Q on the
other hand, will request first the printer, and then the tape drive.
With this knowledge of the complete sequence of requests and releases for each
process we can decide for each request whether or not the process should wait.
A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition. The
resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.

a. Safe State
A state is safe if the system can allocate resources to each process (up to
its maximum) in some order and still avoid a deadlock. More formally, a
system is in a safe state only if there exists a safe sequence. A sequence of
processes <P1, P2, ..., Pn> is a safe sequence for the current allocation
state if, for each Pi, the resources that Pi can still request can be
satisfied by the currently available resources plus the resources held by all
the Pj, with j < i. In this situation, if the resources that process Pi needs
are not immediately available, then Pi can wait until all Pj have finished.
When they have finished, Pi can obtain all of its needed resources, complete
its designated task, return its allocated resources, and terminate. When Pi
terminates, Pi+1 can obtain its needed resources, and so on.

Fig. Safe, Unsafe & Deadlock State
If no such sequence exists, then the system state is said to be unsafe.

b. Resource-Allocation Graph Algorithm


Suppose that process Pi requests resource Rj. The request can be granted only
if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not
result in the formation of a cycle in the resource-allocation graph.

c. Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a
resource-allocation system with multiple instances of each resource type. The
deadlock-avoidance algorithm that we describe next is applicable to such a
system, but is less efficient than the resource-allocation graph scheme. This
algorithm is commonly known as the banker's algorithm.
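The heart of the banker's algorithm is its safety check: try to find an order in which every process can finish with the resources currently available plus those released by earlier finishers. The 5-process, 3-resource state below is illustrative sample data, not from the text.

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: return a safe sequence of process indices,
    or None if the state is unsafe."""
    n = len(allocation)
    work = list(available)
    # Need[i] = Max[i] - Allocation[i]: what Pi may still request
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            # Pi can run to completion if its remaining need fits in work
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi releases all
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None  # no runnable process remains: the state is unsafe
    return sequence

# Illustrative state (assumed data): 5 processes, 3 resource types
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe_seq = is_safe([3, 3, 2], allocation, maximum)
```

A request is granted only if pretending to grant it still leaves the system in a state for which this check finds a safe sequence.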

3. Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-avoidance
algorithm, then a deadlock situation may occur. In this environment, the
system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred.
• An algorithm to recover from the deadlock.
a) Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock
detection algorithm that uses a variant of the resource-allocation graph, called a
wait-for graph. We obtain this graph from the resource-allocation graph by
removing the nodes of type resource and collapsing the appropriate edges.
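For the single-instance case, deadlock detection reduces to finding a cycle in the wait-for graph, which a depth-first search does directly. The dictionary encoding of the graph below is an assumption of this sketch.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting on]}, via depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GREY                        # p is on the current DFS path
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:    # back edge: a cycle exists
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK                       # fully explored, no cycle here
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)
```

For example, P1 waiting on P2, P2 on P3, and P3 on P1 forms a cycle and is reported as a deadlock; dropping the P3 → P1 edge makes the graph acyclic.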
b) Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.

The algorithm uses the following data structures:
• Available: A vector of length m indicates the number of available resources
of each type.
• Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If
Request[i, j] = k, then process Pi is requesting k more instances of resource
type Rj.

c) Detection-Algorithm Usage
If deadlocks occur frequently, then the detection algorithm should be invoked
frequently.
Resources allocated to deadlocked processes will be idle until the deadlock can
be broken.

4. Recovery from Deadlock


When a detection algorithm determines that a deadlock exists, several
alternatives exist. One possibility is to inform the operator that a deadlock
has occurred, and to let the operator deal with the deadlock manually. The
other possibility is to let the system recover from the deadlock
automatically. There are two options for breaking a deadlock. One solution is
simply to abort one or more processes to break the circular wait. The second
option is to preempt some resources from one or more of the deadlocked
processes.
a) Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In
both methods, the system reclaims all resources allocated to the terminated
processes.
• Abort all deadlocked processes: This method clearly will break the
deadlock cycle, but at great expense, since these processes may have computed
for a long time, and the results of these partial computations must be
discarded and probably recomputed.
• Abort one process at a time until the deadlock cycle is eliminated:
This method incurs considerable overhead, since after each process is aborted a
deadlock-detection algorithm must be invoked to determine whether processes
are still deadlocked.
b) Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes
until the deadlock cycle is broken.
Three issues must be considered to recover from deadlock by resource
preemption:
1. Selecting a victim
2. Rollback
3. Starvation

Summary
A deadlocked state occurs when two or more processes are waiting indefinitely
for an event that can be caused only by one of the waiting processes. There
are three principal methods for dealing with deadlocks:
• Use some protocol to prevent or avoid deadlocks, ensuring that the system
will never enter a deadlocked state.
• Allow the system to enter a deadlocked state, detect it, and then recover.
• Ignore the problem altogether and pretend that deadlocks never occur in
the system.

Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold. Deadlock avoidance requires additional
information about how resources are to be requested; a deadlock-avoidance
algorithm dynamically examines the resource-allocation state to ensure that a
circular-wait condition can never exist. Deadlock occurs only when some
process makes a request that cannot be granted immediately.

Description of Error Diagnosis


A diagnostic error can be defined as a diagnosis that is missed, wrong, or
delayed, as detected by some subsequent definitive test or finding. In
computing, an operating system failure results in error messages or sounds
that depend on the type of failure.

Diagnosis system error logs/beeps and other critical errors can occur when
your Windows operating system becomes corrupted. Opening programs will be
slower, and response times will lag. When you have multiple applications
running, you may experience crashes and freezes. There can be numerous causes
of this error, including excessive startup entries, registry errors,
hardware/RAM decline, fragmented files, and unnecessary or redundant program
installations.

Diagnosis System Error Logs/beeps repair tool


There are many reasons for diagnosis system error logs/beeps, including
malware, spyware, or programs not installing properly. You can have all kinds
of system conflicts, registry errors, and ActiveX errors. Reimage specializes
in Windows repair. It scans, diagnoses, and then repairs your damaged PC with
technology that not only fixes your Windows operating system, but also
reverses the damage already done, using a full database of replacement files.

A scan (approx. 5 minutes) of your PC's Windows operating system detects
problems divided into 3 categories: hardware, security, and stability. At the
end of the scan, you can review your PC's hardware, security, and stability
in comparison with a worldwide average, and review a summary of the problems
detected during the scan.
Windows Errors
A Windows error is an error that happens when an unexpected condition occurs
or when a desired operation has failed. When you have an error in Windows, it
may be critical and cause your programs to freeze and crash or it may be
seemingly harmless yet annoying.

Blue Screen of Death


A stop error screen or bug check screen, commonly called a blue screen of
death (also known as a BSoD or bluescreen), is caused by a fatal system
error. It is the error screen displayed by the Microsoft Windows family of
operating systems upon encountering a critical error of a non-recoverable
nature that causes the system to "crash".

Damaged DLLs
One of the biggest causes of DLLs becoming corrupt or damaged is the practice
of constantly installing and uninstalling programs. This often means that
DLLs will get overwritten by newer versions when a new program is installed,
for example. This causes problems for those applications and programs that
still need the old version to operate; the program begins to malfunction and
crash.

Freezing Computer
Computer hanging or freezing occurs when either a program or the whole system
ceases to respond to inputs. In the most commonly encountered scenario, a
program freezes and all windows belonging to the frozen program become static.
Almost always, the only way to recover from a system freeze is to reboot the
machine, usually by power cycling with an on/off or reset button.

Virus Damage
Once your computer has been infected with a virus, it's no longer the same.
After removing it with your anti-virus software, you're often left with
lingering side effects. Technically, your computer might no longer be
infected, but that doesn't mean it's error-free. Even simply removing a virus
can harm your system.

Operating System Recovery


Reimage repairs and replaces all critical Windows system files needed to run
and restart correctly, without harming your user data. Reimage also restores
compromised system settings and registry values to their default Microsoft
settings. You may always return your system to its pre-repair condition.

Reimage's patented technology is the only PC repair program of its kind that
actually reverses the damage done to your operating system. The online
database comprises over 25,000,000 updated essential components that will
replace any damaged or missing file on a Windows operating system with a
healthy version of the file, so that your PC's performance, stability, and
security will be restored and even improved. The repair will deactivate and
then quarantine all malware found, then remove virus damage. All system
files, DLLs, and registry keys that have been corrupted or damaged will be
replaced with new healthy files from a continuously updated online database.

