
OS-Unit.1

An Operating System (OS) serves as an interface between users and computer hardware, evolving through several generations from simple task execution to multitasking and personal computing. Various types of operating systems include batch, multi-programming, and distributed systems, each with specific functionalities like process and memory management. The document also discusses system calls, OS structures, and the importance of resource management and security in operating systems.


Unit 1 - Chapter 1

Introduction to Operating System

OS Definition:

An Operating System (OS) is an interface between a computer user and computer hardware.

Examples of OS:
• Microsoft Windows
• Mac OS
• Android OS
• Linux
• Ubuntu

Operating System goals:
1. Execute user programs and make solving user problems easier.
2. Make the computer system convenient to use.
3. Use the computer hardware in an efficient manner.

History of Operating System

First Generation
 Time Period: 1940-1956
 Size and Cost: Early computers were huge, and they came with a big price tag too.
 Basic Functions: Despite their size and cost, they could only perform simple tasks.
 No Operating System: They simply executed tasks one by one.
 Serial Processing: One task had to finish before the next could start.
 No multitasking here.

Second Generation (Batch System)
• Time Period: 1956-1963
• GMOS, the first operating system, was developed in the early 1950s.
• The system gathers all related jobs into groups, or batches.
• Jobs are submitted to the operating system on punch cards and run to completion.
• Computers moved from binary machine language to symbolic/assembly language.

Third Generation (Multi-Programmed Batch System)
• Time Period: 1964-1971
• Instead of punch cards, users interact with third-generation computers through keyboards and monitors.
• Multitasking OS introduced.
• Multi-programming OS.
• Focus on time-sharing OS.

Fourth Generation (1980 – Present Day)
• Introduced the personal computer.
• Today, Windows has become the most popular and most commonly used operating system technology.
• Windows released various operating systems such as Windows 95, Windows 98, Windows XP and Windows 7.
• Currently, most Windows users use the Windows 10 operating system.

Computer System Organization
The computer system is a combination of many parts such as peripheral devices, secondary memory, CPU, etc.
• One or more CPUs and device controllers connect through a common bus providing access to shared memory.
• Concurrent execution of the CPU and devices competing for memory cycles.
• I/O devices and the CPU can execute concurrently; each device controller is in charge of a particular device type.
• Each device controller has a local buffer.
• The CPU moves data from/to main memory to/from the local buffers.
• I/O is from the device to the local buffer of the controller.
• The device controller informs the CPU that it has finished its operation by causing an interrupt.
• The interrupt architecture must save the address of the interrupted instruction.

• Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt
• The operating system preserves the state of the CPU by storing registers and the program counter.
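
The interrupt sequence above (device fills its local buffer, raises an interrupt, the CPU saves the interrupted program counter and registers before servicing it) can be sketched as a toy simulation. The classes and names here are illustrative, not a real kernel:

```python
# Illustrative sketch (not a real kernel): a device controller fills its local
# buffer, then "interrupts" the CPU; the handler saves the interrupted
# program counter and registers before servicing the device.

class CPU:
    def __init__(self):
        self.pc = 0                # program counter
        self.registers = {"r0": 0}
        self.saved_state = None    # state preserved across an interrupt

    def interrupt(self, controller):
        # The interrupt architecture saves the address of the interrupted instruction.
        self.saved_state = (self.pc, dict(self.registers))
        data = controller.local_buffer    # CPU moves data from the local buffer
        controller.local_buffer = None
        # Restore the interrupted computation afterwards.
        self.pc, self.registers = self.saved_state
        return data

class DeviceController:
    def __init__(self, device_type):
        self.device_type = device_type
        self.local_buffer = None   # each controller has a local buffer

    def finish_io(self, data, cpu):
        self.local_buffer = data    # I/O goes from the device to the local buffer
        return cpu.interrupt(self)  # controller informs the CPU via an interrupt

cpu = CPU()
cpu.pc = 42
disk = DeviceController("disk")
block = disk.finish_io(b"sector-0", cpu)
print(block, cpu.pc)   # the CPU resumes at the saved instruction address
```

Note how the saved program counter survives the interrupt: after servicing the device, execution resumes exactly where it left off.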

Types of Operating System

1. Batch operating system
2. Multi-programming operating system
3. Multiprocessing operating system
4. Multi-tasking operating system
5. Distributed operating system

Batch operating system
• In this type of system, there is no direct interaction between the user and the computer.
• The user has to submit a job (written on cards or tape) to a computer operator.
• The computer operator then places a batch of several jobs on an input device.
• To speed up processing, jobs with similar needs are batched together and run as a group.
• Jobs are batched together by type of language and requirement.

Multi-programming operating system
• The OS picks and begins to execute one job from memory. Once the job needs an I/O operation, the OS switches to another job (the CPU is always busy).
• The number of jobs in memory is less than the number of jobs on disk.
• Job scheduling - if several jobs are ready to be brought into memory and there is not enough room for all of them, then the system must choose among them.
• CPU scheduling - if several jobs are ready to run at the same time, the system must choose among them.
• In a multiprogramming system, the CPU is never idle and keeps on processing.
• In a non-multiprogrammed system, the CPU sits idle.

Multiprocessing operating system
• In a uni-processor system, only one process executes at a time.
• Multiprocessing is the use of two or more CPUs (processors) within a single computer system.
• The term also refers to the ability of a system to support more than one processor within a single computer system.
• Say processes P1, P2, P3 and P4 are waiting for execution. In a single-processor system, first one process will execute, then the next, and so on.

Multi-tasking operating system
• Multi-tasking operating systems allow multiple users to perform multiple tasks at the same time.
• The allocation of system resources such as input/output devices, CPU and memory among processes can be easily managed by a multi-tasking operating system.
• There are various states through which a process passes to complete particular or multiple executions.
• Multitasking is the ability of an OS to execute more than one task simultaneously on a CPU.

Distributed operating system
• It distributes the computation among several physical processors.
• Distributed systems are also called loosely coupled systems.
• Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.
• Data processing jobs are distributed among the processors accordingly.
• The processors communicate with one another through various communication lines.

Functions of an Operating System

1. Memory Management
2. Device Management
3. Process Management
4. Storage Management
5. Resource Management

Memory Management
• In a computer system, we have primary memory, also called main memory, which is made up of a sequence of bytes.
• Each byte in main memory has a unique address, which helps the OS check whether it is occupied or not.
• To execute, every program must be in main memory.
• The operating system can easily check which bytes of main memory are empty and which are not.
• It allocates main memory for program execution, and when the program completes or terminates, it deallocates the memory.
• The operating system also keeps a record of which bytes of memory are assigned to which program.

Device Management
• The operating system manages the communication between the computer system and the peripheral devices connected to the I/O ports of the system.
• Peripheral devices use their respective drivers to communicate with the system.
• The operating system determines which program or process accesses which connection and device.
• It also makes sure that when a program is executed or terminated, the communication between the device and the computer system is stopped.

Process Management
• A process is a program under execution.
• The operating system manages all the processes so that each process gets the CPU for a specific time to execute itself, and there is less waiting time for each process.
• This management is also called process scheduling.

Storage Management
• Storage management is a procedure that allows users to maximize the utilization of storage devices while also protecting data.
• The operating system is in charge of storing and accessing files.
• Storage management includes the creation of files, the creation of directories, the reading and writing of data from files and directories, and the copying of the contents of files and directories from one location to another.

Resource Management
• Resource management in an operating system is the process of managing all resources efficiently, such as the CPU, memory, input/output devices, and other hardware resources.
• Resource management is important because the resources of a computer are limited, and multiple processes or users may require access to the same resources, like the CPU or memory, at the same time.
• The operating system has to manage and ensure that all processes get the resources they need to execute, without any problems.

Security and Protection
• The operating system uses password protection and similar techniques to protect user data.
• It also prevents unauthorized access to programs and user data. The operating system provides various techniques which assure the integrity and confidentiality of user data.
• The following security measures are used to protect user data:
  • Protection against unauthorized access through login.
  • Protection against intrusion by keeping the firewall active.
  • Protecting the system memory against malicious access.
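
The memory-management bookkeeping described earlier in this section (tracking which bytes are free, allocating them to a program, and reclaiming them on termination) can be sketched as a toy model; the class and names are illustrative:

```python
# Toy sketch of main-memory bookkeeping: the OS records which bytes are free
# and which program owns each allocated byte (names here are illustrative).

class MainMemory:
    def __init__(self, size):
        self.owner = [None] * size          # None means the byte is free

    def allocate(self, program, nbytes):
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < nbytes:
            raise MemoryError("not enough free bytes")
        for i in free[:nbytes]:
            self.owner[i] = program         # record which byte belongs to whom
        return free[:nbytes]

    def deallocate(self, program):
        # On completion or termination the OS reclaims the program's bytes.
        for i, o in enumerate(self.owner):
            if o == program:
                self.owner[i] = None

mem = MainMemory(8)
addrs = mem.allocate("prog_a", 3)
print(addrs)                        # [0, 1, 2]
mem.deallocate("prog_a")
print(mem.owner.count(None))        # 8
```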

System Calls:

“System calls provide the interface between a process & the OS.” They are usually available in the form of assembly-language instructions.

System calls can be roughly grouped into 5 major categories.
1. Process Control
2. File Manipulation
3. Device Management
4. Information Maintenance
5. Communication

1. Process Control:
• End, abort: A running program needs to be able to halt its execution either normally (end) or abnormally (abort).
• Load, execute: A process or job executing one program may want to load and execute another program.
• Create process, terminate process: There is a system call specifically for the purpose of creating a new process. We may also want to terminate a job or process that we created.
• Get process attributes, set process attributes: If we create a new job or process, we should be able to control its execution. This control requires the ability to determine & reset the attributes of a job.
• Wait time: After creating new jobs or processes, we may need to wait for them to finish their execution (wait time).
• Wait event, signal event: We may wait for a specific event to occur (wait event). The jobs or processes then signal when that event has occurred (signal event).

2. File Manipulation:
• Create file, delete file: We first need to be able to create & delete files. Both system calls require the name of the file & some of its attributes.
• Open file, close file: Once the file is created, we need to open it & use it. We close the file when we are no longer using it.
• Read, write, reposition file: After opening, we may also read, write or reposition the file (rewind or skip to the end of the file).
• Get file attributes, set file attributes: For either files or directories, we need to be able to determine the values of various attributes & reset them if necessary.

3. Device Management:
• Request device, release device: If there are multiple users of the system, we first request the device. After we are finished with the device, we must release it.
• Read, write, reposition: Once the device has been requested & allocated to us, we can read, write & reposition the device.

4. Information Maintenance:
• Get time or date, set time or date: Most systems have a system call to return the current date & time or set the current date & time.
• Get system data, set system data: Other system calls may return information about the system, such as the number of current users, the version number of the OS, the amount of free memory, etc.
• Get process attributes, set process attributes: The OS keeps information about all its processes & there are system calls to access this information.

5. Communication:
There are two models of communication:
• Message-passing model: Information is exchanged through an interprocess communication facility provided by the operating system. The source of the communication, known as the client, and the receiver, known as the server, exchange messages using read-message & write-message system calls.
• Shared-memory model: Processes use map-memory system calls to gain access to regions of memory owned by other processes. They exchange information by reading & writing data in the shared areas.

Operating System Structures
• Operating systems have complex structures.
• We want a structure that is easy to understand so that we can adapt an operating system to meet our specific needs.
• Similar to how we break down larger problems into smaller, more manageable subproblems, building an operating system in pieces is simpler.
• Every segment is a well-defined component of the operating system.
• The strategy for integrating the different operating-system components within the kernel can be thought of as an operating system structure.

1. Simple structure
2. Monolithic structure

3. Layered structure
4. Micro-kernel structure
5. Modular structure
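
The file-manipulation and information-maintenance categories described earlier map closely onto the services a POSIX system exposes. A minimal sketch using Python's os module wrappers, which invoke the underlying system calls (assumes a POSIX-like system):

```python
import os, tempfile, time

# File manipulation: create, write, reposition, read, get attributes, delete.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR)   # open/create file
os.write(fd, b"hello syscalls")              # write
os.lseek(fd, 0, os.SEEK_SET)                 # reposition (rewind)
data = os.read(fd, 5)                        # read
os.close(fd)                                 # close file

attrs = os.stat(path)                        # get file attributes
print(data, attrs.st_size)                   # b'hello' 14

# Information maintenance: get the current time.
print(time.time() > 0)

os.unlink(path)                              # delete file
```

Each call here is a thin wrapper over a kernel service; the process traps into the kernel, the kernel performs the operation, and control returns to the process.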

Simple structure
• Such operating systems do not have well-defined structures and are small, simple, and limited. The interfaces and levels of functionality are not well separated.
• MS-DOS is an example of such an operating system.
• In MS-DOS, application programs are able to access the basic I/O routines.
• These types of operating systems cause the entire system to crash if one of the user programs fails.
Advantages:
• It delivers better application performance because of the few interfaces between the application program and the hardware.
• It is easy for kernel developers to develop such an operating system.
Disadvantages:
• The structure is very complicated, as no clear boundaries exist between modules.
• It does not enforce data hiding in the operating system.

Monolithic structure
• The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and operational operations.
• The core of a computer's operating system is called the kernel.
• The kernel provides all other system components with fundamental services.
• The operating system and the hardware use it as their main interface.
• Because the operating system is built as a single piece, the kernel can directly access all of its resources, such as a keyboard or mouse.
Advantages:
• Because layering is unnecessary and the kernel alone is responsible for managing all operations, it is easy to design and execute.
• Because functions like memory management, file management, process scheduling, etc., are implemented in the same address space, the monolithic kernel runs rather quickly.
• Utilizing the same address space speeds up and reduces the time required for address allocation for new processes.
Disadvantages:
• The monolithic kernel's services are interconnected in address space and have an impact on one another, so if any of them malfunctions, the entire system does as well.
• It is not adaptable; therefore, launching a new service is difficult.

Layered structure
• An OS can be broken into pieces and retain much more control over the system.
• In this structure, the OS is broken into a number of layers (levels).
• The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface.
• These layers are designed so that each layer uses only the functions of the lower-level layers.
• This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, the error must be in the current layer, since the lower-level layers have already been verified.
Advantages:
• Layering makes it easier to enhance the operating system, as the implementation of a layer can be changed easily without affecting the other layers.
• It is very easy to perform debugging and system verification.
Disadvantages:
• In this structure, the application's performance is degraded as compared to the simple structure.
• It requires careful planning when designing the layers, as a higher layer may use the functionalities of only the lower layers.

Micro-kernel structure
• This structure designs the operating system by removing all non-essential components from the kernel.
• This results in a smaller kernel called the micro-kernel.
• An advantage of this structure is that all new services are added to user space and do not require the kernel to be modified.
• Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched.
Advantages:
• It makes the operating system portable to various platforms.
• As microkernels are small, they can be tested effectively.
Disadvantages:
• The increased level of inter-module communication degrades system performance.

Modular structure
• The kernel has only a set of core components; other services are added as dynamically loadable modules, either at boot time or at runtime.
• It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than a layered structure, as a module can call any other module.
Advantages:
• Efficient
• Easy to maintain
• Easy to debug and update
Disadvantages:
• It may lose stability
• Less security
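
The idea of dynamically loadable modules can be illustrated in user space with Python's importlib; the module name and the "service" it provides are invented for the example:

```python
# User-space analogy of a loadable module: the "kernel" starts with core
# services only and loads extra services dynamically at runtime.
import importlib.util, os, tempfile

# Generate a module file on the fly (stands in for a compiled kernel module).
module_source = "def handle(request):\n    return 'scheduled:' + request\n"
path = os.path.join(tempfile.mkdtemp(), "sched_module.py")
with open(path, "w") as f:
    f.write(module_source)

class Kernel:
    def __init__(self):
        self.services = {}          # core starts empty; modules add services

    def load_module(self, name, path):
        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        self.services[name] = mod.handle   # module exposes a defined interface

kernel = Kernel()
kernel.load_module("sched", path)          # loaded at "runtime"
print(kernel.services["sched"]("job1"))    # scheduled:job1
```

As in a modular kernel, the core only knows each module through its registered interface, yet any loaded module could call into any other.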


UNIT 1 – Chapter 2

Process Management

Process
“A process is basically a program in execution.” The execution of a process must progress in a sequential fashion.

A process defines the fundamental unit of computation for the computer. Components:
• Object: the program code to be executed.
• Data: the data being used in the execution of the program.
• Resources: the resources the program may require while executing.
• Status of the process execution: used to verify the status of program execution.

Process State
Process state is defined as the current activity of the process. As a process executes, it changes state. In general, a process can be in one of the following five states at a time.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running if it is interrupted by the scheduler to assign the CPU to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

4 Waiting
A process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
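
The five states and their legal transitions can be sketched as a small state machine (a toy model, not an OS implementation; the transition set follows the descriptions above):

```python
# The five process states and the legal transitions between them.
ALLOWED = {
    "start": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},  # preempt, block, or exit
    "waiting": {"ready"},                           # resource became available
    "terminated": set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "start"       # initial state on creation

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
# A typical life cycle: admitted, dispatched, blocks on I/O, resumes, exits.
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move_to(s)
print(p.state)    # terminated
```

Note that a process in Start cannot jump straight to Running: it must first be admitted to the ready queue, which is exactly what the long-term scheduler does.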

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates. A PCB keeps all the information needed to keep track of a process, as listed below in the table.

S.N. Information & Description

1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

4 Pointer
A pointer to the parent process.

5 Program Counter
The program counter is a pointer to the address of the next instruction to be executed for this process.

6 CPU registers
The various CPU registers where process data needs to be stored for execution in the running state.

7 CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

8 Memory management information
This includes the information of the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.

9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

Process Scheduling
Definition
“Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process.”

Process Scheduling Queues
• The OS maintains all PCBs in process scheduling queues.
• The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue.
• When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.
• The operating system maintains the following important process scheduling queues:
  • Job queue − This queue keeps all the processes in the system.
  • Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
  • Device queues − The processes which are blocked due to the unavailability of an I/O device constitute this queue.

The OS scheduler determines how to move processes between the ready and run queues, which can have only one entry per processor core on the system; in the above diagram, it has been merged with the CPU.

Two-State Process Model
The two-state process model refers to the running and non-running states, which are described below.

1) Running
When a new process is created, it enters the system in the running state.

2) Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, that process is transferred to the waiting queue; if the process has completed or aborted, it is discarded.

Schedulers
“The main task of schedulers is to select the jobs to be submitted into the system and to decide which process to run.”
Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

Long Term Scheduler
• It is also called a job scheduler.
• It determines which programs are admitted to the system for processing.
• It selects processes from the queue and loads them into memory for execution.
• Processes are loaded into memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.
• It also controls the degree of multiprogramming.
• On some systems, the long-term scheduler may not be available or may be minimal.
• Time-sharing operating systems have no long-term scheduler.
• When a process changes state from new to ready, the long-term scheduler is used.

Short Term Scheduler
• It is also called the CPU scheduler.
• Its main objective is to increase system performance.
• It performs the change of a process from the ready state to the running state.
• The CPU scheduler selects a process among the processes that are ready to execute and allocates the CPU to it.
• Short-term schedulers make the decision of which process to execute next.
• Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler
• Medium-term scheduling is a part of swapping.
• It removes processes from memory.
• It reduces the degree of multiprogramming.
• The medium-term scheduler is in charge of handling the swapped-out processes.
• A running process may become suspended if it makes an I/O request.
• A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and execution can be continued.
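
The interplay between the long-term and short-term schedulers described above can be sketched as a toy simulation; the process names and the memory limit are illustrative:

```python
from collections import deque

# Toy two-level scheduler: the long-term scheduler admits jobs from the job
# pool into memory (the ready queue); the short-term scheduler picks the next
# ready process and gives it the CPU.

job_pool = deque(["P1", "P2", "P3", "P4"])   # all processes in the system
ready_queue = deque()                        # processes resident in memory
MAX_IN_MEMORY = 2                            # degree of multiprogramming

def long_term_schedule():
    # Admit jobs into memory until the multiprogramming limit is reached.
    while job_pool and len(ready_queue) < MAX_IN_MEMORY:
        ready_queue.append(job_pool.popleft())

def short_term_schedule():
    # Dispatch the next ready process to the CPU (FIFO here).
    return ready_queue.popleft() if ready_queue else None

executed = []
while job_pool or ready_queue:
    long_term_schedule()
    running = short_term_schedule()
    executed.append(running)       # process runs to completion in this toy
print(executed)                    # ['P1', 'P2', 'P3', 'P4']
```

The long-term scheduler runs rarely (once per admitted job) while the short-term scheduler runs on every dispatch, which is why the latter must be the fastest of the three.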

Context Switch
“A context switch is the mechanism to store and restore the state or context of a CPU.”
Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce the amount of context-switching time, some hardware systems employ two or more sets of processor registers.
When the process is switched, the following information is stored for later use:
• Program counter
• Scheduling information
• Base and limit register values
• Currently used registers
• Changed state
• I/O state information
• Accounting information

Operations on Processes
Several operations are possible on a process. Processes must be created and deleted dynamically, and the operating system must provide the environment for these process operations. We discuss the two main operations on processes:
1. Create a process
2. Terminate a process

1. Create a process
• A process may create several new sub-processes.
• The syntax for creating a new process is: CREATE (process_id, attributes)
• Two names are used: the parent process and the child process.
• The parent process is the creating process.
• The child process is created by the parent process; a child process may itself create another sub-process, so the processes form a tree.
• When the operating system issues a CREATE system call, it obtains a new process control block from the pool of free memory, fills the fields with the provided and default parameters, and inserts the PCB into the ready list.
• A process will need certain resources, such as CPU time, memory, files, and I/O devices, to complete its operation. When a process creates a sub-process, that sub-process may obtain its resources directly from the operating system; otherwise it uses the resources of the parent process.
• When a process creates a new process, two possibilities exist in terms of execution:
  1. The parent continues to execute concurrently with its children.
  2. The parent waits until some or all of its children have terminated.
• For the address space, two possibilities occur:
  • The child process is a duplicate of the parent process.
  • The child process has a program loaded into it.

2. Terminate a Process
• The DELETE system call is used for terminating a process.
• A process may delete itself or be deleted by another process.
• The operating system reacts by reclaiming all resources allocated to the specified process and closing files opened by or for the process.
• The following are reasons for a parent process to terminate a child process:
  1. The task given to the child is no longer required.
  2. The child has exceeded its usage of some of the resources that it has been allocated.

  3. The operating system does not allow a child to continue if its parent terminates.

Co-operating Processes
“A co-operating process is a process that can affect or be affected by the other processes while executing. If a process is sharing data with other processes, it is called a co-operating process.”
The benefits of co-operating processes are:
1. Sharing of information
2. Increased computation speed
3. Modularity
4. Convenience

• Co-operating processes share information, such as a file, memory, etc. The system must provide an environment to allow concurrent access to these types of resources.
• Computation speed will increase if the computer has multiple processing elements connected together.
• The system is constructed in a modular fashion; system functions are divided into a number of modules.
• The behaviour of co-operating processes is nondeterministic, i.e., it depends on the relative execution sequence and cannot be predicted a priori.
• Co-operating processes are also not necessarily reproducible.
• For example, suppose Process 1 writes "abc" and Process 2 writes "CBA". Different runs can produce different interleaved outputs, such as CBAabc, abCBAc, or abcCBA, and we cannot tell which character came from which process. The subtle state sharing occurs here via the terminal. Not just anything can happen, though: an output such as "aAbBcC" followed entirely by one process's characters out of order cannot occur, because each process's own characters must appear in the order it wrote them.

Interprocess Communication (IPC)
• Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
• A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
• A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with another process is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel. These tasks require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.

There are two fundamental models of interprocess communication:
(1) shared memory
(2) message passing
In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

• Both of the models just discussed are common in operating systems, and many systems implement both.
• Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided.
OS - UNIT 1 Unit-01

• Message passing is also easier to implement than shared memory for intercomputer communication. UNIT 1 – Chapter 3
• Shared d memory allows maximum speed convenience communication, as it can be done at memory
CPU Scheduling
speeds when within a computer.
• Shared memory is faster than message passing, as message-passing systems are typically implemented
using system calls and thus require the more time-consuming task of kernel intervention. Basic concept:
• In contrast, in shared-memory systems, system calls are required only establish shared-memory regions.  CPU scheduling is a process which allows one process to use the CPU while the execution of another
Once shared memory established, all treated routine memory accesses and no assistance from the kernel process is on hold due to unavailability of any resource like I/O.
is required.  “Select the process from ready queue and CPU allocate that process”.
 The aim of CPU scheduling is to make the system efficient and fast.

SCHEDULING CRITERIA:
Many algorithms exist for CPU scheduling. Various criteria have been suggested for comparing these
CPU scheduling algorithms. Common criteria include:

1. CPU utilization: This may range from 0% to 100% ideally. In real systems it ranges from 40% for lightly-
loaded systems to 90% for heavily-loaded systems.
2. Throughput: The number of processes completed per time unit is throughput. Long processes may be of
the order of one process per hour whereas in case of short processes, throughput may be 10 or 12 processes
per second.
3. Turnaround time: The interval of time between submission and completion of a process is called
turnaround time. It includes execution time and waiting time.
4. Waiting time: Sum of all the times spent by a process at different instances waiting in the ready queue is
called waiting time.
5. Response time: In an interactive process, the user may start using some output while the process continues to generate new results. Instead of turnaround time, which measures the interval between submission and completion, response time is sometimes used.
Response time is thus the difference between the time of submission and the time the first response is produced.
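The three per-process criteria can be computed mechanically once each process's arrival, burst, first-run and completion times are known. A minimal sketch (the function name and dictionary keys are my own, not from the text):

```python
def metrics(procs):
    """Compute average scheduling criteria for a finished schedule.

    procs: list of dicts with 'arrival', 'burst', 'first_run', 'completion'
    (all times in the same unit, e.g. msecs).
    """
    n = len(procs)
    turnaround = [p["completion"] - p["arrival"] for p in procs]      # T.A.T
    waiting = [t - p["burst"] for t, p in zip(turnaround, procs)]     # W.T = T.A.T - burst
    response = [p["first_run"] - p["arrival"] for p in procs]         # R.T
    return {
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "avg_response": sum(response) / n,
    }

# FCFS example used later: P1 (burst 24), P2 (3), P3 (3), all arriving at 0,
# served in order P1, P2, P3.
m = metrics([
    {"arrival": 0, "burst": 24, "first_run": 0,  "completion": 24},
    {"arrival": 0, "burst": 3,  "first_run": 24, "completion": 27},
    {"arrival": 0, "burst": 3,  "first_run": 27, "completion": 30},
])
print(m["avg_waiting"], m["avg_turnaround"])  # 17.0 27.0
```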

Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms.
• First-Come, First-Served (FCFS) Scheduling


• Shortest-Job-First (SJF) Scheduling
• Priority Scheduling
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process anytime a high-priority process enters the ready state.
Turn Around Time (T.A.T) = (Completion Time) – (Arrival Time)
Waiting Time (W.T) = (Turn Around Time) – (Burst Time)
Response Time (R.T) = (Time of First CPU Allocation) – (Arrival Time)

First Come First Serve (FCFS)
 Its implementation is based on a FIFO queue.
 Definition: FCFS is an operating system scheduling algorithm that automatically executes queued requests and processes in order of their arrival.
 It is the simplest form of a CPU scheduling algorithm.
 It is a non-preemptive CPU scheduling algorithm, so after a process has been allocated to the CPU, it will never release the CPU until it finishes executing.
 It is poor in performance, as the average waiting time is high.

1) Consider a set of three processes P1, P2 and P3 arriving at time instant 0 and having CPU burst times as shown below. Calculate the average waiting time and average turnaround time.
Process Burst time (msecs)
P1 24
P2 3
P3 3
The Gantt chart below shows the result.
P1 P2 P3
0 24 27 30
The waiting time for process P1 = 0 msecs
P2 = 24 msecs
P3 = 27 msecs
Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 msecs.
P1 completes at the end of 24 msecs, P2 at the end of 27 msecs and P3 at the end of 30 msecs.
Average turnaround time = (24 + 27 + 30) / 3 = 81 / 3 = 27 msecs.
(OR)
If the processes arrive in the order P2, P3 and P1, then the result will be as follows:
P2 P3 P1
0 3 6 30
Average waiting time = (0 + 3 + 6) / 3 = 9 / 3 = 3 msecs.
Average turnaround time = (3 + 6 + 30) / 3 = 39 / 3 = 13 msecs.

2) Consider a set of five processes P1, P2, P3, P4 and P5 having the arrival times and CPU burst times shown below. Calculate the average waiting time.
Process Burst time Arrival time
P1 6 2
P2 2 5
P3 8 1
P4 3 0
P5 4 4
Serving the processes in order of arrival gives the following Gantt chart:
P4 P3 P1 P5 P2
0 3 11 17 21 23
Waiting time = (time the process starts executing) – (arrival time):
P4 = 0 – 0 = 0
P3 = 3 – 1 = 2
P1 = 11 – 2 = 9
P5 = 17 – 4 = 13
P2 = 21 – 5 = 16
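The FCFS figures worked out above can be reproduced with a short simulation (an illustrative sketch; the function name and tuple layout are my own):

```python
def fcfs(processes):
    """FCFS schedule. processes: list of (name, arrival, burst).
    Returns {name: waiting_time}."""
    # Serve strictly in order of arrival (FIFO ready queue).
    order = sorted(processes, key=lambda p: p[1])
    time, waiting = 0, {}
    for name, arrival, burst in order:
        time = max(time, arrival)       # CPU may sit idle until the process arrives
        waiting[name] = time - arrival  # time spent in the ready queue
        time += burst                   # run to completion (non-preemptive)
    return waiting

# Example 2 from above: (name, arrival time, burst time)
w = fcfs([("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)])
print(w)                         # {'P4': 0, 'P3': 2, 'P1': 9, 'P5': 13, 'P2': 16}
print(sum(w.values()) / len(w))  # 8.0
```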

Average waiting time = 40 / 5 = 8 msecs.

• Thus, if processes with smaller CPU burst times arrive earlier, then the average waiting and average turnaround times are less.
• In the meanwhile, I/O-bound processes finish their I/O and wait for the CPU, thus leaving the I/O devices idle.
• The CPU-bound process releases the CPU only when it goes for an I/O.
• I/O-bound processes have short CPU bursts: they execute and go for I/O quickly. The CPU is idle till the CPU-bound process finishes its I/O and gets hold of the CPU again.
• The above cycle repeats. This is called the convoy effect: small processes wait for one big process to release the CPU.
• Since the algorithm is non-preemptive in nature, it is not suited for timesharing systems.

Shortest Job First (SJF)
• In this algorithm, the process with the shortest next CPU burst is scheduled first; this is known as shortest job first, or SJF.
• It can be either a non-preemptive or a preemptive scheduling algorithm.
• It is the best approach to minimize waiting time.
• It is easy to implement in batch systems, where the required CPU time is known in advance.
• It is impossible to implement in interactive systems, where the required CPU time is not known.
• The processor should know in advance how much time a process will take.

1) As an example, consider the following set of processes P1, P2, P3, P4 and their CPU burst times:
Process Burst time (msecs)
P1 6
P2 8
P3 7
P4 3
Using the SJF algorithm, the processes would be scheduled as shown below.
P4 P1 P3 P2
0 3 9 16 24
Average waiting time = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 msecs.
Average turnaround time = (3 + 9 + 16 + 24) / 4 = 52 / 4 = 13 msecs.

• For a given set of processes, the algorithm gives the minimum average waiting and turnaround times. This is because shorter processes are scheduled earlier than longer ones, and hence the waiting time for shorter processes decreases more than it increases the waiting time of long processes.
• The main disadvantage of the SJF algorithm lies in knowing the length of the next CPU burst. In case of long-term or job scheduling in a batch system, the time required to complete a job as given by the user can be used to schedule.
• The SJF algorithm is therefore applicable in long-term scheduling.
• The SJF algorithm can be either preemptive or non-preemptive. In the preemptive case, if a new process joins the ready queue with a shorter next CPU burst than what is remaining of the currently executing process, then the CPU is allocated to the new process.
• In case of non-preemptive scheduling, the currently executing process is not preempted, and the new process gets the next chance, it being the process with the shortest next CPU burst.

2) Given below are the arrival and burst times of four processes P1, P2, P3 and P4.
Process Arrival time (msecs) Burst time (msecs)
P1 0 8
P2 1 4
P3 2 9
P4 3 5
1. If SJF preemptive scheduling is used, the following Gantt chart shows the result.
P1 P2 P4 P1 P3
0 1 5 10 17 26
Average waiting time = ((10 – 1) + 0 + (17 – 2) + (5 – 3)) / 4 = 26 / 4 = 6.5 msecs.
2. If non-preemptive SJF scheduling is used, the result is as follows:
P1 P2 P4 P3
0 8 12 17 26
Average waiting time = (0 + (8 – 1) + (12 – 3) + (17 – 2)) / 4 = 31 / 4 = 7.75 msecs.
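The preemptive SJF (shortest-remaining-time-first) figures above can likewise be checked with a unit-time simulation (an illustrative sketch; the names are my own):

```python
def srtf(processes):
    """Preemptive SJF (shortest-remaining-time-first), unit-time simulation.
    processes: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    burst = {name: b for name, _, b in processes}
    completion, time = {}, 0
    while remaining:
        # Among arrived processes, run the one with the least remaining time.
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                    # CPU idle: no process has arrived yet
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1                # run for one time unit
        time += 1
        if remaining[n] == 0:
            completion[n] = time
            del remaining[n]
    # Waiting time = turnaround time - burst time.
    return {n: completion[n] - arrival[n] - burst[n] for n in completion}

w = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w)                         # {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15}
print(sum(w.values()) / len(w))  # 6.5
```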


Priority Based Scheduling
• Priority-based algorithms can be either preemptive or non-preemptive, and priority scheduling is among the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first come first served basis.
• Priority can be decided based on memory requirements, time requirements or any other resource requirement.

1) In the following example, we will assume lower numbers to represent higher priority.
Process Priority Burst time (msecs)
P1 3 10
P2 1 1
P3 3 2
P4 4 1
P5 2 5
Using priority scheduling, the processes are scheduled as shown below:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 msecs.

• In case of preemptive scheduling, if a new process joins the ready queue with a priority higher than that of the process that is executing, then the current process is preempted and the CPU is allocated to the new process. But in case of the non-preemptive algorithm, the new process having the highest priority from among the ready processes is allocated the CPU only after the current process gives up the CPU.

2) Consider the following table of arrival time, priority, and burst time for five processes P1, P2, P3, P4 and P5, scheduled with preemptive priority scheduling (lower numbers again represent higher priority).
Process Arrival Time Priority Burst Time
P1 0 ms 3 3 ms
P2 1 ms 2 4 ms
P3 2 ms 4 6 ms
P4 3 ms 6 4 ms
P5 5 ms 10 2 ms
Here,
 Total Turn Around Time = 7 + 4 + 11 + 14 + 14 = 50 ms
 Average Turn Around Time = (Total Turn Around Time) / (no. of processes) = 50 / 5 = 10.00 ms
 Total Waiting Time = 4 + 0 + 5 + 10 + 12 = 31 ms
 Average Waiting Time = (Total Waiting Time) / (no. of processes) = 31 / 5 = 6.20 ms
 Total Response Time = 0 + 0 + 5 + 10 + 12 = 27 ms
 Average Response Time = (Total Response Time) / (no. of processes) = 27 / 5 = 5.40 ms

Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and another process executes for a given time period.
• Context switching is used to save the states of preempted processes.


1) Consider the same example explained under FCFS algorithm.

Process Burst time(msecs)

P1 24

P2 3

P3 3

Let the duration of a time slice be 4 msecs, which is to say the CPU switches between processes every 4 msecs in a round-robin fashion. The Gantt chart below shows the scheduling of the processes.

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
Average waiting time = (4 + 7 + (10 – 4)) / 3 = 17/ 3 = 5.66 msecs.
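The round-robin result above can be verified with a small simulation (a sketch assuming all processes arrive at time 0, as in the example; the names are my own):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round robin with all processes arriving at time 0.
    processes: list of (name, burst). Returns {name: waiting_time}."""
    remaining = dict(processes)
    ready = deque(name for name, _ in processes)
    waiting = {name: 0 for name, _ in processes}
    last_ran = {name: 0 for name, _ in processes}
    time = 0
    while ready:
        name = ready.popleft()
        waiting[name] += time - last_ran[name]  # waited since its last slice ended
        run = min(quantum, remaining[name])     # run one quantum (or less, to finish)
        time += run
        last_ran[name] = time
        remaining[name] -= run
        if remaining[name] > 0:
            ready.append(name)                  # go to the back of the queue
    return waiting

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(w)                                   # {'P1': 6, 'P2': 4, 'P3': 7}
print(round(sum(w.values()) / len(w), 2))  # 5.67
```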

Multiple-Level Queues Scheduling


Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.
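A minimal sketch of the idea (the queue names and jobs are hypothetical; a real scheduler would attach a different algorithm to each queue, whereas here both queues are served FCFS for brevity):

```python
from collections import deque

# Two fixed-priority queues: "system" jobs always run before "user" jobs.
queues = {"system": deque(), "user": deque()}

def enqueue(job, level):
    queues[level].append(job)

def pick_next():
    # Scan queues in priority order; take the first job found.
    for level in ("system", "user"):
        if queues[level]:
            return queues[level].popleft()
    return None

enqueue("backup", "user")
enqueue("pager", "system")
enqueue("editor", "user")

order = []
while (job := pick_next()) is not None:
    order.append(job)
print(order)  # ['pager', 'backup', 'editor']
```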

