Operating System CSC
The term software can be defined as the series of instructions that controls the operation of a computer system. Without software, the hardware cannot perform any task associated with a computer.
Also, software can further be described as a program that makes the hardware function effectively.
The term program refers to a series of statements or a set of instructions written in a specialized language which, when executed, controls the operation of the computer to perform some task.
An operating system (OS) is the software component of a computer system that is responsible for the management and coordination of activities, and the sharing of
the resources of the computer. The operating system acts as a host for applications that are run on
the machine.
As a host, one of the purposes of an operating system is to handle the details of the operation of
the hardware. This relieves application programs from having to manage these details and this
makes it easier to write applications. Almost all computers, including handheld computers, desktop
computers, supercomputers, and even video game consoles, use an operating system of some type.
Some of the oldest models may, however, use an embedded operating system that may be contained on a compact disk or other data storage device.
Common contemporary operating systems include Microsoft Windows, Mac OS, Linux and
Solaris. Microsoft Windows has a significant majority of the market share in the desktop and
notebook computer markets, while servers generally run on Linux or other Unix-like systems.
Historically, operating systems have been closely tied to computer architecture, so it is a good idea to study the history of operating systems alongside the architecture of the computers on which they run.
The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages existed). Operating systems were unheard of.
By the early 1950s, the routine had improved somewhat with the introduction of punched cards. The systems of the 1950s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups, or batches.
The systems of the 1960s were also batch processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. Operating system designers therefore developed the concept of multiprogramming, in which several jobs reside in main memory at once; the processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use. For example, on a system without multiprogramming, when the current job paused to wait for an I/O operation to complete, the CPU simply sat idle until the I/O finished. The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job waited for I/O to complete, another job could use the CPU.
Another major feature of third-generation operating systems was the technique called spooling (simultaneous peripheral operations on-line), in which a high-speed device such as a disk is interposed between a running program and a low-speed device involved with the program in input/output. Instead of writing directly to a printer, for example, outputs are written to the disk. Programs can run to completion faster, and other programs can be initiated sooner, with the spooled output printed when the printer becomes available.
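The spooling idea above can be sketched as a simple producer/consumer queue. This is an illustrative Python sketch, not a real print spooler: the jobs are plain strings with invented names, and the "printer" is just a background thread draining the queue.

```python
import queue
import threading

# Illustrative spool: programs enqueue output instantly instead of
# waiting on the slow printer; a background worker drains the queue.
spool = queue.Queue()
printed = []                     # stands in for the printer's output tray

def printer_worker():
    while True:
        job = spool.get()
        if job is None:          # sentinel: no more spooled output
            break
        printed.append(job)      # "print" the job on the slow device

# Programs run to completion quickly: writing to the spool is cheap.
for doc in ["job-1", "job-2", "job-3"]:
    spool.put(doc)               # returns immediately

spool.put(None)
worker = threading.Thread(target=printer_worker)
worker.start()
worker.join()
print(printed)                   # jobs come out in submission order
```

The key point is that spool.put() returns immediately, so the submitting program finishes without ever waiting on the slow device.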
Another feature present in this generation was the time-sharing technique, a variant of multiprogramming, in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests; otherwise user productivity could suffer. Timesharing systems were developed to multiprogram large numbers of simultaneous interactive users.
With the development of LSI (Large Scale Integration) circuits, or chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s.
(i) It controls the hardware resources i.e. input and output devices.
(v) It handles errors when they occur and uses corrective routines where possible.
1.6. TYPES OF OPERATING SYSTEMS
Operating systems can be classified based on the operations the user is able to carry out with the system.

Single-User Operating System
As the name implies, this type of operating system makes provision for only one user at a time on the computer system. Examples are the Microsoft Disk Operating System (MS-DOS) and IBM's PC-DOS.
Multi-User Operating System
This operating system permits several users to run programs at the same time, usually through several terminals. A terminal consists of a monitor and a keyboard. A multi-user operating system allows many users to be concurrently using the same computer. The operating systems of mainframes and minicomputers are typical examples.
Network Operating System
With this type of operating system, computers can be linked together, thereby allowing users to share data and resources across the network.
Batch Operating System
The users of a batch operating system do not interact with the computer directly. Each user prepares his job on an off-line device like punched cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. The programmers leave their programs with the operator, and the operator then sorts the programs with similar requirements into batches.
2. The CPU is often idle, because the speed of the mechanical I/O devices is slower than that of the CPU.
Time-Sharing Operating System
Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. Processor time shared among multiple users simultaneously is termed time-sharing.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst, or quantum, of computation. That is, if n users are present, each user gets a time quantum in turn, so when a user submits a command the response time is a few seconds at most. The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of processor time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
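The quantum-based switching described above can be sketched with a round-robin simulation. This is a minimal, hypothetical model: all jobs arrive at time 0, the burst lengths and quantum are invented, and context-switch overhead is ignored.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time-sharing: each job gets at most `quantum` time
    units of CPU before the processor switches to the next job.
    Returns {job: completion_time}."""
    ready = deque(bursts.items())        # (job, remaining_time) pairs
    clock = 0
    done = {}
    while ready:
        job, remaining = ready.popleft()
        run = min(quantum, remaining)    # run for one quantum at most
        clock += run
        remaining -= run
        if remaining:
            ready.append((job, remaining))   # back of the ready queue
        else:
            done[job] = clock
    return done

# Three users submit jobs of 5, 3 and 1 time units; quantum = 2.
print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# -> {'C': 5, 'B': 8, 'A': 9}
```

Note how the short job C finishes quickly even though it was submitted last in the queue, which is the point of time-sharing for interactive users.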
Disadvantages of time-sharing include:
1. Problems of reliability.
Real-Time Operating System
A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is very small compared to online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they often serve as control devices in dedicated applications. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. For example, scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems are real-time systems.
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never found.
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, or computers.
1. With resource sharing facility, a user at one site may be able to use the resources available at
another.
2. Distributed systems speed up the exchange of data between users, for example via electronic mail.
3. If one site fails in a distributed system, the remaining sites can potentially continue operating.
A Network Operating System runs on a server and provides the server the capability to manage
data, users, groups, security, applications, and other networking functions. The primary purpose
of the network operating system is to allow shared file and printer access among multiple
computers in a network, typically a local area network (LAN), a private network, or other networks.
networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
3. Upgrades to new technologies and hardware can be easily integrated into the system.
4. Remote access to servers is possible from different locations and types of systems.
Lesson Assessment 1
3. List and explain the types of OS based on the operations the user can carry out with the system.
4. What distinguishes the fourth generation OS from the third generation OS, and what features characterize each generation?
2.1 KERNEL
In computer science, the kernel is the central component of most computer operating systems (OS).
Its responsibilities include managing the system's resources and the communication between hardware and software components. As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that applications must control to perform their function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
The kernel's primary purpose is to manage the computer's resources and allow other programs to
run and use these resources. Typically, the resources consist of:
• The CPU (frequently called the processor). This is the most central part of a computer system,
responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors.
• The computer's memory. Memory is used to store both program instructions and data. The
kernel is responsible for deciding which memory each process can use, and determining what to do when not enough memory is available.
• Input/Output (I/O) devices present in the computer, such as disk drives, printers, displays, etc.
The kernel allocates requests from applications to perform I/O to an appropriate device (or
subsection of a device, in the case of files on a disk or windows on a display) and provides
convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).
Tasks of a Kernel
1. Process Management
2. Memory Management
3. Device Management
4. System Calls
The main task of a kernel is to allow the execution of applications and support them with features
such as hardware abstractions. To run an application, a kernel typically sets up an address space
for the application, loads the file containing the application's code into memory (perhaps via
demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.
Multi-tasking kernels are able to give the user the illusion that the number of processes being run
simultaneously on the computer is higher than the maximum number of processes the computer is
physically able to run simultaneously. Typically, the number of processes a system may run
simultaneously is equal to the number of CPUs installed (however this may not be the case if the CPUs support simultaneous multithreading).
The kernel has full access to the system's memory and must allow processes to access this memory
safely as they require it. Often the first step in doing this is virtual addressing, usually achieved by
paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address
appear to be another address, the virtual address. Virtual address spaces may be different for
different processes; the memory that one process accesses at a particular (virtual) address may be
different memory from what another process accesses at the same address. This allows every
program to behave as if it is the only one (apart from the kernel) running, and thus prevents applications from crashing each other.
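The per-process mapping described above can be illustrated with a toy page-table lookup. The table contents, process names and the 4 KB page size are assumptions for this sketch; real MMUs perform the translation in hardware.

```python
PAGE_SIZE = 4096

# Hypothetical per-process page tables: virtual page number -> physical frame.
page_table = {
    "proc_a": {0: 7, 1: 2},
    "proc_b": {0: 3, 1: 9},
}

def translate(process, virtual_addr):
    """Translate a virtual address to a physical one via the owning
    process's page table (a simplified, software-only sketch)."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[process][vpn]   # a missing entry would be a "page fault"
    return frame * PAGE_SIZE + offset

# The SAME virtual address maps to DIFFERENT physical memory per process:
print(translate("proc_a", 100))   # frame 7 -> 7*4096 + 100 = 28772
print(translate("proc_b", 100))   # frame 3 -> 3*4096 + 100 = 12388
```

Because each process consults its own table, a stray write in one process can never land in another process's memory.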
To perform useful functions, processes need access to the peripherals connected to the computer,
which are controlled by the kernel through device drivers. For example, to show the user something
on the screen, an application would make a request to the kernel, which would forward the request
to its display driver, which is then responsible for actually plotting the character/pixel.
A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an
embedded system where the kernel will be rewritten if the available hardware changes), configured
by the user (typical on older PCs and on systems that are not designed for personal use) or detected
by the operating system at run time (normally called Plug and Play).
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:
• Using a software-simulated interrupt. This method is available on most hardware and is therefore very common.
• Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory and which the processor knows the location of.
• Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some (but not all) operating systems for PCs make use of them when available.
• Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
2.2 SHELL
The shell, or command shell, is a software program that provides direct communication between the user and the operating system. The term shell is also applied, more loosely, to software that is built around a particular component, or to software that provides a user interface for another program.
1. A shell is software that provides an interface between the users and the operating system of a computer.
2. A shell is a user interface for access to an operating system's services. Most often the user interacts with the shell through a command-line interface (CLI) or a graphical user interface (GUI).
The terminal is a program that opens a graphical window and lets you interact with the shell.
1. Command-line shell (e.g. Bash (sh), Command Prompt (cmd), C shell, Bourne shell, Korn shell (ksh), etc.)
2. GUI shell (e.g. Windows Explorer or Windows Shell)
A third type of shell has recently been developed: the GCLI (Graphical Command Line Interface) shell. A GCLI shell combines the features of both CLI and GUI shells and provides an interface that is both graphical and command-based.
1. Command-line shell (e.g. Bash (sh), Command Prompt (cmd), C shell, Bourne shell, Korn shell (ksh), etc.): A command-line interface (CLI) is an interface between the user and a program.
A CLI system does not use a mouse as an input device; the keyboard is generally used for input, and messages are sent by typing a command and then pressing the Enter (return) key. After a command is processed, a new prompt is issued for accepting the next instruction from the user.
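The read-run-prompt cycle just described can be sketched as a toy command loop. This illustrative "shell" only parses a line and runs one external program (no pipes, redirection or built-ins), and the demo command assumes a Unix-like system where echo is available:

```python
import shlex
import subprocess

def run_command(line):
    """Parse one command line and run it, returning its standard output.
    A toy, illustrative shell: no pipes, redirection or job control."""
    argv = shlex.split(line)     # split the line, respecting quotes
    if not argv:
        return ""
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

# A real shell would loop: print a prompt, read a line, run it, repeat:
#   while True:
#       print(run_command(input("$ ")), end="")
print(run_command("echo hello"))
```

Everything a real shell adds (history, globbing, pipelines, job control) is layered on top of this same parse-then-execute cycle.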
GUIs were not supported by early operating systems, and text-based commands were used for interacting with the computer.
A graphical user interface (GUI) is a human-computer interface that is graphical rather than purely textual. A GUI uses images, windows, icons, buttons, menus, etc., which can be manipulated by a mouse.
GUIs are more user-friendly than a command-line interface (CLI), as the user interacts by using a mouse rather than by having to type in commands. Users also don't have to remember a lot of commands, so it becomes easier to learn and use the system. GUI is often pronounced "gooey".
The kernel is the core of the operating system that controls all the tasks of the system, while the shell is the interface that allows the users to communicate with the kernel. The kernel manages everything; the shell translates the user's commands into low-level operations that the kernel carries out. Types of shell include Bash (sh), Command Prompt (cmd), C shell, Bourne shell, Korn shell (ksh), etc.
Lesson Assessment 2
1. What are the kernel basic responsibilities? What are the basic tasks of the kernel?
2. What are the differences between the kernel and the shell? What is a terminal?
3.1 OPERATING SYSTEMS provide environments in which programs and services run for the users of the system, including: User Interfaces, Program Execution, I/O Operations, File-System Manipulation, Communications, Error Detection, Resource Allocation, and Security.
In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in
the same memory area. This approach provides rich and powerful hardware access. Some
developers maintain that monolithic systems are easier to design and implement than other
solutions, and are extremely efficient if well-written. The main disadvantages of monolithic
kernels are the dependencies between system components - a bug in a device driver might crash
the entire system - and the fact that large kernels can become very difficult to maintain.
Advantages:
1. The execution of a monolithic kernel is quite fast, as services such as memory management, file management, process scheduling, etc. are implemented under the same address space.
Disadvantages:
1. If any service fails in the monolithic kernel, it leads to the failure of the entire system.
2. To add any new service, the entire operating system needs to be modified.
3.2.1 MicroKernel
The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers.
Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system, because they typically generate more overhead than plain function calls.
Some of the differences between microkernel and monolithic kernel are given as follows:
2. The microkernel is easily extensible, whereas extending a monolithic kernel is quite complicated.
4. Less code is required to write a microkernel than a monolithic kernel, since most services live outside it.
Examples of Microkernel are QNX (Quantum Software Systems), Symbian, L4 Linux etc.
Monolithic Kernel examples are Linux, BSD (Berkeley Software Distribution -Unix) etc.
3.2.2 Hybrid Kernel is a kernel architecture based on combining aspects of microkernel and
monolithic kernel architectures used in computer operating systems. The category is controversial
due to the similarity to monolithic kernel; the term has been dismissed by some as just marketing.
The usually accepted categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels). A hybrid kernel aims to combine the speed of a monolithic kernel with the modularity and execution safety of a microkernel. An example is Mac OS.
A nanokernel represents the closest hardware abstraction layer of the operating system, interfacing with the CPU, managing interrupts and interacting with the MMU. The interrupt management and MMU interface are not necessarily part of a nanokernel; however, on most architectures these components are directly connected to the CPU, so it often makes sense to integrate them into the kernel.
In a layered operating system, the operating system is split into various layers, and each layer has a different functionality. This type of operating system was created as an improvement over the early monolithic systems.
2. All the layers can be defined separately and interact with each other as required.
3. It is easier to create, maintain and update the system if it is done in the form of layers.
4. Change in one layer specification does not affect the rest of the layers.
Note: Each of the layers in the operating system can only interact with the layers that are above
and below it. The lowest layer handles the hardware and the uppermost layer deals with the user
applications.
There are six layers in the layered operating system. The diagram below demonstrates these layers
Details about the six layers are:
1. Hardware
This layer interacts with the system hardware and coordinates with all the peripheral devices used
such as printer, mouse, keyboard, scanner etc. The hardware layer is the lowest layer in the layered
2. CPU Scheduling
This layer deals with scheduling the processes for the CPU. There are many scheduling queues
that are used to handle processes. When the processes enter the system, they are put into the job
queue. The processes that are ready to execute in the main memory are kept in the ready queue.
3. Memory Management
Memory management deals with memory and the moving of processes from disk to primary
memory for execution and back again. This is handled by the third layer of the operating system.
4. Process Management
This layer is responsible for managing the processes i.e assigning the processor to a process at a
time. This is known as process scheduling. The different algorithms used for process scheduling
are FCFS (first come first served), SJF (shortest job first), priority scheduling, round-robin
scheduling etc.
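The effect of the scheduling algorithm can be seen by computing average waiting times. The sketch below uses an invented three-job workload (bursts of 24, 3 and 3 time units, all arriving at time 0) to compare FCFS with SJF:

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run to completion in the given
    order (non-preemptive, all jobs arriving at time 0)."""
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock       # this job waited while earlier jobs ran
        clock += burst
    return total_wait / len(bursts)

jobs = [24, 3, 3]                 # CPU bursts in time units

fcfs = avg_waiting_time(jobs)             # first come, first served
sjf = avg_waiting_time(sorted(jobs))      # shortest job first

print(fcfs)   # (0 + 24 + 27) / 3 = 17.0
print(sjf)    # (0 + 3 + 6) / 3   = 3.0
```

SJF minimizes the average waiting time for this workload; FCFS suffers because the long job happens to run first.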
5. I/O Buffer
I/O devices are very important in computer systems. They provide users with the means of interacting with the system. This layer handles the buffers for the I/O devices and makes sure that they work properly.
6. User Programs
This is the highest layer in the layered operating system. This layer deals with the many user
programs and applications that run in an operating system such as word processors, games,
browsers, etc.
A virtual machine (VM) is a software program or operating system that not only exhibits the behavior of a separate computer, but is also capable of performing tasks such as running applications and programs like a separate computer. In other words, a virtual machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage.
A virtual machine, usually known as a guest, is created within another computing environment referred to as a "host." Multiple virtual machines can exist within a single host at one time.
The physical hardware, equipped with a hypervisor such as Kernel-based Virtual Machine (KVM), is called the host, while the many VMs that use its resources are guests. The hypervisor treats compute resources, such as CPU, memory, and storage, as a pool that can easily be reallocated among the guests. Because VMs are isolated from the rest of the system, multiple VMs can run on a single piece of physical hardware, like a server. They can be moved between host servers depending on demand or to use resources more efficiently.
The operating system in a VM runs in the same way that an operating system or application
normally would on the host hardware, so the user experience with a VM will be similar.
2. Virtual machines are easily managed and maintained, and are widely available.
1. Running multiple virtual machines on one physical machine can cause unstable performance.
2. Virtual machines are less efficient and run slower than a physical computer.
1. Process virtual machines: Support a single process and provide a platform-independent programming environment. A process VM masks the information of the underlying hardware or operating system, allowing a program to execute in the same way on any platform.
2. System virtual machines: Support the sharing of a host computer's physical resources between multiple virtual machines, each running its own copy of the operating system.
3.5.1 Client
A client is a program that runs on the local machine requesting a service from the server. A client program is a finite program, meaning that it is started by the user and terminates when the service is complete.
3.5.2 Server
A server is a program that runs on the remote machine providing services to the clients. When a client requests a service, the server opens the door for the incoming request, but it never initiates a service itself.
A server program is an infinite program. That is, when it starts, it runs indefinitely unless a problem arises. The server waits for incoming requests from clients; when a request arrives, it responds to it.
The client-server networking model is a model in which computers acting as servers provide network services to other computers, the clients, in order to perform user-based tasks. This model is one of the most widely used networking models.
Note: An application program is known as a client program, running on the local machine that
requests for a service from an application program known as a server program, running on the
remote machine.
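The client and server roles described above can be sketched with sockets. This is a deliberately minimal illustration on the loopback interface: the server answers exactly one request and exits (whereas a real server program would loop indefinitely), and the request text is invented:

```python
import socket
import threading

def server(sock):
    """A deliberately tiny server: accept one client, answer its
    request, then exit. A real server would loop forever."""
    conn, _addr = sock.accept()          # wait for an incoming request
    request = conn.recv(1024)
    conn.sendall(b"service for: " + request)
    conn.close()

# Host the server on the loopback interface on an OS-chosen free port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,)).start()

# The client: a finite program that requests a service, then terminates.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"print file")
reply = client.recv(1024)
client.close()
listener.close()
print(reply)                             # b'service for: print file'
```

Note the asymmetry: the server passively waits in accept(), and only the client ever initiates the exchange.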
Advantages of Client-Server networks:
1. Centralized: Centralized back-up is possible in client-server networks, i.e., all the data is stored in the server.
2. Security: These networks are more secure as all the shared resources are centrally administered.
3. Performance: The use of a dedicated server increases the speed of sharing resources. This increases the performance of the overall system.
4. Scalability: We can increase the number of clients and servers separately, i.e., a new client or server can be added whenever required.
5. Cost-efficient: It requires less maintenance cost, and data recovery is possible.
Disadvantages of Client-Server network:
1. Traffic congestion is a big problem in client/server networks. When a large number of clients send requests to the same server, congestion may result.
2. It does not have the robustness of a decentralized network, i.e., when the server is down, client requests cannot be met.
3. A client/server network can be demanding: sometimes regular computer hardware cannot serve a certain number of clients. In such situations, specific hardware is required at the server side to complete the work.
4. Clients are prone to viruses, Trojans and worms if present in the Server or uploaded into the
Server.
Lesson Assessment 3
4.1 VIRTUAL MEMORY is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage.
One can think of virtual memory as an alternate set of memory addresses. Programs use
these virtual addresses rather than real addresses to store instructions and data. When the
program is actually executed, the virtual addresses are converted into real memory addresses.
Virtual memory was developed at a time when physical memory -- the installed RAM -- was
expensive. A system using virtual memory uses a section of the hard drive to emulate RAM.
With virtual memory, a system can load larger programs, or multiple programs running at the same time, allowing each one to operate as if it had ample memory and without having to purchase more RAM.
A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address before it is used to access main memory.
Situations in which an entire program need not be fully loaded into main memory include:
1. User-written error-handling routines are used only when an error occurs in the data or computation.
2. Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
1. Fewer I/O operations would be needed to load or swap each user program into memory.
2. A program would no longer be constrained by the amount of physical memory that is available.
3. Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Virtual memory is commonly implemented by demand paging; it can also be implemented in a segmentation system, and demand segmentation can also be used to provide virtual memory.
In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10, via registers.
1. The concept of virtual memory was first developed by German physicist Fritz-Rudolf
Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of
a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High
Speed Memory Operation; it described a machine with six 100-word blocks of primary core memory and an address space of 1,000 100-word blocks, with hardware automatically moving blocks between primary memory and secondary drum storage.
2. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the
1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
3. Virtual memory was introduced to the x86 architecture with the protected mode of the Intel
80286 processor, but its segment swapping technique scaled poorly to larger segment sizes.
The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling
the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.
1. Security. Processes cannot access each other's memory except via well-defined inter-process communication mechanisms.
2. Convenience. Having known fixed addresses for code and global variables is good for performance, yet without virtual memory all addresses would have to be computed at load time, because otherwise different programs would conflict over the same physical addresses. With virtual memory, such conflicts can't happen, because the same address in different processes refers to different blocks of physical memory.
3. When you don't have enough physical memory, the virtual memory subsystem in the OS kernel can swap out pieces of it to the disk. This does have an impact on performance, but it is preferable to programs failing for lack of memory.
4. The user can get by with less physical memory, with hard disk space making up the difference.
A memory management unit (MMU) is the hardware component responsible for managing virtual memory. In most computers, the MMU hardware is integrated into the CPU.
There are two ways in which virtual memory is implemented: paged and segmented.
1. Demand paging
The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
1. If the CPU tries to refer to a page that is currently not available in main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed, the OS must bring the required page into memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from the logical address space to the physical address space. Page replacement algorithms are used to decide which page to replace in the physical address space.
5. The page table will be updated accordingly.
6. A signal will be sent to the CPU to continue the program execution, and it will place the process back into the ready state.
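The fault-and-replace cycle above can be simulated by counting page faults over a reference string. This sketch uses FIFO replacement and an invented reference string; real kernels use more sophisticated replacement policies:

```python
from collections import deque

def count_page_faults(reference_string, num_frames):
    """Demand paging with FIFO replacement: a page is loaded only when
    it is referenced (a page fault); once all frames are full, the
    oldest resident page is evicted to make room."""
    frames = deque()              # resident pages, oldest on the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: page already in memory
        faults += 1                       # fault: load the page on demand
        if len(frames) == num_frames:
            frames.popleft()              # evict the oldest page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_page_faults(refs, num_frames=3))   # 10 faults with 3 frames
```

Tracing a few references by hand (7, 0 and 1 each fault into an empty frame; 2 evicts 7; the second 0 is a hit) is a good way to check the algorithm.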
Advantages of demand paging:
1. Servicing the page-fault interrupt is cheap; careful coding means just several hundred instructions are needed.
2. As there is more space in main memory, more processes can be loaded, reducing the context-switching time.
3. Less loading latency occurs at program startup, as less information is accessed from secondary storage. The ability to use memory sparingly can significantly reduce the bill of materials (BOM) cost, in smart phones for example; Symbian OS had this feature.
Disadvantages of demand paging:
1. Individual programs face extra latency when they access a page for the first time.
2. Low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
3. Memory management with page replacement algorithms becomes slightly more complex.
2. Demand segmentation
Demand segmentation, which is similar to demand paging, is also used by operating systems. An operating system uses demand segmentation where there is insufficient hardware available to implement demand paging.
Demand segmentation allows pages that are often referenced together to be brought into memory together.
Note: Some virtual memory systems combine segmentation and paging. In this case, memory gets divided into frames or pages. The segments take up multiple pages, and the virtual address specifies both the segment number and the page number within the segment. This scheme avoids external fragmentation, since the pages are of fixed size.
Lesson Assessment 4
MEMORY MANAGEMENT is the process of controlling and coordinating computer memory, assigning portions (called blocks) to various running programs to optimize overall system performance. Memory management resides in hardware, in the OS, and in programs and applications.
Memory management is the task carried out by the OS and hardware to accommodate multiple processes in main memory. The memory management system must allocate portions of memory to programs at their request, and free them for reuse when no longer needed.
The following are the three key memory management techniques used by an operating system:
1. Segmentation
Segmentation refers to the technique of dividing the physical memory space into multiple blocks.
Each block has a specific length and is known as a segment. Each segment has a starting address called the base address. The length of the segment determines the available memory space in the segment.
The above figure shows the organization of segment in memory unit.
The location of a data value stored in a segment can be determined by the distance of its actual position from the base address of the segment. This distance between the actual position of the data and the base address of the segment is known as the displacement or offset value. In other words, when data must be obtained from a segment, the actual address of the data is calculated by adding the offset value to the base address of the segment. The base address of the segment and the offset value are specified in the program instruction itself.
The following figure shows how the actual position of an operand in a segment is obtained by adding the offset value to the base address of the segment.
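The base-plus-offset calculation described above can be sketched in a few lines of Python. The segment table and its numbers are illustrative assumptions, not values from the text:

```python
# Hypothetical segment table: segment number -> (base address, length).
# The numbers are illustrative only.
segment_table = {
    0: (1000, 400),   # segment 0 starts at address 1000 and is 400 units long
    1: (2400, 120),
}

def segment_to_physical(segment, offset):
    """Actual address = base address of the segment + offset value.
    An offset beyond the segment length is an addressing error."""
    base, length = segment_table[segment]
    if offset >= length:
        raise ValueError("offset beyond segment length")
    return base + offset

print(segment_to_physical(0, 53))   # base 1000 + offset 53 = 1053
```

The length check mirrors the role of the segment length in bounding the available memory space of the segment.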
2) Paging
Paging is a technique in which the main memory of a computer system is organized in the form of equal-sized blocks called pages. In this technique, the addresses of the occupied pages of physical memory are stored in a table known as the page table.
Paging enables the operating system to obtain data from a physical memory location without specifying a lengthy memory address in the instruction. In this technique, a virtual address is used to map to the physical address of the data. The virtual address is specified in the instruction and is shorter than the physical address of the data. It consists of two numbers: the first is the number of the page (the virtual page) in the page table, and the second is the offset of the data within that page.
The figure above shows how the virtual address is used to obtain the physical address of an operand.
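A minimal sketch of this two-part translation in Python; the page size and the page-table entries are made-up values for illustration:

```python
PAGE_SIZE = 1024  # assumed page size; real systems vary

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def virtual_to_physical(virtual_address):
    """Split the virtual address into (virtual page number, offset),
    look the page up in the page table, then recombine with the frame."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page_number]   # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(virtual_to_physical(1 * PAGE_SIZE + 100))  # page 1 -> frame 2, offset 100
```

Note that the offset passes through the translation unchanged; only the page number is replaced by the frame number.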
3) Swapping
Swapping is a technique used by an operating system for efficient management of the memory space of a computer system. Swapping involves two tasks, called swapping in and swapping out. Placing pages or blocks of data from the hard disk into main memory is called swapping in. On the other hand, removing pages or blocks of data from main memory to the hard disk is called swapping out. The swapping technique is useful when a large program is to be executed or some operations have to be performed on a large file.
The figure above shows the swapping of pages by the operating system.
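Swapping in and swapping out can be sketched as moving entries between two dictionaries standing in for main memory and the disk. The capacity, page names, and the FIFO victim choice are illustrative assumptions:

```python
from collections import deque

CAPACITY = 2        # tiny main memory, for illustration
main_memory = {}    # page name -> data currently in RAM
disk = {}           # pages swapped out to the hard disk
order = deque()     # order in which pages were loaded (FIFO victim choice)

def swap_in(page, data=None):
    """Place a page into main memory, swapping out the oldest
    resident page to disk first if memory is full."""
    if page in main_memory:
        return
    if len(main_memory) >= CAPACITY:
        victim = order.popleft()
        disk[victim] = main_memory.pop(victim)   # swapping out
    main_memory[page] = disk.pop(page, data)     # swapping in
    order.append(page)

swap_in("A", "data-A")
swap_in("B", "data-B")
swap_in("C", "data-C")   # memory is full, so "A" is swapped out to disk
```

Real systems use more sophisticated victim-selection policies than FIFO; the point here is only the two directions of movement.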
1. Relocation
Since it is impossible for programmers to know in advance where a program will be placed in memory when it is executed, and since a program may be swapped out and returned to main memory at a different location, memory references in the code must be translated to actual physical addresses at run time; that is, the program must be relocatable.
2. Protection
Processes should not be able to reference memory locations in another process without permission. Because the program could be relocated, absolute addresses cannot be verified in advance; address references must be checked at execution time by the hardware.
3. Logical Organization
To deal effectively with user programs, the OS and hardware should support a basic form of module to provide the required protection and sharing, because users write programs in modules with different characteristics: instruction modules are execute-only, while data modules are either read/write or read-only.
4. Sharing
In sharing, the OS must allow several processes to access a common portion of main memory without compromising protection. It is better to allow each process to access the same copy of a program than for each to have its own separate copy.
5. Physical Organization
External memory is the long term store for programs and data while main memory holds
programs and data currently in use. Moving information between these two levels of the memory hierarchy is a major system concern and should be the responsibility of the operating system rather than the programmer.
Lesson Assessment 5
2. With the aid of a diagram, explain the organization of segments in the memory unit.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU (central processing unit) and the selection of another process on the basis of a particular strategy.
several programs are run at the same time on a uniprocessor so that it appears to the user that different programs are executing at the same time, while the operating system is actually switching rapidly between them.
2. Central Processing Unit (CPU): Commonly called the central processor, it is the "brain" of the computer: the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, control, and I/O operations specified by the instructions.
3. Multiplexing: The transmission of multiple signals over a common path by means of synchronized switches at each end of the transmission line, so that each signal appears on the line only a fraction of the time in an alternating pattern.
4. Process control block (PCB): A data structure in the operating system kernel containing the information needed to manage a particular process.
The OS maintains all PCBs (Process Control Block) in process scheduling queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its new state queue. The following are the
Job Queue: This queue maintains the processes in the computer system.
Ready Queue: This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device Queue: The processes which are blocked due to the unavailability of an I/O device constitute this queue.
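Moving a PCB between state queues, as described above, can be sketched as follows. The PCB here is reduced to a dictionary with a pid and a state; a real PCB holds much more, and only two queues are shown:

```python
from collections import deque

# One queue per process state (only two states shown for brevity).
queues = {"ready": deque(), "device": deque()}

def admit(pcb):
    """A new process is always put in the ready queue."""
    pcb["state"] = "ready"
    queues["ready"].append(pcb)

def block_for_io(pcb):
    """Unlink the PCB from its current queue and move it to the
    device queue when the process blocks on an unavailable I/O device."""
    queues[pcb["state"]].remove(pcb)
    pcb["state"] = "device"
    queues["device"].append(pcb)

p = {"pid": 1, "state": None}
admit(p)
block_for_io(p)   # the PCB now sits on the device (blocked) queue
```

The unlink-then-append pattern is exactly the queue movement the text describes for a state change.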
A dispatcher is a special program which comes into play after the scheduler. When the
scheduler completes its job of selecting a process, it is the dispatcher which takes that process to
the desired state/queue. The dispatcher is the module that gives a process control over the CPU
after it has been selected by the short-term scheduler. This function involves the following:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
It is also called a job scheduler and it determines which programs are admitted into the system
for processing. It selects processes from the queue and loads them into memory for execution.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs.
1. It is a job scheduler.
5. It selects processes from the pool and loads them into memory for execution.
It is a part of swapping. It removes processes from memory and is in charge of handling the swapped-out processes. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This process is called swapping, and swapping may be necessary to improve the process mix.
5. It can re-introduce the process into memory and execution can be continued.
It is also known as the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It effects the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
1. It is a CPU scheduler.
1. Preemptive Process: In a preemptive scheduling policy, a low-priority process has to suspend its execution if a high-priority process is waiting in the same queue for execution.
2. Non-Preemptive Process: In a non-preemptive scheduling policy, processes are executed on a first come first serve basis, which means the next process is executed only when the currently running process finishes its execution.
The operating system performs the task of scheduling processes using the following algorithms:
1. First come first serve (FCFS): In this scheduling algorithm, the process that enters the ready queue first is executed first.
2. Shortest job first (SJF): In this scheduling algorithm, the process that requires the shortest CPU execution time is executed first.
3. Shortest remaining time first (SRTF): This scheduling algorithm is the preemptive version of the SJF scheduling algorithm. In this, the process that is left with the least remaining processing time is executed next.
4. Longest job first (LJF): In this type of scheduling algorithm, the process with the maximum time required to execute is scheduled first. This type of scheduling is not widely used because it is not a very effective way of scheduling, as the average turn-around time and the average waiting time are both high.
5. Longest remaining time first (LRTF): As SRTF is to SJF, LRTF is the preemptive version of LJF.
6. Priority scheduling: In this scheduling algorithm a priority is assigned to every process, and the process with the highest priority is executed first. Priorities are assigned on the basis of internal factors, such as CPU and memory requirements, or external factors, such as the user's choice. The priority scheduling algorithm supports both preemptive and non-preemptive scheduling policies.
7. Round robin (RR): In this algorithm, each process is allocated the CPU for a specific time period called the time slice, which is normally 10 to 100 milliseconds. If the process completes its execution within this time slice, it is removed from the queue; otherwise, it is preempted and placed at the back of the ready queue.
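The difference an ordering policy makes can be seen by computing average waiting times for the same set of bursts under FCFS and SJF. A small sketch in Python; the burst times are made up, and all processes are assumed to arrive at time 0:

```python
def average_waiting_time(burst_times):
    """With all processes arriving at time 0 and run back to back,
    each process waits for the sum of the bursts before it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

bursts = [6, 8, 3]                           # illustrative burst times
fcfs = average_waiting_time(bursts)          # run in arrival order
sjf = average_waiting_time(sorted(bursts))   # shortest job first
print(fcfs, sjf)
```

Running the short burst first lowers the average wait, which is why SJF is provably optimal for this metric when all jobs are available at once.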
1. A scheduling algorithm is used to minimize resource starvation and to ensure fairness among the parties utilizing the resources.
2. It deals with the problem of deciding which of the outstanding requests is to be allocated resources.
3. It helps to decide which process is to be executed first and last to achieve maximum CPU
utilization.
2. Minimizing wait time (time from work becoming ready until the first point it begins
execution).
3. Minimizing latency or response time (time from work becoming ready until it is finished, in the case of batch activity, or until the system responds and hands the first output to the user, in the case of interactive activity).
4. Maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
5. The scheduler must also ensure that processes can meet deadlines. Scheduling tasks can also be distributed to remote devices across a network and managed through an administrative back end.
Lesson Assessment 6
7.1 PREEMPTIVE SCHEDULING is the act of temporarily interrupting a task being carried
out by the computer system, without requiring its cooperation, and with the intention of resuming
the task at a later time. It can also be defined as the scheduling which is done when the process
changes from running state to ready state or from waiting state to ready state.
Preemptive scheduling is used in real-time systems, where tasks are usually configured with different priorities and timing constraints. Critical tasks are given higher priorities. A higher-priority task can suspend a lower-priority one and use the CPU until it releases it. Task contexts are saved so that tasks can resume their operations from the point they left off when they get the CPU back. The operating system interrupts the currently executing process and invokes the scheduler to determine which process should execute next; therefore, all processes get some amount of CPU time over any given period.
In preemptive multitasking, the operating system Kernel can also initiate a context switch to
satisfy the scheduling policy’s priority constraint, thus preempting the active task.
Preemptive multitasking can also be described as a technique in which a computer operating system uses some criteria to decide how long to allocate to any one task before giving another task a turn to use the operating system. The act of taking control of the operating system from one task and giving it to another task is called preempting.

Advantages of preemptive scheduling
1. Preemptive scheduling is more robust, as one process cannot monopolize the CPU.
Non-preemptive scheduling can be described as scheduling in which a task runs until it stops (voluntarily) or finishes. A task within a non-preemptive system will run until completed. The scheduler then checks all tasks' states and schedules the next highest-priority task in the Ready state.
With non-preemptive scheduling, once a task has been assigned the CPU, it cannot be taken away, even if short tasks have to wait for longer tasks to complete.
Advantages of non-preemptive scheduling
6. Tasks can share the same stack, since only one task executes at a time.

Disadvantages of non-preemptive scheduling
3. Not very good fairness by most definitions.
5. If preemption is not allowed, even otherwise optimal schedules may have to leave the processor idle at certain times.
6. The processor may have to be left idle so that tasks with early deadlines that arrive late can still finish on time.
SCHEDULING
The similarity between preemptive and non-preemptive scheduling is that they are both done on
the CPU.
Basis for comparison | Preemptive scheduling | Non-preemptive scheduling
Basic | The resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in the middle of execution. | A running process cannot be interrupted in the middle of execution.
Starvation | If high-priority processes frequently arrive in the ready queue, low-priority processes may starve. | If a process with a long burst time is occupying the CPU, another process with a shorter burst time may starve.
Overhead | It has the overhead of scheduling processes. | It does not have the overhead of scheduling processes.
Flexibility | Preemptive scheduling is a flexible type of scheduling. | Non-preemptive scheduling is a rigid type of scheduling.
Cost | It has an associated cost (saving and restoring process contexts). | It has no such cost.
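The interrupt and starvation rows above can be made concrete with a one-unit-at-a-time simulation of preemptive SRTF. The process names, arrival times, and burst lengths are illustrative:

```python
def srtf_finish_times(jobs):
    """Preemptive SRTF: at each time unit, run the arrived process
    with the least remaining time, preempting any other process."""
    remaining = {pid: burst for pid, _, burst in jobs}
    arrival = {pid: arrive for pid, arrive, _ in jobs}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:
            t += 1            # idle until the next arrival
            continue
        p = min(ready, key=lambda q: remaining[q])
        remaining[p] -= 1     # run the chosen process for one time unit
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

# "A" starts first but is preempted when the shorter job "B" arrives at t=1.
print(srtf_finish_times([("A", 0, 5), ("B", 1, 2)]))
```

Under a non-preemptive policy, "B" could not interrupt "A" and would have to wait for "A"'s whole burst, illustrating the Interrupt row of the table.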
Lesson Assessment 7
2. What are the merits and demerits of preemptive and non-preemptive scheduling?