
CHAPTER 1

INTRODUCTION AND HISTORY OF O/S


1.1 DEFINITION OF SOFTWARE

The term software can be defined as the series of instructions that controls the operation of a computer system. Without software, the hardware cannot perform any task associated with the computer.

Software can also be described as the programs that make the hardware function effectively.

1.2 DEFINITION OF PROGRAM

A program is a series of statements or a set of instructions written in a specialized language which, when executed, controls the operation of the computer to perform some task.

1.3 MEANING OF OPERATING SYSTEMS

An operating system (commonly abbreviated OS or O/S) is the software component of a computer

system that is responsible for the management and coordination of activities, and the sharing of

the resources of the computer. The operating system acts as a host for applications that are run on

the machine.

As a host, one of the purposes of an operating system is to handle the details of the operation of

the hardware. This relieves application programs from having to manage these details and this

makes it easier to write applications. Almost all computers, including handheld computers, desktop

computers, supercomputers, and even video game consoles, use an operating system of some type.

Some of the oldest models may however use an embedded operating system that may be contained

on a compact disk or other data storage device.

Common contemporary operating systems include Microsoft Windows, Mac OS, Linux and

Solaris. Microsoft Windows has a significant majority of the market share in the desktop and
notebook computer markets, while servers generally run on Linux or other Unix-like systems.

Embedded device markets are split amongst several operating systems.

1.4 HISTORY OF OPERATING SYSTEMS

Historically, operating systems have been closely tied to computer architecture, so it is a good idea to study the history of operating systems alongside the architecture of the computers on which they run.

First Generation, 1951-1956

The earliest electronic digital computers had no operating systems. Machines of the time were so

primitive that programs were often entered one bit at a time on rows of mechanical switches (plug

boards). Programming languages were unknown (not even assembly languages). Operating

systems were unheard of.

Second Generation, 1956-1964

By the early 1950s, the routine had improved somewhat with the introduction of punched cards. The systems of the 1950s generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups or batches.

Third Generation, 1964-1979

The systems of the 1960s were also batch processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. Operating system designers therefore developed the concept of multiprogramming, in which several jobs are in main memory at once and the processor is switched from job to job as needed, keeping several jobs advancing while keeping the peripheral devices in use. For example, on a system with no multiprogramming, when the current job paused to wait for an I/O operation to complete, the CPU simply sat idle until the I/O finished. The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for I/O to complete, another job could be using the CPU.

Another major feature of third-generation operating systems was the technique called spooling (simultaneous peripheral operations online). In spooling, a high-speed device such as a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, output is written to the disk. Programs can run to completion faster and other programs can be initiated sooner; when the printer becomes available, the output can be printed.

Another feature of this generation was the time-sharing technique, a variant of multiprogramming in which each user has an on-line (i.e. directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests; otherwise user productivity suffers. Time-sharing systems were developed to multiprogram large numbers of simultaneous interactive users.

Fourth Generation, 1979 - Present

With the development of LSI (large-scale integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point where it became possible to build desktop computers as powerful as the mainframes of the 1970s.

1.5 Functions of an Operating System

(i) It controls the hardware resources i.e. input and output devices.

(ii) It performs loading and unloading of programs e.g. application package.

(iii) Operating system implements the use of passwords.

(iv) It manages the resources of the computer e.g. storage management.

(v) It handles errors when they occur and uses corrective routines where possible.
1.6. TYPES OF OPERATING SYSTEMS

Operating systems can be classified based on the operations the user is able to carry out with the system and on the number of users.

1.6.1 TYPES OF O/S BASED ON NUMBER OF USERS

1. Single User Operating System

As the name implies, this type of operating system allows only one user at a time on the computer system. Examples are Microsoft Disk Operating System (MS-DOS) and IBM's PC-DOS.

2. Multi-User Operating System

This operating system permits several users to run programs at the same time, usually through several terminals. A terminal consists of a monitor and a keyboard. A multi-user system operates on the principle of multiprogramming (a method of executing two or more programs concurrently on the same computer). The operating systems of mainframes and minicomputers are multi-user systems. Examples are UNIX and XENIX.

3. Networking Operating System

With this type of operating system, computers can be linked together, thereby allowing users to have access to one another's resources.

1.6.2 TYPES OF O/S BASED ON OPERATION OF USER WITH THE SYSTEM

1. Batch Operating System

The users of a batch operating system do not interact with the computer directly. Each user

prepares his job on an off-line device like punch cards and submits it to the computer operator. To

speed up processing, jobs with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator then sorts the programs with

similar requirements into batches.

The problems with Batch Systems are as follows:

1. Lack of interaction between the user and the job.

2. CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.

3. Difficult to provide the desired priority.

2. Time-Sharing Operating System

Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. The processor's time, shared among multiple users simultaneously, is termed time-sharing.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum. When the user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Many computer systems that were designed primarily as batch systems have been modified into time-sharing systems.

Advantages of Timesharing operating systems are as follows −

1. Provides the advantage of quick response.

2. Avoids duplication of software.

3. Reduces CPU idle time.


Disadvantages of Time-sharing operating systems are as follows −

1. Not reliable.

2. No full security and integrity of user programs and data.

3. Data communication problem.

3. Real-time Operating System

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much shorter than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor

or the flow of data and as a control device in a dedicated application. A real-time operating system

must have well-defined, fixed time constraints, otherwise the system will fail. For example,

scientific experiments, medical imaging systems, industrial control systems, weapon systems,

robots, air traffic control systems, etc.

There are two types of real-time operating systems.

(i) Hard real-time systems

Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,

secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual

memory is almost never found.

(ii) Soft real-time systems

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers.

4. Distributed Operating System

Distributed systems use multiple central processors to serve multiple real-time applications and

multiple users. Data processing jobs are distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.

The advantages of distributed systems are as follows –

1. With resource sharing facility, a user at one site may be able to use the resources available at

another.

2. Sites can speed up the exchange of data with one another via electronic mail.

3. If one site fails in a distributed system, the remaining sites can potentially continue operating.

4. Better service to the customers.

5. Reduction of the load on the host computer.

6. Reduction of delays in data processing.

5. Network Operating System

A Network Operating System runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of the network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network, or other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft

Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

The advantages of network operating systems are as follows –

1. Centralized servers are highly stable.

2. Security is server managed.

3. Upgrades to new technologies and hardware can be easily integrated into the system.

4. Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows −

1. High cost of buying and running a server.

2. Dependency on a central location for most operations.

3. Regular maintenance and updates are required.

Lesson Assessment 1

1. What is an OS? What is software? What is a program?

2. List the basic functions of an OS.

3. List and explain the types of OS based on the operation of the user with the system.

4. What distinguishes fourth-generation operating systems from third-generation operating systems, and what improvement in computer architecture led to this?

5. What are the types of OS based on the number of users?

6. What are the differences between a distributed OS and a network OS?


CHAPTER 2
CONCEPTS OF OPERATING SYSTEM

2.1 KERNEL

A kernel connects the application software to the hardware of a computer.

In computer science, the kernel is the central component of most computer operating systems (OS).

Its responsibilities include managing the system's resources and the communication between

hardware and software components. As a basic component of an operating system, a kernel

provides the lowest-level abstraction layer for the resources (especially memory, processors and

I/O devices) that applications must control to perform their function. It typically makes these

facilities available to application processes through inter-process communication mechanisms and

system calls.

2.1.1 KERNEL BASIC RESPONSIBILITIES

The kernel's primary purpose is to manage the computer's resources and allow other programs to

run and use these resources. Typically, the resources consist of:

• The CPU (frequently called the processor). This is the most central part of a computer system,

responsible for running or executing programs on it. The kernel takes responsibility for deciding

at any time which of the many running programs should be allocated to the processor or processors

(each of which can usually run only one program at once).

• The computer's memory. Memory is used to store both program instructions and data. The

kernel is responsible for deciding which memory each process can use, and determining what to

do when not enough is available.

• Input/Output (I/O) devices present in the computer, such as disk drives, printers, displays, etc.

The kernel allocates requests from applications to perform I/O to an appropriate device (or
subsection of a device, in the case of files on a disk or windows on a display) and provides

convenient methods for using the device (typically abstracted to the point where the application

does not need to know implementation details of the device).

Tasks of a Kernel

1. Process Management

2. Memory Management

3. Device Management

4. System Calls

2.1.1.1 PROCESS MANAGEMENT

The main task of a kernel is to allow the execution of applications and support them with features

such as hardware abstractions. To run an application, a kernel typically sets up an address space

for the application, loads the file containing the application's code into memory (perhaps via

demand paging), sets up a stack for the program and branches to a given location inside the

program, thus starting its execution.

Multi-tasking kernels are able to give the user the illusion that the number of processes being run

simultaneously on the computer is higher than the maximum number of processes the computer is

physically able to run simultaneously. Typically, the number of processes a system may run

simultaneously is equal to the number of CPUs installed (however this may not be the case if the

processors support simultaneous multithreading).
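The following minimal sketch (an assumption of this text, not part of the original notes) illustrates the process-management steps just described on a Unix-like system, using Python's os module wrappers around the fork() and exec() facilities that the kernel provides.

# Minimal sketch (assumes a Unix-like system): asking the kernel to create a
# new process and load a program into it, via wrappers around fork()/execvp().
import os

pid = os.fork()                    # kernel duplicates the calling process
if pid == 0:
    # Child: ask the kernel to replace this process image with the "ls" program.
    # The kernel sets up the new address space, loads the code, and starts it.
    os.execvp("ls", ["ls", "-l"])
else:
    # Parent: wait for the child to terminate and collect its exit status.
    _, status = os.waitpid(pid, 0)
    print("child", pid, "exited with status", os.WEXITSTATUS(status))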

2.1.1.2 MEMORY MANAGEMENT

The kernel has full access to the system's memory and must allow processes to access this memory

safely as they require it. Often the first step in doing this is virtual addressing, usually achieved by

paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address
appear to be another address, the virtual address. Virtual address spaces may be different for

different processes; the memory that one process accesses at a particular (virtual) address may be

different memory from what another process accesses at the same address. This allows every

program to behave as if it is the only one (apart from the kernel) running and thus prevents

applications from crashing each other.
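A small illustrative sketch of this idea follows; the page size and frame numbers are invented for the example. It shows how the same virtual address in two different processes can refer to different physical memory because each process has its own page table.

# Illustrative only: per-process page tables give each process its own view of memory.
PAGE_SIZE = 4096

page_table_A = {0: 7}     # process A: virtual page 0 -> physical frame 7
page_table_B = {0: 12}    # process B: virtual page 0 -> physical frame 12

def translate(page_table, virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)   # split into page number and offset
    return page_table[page] * PAGE_SIZE + offset

print(translate(page_table_A, 100))   # 28772 (frame 7  * 4096 + 100)
print(translate(page_table_B, 100))   # 49252 (frame 12 * 4096 + 100)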

2.1.1.3 DEVICE MANAGEMENT

To perform useful functions, processes need access to the peripherals connected to the computer,

which are controlled by the kernel through device drivers. For example, to show the user something

on the screen, an application would make a request to the kernel, which would forward the request

to its display driver, which is then responsible for actually plotting the character/pixel.

A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an

embedded system where the kernel will be rewritten if the available hardware changes), configured

by the user (typical on older PCs and on systems that are not designed for personal use) or detected

by the operating system at run time (normally called Plug and Play).

2.1.1.4 SYSTEM CALLS

To actually perform useful work, a process must be able to access the services provided by the

kernel. This is implemented differently by each kernel, but most provide a C library or an API,

which in turn invoke the related kernel functions.

The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in

use, it is impossible for a user process to call the kernel directly, because that would be a violation

of the processor's access control rules. A few possibilities are:

• Using a software-simulated interrupt. This method is available on most hardware, and is

therefore very common.


• Using a call gate. A call gate is a special address which the kernel has added to a list stored in

kernel memory and which the processor knows the location of.

• Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some (but not all) operating systems for PCs make use of them when available.

• Using a memory-based queue. An application that makes large numbers of requests but does

not need to wait for the result of each may add details of requests to an area of memory that the

kernel periodically scans to find requests.
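As a concrete, hedged illustration of the idea that user programs reach kernel services through a library or API rather than calling the kernel directly, the short Python sketch below uses os-module wrappers that in turn issue system calls on a Unix-like system (the specific calls chosen are just examples).

# Sketch: a user program reaches kernel services through library wrappers
# that issue system calls on its behalf (assumes a Unix-like system).
import os

pid = os.getpid()                            # wraps the getpid() system call
os.write(1, f"my pid is {pid}\n".encode())   # wraps the write() system call on fd 1 (stdout)

data = os.urandom(8)                         # asks the kernel for random bytes
os.write(1, data.hex().encode() + b"\n")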

2.2 SHELL

The shell or command shell is a software program that provides direct communication between the user and the operating system. The term shell is also sometimes applied to software that is built around a particular component or that provides a user interface for another program.

Other definitions of shell

1. A shell is software that provides an interface between users and operating system of

a computer to access the services of a kernel.

2. A shell is a user interface for access to an operating system’s services. Most often the user

interacts with the shell using a command-line interface (CLI).

The terminal is a program that opens a graphical window and lets you interact with the shell.

2.2.1 TYPES OF SHELL

1. Command-line shell (eg. Bash(sh), Command Prompt(cmd), C shell, Bourne shell, Korn

shell(ksh) etc.)
2. GUI Shell (eg. Windows Explorer or Windows Shell)

A third type of shell has been developed more recently: the GCLI (Graphical Command Line Interface) shell. A GCLI shell combines the features of both CLI and GUI shells and provides an interface which is both user-friendly and powerful.

2.2.1.1 COMMAND-LINE SHELL (e.g. Bash (sh), Command Prompt (cmd), C shell, Bourne shell, Korn shell (ksh), etc.): A command-line interface (CLI) is an interface between the user and a program.

A CLI system does not use the mouse as an input device; the keyboard is generally used for input, and messages are sent by typing a command and then pressing the enter (return) key. After a command is processed, a new prompt is issued for accepting the next instruction from the user.

GUIs were not supported by early operating systems, and text-based commands were used for communication between the user and the computer.
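To make the read-command, run-command, new-prompt cycle concrete, here is a toy command-line shell loop in Python (the prompt string "myshell> " and the use of subprocess are illustrative assumptions, not how any real shell is implemented).

# A toy command-line shell loop (illustrative only): read a command from the
# keyboard, hand it to the operating system to execute, then print a new prompt.
import subprocess

while True:
    try:
        command = input("myshell> ")          # show a prompt and read one line
    except EOFError:
        break                                 # Ctrl-D ends the session
    if command.strip() in ("exit", "quit"):
        break
    if command.strip():
        subprocess.run(command, shell=True)   # run it and wait for completion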

2.2.1.2 GUI SHELL (e.g. Windows Explorer or the Windows Shell)

A graphical user interface (GUI) is a human-computer interface that is graphical rather than purely textual. A GUI uses images, windows, icons, buttons, menus, etc., which can be manipulated with a mouse.

GUIs are more user-friendly than a command-line interface (CLI) because the user interacts by using a mouse rather than by typing commands. Users also do not have to remember a lot of commands, which makes the system easier to learn and use. GUI is often pronounced "gooey".

2.3 DIFFERENCES BETWEEN KERNEL AND SHELL


The main difference between kernel and shell is that the kernel is the core of the operating system

that controls all the tasks of the system while the shell is the interface that allows the users to

communicate with the kernel.

The kernel manages everything. The shell translates the user's commands into low-level calls to the kernel.

Types of shell include Bash (sh), Command Prompt (cmd), the C shell, the Bourne shell, the Korn shell (ksh), etc.; the kernel is not classified into types in this way.

Lesson Assessment 2

1. What are the kernel basic responsibilities? What are the basic tasks of the kernel?

2. What are the differences between the Kernel and Shell? What is terminal?

3. List and explain the types of Shell?

4. Differentiate between process management and system calls.


CHAPTER 3
STRUCTURE OF OPERATING SYSTEM

3.1 OPERATING SYSTEMS provide environments in which programs and services run for the users of the system, including: User Interfaces, Program Execution, I/O Operations, File-System Manipulation, Communications, Error Detection, Resource Allocation, Protection and Security.

3.2 MONOLITHIC SYSTEMS

3.2.0 Monolithic Kernel

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in

the same memory area. This approach provides rich and powerful hardware access. Some

developers maintain that monolithic systems are easier to design and implement than other

solutions, and are extremely efficient if well-written. The main disadvantages of monolithic

kernels are the dependencies between system components - a bug in a device driver might crash

the entire system - and the fact that large kernels can become very difficult to maintain.

Advantages of Monolithic Kernel

Some of the advantages of monolithic kernel are:

1. The execution of the monolithic kernel is quite fast, as services such as memory management, file management, process scheduling, etc. are implemented under the same address space.

2. A process runs completely in a single address space in the monolithic kernel.

3. The monolithic kernel is a static single binary file.

Disadvantages of Monolithic Kernel

Some of the disadvantages of monolithic kernel are:

1. If any service fails in the monolithic kernel, it leads to the failure of the entire system.
2. To add any new service, the entire operating system needs to be modified by the user.

3.2.1 MicroKernel

The microkernel approach consists of defining a simple abstraction over the hardware, with a set

of primitives or system calls to implement minimal OS services such as memory management,

multitasking, and inter-process communication. Other services, including those normally provided

by the kernel such as networking, are implemented in user-space programs, referred to as servers.

Microkernels are easier to maintain than monolithic kernels, but the large number of system calls

and context switches might slow down the system because they typically generate more overhead

than plain function calls.

Differences Between Microkernel and Monolithic Kernel

Some of the differences between microkernel and monolithic kernel are given as follows:

1. The microkernel is much smaller in size as compared to the monolithic kernel.

2. The microkernel is easily extensible, whereas this is quite complicated for the monolithic kernel.

3. The execution of the microkernel is slower as compared to the monolithic kernel.

4. Much more code is required to write a microkernel than the monolithic kernel.

Examples of Microkernel are QNX (Quantum Software Systems), Symbian, L4 Linux etc.

Monolithic Kernel examples are Linux, BSD (Berkeley Software Distribution -Unix) etc.

3.2.2 Hybrid Kernel is a kernel architecture based on combining aspects of microkernel and

monolithic kernel architectures used in computer operating systems. The category is controversial

due to the similarity to monolithic kernel; the term has been dismissed by some as just marketing.

The usually accepted categories are monolithic kernels and microkernels (with nanokernels and

exokernels seen as more extreme versions of microkernels).


The hybrid kernel approach tries to combine the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel. An example is Mac OS.

A nanokernel or picokernel is a very minimalist operating system kernel. The nanokernel

represents the closest hardware abstraction layer of the operating system by interfacing the CPU,

managing interrupts and interacting with the MMU. The interrupt management and MMU

interface are not necessarily part of a nanokernel; however, on most architectures these

components are directly connected to the CPU, therefore, it often makes sense to integrate these

interfaces into the kernel.

3.3 LAYER SYSTEMS

The operating system is split into various layers, and each layer has a different functionality. This type of operating system was created as an improvement over the early monolithic systems.

3.3.1 Reason for Layering in Operating Systems

1. Layering provides a distinct advantage in an operating system.

2. All the layers can be defined separately and interact with each other as required.

3. It is easier to create, maintain and update the system if it is done in the form of layers.

4. Change in one layer specification does not affect the rest of the layers.

Note: Each of the layers in the operating system can only interact with the layers that are above

and below it. The lowest layer handles the hardware and the uppermost layer deals with the user

applications.

3.3.2 Layered Operating Systems

There are six layers in the layered operating system. The diagram below demonstrates these layers. Details about the six layers are:

1. Hardware

This layer interacts with the system hardware and coordinates with all the peripheral devices used

such as printer, mouse, keyboard, scanner etc. The hardware layer is the lowest layer in the layered

operating system architecture.

2. CPU Scheduling

This layer deals with scheduling the processes for the CPU. There are many scheduling queues

that are used to handle processes. When the processes enter the system, they are put into the job

queue. The processes that are ready to execute in the main memory are kept in the ready queue.

3. Memory Management

Memory management deals with memory and the moving of processes from disk to primary

memory for execution and back again. This is handled by the third layer of the operating system.

4. Process Management

This layer is responsible for managing the processes i.e assigning the processor to a process at a

time. This is known as process scheduling. The different algorithms used for process scheduling
are FCFS (first come first served), SJF (shortest job first), priority scheduling, round-robin

scheduling etc.

5. I/O Buffer

I/O devices are very important in the computer systems. They provide users with the means of

interacting with the system. This layer handles the buffers for the I/O devices and makes sure that

they work correctly.

6. User Programs

This is the highest layer in the layered operating system. This layer deals with the many user programs and applications that run on an operating system, such as word processors, games, browsers, etc.

3.4 VIRTUAL MACHINE

A virtual machine (VM) is a software program or operating system that not only exhibits the

behavior of a separate computer, but is also capable of performing tasks such as running

applications and programs like a separate computer. In other words, a virtual machine (VM) is a

virtual environment that functions as a virtual computer system with its own CPU, memory,

network interface, and storage, created on a physical hardware system.

A virtual machine, usually known as a guest, is created within another computing environment referred to as a host. Multiple virtual machines can exist within a single host at one time.

The physical hardware, equipped with a hypervisor such as Kernel-based Virtual Machine (KVM),

is called the host, while the many VMs that use its resources are guests. The hypervisor treats

compute resources—like CPU, memory, and storage—as a pool of resources that can easily be

relocated between existing guests or to new virtual machines.


VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of

hardware, like a server. They can be moved between host servers depending on demand or to use

resources more efficiently.

The operating system in a VM runs in the same way that an operating system or application

normally would on the host hardware, so the user experience with a VM will be similar.

Advantages of Virtual Machines:

1. Provides disaster recovery and application provisioning options.

2. Virtual machines are simply managed, maintained, and are widely available.

3. Multiple operating system environments can be run on a single physical computer.

Disadvantages of Virtual Machines:

1. Running multiple virtual machines on one physical machine can cause unstable performance.

2. Virtual machines are less efficient and run slower than a physical computer.

Types of Virtual Machines

1. Process virtual machines: Executes computer programs in a platform-independent

environment. It masks the information of the underlying hardware or operating system. This allows

the program to be executed in the same fashion on any platform.

2. System virtual machines: Support the sharing of a host computer’s physical resources between

multiple virtual machines.

3.5 CLIENT/SERVER MODEL

3.5.1 Client

A client is a program that runs on the local machine requesting service from the server. A client program is a finite program, meaning that the service is started by the user and terminates when the service is completed.
3.5.2 Server

A server is a program that runs on the remote machine providing services to the clients. When a client requests a service, the server handles the incoming request, but it never initiates the service itself.

A server program is an infinite program: when it starts, it runs indefinitely unless a problem arises. The server waits for incoming requests from clients, and when a request arrives, it responds to it.

3.5.3 Client and Server model

A client and server networking model is a model in which computers such as servers provide network services to other computers, such as clients, so that user-based tasks can be performed. This model is known as the client-server networking model.

Note: An application program is known as a client program, running on the local machine that

requests for a service from an application program known as a server program, running on the

remote machine.
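The sketch below illustrates this model with a minimal TCP server and client in Python. The port number (5000) and the echo-style behaviour are arbitrary choices for illustration; in practice, the two functions would run in separate programs on the server and client machines.

# Minimal sketch of the client/server model using TCP sockets.
import socket

def run_server():
    # The server is "infinite": it waits for requests and responds to them.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("localhost", 5000))
        srv.listen()
        while True:
            conn, _ = srv.accept()              # wait for an incoming request
            with conn:
                data = conn.recv(1024)
                conn.sendall(b"server received: " + data)

def run_client():
    # The client is "finite": it requests a service, gets the reply, and exits.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("localhost", 5000))
        cli.sendall(b"hello")
        print(cli.recv(1024).decode())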

Advantages of Client-server networks:

1. Centralized: Centralized back-up is possible in client-server networks, i.e., all the data is stored

in a server.

2. Security: These networks are more secure as all the shared resources are centrally administered.

3. Performance: The use of the dedicated server increases the speed of sharing resources. This

increases the performance of the overall system.

4. Scalability: We can increase the number of clients and servers separately, i.e., the new element

can be added, or we can add a new node in a network at any time.

5. Cost efficient: It requires less maintenance cost and Data recovery is possible.
Disadvantages of Client-Server network:

1. Traffic congestion is a big problem in client/server networks: when a large number of clients send requests to the same server, traffic congestion may result.

2. It does not have the robustness of a decentralized network, i.e., when the server is down, client requests cannot be met.

3. A client/server network can be demanding. Sometimes regular computer hardware cannot serve a certain number of clients; in such situations, specific hardware is required at the server side to complete the work.

4. Clients are prone to viruses, Trojans and worms if these are present on the server or uploaded to the server.

5. Servers are prone to denial of service (DoS) attacks.

6. Data packets may be spoofed or modified during transmission.

Lesson Assessment 3

1. What are the differences between Microkernel and Monolithic Kernel?

2. Define Client, Server, Application Program and Client server model.

3. Explain the layered operating systems?

4. What are the advantages and disadvantages of the virtual machine?


CHAPTER 4
VIRTUAL MEMORY

4.1 VIRTUAL MEMORY is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage.

One can think of virtual memory as an alternate set of memory addresses. Programs use

these virtual addresses rather than real addresses to store instructions and data. When the

program is actually executed, the virtual addresses are converted into real memory addresses.

Virtual memory was developed at a time when physical memory -- the installed RAM -- was

expensive. A system using virtual memory uses a section of the hard drive to emulate RAM.

With virtual memory, a system can load larger programs or multiple programs running at the

same time, allowing each one to operate as if it has infinite memory and without having to

purchase more RAM.

4.2 WHY IT IS CALLED VIRTUAL MEMORY

A computer can address more memory than the amount physically installed on the system. This

extra memory is actually called virtual memory and it is a section of a hard disk that's set up to

emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory.

Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by

using disk. Second, it allows us to have memory protection, because each virtual address is

translated to a physical address.


The following are situations in which the entire program is not required to be loaded fully in main memory:

• User-written error handling routines are used only when an error occurs in the data or computation.

• Certain options and features of a program may be used rarely.

• Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

The ability to execute a program that is only partially in memory would confer many benefits:

• Fewer I/O operations would be needed to load or swap each user program into memory.

• A program would no longer be constrained by the amount of physical memory that is available.

• Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a

segmentation system. Demand segmentation can also be used to provide virtual memory.

4.3 HISTORY OF VIRTUAL MEMORY

In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use.


To allow for multiprogramming and multitasking, many early systems divided memory between

multiple programs without virtual memory, such as early models of the PDP-10 via registers.

1. The concept of virtual memory was first developed by German physicist Fritz-Rudolf

Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of

a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High

Speed Memory Operation; it described a machine with 6 100-word blocks of primary core

memory and an address space of 1,000 100-word blocks, with hardware automatically moving

blocks between primary memory and secondary drum memory.

2. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the

1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.

3. Virtual memory was introduced to the x86 architecture with the protected mode of the Intel

80286 processor, but its segment swapping technique scaled poorly to larger segment sizes.

The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling

the page fault exception to chain with other exceptions without double fault. However, loading

segment descriptors was an expensive operation, causing operating system designers to rely

strictly on paging rather than a combination of paging and segmentation.

4.4 MAJOR FUNCTIONS OF THE VIRTUAL MEMORY

1. Security. Processes cannot access each other's memory except via well-defined

interfaces. Processes cannot access hardware except via well-defined interfaces. A

process cannot crash the whole system.

2. Convenience. Having a known fixed address for global variables is good for performance, yet without virtual memory all addresses would have to be computed, because otherwise there would be conflicts. With virtual memory such conflicts cannot happen, because the same address in different processes refers to different blocks of memory.

3. Swapping. When there is not enough memory, the virtual memory subsystem in the OS kernel can swap out pieces of it to the disk. This has an impact on performance, but a performance penalty is usually preferable to losing data or not being able to run the program at all.

4.5 ADVANTAGES OF VIRTUAL MEMORY

1. The degree of multiprogramming will be increased.

2. There is no need to buy more memory RAMs.

3. User can run large application with less real RAM.

4.6 DISADVANTAGES OF VIRTUAL MEMORY

1. The system becomes slower since swapping takes time.

2. It takes more time in switching between applications.

3. Virtual memory occupies hard disk space that would otherwise be available to the user.

4.7 TYPES OF VIRTUAL MEMORY

A computer's memory management unit (MMU) handles memory operations, including

managing virtual memory. In most computers, the MMU hardware is integrated into the CPU.

There are two ways in which virtual memory is implemented: paged and segmented.

1. Demand paging 2. Demand segmentation


1. Demand paging

The process of loading the page into memory on demand (whenever page fault occurs) is known

as demand paging.

The process includes the following steps:

1. If the CPU tries to reference a page that is currently not available in main memory, it generates an interrupt indicating a memory access fault.

2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.

3. The OS will search for the required page in the logical address space.

4. The required page will be brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page to replace in the physical address space.

5. The page table will be updated accordingly.

6. A signal will be sent to the CPU to continue program execution, and the process is placed back into the ready state.


Hence whenever a page fault occurs these steps are followed by the operating system and the

required page is brought into memory.
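The toy simulation below illustrates this fault-driven loading with a small number of frames. The frame count, the reference string, and the use of FIFO replacement are assumptions made purely for illustration; real systems use more sophisticated replacement algorithms.

# Illustrative simulation of demand paging: pages are loaded into a small set
# of frames only when a page fault occurs; FIFO replacement is assumed here.
from collections import deque

NUM_FRAMES = 3
frames = deque()            # pages currently resident in main memory
page_faults = 0

def access(page):
    global page_faults
    if page in frames:
        return                         # page already in memory: no fault
    page_faults += 1                   # page fault: bring the page in
    if len(frames) == NUM_FRAMES:
        evicted = frames.popleft()     # replacement decision (FIFO)
        print("evict page", evicted)
    frames.append(page)
    print("load page", page, "-> frames:", list(frames))

for p in [1, 2, 3, 2, 4, 1]:
    access(p)
print("page faults:", page_faults)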

Three major activities of demand paging

1. Service the interrupt – careful coding means just several hundred instructions needed

2. Read the page – lots of time.

3. Restart the process – again just a small amount of time.

Advantages of demand paging

Demand paging, as opposed to loading all pages immediately:

1. Only loads pages that are demanded by the executing process.

2. As there is more space in main memory, more processes can be loaded, reducing the context

switching time, which utilizes large amounts of resources.

3. Less loading latency occurs at program startup, as less information is accessed from

secondary storage and less information is brought into main memory.

4. As main memory is expensive compared to secondary memory, this technique helps

significantly reduce the bill of material (BOM) cost in smart phones for example. Symbian

OS had this feature.

Disadvantages of demand paging

1. Individual programs face extra latency when they access a page for the first time.
2. Low-cost, low-power embedded systems may not have a memory management unit that

supports page replacement.

3. Memory management with page replacement algorithms becomes slightly more complex.

4. Possible security risks, including vulnerability to timing attacks.

5. Thrashing which may occur due to repeated page faults.

2. Demand segmentation

Demand segmentation, which is similar to demand paging, is also used by the OS. The operating system uses demand segmentation where there is insufficient hardware available to implement demand paging.

Demand segmentation allows pages that are often referenced together to be brought into memory together; this decreases the number of page faults.

Note: Some virtual memory systems combine segmentation and paging. In this case, memory

gets divided into frames or pages. The segments take up multiple pages and the virtual address

includes both the segment number and the page number.

4.8 DIFFERENCES BETWEEN PAGING AND SEGMENTATION

Basis for Comparison | Paging | Segmentation
Basic | A page is of fixed block size. | A segment is of variable size.
Fragmentation | Paging may lead to internal fragmentation. | Segmentation may lead to external fragmentation.
Address | The user-specified address is divided by the CPU into a page number and an offset. | The user specifies each address by two quantities: a segment number and the offset (segment limit).
Size | The hardware decides the page size. | The segment size is specified by the user.
Table | Paging involves a page table that contains the base address of each page. | Segmentation involves a segment table that contains the segment number and offset (segment length).

Lesson Assessment 4

1. What are the differences between Paging and Segmentation?

2. What are the three major activities of Demand Paging?

3. Explain Virtual Memory?


CHAPTER 5
REAL MEMORY MANAGEMENT

5.1 MEMORY MANAGEMENT is the process of controlling and coordinating computer

memory, assigning portions (called blocks) to various running programs to optimize overall

system performance. Memory management resides in hardware, in the OS, in programs and

applications.

Memory management is the task carried out by the OS and hardware to accommodate multiple

processes in main memory. Also, it is a form of resource management applied to computer

memory. The essential requirement of memory management is to provide ways to dynamically

allocate portions of memory to programs at their request, and free it for reuse when no longer

needed.

5.2 MEMORY MANAGEMENT TECHNIQUES

The following are the three key memory management techniques used by an operating system:

1. Segmentation 2. Paging 3. Swapping

1. Segmentation

Segmentation refers to the technique of dividing the physical memory space into multiple blocks. Each block has a specific length and is known as a segment. Each segment has a starting address called the base address. The length of the segment determines the available memory space in the segment.

The above figure shows the organization of segments in the memory unit.

The location of a data value stored in a segment can be determined by the distance of the actual position of the data value from the base address of the segment. The distance between the actual position of the data and the base address of the segment is known as the displacement or offset value. In other words, when data must be obtained from the required segment, the actual address of the data is calculated by adding the base address of the segment to the offset value. The base address of the segment and the offset value are specified in the program instruction itself. The following figure shows how the actual position of an operand in a segment is obtained by adding the base address and the offset value.
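A short sketch of this calculation follows; the segment table contents (base addresses and limits) are made-up values for illustration only.

# Sketch of the segment address calculation: physical address = base + offset,
# with the segment length (limit) acting as a protection check. Values are illustrative.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}  # segment -> (base, limit)

def physical_address(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset falls outside the segment")       # protection check
    return base + offset

print(physical_address(1, 53))    # 6353 = base 6300 + offset 53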

2) Paging
Paging is a technique in which the main memory of the computer system is organized in the form of equal-sized blocks called pages. In this technique, the addresses of the occupied pages of physical memory are stored in a table, which is known as the page table.

Paging enables the operating system to obtain data from a physical memory location without specifying a lengthy memory address in the instruction. In this technique, a virtual address is used to map to the physical address of the data. The length of the virtual address is specified in the instruction and is smaller than the physical address of the data. It consists of two different numbers: the first number is the address of the page, called the virtual page, in the page table, and the second number is the offset value of the actual data within the page.

The above figure shows how the virtual address is used to obtain the physical address of an

occupied page of physical memory using a page table.
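A brief sketch of this lookup follows; the page size and page-table entries are invented for the example. The virtual address is split into a virtual page number and an offset, and the page table supplies the physical page used to form the physical address.

# Sketch of the paging lookup described above (illustrative values only).
PAGE_SIZE = 1024
page_table = {0: 5, 1: 9, 2: 2}          # virtual page number -> physical page number

def to_physical(virtual_address):
    vpage, offset = divmod(virtual_address, PAGE_SIZE)
    return page_table[vpage] * PAGE_SIZE + offset

print(to_physical(1 * PAGE_SIZE + 37))   # virtual page 1, offset 37 -> 9*1024 + 37 = 9253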

3) Swapping

Swapping is the technique used by an operating system for efficient management of memory

space of a computer system. Swapping involves performing two tasks called swapping in and

swapping out. The task of placing the pages or blocks of data from the hard disk to the main

memory is called swapping in. On the other hand, the task of removing pages or blocks of data

from main memory to the hard disk is called swapping out. The swapping technique is useful when a larger program is to be executed or some operations have to be performed on a large file.
The above figure shows the Swapping of pages in operating system.

5.3 REAL MEMORY MANAGEMENT REQUIREMENTS

1. Relocation

Since it is impossible for programmers to know where the program will be placed in memory

when it is executed, a process may be (and often is) relocated in main memory due to swapping or compaction. Swapping, or the ability to relocate, enables the OS to maintain a larger pool of ready-to-execute processes, and compaction enables the OS to have a larger contiguous memory area in which to place programs.

2. Protection

Processes should not be able to reference memory locations in another process without

permission. It should be impossible to check addresses in programs at compile/load-time since

the program could be relocated. The address references must be checked at execution-time by

hardware.

3. Logical Organization
To deal effectively with user programs, the OS and hardware should support a basic form of module to provide the required protection and sharing, because users write programs in modules with different characteristics. Instruction modules are execute-only, while data modules are either read-only or read/write.

4. Sharing

The OS must allow several processes to access a common portion of main memory without compromising protection. It is better to allow each process to access the same copy of a shared program rather than have its own separate copy.

5. Physical Organization

External memory is the long term store for programs and data while main memory holds

programs and data currently in use. Moving information between these two levels of the memory

hierarchy is a major concern of memory management. It is highly inefficient to leave this

responsibility to the application programmer. Thus, memory must be well organized.

Lesson Assessment 5

1. What is real memory management?

2. With the aid of a diagram explain the organization of segment in memory unit?

3. Explain memory management techniques?


CHAPTER 6
PROCESS SCHEDULING

6.1 PROCESS SCHEDULING is the essential part of a multiprogramming operating system that handles the removal of the running process from the CPU (central processing unit) and the selection of another process on the basis of a particular strategy.

6.2 TERMINOLOGIES USED IN PROCESS SCHEDULING

1. Multiprogramming operating system: It is a form of parallel processing in which

several programs are run at the same time on a uniprocessor so that it appears to the user

that different programs are executing at the same time when the operating system

executes part of one program then another and so on.

2. Central Processing Unit (CPU): Commonly called the central processor, it is the "brain" of the computer: the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, control and I/O operations specified by the instructions.

3. Time multiplexing: It is a method of transmitting and receiving independent signals over

a common path by means of synchronized switches at each end of the transmission line

so that each signal appears on the line only a fraction of a time in an alternating pattern.

4. Process control block: It is a data structure in the operating system kernel containing the

information needed to manage the scheduling of a particular process.


6.3 PROCESS SCHEDULING QUEUES

The OS maintains all PCBs (Process Control Block) in process scheduling queues. The OS

maintains a separate queue for each of the process states and PCBs of all processes in the same

execution state are placed in the same queue. When the state of a process is changed, its PCB is

unlinked from its current queue and moved to its new state queue. The following are the

important process scheduling queues maintained by the Operating System:

• Job Queue: This queue maintains the processes in the computer system.

• Ready Queue: This queue keeps the set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.

• Device Queue: The processes which are blocked due to unavailability of an I/O device constitute this queue.
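To make the PCB and ready queue concrete, here is a minimal sketch; the field names chosen for the PCB are simplified assumptions, since real PCBs hold many more fields (registers, open files, accounting data, and so on).

# Illustrative sketch of a process control block (PCB) and a ready queue.
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    pid: int
    state: str            # "new", "ready", "running", "waiting", ...
    program_counter: int = 0
    priority: int = 0

ready_queue = deque()                    # PCBs of processes ready to execute
ready_queue.append(PCB(pid=1, state="ready"))
ready_queue.append(PCB(pid=2, state="ready", priority=5))

next_process = ready_queue.popleft()     # the scheduler picks the next PCB
next_process.state = "running"
print(next_process)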

6.4 DISPATCHER AND SCHEDULER

A dispatcher is a special program which comes into play after the scheduler. When the

scheduler completes its job of selecting a process, it is the dispatcher which takes that process to

the desired state/queue. The dispatcher is the module that gives a process control over the CPU

after it has been selected by the short-term scheduler. This function involves the following:

1. Switching context

2. Switching to user mode

3. Jumping to the proper location in the user program to restart that program
Schedulers are special system software which handle process scheduling in various ways. Their

main task is to select the jobs to be submitted into the system and to decide which process to run.

Schedulers are of three types:

1. Long Term Scheduler.

2. Short Term Scheduler.

3. Medium Term Scheduler.

1. Long term (Job) scheduler

It is also called a job scheduler and it determines which programs are admitted into the system

for processing. It selects processes from the queue and loads them into memory for execution.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O

bound and processor bound.

Characteristics of long term scheduler

1. It is a job scheduler.

2. The speed is lesser than that of the short term scheduler.

3. It controls the degree of multiprogramming.

4. It is almost absent or minimal in time-sharing systems.

5. It selects processes from pool and loads them into memory for execution.

2. Medium term scheduler

It is a part of swapping. It removes the processes from the memory. It is in charge of handling

the swapped out processes.


A running process may become suspended if it makes an I/O request. A suspended process

cannot make any progress towards completion. In this condition, to remove the process from

memory and make space for other processes, the suspended process is moved to the secondary

storage. This process is called swapping and swapping may be necessary to improve the process

mix.

Characteristics of medium term scheduler

1. It is a process swapping scheduler.

2. Speed is in between both short and long term scheduler.

3. It reduces the degree of multi programming.

4. It is a part of time-sharing systems.

5. It can re-introduce the process into memory and execution can be continued.

3. Short term (CPU) scheduler

It is also known as the CPU scheduler. Its main objective is to increase system performance in

accordance with the chosen set of criteria. It is the change of ready state to running state of the

process. CPU scheduler selects a process among processes that are ready to execute and allocates

CPU to one of them.

Characteristics of short term scheduler

1. It is a CPU scheduler.

2. Speed is the fastest among other two.

3. It provides lesser control over degree of multiprogramming.

4. It is also minimal in time sharing system.

5. It selects those processes which are ready to execute.


6.5 TYPES OF SCHEDULING PROCESSES

1. Preemptive Process: In a preemptive scheduling policy, a low priority process has to suspend its execution if a high priority process is waiting in the same queue for execution.

2. Non-Preemptive Process: In a non-preemptive scheduling policy, processes are executed on a first come, first served basis, which means the next process is executed only when the currently running process finishes its execution.

6.6 SCHEDULING PROCESS ALGORITHMS

An algorithm is a step-by-step method for solving a problem.

Operating system performs the task of scheduling processes based on priorities using these

following algorithms:

1. First come first serve (FCFS): In this scheduling algorithm the first process entered in

queue is processed first.

2. Shortest job first (SJF): In this scheduling algorithm the process which requires shortest

CPU time to execute is processed first.

3. Shortest remaining time first (SRTF): This scheduling algorithm is the preemptive version

of the SJF scheduling algorithm. In this, the process which is left with the least processing

time is executed first.

4. Longest job first (LJF): In this type of scheduling algorithm, the process with the maximum

time required to execute is scheduled first. This type of scheduling is not widely used
because it is not a very effective way of scheduling, as the average turn-around time and the

average waiting time are maximum in this case.

5. Longest remaining time first (LRTF): As SRTF is to SJF, LRTF is the preemptive version

of the LJF scheduling algorithm.

6. Priority scheduling: In this scheduling algorithm the priority is assigned to all the processes

and the process with highest priority executed first. Priority assignment of processes is done

on the basis of internal factor such as CPU and memory requirements or external factor such

as user’s choice. The priority scheduling algorithm supports preemptive and non- preemptive

scheduling policy.

7. Round robin (RR): In this algorithm each process is allocated the CPU for a specific time period called a time slice, which is normally 10 to 100 milliseconds. If the process completes its execution within this time slice, it is removed from the queue; otherwise it has to wait for another time slice. (A small simulation of FCFS and round robin is sketched after this list.)
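The following short simulation contrasts two of the algorithms above, FCFS and round robin. The job names, burst times, and quantum are made-up values for illustration; arrival times are assumed to be zero to keep the sketch small.

# Small illustrative simulation of FCFS and round robin scheduling.
from collections import deque

def fcfs(jobs):
    time, waits = 0, {}
    for name, burst in jobs:                 # run each job to completion in arrival order
        waits[name] = time                   # waiting time before the job starts
        time += burst
    return waits

def round_robin(jobs, quantum):
    queue = deque(jobs)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one time slice
        time += run
        if remaining > run:
            queue.append((name, remaining - run))   # not finished: back of the queue
        else:
            finish[name] = time
    return finish

jobs = [("P1", 24), ("P2", 3), ("P3", 3)]
print("FCFS waiting times:", fcfs(jobs))                     # {'P1': 0, 'P2': 24, 'P3': 27}
print("Round robin (q=4) completion times:", round_robin(jobs, 4))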

Uses of scheduling algorithm

1. Scheduling algorithm is used to minimize resource starvation and to ensure fairness among

the parties utilizing the resources.

2. It deals with the problem of deciding which of the requests is to be allocated resources.

3. It helps to decide which process is to be executed first and last to achieve maximum CPU

utilization.

6.7 FUNCTIONS OF A SCHEDULER


1. Maximizing throughput (the amount of work completed per time unit).

2. Minimizing wait time (time from work becoming ready until the first point it begins

execution).

3. Minimizing latency or response time (time from work becoming ready until it is finished in

case of batch activity, or until the system responds and hands the first output to the user in

case of interactive activity).

4. Maximizing fairness (equal CPU time to each process, or more generally appropriate times

according to the priority and workload of each process).

5. The scheduler must also ensure that processes can meet deadlines; Scheduler tasks can also be

distributed to remote devices across a network and managed through an administrative back

end.

Lesson Assessment 6

1. Define process scheduling? Write short note on dispatcher.

2. Explain Scheduler? List 3 functions of a scheduler.

3. List and write short notes on 5 process scheduling algorithm?

4. What are the uses of scheduling algorithm? What is an algorithm?


CHAPTER 7
PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING

7.1 PREEMPTIVE SCHEDULING is the act of temporarily interrupting a task being carried

out by the computer system, without requiring its cooperation, and with the intention of resuming

the task at a later time. It can also be defined as the scheduling which is done when the process

changes from running state to ready state or from waiting state to ready state.

Preemptive scheduling is used in real-time systems where tasks are usually configured with different priorities and time constraints. Critical tasks are given higher priorities. A higher priority task can stop a lower priority one and take over the CPU until it releases it. In preemptive scheduling, task contexts are saved so that tasks can resume their operation from the point they left off when they gave up the CPU.

7.1.1 PREEMPTIVE MULTITASKING


Preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time over any given period.

In preemptive multitasking, the operating system Kernel can also initiate a context switch to

satisfy the scheduling policy’s priority constraint, thus preempting the active task.

Preemptive multitasking is also a task in which a computer operating system uses some criteria

to decide how long to allocate to any one task before giving another task a turn to use the

operating system. The act of taking control of the operating system from one task and giving it to

another task is called preempting.


Algorithm based on preemptive scheduling are:

1. Round Robin (RR)

2. Shortest Remaining Time First (SRTF)

3. Priority (Preemptive Version) etc.

Advantages of preemptive scheduling

1. Preemptive scheduling is more robust, as one process cannot monopolize the CPU.

2. It ensures fairness to all jobs regardless of their priority.

3. It improves the average response time.

4. It is beneficial when used in a multi-programming environment.

Disadvantages of preemptive scheduling

1. It could introduce blocking delays in tasks that are high priority.

2. It can lead to indefinite postponement of processes.

7.2 NON-PREEMPTIVE SCHEDULING

Non-preemptive scheduling refers to scheduling in which a task runs until it stops (voluntarily) or finishes. A task within a non-preemptive system will run until completed. The scheduler then checks the states of all tasks and schedules the next highest priority task that is in the Ready state.

With non-preemptive scheduling, once a task has been assigned the CPU, the CPU cannot be taken away from it, even if short tasks have to wait for longer tasks to complete.
Advantages of Non- preemptive scheduling

1. It is conceptually very simple and produces high throughput.

2. It reduces context switching and reduces stack size.

3. It simplifies access to shared resources.

4. No semaphores are needed for critical sections.

5. Deadlock prevention is trivial for non-preemptive scheduling.

6. Tasks can share the same stack, since only one task executes at a time.

7. Non-preemption allows zero I/O jitter.

Disadvantages of Non-preemptive scheduling

1. Poor response time for processes.

2. A bug can cause the machine to freeze up if a process contains an infinite loop.

3. It does not provide good fairness by most definitions.

4. It may make real-time and priority scheduling difficult.

5. If preemption is not allowed, an optimal schedule may have to leave the processor idle at certain times.

6. Non-preemptive scheduling may leave the processor idle in order to finish tasks with early deadlines that arrive late.

7. It can introduce delays in high priority tasks.

7.3 DIFFERENCES BETWEEN PREEMPTIVE AND NON PREEMPTIVE

SCHEDULING

The similarity between preemptive and non-preemptive scheduling is that they are both done on

the CPU.
Basis for comparison | Preemptive scheduling | Non-preemptive scheduling
Basic | The resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in the middle of execution. | A running process cannot be interrupted in the middle of execution.
Starvation | If a high priority process frequently arrives in the ready queue, low priority processes may starve. | If a process with a long burst time is running on the CPU, another process with a shorter burst time may starve.
Overhead | It has the overhead of scheduling processes. | It does not have the overhead of scheduling processes.
Flexibility | Preemptive scheduling is a flexible type of scheduling. | Non-preemptive scheduling is a rigid type of scheduling.
Cost | It has an associated cost. | It has no associated cost.

Lesson Assessment 7

1. Differentiate between the preemptive and non-preemptive scheduling.

2. What are the merits and demerits of preemptive and non-preemptive scheduling?
