GLA University (Mathura) BCA 2nd Semester Operating System PDF

This document provides an overview of an operating system course titled "Fundamentals of Operating System" (course code BCAC0006), part of the BCA 1st semester program. It defines key operating-system concepts such as process management, memory management, file systems, I/O systems, and protection and security, and discusses types of operating systems, including single-user, multi-user, batch-processing, and multiprogramming systems.


Course Name: Introduction of Logic Circuit & Digital Design

Course Code: BCAC0003

BCA Ist Semester


Fundamentals of Operating system
Course Code: BCAC0006
Presented by:
Mr Anuj Kumar, Mr Anuj Mangal
Assistant Professor, Dept. of CEA
GLA University, Mathura
An Operating System
• An Operating System (OS) is a collection of programs that provides an interface between a computer user and the computer hardware.
• An operating system is software that performs all the basic tasks: file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
• It is a collection of programs that acts as an intermediary between a user of a computer and the computer hardware.
• It is responsible for the management and coordination of activities and the sharing of the resources of the computer.
Computer System
Components
• Hardware: provides basic computing resources (CPU, memory, I/O devices).
• Operating System: controls and coordinates the use of hardware among application programs.
• Application Programs: solve the computing problems of users (compilers, database systems, video games, business programs such as banking software).
• Users: people, machines, other computers.
Four Components of a Computer System
Operating System Views
• Resource allocator: allocates the resources (software and hardware) of the computer system and manages them efficiently.
• Control program: controls the execution of user programs and the operation of I/O devices.
• Kernel: the one program that runs at all times (everything else is an application with respect to the kernel).
Goals of an Operating System
• Simplify the execution of user programs and make solving user problems
easier.
• Use computer hardware efficiently.
• Allow sharing of hardware and software resources.
• Make application software portable and versatile.
• Provide isolation, security and protection among user programs.
• Improve overall system reliability.
Functions/Components of Operating System:

1. Process Management
• A program in execution is known as a process.
• A process needs certain resources including CPU time, memory, files and
I/O devices to accomplish its task.
• These resources are either given to the process when it is created or
allocated to it while it is running.
• A program is a passive entity such as contents of a file stored on the disk
whereas a process is an active entity
The operating system is responsible for the following activities in
process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling.
2. Memory management
Main memory is a collection of quickly accessible data shared by the CPU and I/O
devices.
The central processor reads instructions from main memory (during instruction-
fetch cycle) and both reads and writes data from main memory (during data-
fetch cycle).
The operating system is responsible for the following activities in memory
management:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes and data to move into and out of memory
• Allocating and deallocating memory space as needed.
3. File-System Management
The operating system is responsible for the following activities with file management:
• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Backing up files on stable (nonvolatile) storage media.
 
4. Secondary-Storage Management
The operating system is responsible for the following activities with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
5. I/O system Management
The operating system is responsible for the following activities with I/O subsystem:
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices

6. Protection and Security
• Mechanisms ensure that files, memory segments, the CPU, and other resources can be operated on only by those processes that have been granted proper authorization by the operating system.
• For example, memory-addressing hardware ensures that a process can execute
only within its own address space.
• Protection is a mechanism for controlling the access of processes or users to the
resources defined by a computer system.
7. Networking
• A distributed system is a collection of physically separated computer systems that are networked
to provide the users with access to the various resources that the system maintains.
• Access to a shared resource increases computation speed, functionality, data availability, and
reliability.
• A Network Operating System (NOS) provides remote access to users.
• It also provides sharing of hardware and software resources between remote machines and the local system.

8. Command Interpreter
• To interface with the operating system we use a command-line interface, or command interpreter, that allows users to directly enter commands to be performed by the operating system.
• The main function of the command interpreter is to get and execute the user-specified commands. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. E.g.: the MS-DOS and UNIX shells.
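The get-and-execute loop of a command interpreter can be sketched in a few lines of Python. The command names and handlers below are hypothetical illustrations only; a real shell such as the MS-DOS or UNIX shell dispatches to executable files rather than in-process functions.

```python
import shlex

def run_command(line, commands):
    """Parse one command line and dispatch it to the matching handler:
    the core get-and-execute step of a command interpreter (a sketch)."""
    name, *args = shlex.split(line)   # split into command name and arguments
    handler = commands.get(name)
    if handler is None:
        return f"{name}: command not found"
    return handler(*args)

# Hypothetical built-in commands, for illustration only
commands = {
    "echo": lambda *args: " ".join(args),
    "list": lambda: "notes.txt report.txt",
}

print(run_command("echo hello world", commands))   # hello world
print(run_command("list", commands))               # notes.txt report.txt
print(run_command("frobnicate x", commands))       # frobnicate: command not found
```

A real interpreter wraps this step in a loop: print a prompt, read a line, parse, execute, repeat.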
Types of Operating System
• Single-user Operating System: this OS provides the environment for a single user, i.e. only one user can interact with the system at a time. E.g.: MS-DOS, MS Windows XP, ME, 2000, etc.
• Multi-user Operating System: this OS provides the environment for multiple users, i.e. many users can interact with the system at a time. These users are remotely connected to the system, taking advantage of the shared resources of the master system through networking. E.g.: UNIX, LINUX.
Serial Processing Operating System
• Early computers, from the late 1940s to the mid-1950s.
• The programmer interacted directly with the computer hardware.
• These machines are called bare machines, as they have no OS.
• Every computer system was programmed in its own machine language.
• Used punched cards, paper tape, and language translators.
• In a typical sequence, first the editor is called to create the source code of the user program, then the translator is called to convert the source code into object code, and finally the loader is called to load the executable program into main memory for execution.
• If syntax errors are detected, the whole process must be restarted from the beginning.
Batch Processing Operating System
• Batch processing is executing a series of non-interactive jobs all at one time.
• Usually, jobs with similar requirements are grouped together during working hours and then executed during the evening or whenever the computer is idle.
• Batching similar jobs brought higher utilization of system resources.
Multiprogramming
• Multiprogramming is a technique to execute a number of programs simultaneously on a single processor.
• In multiprogramming, a number of processes reside in main memory at a time.
• The OS picks and begins to execute one of the jobs in main memory.
• If a process has to wait for I/O, the CPU switches from that job to another job.
• Hence the CPU is not idle at any time.
• The figure depicts the layout of a multiprogramming system: main memory holds the OS and five jobs (Job 1 to Job 5) at a time, and the CPU executes them one by one.
• Advantages:
• Efficient memory utilization
• Throughput increases
• CPU is never idle, so performance increases.
Multiprocessor System: this system has more than one processor, which share a common bus, clock, peripheral devices, and sometimes memory.
These systems have the following advantages:
• Increased throughput: by increasing the number of processors, we get more work done in less time.
• Economy of scale: multiprocessor systems are cost-effective, as several processors share the same resources of the system (memory, peripherals, etc.).
• Increased reliability: each processor is fairly allotted a different job, so the failure of one processor will not halt the system; it will only slow down performance. For example, if we have ten processors and one fails, each of the remaining nine processors shares the work of the failed processor. Thus the system runs about 10% slower rather than failing altogether.
This system can be categorized into:
• i) SMP (Symmetric Multiprocessing): in this system each processor runs an identical copy of the operating system, and these processors communicate with each other whenever needed. E.g.: Windows NT, Solaris, Unix/Linux.
• ii) ASMP (Asymmetric Multiprocessing): in asymmetric multiprocessing each processor runs a specific task. There is a master processor which controls the whole system, and the other processors look to the master for instructions. E.g.: SunOS Version 4.
Operating-System Services
1) User interface: almost all operating systems have a user interface (UI). This interface can take several forms:
• a command-line interface (CLI), which uses text commands;
• a batch interface, in which commands and directives to control those commands are entered into files;
• a graphical user interface (GUI), in which the interface is a window system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text.
2) Program execution: the system must be able to load a program into memory and run that program. The program must also be able to end its execution.

3) I/O operations: a running program may require I/O operations (such as recording to a CD or DVD drive or blanking a CRT screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore the operating system must provide a means to do I/O.

4) File-system manipulation: programs need to read and write files. They also need to create and delete them. Finally, some programs include permissions management to allow or deny access to files or directories based on file ownership.

5) Input/output operations: a program which is currently executing may require I/O involving a file or another I/O device. For efficiency and protection, users cannot directly govern the I/O devices, so the OS provides a means to perform I/O, i.e. read or write operations on any file.
6. Communications. There are many circumstances in which one process needs to exchange information with another process.
Such communication may occur between processes that are executing on the same computer, or between processes that are executing on different computer systems tied together by a computer network.
7) Error detection. Errors may occur in the CPU and memory hardware (such as a
memory error or a power failure), in I/O devices (a network failure, or lack of
paper in the printer), and in the user program (such as an arithmetic overflow, an
attempt to access an illegal memory location, or too-great use of CPU time).
For each type of error, the operating system should take the appropriate action
to ensure correct and consistent computing.
8. Resource allocation. When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Many different types of resources are managed by the operating system through various methods such as CPU scheduling.

9. Accounting. The OS keeps track of which users use how much and what kinds of computer resources. This record keeping may be used for accounting.
10. Protection and security. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself.
Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important; users authenticate themselves by means of a password to gain access to system resources.
Interrupts
• An interrupt is a signal from a device attached to a computer, or from a program within the computer, that causes the main program that operates the computer to stop and figure out what to do next.
• The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually through the system bus. Software may trigger an interrupt by executing a special operation called a system call (or monitor call).
Interrupt Processing
• The basic interrupt mechanism works as follows.
• The CPU hardware has a wire called the interrupt-request line, which the CPU senses after executing every instruction.
• When the CPU detects that a controller has asserted a signal on the interrupt-request line, it performs a state save and jumps to the interrupt-handler routine at a fixed address in memory.
• The interrupt handler determines the cause of the interrupt, performs the necessary processing, performs a state restore, and executes a return-from-interrupt instruction to return the CPU to the execution state prior to the interrupt.
• We say that the device controller raises an interrupt by asserting a signal on the interrupt-request line, the CPU catches the interrupt and dispatches it to the interrupt handler, and the handler clears the interrupt by servicing the device.
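The raise / catch / dispatch / clear cycle can be mimicked in Python, with the interrupt-request line modeled as a flag the CPU checks after every instruction. All class, attribute, and handler names here are hypothetical; real hardware implements this in circuitry, not code.

```python
class CPU:
    """Toy model of the interrupt cycle (a sketch, not real hardware)."""

    def __init__(self):
        self.interrupt_line = None   # interrupt-request line (None = not asserted)
        self.handlers = {}           # maps an interrupt cause to its handler routine

    def raise_interrupt(self, cause):
        # A device controller asserts a signal on the interrupt-request line.
        self.interrupt_line = cause

    def step(self):
        # The CPU senses the line after executing every instruction.
        if self.interrupt_line is None:
            return None                                    # no interrupt pending
        # A state save would happen here, then the jump to the handler.
        result = self.handlers[self.interrupt_line]()      # dispatch to the handler
        self.interrupt_line = None                         # handler clears the interrupt
        return result                                      # return from interrupt (state restore)

cpu = CPU()
cpu.handlers["keyboard"] = lambda: "keyboard serviced"
cpu.raise_interrupt("keyboard")
print(cpu.step())   # keyboard serviced
```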
• Computers perform operations concurrently
– For example, compiling a program, sending a file to a printer,
rendering a Web page, playing music and receiving e-mail
– Processes enable systems to perform and track simultaneous
activities
– Processes transition between process states
– Operating systems perform operations on processes such as creating,
destroying, suspending, resuming and waking
Time Sharing System (Multitasking)
• Time sharing, or multitasking, is a logical extension of multiprogramming.
• Multiple jobs are executed by switching the CPU between them.
• Here the CPU time is shared by different processes, so these are called "time-sharing systems".
• A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
• The time slice, defined by the OS, is used for sharing CPU time between processes.
• Examples: Multics, Unix, etc.
Real-Time Operating System
• A real-time system has well-defined, fixed time constraints; processing must be done within the defined constraints or the system will fail.
• Systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems are real-time systems.
• They are also found in automobile engine fuel systems, home-appliance controllers, and weapon systems.
• There are two types of real-time systems:
• i) Hard real-time system: this system guarantees that critical tasks are completed on time. For this, all delays in the system must be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it.
• ii) Soft real-time system: this is a less restrictive type of system, defined as not hard real-time, simply providing that a critical real-time task receives priority over other tasks and retains that priority until it completes.
Network Operating System: an OS that includes special functions for connecting computers and devices into a LAN. Some OSs, such as UNIX and the Mac OS, have networking functions built in.
Some popular NOSs for DOS and Windows systems include Novell NetWare, Microsoft LAN Manager, and Windows NT.
• Some characteristics:
• Each computer has its own private OS, instead of running part of a global, system-wide operating system.
• Each user normally works on his/her own system.
Distributed Operating System: it hides the existence of multiple computers from the user, i.e. the user does not know that many computers are being used to process the data.
These computers may be located at many places around the globe. This OS provides a single-system image to its users. All these computers work in close coordination with each other.
• In this OS, each processor has its own memory and clock. The processors communicate with each other through various communication lines, such as high-speed buses and telephone lines.
Process Scheduling
• The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization. The objective
of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is
running. To meet these objectives, the process scheduler
selects an available process (possibly from a set of several
available processes) for program execution on the CPU.
• As processes enter the system, they are put into a job queue,
which consists of all processes in the system. The processes that
are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue. A ready-queue header
contains pointers to the first and final PCBs in the list.
• The list of processes waiting for a particular I/O device is called a
device queue.
• A new process is initially put in the ready queue. It waits there until
it is selected for execution, or is dispatched. Once the process is
allocated the CPU and is executing, one of several events could
occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new subprocess and wait for the
subprocess's termination.
• The process could be removed forcibly from the CPU, as a result of
an interrupt, and be put back in the ready queue.
• A process continues this cycle until it terminates, at which time it
is removed from all queues and has its PCB and resources
deallocated.
Fig: Queuing diagram representation of process scheduling
Schedulers
• A process migrates among the various scheduling queues
throughout its lifetime. The operating system must select, for
scheduling purposes, processes from these queues in some
fashion. The selection process is carried out by the appropriate
scheduler.
• Types of schedulers:
• i) Long term Scheduler In batch system, more processes are submitted
than can be executed immediately. These processes are spooled to a mass-
storage device (typically a disk), where they are kept for later execution.
The long-term scheduler, or job scheduler, selects processes from this
pool and loads them into memory for execution. The long-term scheduler
executes much less frequently; minutes may separate the creation of one
new process and the next. The long-term scheduler controls the degree of
multiprogramming (the number of processes in memory). It is important
that the long-term scheduler make a careful selection.
• ii) Short term Scheduler The short-term scheduler, or CPU scheduler,
selects from among the processes that are ready to execute and allocates
the CPU to one of them. Because of the short time between executions,
the short-term scheduler must be fast.
• medium-term scheduler The key idea behind a medium-term
scheduler is that sometimes it can be advantageous to remove
processes from memory for I/O operation and thus reduce the degree
of multiprogramming. Later, the process can be reintroduced into
memory, and its execution can be continued where it left off. This
scheme is called swapping. The process is swapped out, and is later
swapped in, by the medium-term scheduler. Swapping may be
necessary to improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring
memory to be freed up.
Scheduling Algorithms
• A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms.
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
• These algorithms are either non-preemptive or preemptive.
• Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time.
• Preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
• Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time
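The two formulas translate directly into helper functions. The sample numbers below are made up purely for illustration.

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(turnaround, burst):
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround - burst

# A hypothetical process: arrives at 0 ms, needs 3 ms of CPU, completes at 10 ms.
tat = turnaround_time(10, 0)
wt = waiting_time(tat, 3)
print(tat, wt)   # 10 7
```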
First-Come, First-Served (FCFS) Scheduling
• Jobs are executed on a first-come, first-served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average waiting time is high.
FCFS Non-Preemptive Example
Average Waiting Time
Waiting Time = Starting Time - Arrival Time
Waiting time of
P1 = 0
P2 = 5 - 0 = 5 ms
P3 = 29 - 0 = 29 ms
P4 = 45 - 0 = 45 ms
P5 = 55 - 0 = 55 ms
Average Waiting Time = Waiting Time of all Processes / Total Number of
Process
Therefore, average waiting time = (0 + 5 + 29 + 45 + 55) / 5 = 26.8 ms
Average Turnaround Time
• Turnaround Time = Waiting time in the ready queue + executing time +
waiting time in waiting-queue for I/O
Turnaround time of
P1 = 0 + 5 + 0 = 5ms
P2 = 5 + 24 + 0 = 29ms
P3 = 29 + 16 + 0 = 45ms
P4 = 45 + 10 + 0 = 55ms
P5 = 55 + 3 + 0 = 58ms
Total Turnaround Time = (5 + 29 + 45 + 55 + 58)ms = 192ms
Average Turnaround Time = (Total Turnaround Time / Total Number of
Process) = (192 / 5)ms = 38.4ms
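The whole FCFS example can be reproduced in a few lines of Python. The burst times (5, 24, 16, 10, 3 ms for P1 to P5) are recovered from the turnaround arithmetic above, since the Gantt chart itself is not shown here; all processes are assumed to arrive at time 0.

```python
def fcfs(bursts):
    """Non-preemptive FCFS with every process arriving at time 0.
    Returns (waiting_times, turnaround_times) in submission order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)      # time spent waiting in the ready queue
        clock += burst             # run this job to completion
        turnaround.append(clock)   # completion time - arrival time (0)
    return waiting, turnaround

w, t = fcfs([5, 24, 16, 10, 3])    # P1..P5
print(w, sum(w) / len(w))          # [0, 5, 29, 45, 55] 26.8
print(t, sum(t) / len(t))          # [5, 29, 45, 55, 58] 38.4
```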
Shortest Job First (SJF)
• This algorithm can be either non-preemptive or preemptive.
• Preemptive SJF is also known as the SRTF (Shortest Remaining Time First) algorithm.
• Best approach to minimize waiting time.
• Easy to implement in batch systems, where the required CPU time is known in advance.
• Impossible to implement in interactive systems, where the required CPU time is not known.
• The processor must know in advance how much time a process will take.
Example of Non Preemptive SJF
• Average Waiting Time: the arrival time is common to all processes (i.e., zero).
Waiting Time for
P1 = 3 - 0 = 3ms
P2 = 34 - 0 = 34ms
P3 = 18 - 0 = 18ms
P4 = 8 - 0 = 8ms
P5 = 0ms
Now, Average Waiting Time = (3 + 34 + 18 + 8 + 0) / 5 = 12.6ms
• Average Turnaround Time
• According to the SJF Gantt chart and the turnaround time formulae,
Turnaround Time of
P1 = 3 + 5 = 8ms
P2 = 34 + 24 = 58ms
P3 = 18 + 16 = 34ms
P4 = 8 + 10 = 18ms
P5 = 0 + 3 = 3ms
Therefore, Average Turnaround Time = (8 + 58 + 34 + 18 + 3) / 5 =
24.2ms
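Running the same burst set under non-preemptive SJF reproduces the waiting times above. This is a sketch assuming all processes arrive at time 0, with bursts (5, 24, 16, 10, 3 ms for P1 to P5) recovered from the turnaround arithmetic.

```python
def sjf(bursts):
    """Non-preemptive Shortest Job First, all arrivals at time 0.
    Returns waiting times in the original process order (P1, P2, ...)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waiting = [0] * len(bursts)
    clock = 0
    for i in order:
        waiting[i] = clock     # everything run before process i is its wait
        clock += bursts[i]
    return waiting

w = sjf([5, 24, 16, 10, 3])    # P1..P5
print(w)                       # [3, 34, 18, 8, 0]
print(sum(w) / len(w))         # 12.6
```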
Example of Preemptive SJF/SRTF
• Average Waiting Time
• First of all, we have to find the waiting time for each process.
Waiting Time of process
P1 = 0ms
P2 = (3 - 2) + (10 - 4) = 7ms
P3 = (4 - 4) = 0ms
P4 = (15 - 6) = 9ms
P5 = (8 - 8) = 0ms
Therefore, Average Waiting Time = (0 + 7 + 0 + 9 + 0) / 5 = 3.2ms
• Average Turnaround Time
• First of all, we have to find the turnaround time of each process.
Turnaround Time of process
P1 = (0 + 3) = 3ms
P2 = (7 + 6) = 13ms
P3 = (0 + 4) = 4ms
P4 = (9 + 5) = 14ms
P5 = (0 + 2) = 2ms
Therefore, Average Turnaround Time = (3 + 13 + 4 + 14 + 2) / 5 = 7.2ms
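A unit-by-unit simulation of SRTF reproduces these numbers. The process table itself is in the omitted slide figure, so the arrival times (0, 2, 4, 6, 8) and bursts (3, 6, 4, 5, 2) below are reconstructed from the waiting-time arithmetic above and should be treated as an assumption.

```python
def srtf(arrivals, bursts):
    """Preemptive SJF (Shortest Remaining Time First), one ms per step.
    Returns waiting times: (finish - arrival) - burst for each process."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    clock = 0
    while any(remaining):
        ready = [i for i in range(n) if arrivals[i] <= clock and remaining[i]]
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time wins
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

w = srtf([0, 2, 4, 6, 8], [3, 6, 4, 5, 2])   # reconstructed P1..P5
print(w)                # [0, 7, 0, 9, 0]
print(sum(w) / len(w))  # 3.2
```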
Priority Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the
most common scheduling algorithms in batch systems.
• Each process is assigned a priority. Process with highest priority is
to be executed first and so on.
• Processes with same priority are executed on first come first
served basis.
• Priority can be decided based on memory requirements, time
requirements or any other resource requirement
Priority Scheduling Example
• Average Waiting Time
• First of all, we have to find out the waiting time of each process.
Waiting Time of process
P1 = 3ms
P2 = 13ms
P3 = 25ms
P4 = 0ms
P5 = 9ms
Therefore, Average Waiting Time = (3 + 13 + 25 + 0 + 9) / 5 = 10ms
• Average Turnaround Time
• First finding Turnaround Time of each process.
Turnaround Time of process
P1 = (3 + 6) = 9ms
P2 = (13 + 12) = 25ms
P3 = (25 + 1) = 26ms
P4 = (0 + 3) = 3ms
P5 = (9 + 4) = 13ms
Therefore, Average Turnaround Time = (9 + 25 + 26 + 3 + 13) / 5 =
15.2ms
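A sketch of the non-preemptive priority scheduler reproduces the example. The burst times (6, 12, 1, 3, 4 ms for P1 to P5) follow from the turnaround arithmetic, but the priority values are not visible here, so the numbers below (lower = higher priority) are assumed, chosen only to give the execution order P4, P1, P5, P2, P3 implied by the waiting times.

```python
def priority_schedule(bursts, priorities):
    """Non-preemptive priority scheduling, all arrivals at time 0.
    Lower number = higher priority (an assumed convention)."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waiting = [0] * len(bursts)
    clock = 0
    for i in order:
        waiting[i] = clock      # waits for every higher-priority job to finish
        clock += bursts[i]
    return waiting

w = priority_schedule([6, 12, 1, 3, 4], [2, 4, 5, 1, 3])   # P1..P5
print(w)                # [3, 13, 25, 0, 9]
print(sum(w) / len(w))  # 10.0
```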
Round Robin (RR)
• Round Robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and another process executes for a given time period.
• Context switching is used to save the states of preempted processes.
Round Robin (RR) Example
Time quantum is 5
• Average Waiting Time
• For finding Average Waiting Time, we have to find out the waiting time
of each process.

Waiting Time of
P1 = 0 + (15 - 5) + (24 - 20) = 14ms
P2 = 5 + (20 - 10) = 15ms
P3 = 10 + (21 - 15) = 16ms
Therefore, Average Waiting Time = (14 + 15 + 16) / 3 = 15ms
• Average Turnaround Time
• The same approach gives the turnaround time of each process.
Turnaround Time of
P1 = 14 + 30 = 44ms
P2 = 15 + 6 = 21ms
P3 = 16 + 8 = 24ms
Therefore, Average Turnaround Time = (44 + 21 + 24) / 3 = 29.66ms
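The Round Robin example can be replayed with a queue-based simulation. The burst times (30, 6, 8 ms for P1 to P3) are recovered from the turnaround arithmetic above; all three processes are assumed to arrive at time 0, and a preempted process rejoins the back of the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin with all arrivals at time 0; returns waiting times."""
    n = len(bursts)
    remaining = list(bursts)
    queue = deque(range(n))       # ready queue in arrival order
    finish = [0] * n
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)       # preempted: back of the ready queue
        else:
            finish[i] = clock
    # Waiting Time = Turnaround Time - Burst Time
    return [finish[i] - bursts[i] for i in range(n)]

w = round_robin([30, 6, 8], quantum=5)   # P1..P3
print(w)                # [14, 15, 16]
print(sum(w) / len(w))  # 15.0
```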
Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues
– Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
– 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues; aging can be
implemented this way
• Multilevel-feedback-queue scheduler defined by the following
parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when that
process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0 which is served FCFS. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is
moved to queue Q1.
– At Q1 job is again served FCFS and receives 16 additional milliseconds. If
it still does not complete, it is preempted and moved to queue Q2.
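With all jobs arriving at time 0, the three-queue example can be sketched as follows. The job lengths below are made up; the demotion rule (unfinished work drops one level, and Q2 runs jobs to completion FCFS) follows the description above, while preemption of a lower queue by a newly arriving job is deliberately left out of this sketch.

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
    All jobs arrive at time 0; returns each job's finish time."""
    n = len(bursts)
    remaining = list(bursts)
    queues = [deque(range(n)), deque(), deque()]   # Q0 starts with every job
    finish = [0] * n
    clock = 0
    while any(queues):
        level = next(l for l in range(3) if queues[l])   # highest non-empty queue
        i = queues[level].popleft()
        run = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i]:
            queues[level + 1].append(i)   # did not finish: demote one level
        else:
            finish[i] = clock
    return finish

# A 5 ms job finishes inside Q0; a 30 ms job uses Q0 (8 ms), Q1 (16 ms), then Q2 (6 ms).
print(mlfq([5, 30]))   # [5, 35]
```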
Multilevel Feedback Queues
Shell Layer Architecture
• The functioning of the OS happens at three levels.
• On the outer crust reside the application programs and other utilities.
• At the heart of the OS is the Kernel, which interacts with the actual hardware in machine language.
• Between these two lies the Shell, which is responsible for the communication between the two layers.
• The operating system is divided into three main entities:
• 1) The Kernel: it is the core of the system. It controls all the tasks, schedules all the processes, and carries out all the crucial features of the operating system.
• It manages the files, carries out all the data transfer between the file system and the hardware, and also manages memory.
• Among the major duties of the Kernel is to keep track of the programs that are executing, allot processor time to each, and decide when one program stops and another starts.
• The Kernel also handles information exchange between the computer and its terminals, tape drives, and printers.
2) The Shell
• It is the command interpreter of the operating system. It accepts commands from the user, analyses and interprets these commands, and finally starts executing the appropriate executable file.
• It then requests the Kernel to carry out the actual transfer of data, which finally leads to the output that is displayed on the terminal.
• Hence the Shell acts as a middleman between the Kernel and the user of the operating system.
• There are three shells mainly used in UNIX.
• i) The Bourne Shell
• This is one of the most widely used shells in the Unix world. It was
developed by Steve Bourne of AT&T Bell Labs in the late 1970’s. It is the
primary Unix Command Interpreter and comes along with every Unix
System. The $ prompt is the trademark of Bourne Shell.
• ii) The C Shell
• It was created by Bill Joy at University of California, Berkeley. It has 2
advantages over the Bourne Shell:
• Aliasing. The C shell permits you to call frequently used commands by
your own formulated abbreviations. This proves very useful when
lengthy commands which are used time and again are renamed by you.
Instead of typing the entire command you can simply use the short alias
at the command line.
• A history mechanism. Previously typed commands can be recalled, since
the C Shell keeps track of all the commands issued at the command line.
This feature is similar to the one provided by the program DOSKEY in
MS-DOS.
• iii) The Korn Shell
• Developed by David Korn of AT&T, this shell was designed to be
much bigger than the Bourne Shell and includes several
features that make it superior.
• It is more powerful, versatile and efficient than the Bourne Shell.
• The Korn Shell includes all the enhancements of the C Shell, like
command history and aliasing, and offers a few more features of
its own.
• 3) Tools and Applications: The outermost layer of the operating
system consists of its tools and applications.
• Some versions of Unix are decked with more than 400 tools
and applications.
• These tools can be invoked from the command line itself and
help to perform complex day-to-day tasks on the system.
• They are placed one level above the Shell and can be
extended as per user convenience.
Memory Hierarchy
• In computer architecture the memory hierarchy is a concept used
to discuss performance issues in computer architectural design,
algorithm predictions, and lower level programming constructs
involving locality of reference.
• The memory hierarchy in computer storage separates each of its
levels based on response time.
• Since response time, complexity, and capacity are related, the
levels may also be distinguished by their performance and
controlling technologies.
• Primary Memory: The primary memory is also known as internal
memory, and is directly accessible by the processor. This memory includes
main memory, cache, as well as CPU registers.

• Secondary Memory: The secondary memory is also known as external
memory, and is accessible by the processor through an input/output
module. This memory includes optical disks, magnetic disks, and
magnetic tape.
An Example Memory Hierarchy
(Smaller, faster, and costlier per byte toward the top; larger, slower,
and cheaper per byte toward the bottom.)

• L0: CPU registers – hold words retrieved from the L1 cache.
• L1: on-chip L1 cache (SRAM) – holds cache lines retrieved from the L2 cache.
• L2: off-chip L2 cache (SRAM) – holds cache lines retrieved from main memory.
• L3: main memory (DRAM) – holds disk blocks retrieved from local disks.
• L4: local secondary storage (local disks) – holds files retrieved from disks
on remote network servers.
• L5: remote secondary storage (tapes, distributed file systems, Web servers).

(From lecture-9.ppt)
Characteristics of Memory Hierarchy
• The memory hierarchy characteristics mainly include the following.
• Performance
• The speed gap between main memory and the CPU registers keeps growing
because of the huge disparity in access time, which lowers the
performance of the system.
• So, an enhancement was mandatory. This enhancement was designed into the
memory hierarchy model to increase the system's performance.
• Capacity
• The capacity of a level of the memory hierarchy is the total amount of data it
can store. Whenever we move from top to bottom in the memory hierarchy,
the capacity increases.
• Access Time
• The access time in the memory hierarchy is the time interval
between a request to read or write and the data's availability. Whenever
we move from top to bottom in the memory hierarchy,
the access time increases.
• Cost per bit
• When we move from bottom to top in the memory hierarchy,
the cost per bit increases, which means internal memory is
expensive compared with external memory.
Cache Memory
• The data or contents of the main memory that are used frequently by
CPU are stored in the cache memory so that the processor can easily
access that data in a shorter time.
• Whenever the CPU needs to access memory, it first checks the cache
memory. If the data is not found in cache memory, then the CPU moves
into the main memory.
• Cache memory is placed between the CPU and the main memory. The
block diagram for a cache memory can be represented as:
The basic operation of a cache memory is as follows:

• When the CPU needs to access memory, the cache is examined.
• If the word is found in the cache, it is read from this fast
memory.
• If the word addressed by the CPU is not found in the cache, the
main memory is accessed to read the word.
• A block of words containing the one just accessed is then transferred from
main memory to cache memory. The block size may vary from
one word (the one just accessed) to about 16 words adjacent to
the one just accessed.
• The performance of the cache memory is frequently measured
in terms of a quantity called hit ratio.
• When the CPU refers to memory and finds the word in cache, it
is said to produce a hit.
• If the word is not found in the cache, it is in main memory and
it counts as a miss.
• The ratio of the number of hits divided by the total CPU
references to memory (hits plus misses) is the hit ratio.
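The hit-ratio definition above can be turned into a short calculation. The cache and main-memory timings below (20 ns and 120 ns) are assumed purely for illustration; only the hit-ratio formula itself comes from the text.

```python
# Sketch: hit ratio = hits / (hits + misses), and its effect on average
# access time under assumed timings of 20 ns (cache) and 120 ns (main memory).

def hit_ratio(hits, misses):
    return hits / (hits + misses)

def avg_access_time(h, t_cache=20, t_main=120):
    # A hit costs one cache access; a miss costs the cache probe plus
    # the main-memory access.
    return h * t_cache + (1 - h) * (t_cache + t_main)

h = hit_ratio(hits=900, misses=100)
print(h, round(avg_access_time(h), 1))  # 0.9 32.0
```

With 900 hits out of 1000 references, the hit ratio is 0.9 and the average access time stays close to the cache speed.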
Memory Management
• The memory management services of an operating system are one of the basic
services as it is needed:
• To ensure protection of different processes from each other (so that they do not
interfere with each other’s operation).
• To place the programs in memory (such that memory is optimum utilized and high
degree of multiprogramming can be achieved).
• Memory management keeps track of each and every memory location, whether it is
allocated to some process or free.
• It checks how much memory is to be allocated to processes. It decides which process
will get memory at what time.
• It tracks whenever some memory gets freed or unallocated and correspondingly it
updates the status.
• Memory management provides
protection by using two registers, a
base register and a limit register.
• The base register holds the smallest
legal physical memory address, and the
limit register specifies the size of the
range.
• For example, if the base register holds
300000 and the limit register holds
120000, then the program can legally
access all addresses from 300000
through 419999.
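The base/limit check in this example can be sketched directly; the function name `is_legal` is invented for illustration, while the base and limit values come from the example above.

```python
# Sketch of the base/limit protection check: with base 300000 and
# limit 120000, only addresses 300000..419999 are legal.

def is_legal(addr, base=300000, limit=120000):
    return base <= addr < base + limit

print(is_legal(300000), is_legal(419999), is_legal(420000))  # True True False
```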
Multistep processing of a user program
Logical Versus Physical Address Space
• An address generated by the CPU is commonly referred to as a logical
address (virtual address).
• An address seen by the memory unit (that is, the one loaded into the
memory-address register of the memory) is commonly referred to as a
physical address.
• The set of all logical addresses generated by a program is a logical address
space.
• The set of all physical addresses corresponding to these logical addresses is a
physical address space.
Address Binding
• Address binding is the process of mapping the program's logical or virtual addresses
to corresponding physical or main memory addresses.
• In other words, a given logical address is mapped by the MMU (Memory
Management Unit) to a physical address.
• CPU generates the logical or virtual address for an instruction/data to be fetched
from RAM.
• The logical address undergoes translation by the MMU or address translation unit in
particular.
• The output of this process is the appropriate physical address or the location of
code/data in RAM.
Dynamic Loading
• With dynamic loading, a routine is not loaded until it is called.
• All routines are kept on disk in a relocatable load format. The main program is loaded
into memory and is executed.
• When a routine needs to call another routine, the calling routine first checks to see
whether the other routine has been loaded.
• If not, the relocatable loader is called to load the desired routine into memory and to
update the program's address tables to reflect this change. Then control is passed to
the newly loaded routine.
• Advantages
• Unused routine is never loaded
• It does not need special support from O/S
• O/S provides library routine to implement dynamic loading
Dynamic Linking:
• The concept of dynamic linking is similar to that of dynamic loading. Without this
facility, every program on a system needs to have a copy of its language library.
• With the dynamic linking, a stub is included in the image for each library
routine reference.
• The stub is a small piece of code that indicates how to load the library, if the
routine is not already present.
• When this stub is executed, it checks to see whether the needed routine is
already in memory.
• If it is not in memory, the program loads it into memory. Either way, the stub
replaces itself with the address of the routine and executes the routine.
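The stub mechanism can be mimicked in a few lines. This is an illustrative sketch only: all names (`load_routine`, `make_stub`, the call `table`, the `sqrt` routine) are invented, and a dictionary stands in for the program's address table.

```python
# Illustrative sketch: a stub loads the routine on first call, then
# replaces itself in the call table, so later calls go straight to the
# real routine without any loading check.

loads = []                        # record of "library loads", for illustration

def load_routine(name):
    loads.append(name)            # pretend to bring the routine into memory
    return {"sqrt": lambda x: x ** 0.5}[name]

def make_stub(name, table):
    def stub(*args):
        routine = load_routine(name)
        table[name] = routine     # the stub replaces itself with the routine
        return routine(*args)
    return stub

table = {}
table["sqrt"] = make_stub("sqrt", table)
print(table["sqrt"](16.0))        # first call loads, then runs -> 4.0
print(table["sqrt"](25.0))        # second call runs directly -> 5.0
print(loads)                      # ['sqrt'] -- loaded exactly once
```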
Swapping
• A process must be in memory to be executed. A process, however, can be swapped
temporarily out of memory to a backing store and then again brought back into
memory for continued execution.
• For example, in a multiprogramming environment with a round-robin
CPU-scheduling algorithm, when a quantum expires, the memory manager will
start to swap out the process that just finished and to swap another process
into the memory space that has been freed.
• In the meantime, the CPU scheduler will allocate a time slice to some other process
in memory. When each process finishes its quantum, it will be swapped out with
another process.
• This swapping policy is also used in
priority-based scheduling algorithms.
• If a higher-priority process arrives and
wants service, the memory manager
can swap out the lower-priority
process and then load and execute the
higher-priority process.
• When the higher-priority process
finishes, the lower-priority process
can be swapped back in and
continued.
• This variant of swapping is sometimes
called roll out, roll in.
Contiguous Memory Allocation
• Main memory is usually divided into two
partitions:
– the resident operating system, usually
held in low memory
– user processes, then held in high
memory.
Memory Allocation: Fixed
Partitioning
• Partition main memory into a set of
non-overlapping regions called
partitions.
• Partitions can be of equal or unequal
sizes.
• Any process whose size is less than or
equal to a partition size can be
loaded into the partition.
• If all partitions are occupied, the
operating system can swap a process
out of a partition.
Fixed Partitioning – Problems
• Main memory use is inefficient. Any program, no matter how
small, occupies an entire partition. This is called internal
fragmentation.
• Unequal-size partitions lessen these problems, but they still
remain.
• Equal-size partitions were used in IBM's early OS/MFT
(Multiprogramming with a Fixed number of Tasks).
Multiple Partition Allocation: Dynamic
• Hole – a block of available memory; holes of various sizes are scattered throughout
memory.
• When a process arrives, it is allocated contiguous memory from a hole large enough
to accommodate it.
• Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
[Figure: Multiple Partition Allocation – successive snapshots of memory with
the OS in low memory and processes 5, 9, 8, 10 and 2 allocated, freed, and
replaced, leaving holes of various sizes.]
Dynamic Storage-Allocation Problem
• How to satisfy a request of size n from a list of free holes
– First-fit: Allocate the first hole that is big enough (fast, but fragments)
– Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size (slow, but small fragments).
– Worst-fit: Allocate the largest hole; must also search entire list (slow,
but leaves large holes)

• First-fit and best-fit better than worst-fit in terms of speed and storage
utilization.
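The three placement policies can be sketched over a free-hole list. The representation of a hole as a `(start, size)` pair and the sample hole list are assumptions made for this example.

```python
# Sketch of first-fit, best-fit and worst-fit over a free-hole list.
# Each hole is (start, size); each function returns the chosen start, or None.

def first_fit(holes, n):
    return next((start for start, size in holes if size >= n), None)

def best_fit(holes, n):
    fits = [(size, start) for start, size in holes if size >= n]
    return min(fits)[1] if fits else None    # smallest hole that fits

def worst_fit(holes, n):
    fits = [(size, start) for start, size in holes if size >= n]
    return max(fits)[1] if fits else None    # largest hole

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# 200 1200 200
```

For a 212-unit request, first-fit stops at the 500-unit hole, best-fit picks the tighter 300-unit hole, and worst-fit deliberately takes the largest hole.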
Fragmentation
• Internal Fragmentation – allocated memory may be
slightly larger than the requested memory; this size
difference is memory internal to a partition that is not
being used.
– Occurs when memory is allocated in fixed-size pieces.
• External Fragmentation – total memory space exists to
satisfy a request, but it is not contiguous. (Statistics
indicate about 1/3 wastage.)
• Reduce external fragmentation by compaction:
shuffle memory contents to place all free
memory together in one large block.
• Compaction is possible only if relocation is
dynamic (i.e., registers can be updated),
and is done at execution time.
Compaction
• One solution to the problem of external fragmentation is compaction. The goal is to
move the memory contents so as to place all free memory together in one large
block. However, compaction is not always possible.
• If addresses are relocated dynamically, relocation requires only moving the program
and data and then changing the base register to reflect the new base address.
• The compaction moves all processes toward one end of memory and all holes in
the other direction, producing one large hole of available memory. This scheme can
be expensive.
• Eg: disk defragmenter in Windows.
• Internal fragmentation can be minimized by having partitions of several sizes and
assigning a program based on the best fit. However, internal fragmentation is still
not fully eliminated.
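Compaction itself can be sketched as sliding every allocated block toward one end and updating each base address. The block representation and names (`compact`, the `P2`/`P5`/`P9` sample layout) are invented for illustration.

```python
# Sketch of compaction: slide every allocated block toward address 0,
# updating each base, so all free memory becomes one large hole at the top.

def compact(blocks, mem_size):
    """blocks: list of (name, start, size). Returns (new_blocks, hole)."""
    new_blocks, next_free = [], 0
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        new_blocks.append((name, next_free, size))   # relocate: new base address
        next_free += size
    return new_blocks, (next_free, mem_size - next_free)

blocks = [("P2", 100, 50), ("P5", 300, 120), ("P9", 700, 80)]
print(compact(blocks, 1000))
# ([('P2', 0, 50), ('P5', 50, 120), ('P9', 170, 80)], (250, 750))
```

Three scattered holes (100, 130 and 170+ units) become one 750-unit hole; note that each relocated process needs only its base register updated, which is why compaction requires dynamic relocation.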
Compaction Options
Non-contiguous Allocation
Two principal approaches:
– Paging
– Segmentation
Paging
• The basic method for implementing paging involves
• breaking physical memory into fixed-sized blocks called frames
• breaking logical memory into blocks of the same size called pages.
• When a process arrives in the system to be executed, its size, expressed
in pages, is examined.
• Each page of the process needs one frame. Thus, if the process requires
n pages, at least n frames must be available in memory. If n frames are
available, they are allocated to this arriving process.
• The first page of the process is loaded into one of the allocated frames,
and the frame number is put in the page table for this process.
• The next page is loaded into another frame, and its frame number is put
into the page table, and so on.
• Logical Address or Virtual Address (represented in bits): An address
generated by the CPU
• Every address generated by the CPU is divided into two parts:
• page number (p)
• page offset (d)
Page Addressing

• If the size of the logical address space is 2^m and the page size is 2^n
addressing units (bytes or words), then the high-order m – n bits of a logical
address designate the page number, and the n low-order bits designate the page
offset. Thus, the logical address is as follows:
• where p is an index into the page table and d is the displacement within the page.
Here, in the logical address, n = 2 and m = 4. Using a page size
of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the
user's view of memory can be mapped into physical memory.

• Logical address 0 is page 0, offset 0. Indexing into the page table, we find
that page 0 is in frame 5. Thus, logical address 0 maps to physical address
20 [= (5 x 4) + 0].
• Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 x 4) + 3].
• Logical address 4 is page 1, offset 0. Page 1 is in frame 6, so logical
address 4 maps to physical address 24 [= (6 x 4) + 0].
• Logical address 13 (page 3, offset 1) maps to physical address 9 [= (2 x 4) + 1],
since page 3 is in frame 2.
1) How do we calculate the number of bits in the logical and physical addresses
when a logical address space of 8 pages of 1024 words each is mapped onto a
physical memory of 32 frames?

Size of the logical address space:
No. of pages * page size = 8 * 1024 = 2^3 * 2^10 = 2^13
No. of bits for the logical address is 13.

Size of the physical address space:
No. of frames * frame size = 32 * 1024 = 2^5 * 2^10 = 2^15
No. of bits for the physical address is 15.

2) Assume a page size of 1K and a 15-bit logical address space. How many pages
are in the system?
2^(15-10) = 2^5 = 32.

3) Assuming a 15-bit address space with 8 logical pages, how large are the pages?
2^(15-3) = 2^12 = 4K (4096).
Consider logical address 1025 and the following page table for some process
P0: page 0 -> frame 8, page 1 -> frame 0, page 2 -> frame 2. Assume a 15-bit
address space with a page size of 1K. What is the physical address to which
logical address 1025 will be mapped?

Step 1. Convert the logical address to binary:
000010000000001

Step 2. Determine the logical page number:
Since 5 bits are allocated to the logical page number, the address is broken
up as follows:
00001 0000000001
(logical page number | offset within page)

Step 3. Index into the page table with the logical page number:
page 1 maps to frame 0, so the physical address is (0 x 1024) + 1 = 1.
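The same translation can be written as a few bit operations. A sketch under the assumptions of the exercise (15-bit addresses, 1K pages, the three-entry page table 8/0/2); the function name `translate` is invented.

```python
# Sketch of the translation worked above: 15-bit logical addresses with
# 1K pages, so the top 5 bits select the page and the low 10 bits the offset.

PAGE_BITS = 10            # 1K page -> 10-bit offset
page_table = [8, 0, 2]    # page 0 -> frame 8, page 1 -> frame 0, page 2 -> frame 2

def translate(logical):
    page = logical >> PAGE_BITS
    offset = logical & ((1 << PAGE_BITS) - 1)
    return (page_table[page] << PAGE_BITS) | offset

print(translate(1025))    # page 1, offset 1 -> frame 0 -> physical address 1
```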
Hierarchical Paging
• Most modern computer systems support a large logical address space (2^32 to 2^64).
In such an environment, the page table itself becomes excessively large.
• One simple solution to this problem is to divide the page table into smaller pieces. We
can accomplish this division in several ways.
• Two-level paging algorithm: In this the page table itself is also paged. For example,
consider again the system with a 32-bit logical address space and a page size of 4 KB.
• A logical address is divided into a page number consisting of 20 bits and a page offset
consisting of 12 bits.
• Because we page the page table, the page number is further divided into a 10-bit page
number and a 10-bit page offset. Thus, a logical address is as follows:
• where p1 is an index into the outer page table and p2 is the
displacement within the page of the outer page table. Because address
translation works from the outer page table inward, this scheme is also
known as a forward-mapped page table.
For Example : The VAX architecture supports a variation of two-level paging.

The next step would be a four-level paging scheme, where the second-level
outer page table itself is also paged, and so forth.
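The 10/10/12 split of a 32-bit address described above can be sketched with masks and shifts; the function name `split` and the sample address are invented for illustration.

```python
# Sketch: splitting a 32-bit logical address into the 10/10/12 fields
# described above (outer index p1, inner index p2, offset d).

def split(addr):
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page-table index
    p1 = addr >> 22              # top 10 bits: outer page-table index
    return p1, p2, d

print(split(0x00403ABC))  # (1, 3, 2748)
```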

You might also like