
Operating Systems

UNIT- I
What is Operating System? History and Evolution of OS, Basic OS functions,
Resource Abstraction, Types of Operating Systems– Multiprogramming Systems,
Batch Systems, Time Sharing Systems; Operating Systems for Personal Computers,
Workstations and Hand-held Devices, Process Control & Real time Systems.
UNIT- II
Processor and User Modes, Kernels, System Calls and System Programs, System
View of the Process and Resources, Process Abstraction, Process Hierarchy,
Threads, Threading Issues, Thread Libraries; Process Scheduling, Non-Preemptive
and Preemptive Scheduling Algorithms.

UNIT III
Process Management: Deadlock, Deadlock Characterization, Necessary and
Sufficient Conditions for Deadlock, Deadlock Handling Approaches: Deadlock
Prevention, Deadlock Avoidance and Deadlock Detection and Recovery.
Concurrent and Dependent Processes, Critical Section, Semaphores, Methods for
Inter-process Communication; Process Synchronization, Classical Process
Synchronization Problems: Producer-Consumer, Reader-Writer.

UNIT IV
Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies– Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.

UNIT V
File and I/O Management, OS Security: Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security
Policy Mechanism, Protection, Authentication and Internal Access Authorization

REFERENCE BOOKS:
1. Operating System Principles by Abraham Silberschatz, Peter Baer Galvin and
Greg Gagne (7th Edition), Wiley India Edition.
2. Operating Systems: Internals and Design Principles by Stallings (Pearson)
3. Operating Systems by J. Archer Harris (Author), Jyoti Singh (Author) (TMH)

UNIT-1
Operating System Introduction
What is Operating System:
Definition: An operating system is system software that acts as an
interface between the user and the computer. It controls resources
such as the CPU, memory, and input/output devices, and the overall
operation of the computer system.
An operating system provides an environment in which a user
can execute programs efficiently and conveniently. It is the first
program loaded during booting and remains in memory the entire time.

Objectives and Functions (or) Services of Operating System:


An operating system is system software that acts as an interface between the user
and the computer. The various services or functions provided by an operating system
are as follows:
1. Program Execution
2. I/O Operations
3. File System Manipulation
4. Error Handling
5. Resource Manager
6. User Interface
7. Multitasking
8. Security
9. Networking
1. Program Execution:
 A number of steps are needed to execute a program.
 The program must be loaded into main memory, I/O devices and files must be
initialized, and other resources must be prepared.
 The OS handles these tasks so that the user can execute programs.
2. I/O Operations:
 Each input/output device requires its own set of instructions or signals for
operation.
 An I/O operation means a read or write operation with a specific I/O device.
 The operating system provides access to I/O devices whenever required.
3. File System Manipulation:
 A file is a collection of related information. Files are stored on secondary storage
devices.
 For easy access, files are grouped together into directories.
 The various file operations include creating/deleting files, backing up files, and
mapping files onto secondary storage.

4. Error Handling:
 Various types of errors can occur while a computer system is running.
 These include internal and external hardware errors, such as memory errors,
device failures, etc.
 In each case, the OS is responsible for handling the error without affecting the
running applications.
5. Resource Manager:
 A computer has a set of resources for storing and processing data, and for
controlling these functions.
 The OS is responsible for managing these resources.
6. User Interface:
 The OS provides an environment such as a CUI (Character User Interface) or GUI
(Graphical User Interface), so that the user can operate the computer easily.
 It also translates the various instructions given by the user.
 Hence it acts as an interface between the user and the computer.
7. Multitasking:
 The OS automatically switches from one task to another when multiple tasks are
executed.
 For example: typing text, listening to music, printing information, and so on.
 The operating system is responsible for running all these tasks at the same time.
8. Security:
 The OS provides security to protect various resources against unauthorized users.
 It also uses a timer, so that no process can hold the CPU indefinitely.
9. Networking:
 Networking is used for exchanging information between different computers.
 These computers are connected by various communication links, such as
telephone lines or network cables.
Resource Abstraction & Types of Operating Systems (Evolution of OS):
Resource abstraction is the process of hiding the details of how the
hardware operates, thereby making computer hardware relatively easy for an
application programmer to use.
Operating systems are classified into different categories. The following are some of
the most widely used types of operating system:
1. Simple Batch System
2. Multiprogramming System
3. Distributed Systems
4. Real Time Systems
5. Time sharing Operating Systems

1. Simple Batch Systems:

In a Batch Processing System, computer programs are executed as 'batches'. In this
system, programs are collected, grouped, and executed together.
 In a Batch Processing System, the user has to submit a job (written on cards or tape)
to a computer operator.
 The computer operator groups the jobs serially into batches.
 The operator then places a batch of several jobs into the input device.
 A special program, called the Monitor, executes each program in the batch.
 The batches are executed one after another at defined time intervals.
 Finally, the operator receives the output of all jobs and returns it to the
concerned users.
2. Multiprogramming (Multitasking) Systems:
In a multiprogramming system, two or more programs are loaded into main memory.
The CPU executes only one program at a time; all the other programs wait for their
turn to execute.

[Diagram: several processes (Process 1, Process 2, Process 3) resident in memory, sharing a single CPU.]

 The operating system picks one job from memory and begins to execute it.
 When this job needs an I/O operation, the operating system switches to another
job, so the CPU and OS are always kept busy.
 The jobs in memory are always fewer than the number of jobs on disk (the Job Pool).
 If several jobs are ready to run at the same time, the OS chooses which one to run
using CPU scheduling methods.
 In a multiprogramming system, the CPU is never idle and keeps on processing.
3. Distributed Operating System:
In a Distributed Operating System, the workload is shared between two or more
computers linked together by a network.
 A network is a communication path between two or more computer systems.
 In a distributed operating system, the computers are called Nodes.
 It provides its users with the illusion that they are using a single computer.
 The different computers are linked together by a communication network, i.e., a LAN
(Local Area Network) or WAN (Wide Area Network).

 The Distributed Operating system has the following two models:

1. Client-Server Model

2. Peer-to-Peer Model
1. Client-Server Model: In this model, the Client sends a resource request to the
Server, and the server provides the requested resource to the Client. The
following diagram shows the Client-Server Model.
[Diagram: a central Server connected through a Network to several Clients.]

2. Peer-to-Peer Model: In the P2P Model, the Peers are computers connected to
each other via a network. Files can be shared directly between systems on the
network, without the need for a central server. The following diagram shows the
Peer-to-Peer Model.
[Diagram: several peer computers connected to each other through a Network, with no central server.]

4. Real-Time Operating System
 A Real-Time Operating System (RTOS) is a special-purpose operating system.
 An RTOS is a very fast and small operating system. It is commonly used in
embedded systems.
 It is used to control scientific experiments, industrial control systems, rockets,
home appliances, weapon systems, etc.
 RTOS is divided into the following two categories:
1. Hard Real-Time System
2. Soft Real-Time System
1. Hard Real-Time System: It guarantees that critical tasks are completed on
time. If a task is not completed within its deadline, the system is considered to
have failed.
Ex: Nuclear systems, some medical equipment, flight control systems, etc.
2. Soft Real-Time System: It is a less restrictive system. If a task is not
completed within its deadline, the system is not considered to have failed.
Ex: Multimedia (games), home appliances, etc.
5. Time-Sharing OS:
 It allows multiple users to share the CPU's time simultaneously.
 This OS allots a time slot to each user for execution.
 When the time slot expires, the OS allocates CPU time to the
next user on the system.
 The time slot is typically between 10 and 100 ms; this interval
is called a time slice or quantum.

Operating Systems for Personal Comp’s

1. Microsoft Windows
 Microsoft created the Windows operating system in the mid-1980s.
 There have been many different versions of Windows, but the most recent ones are
Windows 10 (released in 2015), Windows 8 (2012), Windows 7 (2009), and Windows
Vista (2007).
 Windows comes pre-loaded on most new PCs, which helps to make it the most
popular operating system in the world.

2. macOS
 macOS (previously called OS X) is a line of operating systems created by Apple.
 It comes preloaded on all Macintosh computers, or Macs.
 Some of the specific versions include Mojave (released in 2018), High Sierra (2017),
and Sierra (2016).
3. Solaris
 Best for large workload processing, managing multiple databases, etc.
 Solaris is a UNIX-based operating system originally developed by Sun
Microsystems in the mid-'90s.
 In 2010 it was renamed Oracle Solaris after Oracle acquired Sun Microsystems. It is
known for its scalability and for several notable features such as DTrace, ZFS, and
Time Slider.
4. Linux
 Linux was introduced by Linus Torvalds; it is commonly distributed together with
software from the Free Software Foundation (FSF).
 Linux (pronounced LINN-ux) is a family of open-source operating systems,
which means they can be modified and distributed by anyone around the world.
 This is different from proprietary software like Windows, which can only be modified
by the company that owns it.
 The advantages of Linux are that it is free, and there are many different distributions,
or versions, you can choose from.
5. Chrome OS
Best for web applications.
Chrome OS is another Linux-kernel-based operating system, designed by
Google. As it is derived from the free Chromium OS, it uses the Google Chrome web
browser as its principal user interface. This OS primarily supports web applications.

WORKSTATIONS

 A workstation is a computer used for engineering applications
(CAD/CAM), desktop publishing, software development, and
other applications that require a moderate amount of computing
power and relatively high-quality graphics capabilities.
 Workstations generally come with a large, high-resolution
graphics screen, a large amount of RAM, built-in network
support, and a graphical user interface. Most workstations
also have a mass storage device such as a disk drive, but a special type of workstation,
called a diskless workstation, comes without one.
 Common operating systems for workstations are UNIX and Windows NT. Like PCs,
workstations are single-user computers, but they are typically linked together to
form a local-area network, although they can also be used as stand-alone systems.
Process Control
A Process Control Block is a data structure that contains information about the
process it is associated with. The process control block is also known as a task control
block, or an entry of the process table.
Process Control Block (PCB):
 The Process Control Block (PCB) is a data structure created and managed
by the operating system. It is also called a Task Control Block.
 Each process is represented in the operating system by a Process Control Block.
 Each and every process has its own PCB. The information in the PCB is updated
during process execution.
 The PCB contains sufficient information so that it is possible to interrupt a running
process and later resume its execution.
 The Process Control Block contains the following information:
1. Identifier: A unique value, assigned by the OS at the time of process creation.
2. State: The current state of the process. The process state may be new, ready,
running, waiting, or terminated.
3. Priority: The priority level of the process, relative to other processes.
4. Program Counter: The address of the next instruction to be executed in the
program.
5. Memory Pointers: The addresses of the instructions and data related to the
process.
6. Context Data: The data stored in the CPU registers while the process is
executing.
7. I/O Status Information: A list of I/O devices allocated to the process, a list of
open files, and so on.
8. Accounting Information: The amount of processor time used, time limits,
account numbers, and so on.
[Diagram: a PCB shown as a block with fields Identifier, Process state, Priority, Program Counter, Memory Pointers, Context Data, I/O Status Information, Accounting Information.]
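The fields above can be sketched as a C structure. This is only a teaching sketch: the field names, types, and sizes here are invented for illustration, and a real kernel's PCB (for example, Linux's task_struct) is far more elaborate.

```c
#include <stddef.h>

/* Illustrative sketch of a Process Control Block. All field names and
 * sizes are invented for teaching; real kernels differ considerably. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* 1. Identifier: unique, assigned by the OS  */
    enum proc_state state;           /* 2. State: new/ready/running/waiting/...    */
    int             priority;        /* 3. Priority relative to other processes    */
    void           *program_counter; /* 4. Address of the next instruction         */
    void           *mem_base;        /* 5. Memory pointers: base of code and data  */
    size_t          mem_limit;       /*    ... and the size of that region         */
    long            context[16];     /* 6. Context data: saved CPU registers       */
    int             open_files[8];   /* 7. I/O status: open file descriptors       */
    long            cpu_time_used;   /* 8. Accounting: processor time consumed     */
};
```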

UNIT-2

Processor
A processor is a hardware component that controls all operations of the
computer system. It is commonly referred to as the Central Processing Unit (CPU).

 A processor is an integrated electronic circuit that performs the calculations that run a
computer.
 A processor performs arithmetical, logical, input/output (I/O) and other basic
instructions that are passed from an operating system (OS).
 Most other processes are dependent on the operations of a processor.
 The CPU is just one of the processors inside a personal computer (PC).
 The Graphics Processing Unit (GPU) is another processor, and even some hard drives
are technically capable of performing some processing.

Processor Registers: A register is a small memory that resides in the processor. It
provides data quickly to the currently executing program (process). A register can be
8-bit, 16-bit, 32-bit, or 64-bit.
a) PC: PC stands for Program Counter. It contains the address of the next instruction
to be executed.
b) IR: IR stands for Instruction Register. It stores the instruction currently being
executed.
c) MAR: MAR stands for Memory Address Register. It stores the address of the data or
instruction to be fetched from main memory.
d) MBR: MBR stands for Memory Buffer Register. It stores the data or instruction
fetched from main memory, which is then copied into the Instruction Register (IR)
for execution.
e) I/OAR: I/OAR stands for Input/Output Address Register. It specifies a particular I/O
device.
f) I/OBR: I/OBR stands for Input/Output Buffer Register. It is used for exchanging
data between an I/O module and the processor.

User Mode and Kernel Mode.
There are two modes of operation in the operating system to make sure it
works correctly. These are
1. User mode
2. Kernel mode.
1. User Mode
The system is in user mode when the operating system is running a user
application, such as a text editor.
While in user mode, the CPU executes the processes given by the user in
user space.
The mode bit is set to 1 in user mode. It is changed from 1 to 0 when
switching from user mode to kernel mode.
2. Kernel Mode
 A Kernel is a computer program that is the heart of an Operating System.
 The system starts in kernel mode when it boots and after the operating system is
loaded, it executes applications in user mode.
 There are certain instructions that need to be executed by Kernel only. So, the CPU
executes these instructions in the Kernel Mode only.
Ex:- memory management should be done in Kernel-Mode only
 The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when switching
from kernel mode to user mode.
 The Operating System has control over the system.
 The Kernel also has control over everything in the system.
 The Kernel remains in memory until the Operating System is
shut down.
 It provides an interface between the user and the hardware
components of the system. When a process makes a request to
the Kernel, it is called a System Call.
Functions of a Kernel:
 Accessing computer resources
 Resource management
 Memory management
 Device management
When a user process runs, it executes in user mode until it makes
a system call. A system trap is then generated, and the mode bit is set to zero. The
system call gets executed in kernel mode. After the execution is completed, another
system trap is generated and the mode bit is set back to 1; control returns
to user mode and the process execution continues.

System Call

 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it makes a request to the operating
system’s kernel.
 System call provides the services of the operating system to the user programs via
Application Program Interface (API).
 It provides an interface between a process and operating system. All programs needing
resources must use system calls.

Services Provided by System Calls:

1. Process creation and management
2. Main memory management
3. File access, directory & file system management
4. Device handling (I/O)
5. Protection
6. Networking, etc.

Types of System Calls: There are 5 different categories of system calls –


1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication
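As an illustration of a program requesting OS services through system calls, the following C sketch uses the POSIX calls open(), write(), read(), and close() (the file-management and device-handling categories above). The function, its name, and the message and path passed to it are invented for this example.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: a user program obtaining OS services via POSIX system calls.
 * Writes msg to the file at path, then reads it back into out.
 * Returns the number of bytes read back, or -1 on error. */
int copy_through_file(const char *path, const char *msg, char *out, int outsz)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600); /* file-management call */
    if (fd < 0)
        return -1;
    if (write(fd, msg, strlen(msg)) < 0) {                   /* I/O system call */
        close(fd);
        return -1;
    }
    close(fd);

    fd = open(path, O_RDONLY);                               /* open again for reading */
    if (fd < 0)
        return -1;
    int n = (int)read(fd, out, (size_t)(outsz - 1));
    close(fd);
    if (n < 0)
        return -1;
    out[n] = '\0';
    return n;
}
```

Each of these calls traps into the kernel, which performs the requested work in kernel mode and returns the result to the user program.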

System Programs
System programming can be defined as the act of building systems software using
system programming languages.
In the computer hierarchy, hardware is at the lowest level. Above it come the
operating system, then system programs, and finally application programs.

Process Concepts
Process:

A process is a program in execution. A system consists of a collection of
processes. Each process executes its instructions in a sequential fashion. Operating
system processes execute system code, and user processes execute user code.
Process Hierarchy

Process States:
 The process state is defined as the current activity of the process.
 A process goes through various states during its execution.
 The operating system places processes in a FIFO (First In First Out) queue for
execution.
 A dispatcher is a program that switches the processor from one process to another
for execution.
 The different process states are as follows.
 The different process states are as follows.

New Terminated
admitted
dispatch complete
Ready Running
timeout

Event occurs Event wait


Waiting

1. New State: The New state means that a process is being admitted (created) by the
operating system.
2. Ready State: The Ready state means that the process is ready to execute, i.e.,
waiting for a chance to execute.
3. Running State: The Running state means that the instructions of the process are
being executed.
4. Waiting State: The Waiting state means that the process is waiting for some event
to occur, such as the completion of an I/O operation. It is also known as the Blocked
state.
5. Terminated State: The Terminated state means that the process has finished its
execution. The process may have either completed execution or been aborted for
some reason.
State Transitions of a Process
The process state transitions are the following combinations:
1. Null → New
2. New → Ready
3. Ready → Running
4. Running → Terminated
5. Running → Ready
6. Running → Waiting
7. Waiting → Ready
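The seven transitions above can be encoded as a small validity check. This is an illustrative sketch; the enumerator names are invented here, with a Null pseudo-state included to model transition 1.

```c
/* Sketch: the process states and the seven legal transitions listed
 * above, encoded as a validity check. */
enum pstate { PNULL, PNEW, PREADY, PRUNNING, PWAITING, PTERMINATED };

int transition_ok(enum pstate from, enum pstate to)
{
    switch (from) {
    case PNULL:    return to == PNEW;               /* 1. Null -> New            */
    case PNEW:     return to == PREADY;             /* 2. New -> Ready           */
    case PREADY:   return to == PRUNNING;           /* 3. Ready -> Running       */
    case PRUNNING: return to == PTERMINATED ||      /* 4. Running -> Terminated  */
                          to == PREADY      ||      /* 5. Running -> Ready       */
                          to == PWAITING;           /* 6. Running -> Waiting     */
    case PWAITING: return to == PREADY;             /* 7. Waiting -> Ready       */
    default:       return 0;                        /* everything else is illegal */
    }
}
```

Note, for instance, that a Waiting process can never move directly to Running; it must first return to Ready and be dispatched.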

Process Creation and Termination (or) Operations on Process


The Operating System must provide a facility for process creation and
termination. The processes are created and deleted dynamically.
1. Process Creation:
When a new process is added, the operating system creates a Process Control
Block and allocates space in main memory. These steps are called Process
Creation.
Example: Opening MS-Word software
When the OS creates a new process at the request of another process, this is
referred to as "Process Spawning". When one process spawns (produces) another, the
former process is called the Parent process, and the spawned (produced) process is
called the Child process.
Example: Printing from MS-Word software
2. Process Termination:
An operating system terminates a process in different situations. Upon
termination, all process-related information is released from main memory.
Example: Closing MS-Word software
Reasons for process termination:
A process can be terminated due to the following reasons:
 Normal completion of the process
 Time limit exceeded
 I/O Failure
 Invalid instruction executed
 Parent process terminated
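On POSIX systems, process spawning and normal termination can be sketched with fork() and waitpid(). The exit status 42 below is an arbitrary illustrative value.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of POSIX process spawning: fork() creates a child process,
 * the child terminates normally with a status, and the parent reaps
 * it with waitpid(). Returns the child's exit status, or -1 on error. */
int spawn_and_reap(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;            /* process creation failed             */
    if (pid == 0)
        _exit(42);            /* child: normal completion            */

    int status = 0;
    waitpid(pid, &status, 0); /* parent: wait for child termination  */
    if (WIFEXITED(status))
        return WEXITSTATUS(status);
    return -1;                /* child did not terminate normally    */
}
```

Here the calling process is the parent and the fork()ed copy is the child; when the child exits, the OS releases its resources and reports the exit status to the waiting parent.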

Process Scheduling (or) CPU Scheduling
The process scheduling is to assign processes to the processor for execution.
It is the method of executing multiple processes at a time in a multiprogramming
system.
Hence, the CPU scheduling helps to achieve system objectives such as response
time, CPU utilization, waiting time etc. In many systems, the scheduling task is divided
into three separate functions. They are
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler

New

Long-term Long-term
scheduler scheduler
Ready/ Suspend
Ready Running Exit
Medium-term Short-term
scheduler scheduler

1. Long-Term Scheduler:
 A Long-Term Scheduler determines which programs are admitted to the system for
processing.
 Once a program is admitted, it becomes a process and is added to the queue.
 It controls the degree of multiprogramming, i.e., the number of processes present
in the ready state at any time.
 The Long-Term Scheduler is also called the Job Scheduler.

2. Short-Term Scheduler:
 The Short-Term Scheduler is also known as the CPU Scheduler or Dispatcher.
 It decides which process will execute next on the CPU, i.e., moves a process from
the Ready state to the Running state.
 It may also preempt the currently running process in order to execute another
process.
 The main aim of this scheduler is to enhance CPU performance and increase the
process execution rate.
3. Medium-Term Scheduler:
 The Medium-Term Scheduler is responsible for suspending and resuming the
processes.
 It mainly does Swapping. i.e., moving processes from Main memory to secondary
memory and vice versa.
 The Medium-Term Scheduler reduces the degree of multi-programming.

Process Scheduling Algorithms
Scheduling algorithms are used to decide which of the processes in the queue should be
allocated the CPU. An operating system uses a Dispatcher, which assigns a process to
the CPU.
Types of Scheduling Algorithms:
The scheduling algorithms are classified into two types. They are as follows:
1. Non-Preemptive Algorithms
2. Preemptive Algorithms

I. Non-Preemptive Algorithms:
A non-preemptive algorithm does not preempt the currently running process. In
this case, once a process starts executing on the CPU, it cannot be preempted
until it completes its execution.
Ex: (1). First Come First Serve (FCFS)
(2). Shortest Job First (SJF)
II. Preemptive Algorithms:
A preemptive algorithm may preempt the currently running process. In this
case, the currently running process may be interrupted and moved to the Ready
state. The preemption decision is made when a new process arrives, when an
interrupt occurs, or when a time-out occurs.
Ex: Round Robin (RR)
1) First Come First Serve [FCFS] Algorithm:
 The FCFS algorithm is the simplest and most straightforward scheduling algorithm.
 It follows the non-preemptive scheduling method.
 In this algorithm, processes are executed on a first-come, first-served basis.
 This algorithm is easy to understand and implement.
 The problem with this algorithm is that the average waiting time can be quite long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

If the processes arrive in the order P1, P2, P3, then the Gantt chart of this scheduling
is as follows.

| P1 (0–24) | P2 (24–27) | P3 (27–30) |
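The FCFS waiting times follow directly from the arrival order: each process waits for the sum of the burst times of the processes ahead of it. A minimal sketch, assuming all processes arrive at time 0:

```c
/* Sketch: FCFS average waiting time when all processes arrive at
 * time 0. Process i waits for the total burst time of the processes
 * scheduled before it. */
double fcfs_avg_wait(const int burst[], int n)
{
    int t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;  /* process i waits until the CPU is free */
        t += burst[i];    /* then runs for its full burst          */
    }
    return (double)total_wait / n;
}
```

For the example above (bursts 24, 3, 3 ms), the individual waits are 0, 24, and 27 ms, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms.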
2) Shortest Job First [SJF] Algorithm:
 It is also called Shortest Process Next (SPN).
 It follows the non-preemptive scheduling method.
 The SJF algorithm is faster than FCFS.
 The process with the least burst time is selected from the ready queue for execution.
 This is the best approach to minimize waiting time.
 The problem with SJF is that it requires prior knowledge of the burst time of each
process.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         6
P2         8
P3         7
P4         3

The Gantt chart of SJF scheduling is as follows.

| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
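With all arrivals at time 0, non-preemptive SJF amounts to sorting the ready queue by burst time and then scheduling FCFS on the sorted order. A sketch (note that it sorts the burst array in place):

```c
/* Sketch: non-preemptive SJF average waiting time, all arrivals at
 * time 0. Sorts the bursts shortest-first (in place), then applies
 * the same accumulation as FCFS on the sorted order. */
double sjf_avg_wait(int burst[], int n)
{
    /* insertion sort by burst time, shortest first */
    for (int i = 1; i < n; i++) {
        int key = burst[i], j = i - 1;
        while (j >= 0 && burst[j] > key) {
            burst[j + 1] = burst[j];
            j--;
        }
        burst[j + 1] = key;
    }
    int t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;  /* wait until the CPU is free */
        t += burst[i];    /* run the shortest remaining job */
    }
    return (double)total_wait / n;
}
```

For the bursts 6, 8, 7, 3 above, the sorted order is 3, 6, 7, 8, giving waits of 0, 3, 9, and 16 ms and an average of 28 / 4 = 7 ms.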
3) Round Robin [RR] Algorithm:
 The Round Robin scheduling algorithm is used in time-sharing systems.
 It is one of the most widely used algorithms.
 A fixed time (quantum) is allotted to each process for execution.
 If the running process does not complete within the quantum, it is preempted.
 The next process in the ready queue is then allocated the CPU for execution.
 The problem with this algorithm is that the average waiting time is often long.
Example: Consider the following processes that arrive at time 0.
Process    Burst Time (ms)
P1         24
P2         3
P3         3

If the time quantum is 4 milliseconds, then the Gantt chart of this scheduling is as
follows.

| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
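Round Robin can be sketched as a simulation that gives each unfinished process one quantum per pass; a process's waiting time is its completion time minus its burst time. This sketch assumes all processes arrive at time 0 and that there are at most 16 of them:

```c
/* Sketch: Round Robin simulation, all arrivals at time 0, assumes
 * n <= 16. Returns the total waiting time across all processes
 * (waiting time = completion time - burst time). */
int rr_total_wait(const int burst[], int n, int quantum)
{
    int remaining[16];
    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    int t = 0, done = 0, total_wait = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                 /* already finished           */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;                   /* run for one time slice     */
            remaining[i] -= slice;
            if (remaining[i] == 0) {      /* finished on this slice     */
                total_wait += t - burst[i];
                done++;
            }
        }
    }
    return total_wait;
}
```

For the bursts 24, 3, 3 with a 4 ms quantum, the waits are 6, 4, and 7 ms (total 17), matching the Gantt chart above.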
Threads
 A thread is also called a "Light-Weight Process"; it is a single unit of execution
within a process.
 A thread has its own Program Counter (PC), register set, and stack.
 It shares other information with the process's other threads, such as the process
code, data, and open files.
 A traditional process has a single thread of control and is also called a "Heavy-
Weight Process".
 If a process contains multiple threads of control, it can do more than one task at a
time.
 Many software packages that run on modern computers are multi-threaded.
 For example, MS-Word uses multiple threads, performing spelling and grammar
checking in the background, auto-saving, etc.


Threading Issues
The main threading issues are:
a) The fork() and exec() system calls
b) Signal handling
c) Thread cancellation
d) Thread pools
e) Thread-local storage

Page 16 of 38
a. The fork() and exec() system calls
 fork() is used to create a duplicate process. The meaning of the fork() and exec()
system calls changes in a multithreaded program.
 If one thread calls fork(), does the new process duplicate all threads, or only the
calling thread?
 If a thread calls the exec() system call, the program specified in the parameter to
exec() will replace the entire process, including all its threads.
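The fork()-then-exec() pattern can be sketched as follows: the child's entire image (all threads included) is replaced by the program named in the exec call. The program "true" is assumed to be available on the PATH of a POSIX system.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: fork() followed by exec(). In the child, execlp() replaces
 * the whole process image with the named program; the parent waits
 * and returns the child's exit status (-1 on error). */
int fork_then_exec(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execlp("true", "true", (char *)0); /* replaces the child's image   */
        _exit(127);                        /* reached only if exec failed  */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a multithreaded parent, the exec() in the child wipes out every thread of that child, which is why fork() followed immediately by exec() often only needs to duplicate the calling thread.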
b. Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a
particular event has occurred.
A signal is received either synchronously or asynchronously, depending on
the source of, and the reason for, the event being signalled.
All signals, whether synchronous or asynchronous, follow the same
pattern:
 A signal is generated by the occurrence of a particular event.
 The signal is delivered to a process.
 Once delivered, the signal must be handled.
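The three steps above can be sketched in C with the standard signal() and raise() calls; here the signal is generated synchronously by the process itself:

```c
#include <signal.h>

/* Sketch of the three steps: install a handler, generate and deliver
 * the signal, and handle it by setting a flag. */
static volatile sig_atomic_t got_signal = 0;

static void on_sigusr1(int signo)
{
    (void)signo;
    got_signal = 1;                  /* step 3: handle the delivered signal */
}

int deliver_and_handle(void)
{
    signal(SIGUSR1, on_sigusr1);     /* register the handler                */
    raise(SIGUSR1);                  /* steps 1-2: generate and deliver     */
    return got_signal;               /* 1 if the handler ran                */
}
```

In a multithreaded program the extra question is which thread the signal should be delivered to: the thread to which it applies, every thread, certain threads, or one designated signal-handling thread.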
c. Cancellation
Termination of a thread in the middle of its execution is called thread
cancellation.
Threads that are no longer required can be cancelled by another thread using
one of two techniques:
1. Asynchronous cancellation
2. Deferred cancellation
1. Asynchronous Cancellation
The target thread is cancelled immediately.
2. Deferred Cancellation
A flag is set to indicate that the thread should cancel itself; the target
thread checks this flag periodically and cancels itself when it is safe to do so.
For example, if multiple database threads are concurrently searching
through a database and one thread returns the result, the remaining threads might
be cancelled.
d. Thread Pools
 In a multithreaded web server, whenever the server receives a request, it creates a
separate thread to service the request.
 A thread pool instead creates a number of threads at process start-up and places
them into a pool, where they sit and wait for work.

e. Thread-Local Storage
The benefit of using threads in the first place is that most data is shared
among the threads; sometimes, however, threads also need their own thread-specific
data.
The major thread libraries (Pthreads, Win32, and Java) provide support for
thread-specific data, which is called Thread-Local Storage (TLS).

Thread Libraries
 Thread libraries provide programmers with an Application Program Interface (API)
for creating and managing threads.
 Thread libraries may be implemented either in user space or in kernel space.
There are two primary ways of implementing a thread library:
 The first way is to provide a library entirely in user space, with no kernel support.
 The second way is to implement a kernel-level library supported directly by the
operating system.
 There are three main thread libraries in use today:
1. POSIX Pthreads - may be provided as either a user-level or kernel-level library, as
an extension to the POSIX standard.
 Pthreads are available on Solaris, Linux, Mac OS X, Tru64, and via public-domain
shareware for Windows.
 Global variables are shared amongst all threads.
 One thread can wait for the others to rejoin before continuing.
2. Win32 threads - provided as a kernel-level library on Windows systems. It is
similar to Pthreads.
3. Java threads - since Java generally runs on a Java Virtual Machine, the
implementation of threads depends on whatever OS and hardware the JVM is
running on, i.e., either Pthreads or Win32 threads depending on the system.

UNIT-3
Process Management

Deadlock
Deadlock: "Deadlock is a situation in which a set of processes is blocked, because each
process is holding a resource and waiting for another resource acquired by some other
process." (or) Deadlock is a situation in which several processes compete for a
finite number of resources.
In a multiprogramming system, a process requests a resource, and if the
resource is not available, the process enters a waiting state. The waiting
process may never change state again, because the resources it wants are held by
other waiting processes. This situation is called a Deadlock.
Consider the following Resource Allocation Graph.
[Diagram: R1 is assigned to P1; P1 is waiting for R2; R2 is assigned to P2; P2 is waiting for R1.]

In the above resource-allocation graph, process P1 is holding resource R1
and waiting for resource R2, which is assigned to process P2, while process P2 is
waiting for resource R1, which is held by P1. This situation is a Deadlock.

Deadlock characterization (or) Conditions for Deadlock


There are following 4 conditions that cause the occurrence of a deadlock.
1) Mutual exclusion: At least one resource must be held in a non-sharable mode,
meaning only one process at a time can use the resource. If another process requests
the same resource, the requesting process must wait until the resource has been
released.
2) Hold and wait: A process must be holding at least one resource while waiting for
another resource that is held by some other process.
3) No preemption: Resources cannot be preempted. That is, no resource can be
forcibly removed from the process holding it.
4) Circular wait: The processes are waiting for resources in a circle. For example, P1 is
holding resource R1 and is waiting for resource R2, while P2 is holding resource
R2 and is waiting for resource R1.

Resource –Allocation Graph:


A Resource-Allocation graph is an analytical tool that is used to verify whether a
system is in a deadlock state or not.

[Figures: resource-allocation graphs showing (a) a circular wait (deadlock):
R1 held by P1 and requested by P2, R2 held by P2 and requested by P1;
(b) a graph with no cycle (no deadlock); and (c) the two edge types:
P1 -> R1 means the resource is requested, R1 -> P1 means the resource is held.]

In all the above diagrams, P1 and P2 represent processes, and R1 and R2
represent resources. A dot ( . ) inside a resource represents one instance of that resource.

Methods for Handling Deadlocks


The deadlock problem can be solved in three ways. They are
1. Deadlock prevention
2. Deadlock Avoidance
3. Deadlock detection and recovery
(1). Deadlock Prevention
When the four conditions (mutual exclusion, hold and wait, no preemption, circular
wait) hold simultaneously in the system, a deadlock can occur. If at least one of these
conditions cannot hold, we can prevent the occurrence of a deadlock. The strategy of
deadlock prevention is simply to design the system in such a way that the possibility of
deadlock is excluded.
a) Mutual exclusion:
 The mutual exclusion condition can be prevented whenever the resources are
sharable.
 Sharable resources do not require mutually exclusive access, and thus cannot be
involved in a deadlock.
 Read-only files are a good example of a sharable resource.
 Some resources such as files may allow multiple accesses for reading, but only
exclusive access for writing.
 If more than one process requires write permission, a deadlock can occur.
b) Hold & wait:
 The hold-and-wait condition can be prevented by requiring that, whenever a process
requests a resource, it does not hold any other resources.
 There are two approaches for this:
 One approach requires that all processes request all their resources at one time.
 Another approach requires that processes holding resources must release
them before requesting new resources, and then re-acquire the released resources
along with the new request.
c) No preemption:
 In one approach, if a process requests a resource which is held by another waiting
process, then the requested resource may be preempted from the waiting process.
 In the second approach, if a process requests resources which are not presently
available, then all the resources that it currently holds are preempted.
d) Circular wait:
 The circular-wait condition can be prevented by assigning a numerical ordering to
the resources.
 A process can request resources only in increasing order of this numbering.
 For example, if process P1 is allocated resource R5, then a later request by P1 for R4
or R3, which are numbered lower than R5, will not be granted; only requests for
resources numbered higher than R5 will be granted.

(2). Deadlock Avoidance


 Deadlock prevention restricts resource requests so that at least one of the four
deadlock conditions cannot hold.
 This can lead to inefficient use of resources and inefficient execution of processes.
 With deadlock avoidance, instead, a decision is made dynamically about whether
the current resource-allocation request can be granted.
 If granting it could potentially lead to a deadlock, the request is not granted.
Deadlock avoidance therefore requires knowledge of future process resource requests.
 We can describe two approaches to deadlock avoidance:
 Do not start a process if its demands may lead to deadlock.
 Do not grant an incremental resource request by a process, if this allocation may
lead to deadlock.
 A deadlock-avoidance algorithm ensures that a process will never enter an unsafe
or deadlocked state.
 Each process declares the maximum number of resources of each type that it may
need; the system also knows the number of available resources, the currently
allocated resources, and the maximum demand of the processes.
 That is, processes inform the operating system in advance how many resources
they will need.
 If the resources can be allocated to each process, in some order, according to its
requirements, without deadlock ever occurring, then the state is called a Safe state.
 A safe state is not a deadlocked state, and not all unsafe states are deadlocked; but
from an unsafe state, a deadlock may occur.
 We can avoid deadlock by using the Banker's algorithm.

[Figure: State space - the deadlock region lies inside the unsafe region; safe
states lie outside the unsafe region.]

Resource allocation:
Consider a system with a finite number of processes and a finite number of
resources. At any time a process may have zero or more resources allocated to it. The
state of the system is reflected by the current allocation of resources to processes. The
state may be a safe state or an unsafe state.

Safe State:
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid deadlock, i.e., there exists a safe sequence
of processes.

Unsafe State:
A state is unsafe if no such safe sequence exists. An unsafe state is not necessarily
deadlocked, but from an unsafe state the system may reach a deadlock.

(3). Deadlock Detection and Recovery


 If a system does not use either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur.
 The deadlock detection and recovery technique is used after the system enters a
deadlock situation.
 A resource-allocation graph (RAG) is used in the deadlock-detection algorithm.
 The detection algorithm examines the state of the system to detect whether a
deadlock has occurred.
 The recovery algorithm is used to recover from the deadlock.

1. Deadlock Detection: Deadlock detection is the process of determining whether a
deadlock exists or not, and of identifying the processes and resources involved in
the deadlock. The basic idea is to check the allocation of resources against their
availability, and to determine if the system is in a deadlocked state.
Detection strategies do not restrict process actions. With deadlock detection,
requested resources are granted to processes whenever possible. Periodically, the OS
runs an algorithm to detect the circular-wait condition.
1. A deadlock exists if, and only if, there are unmarked processes at the end of the
algorithm.
2. Each unmarked process is deadlocked.
3. The strategy of this algorithm is to find a process whose request can be
satisfied with the available resources, and to mark it.
2. Deadlock Recovery: When a detection algorithm finds that a deadlock exists,
several recovery methods can be used.
a) Process Termination: To eliminate deadlocks by aborting a process, we use one of
two methods. In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes: This method clearly breaks the deadlock
cycle, but at great expense: these processes may have computed for a long time,
and the results of their partial computations must be discarded and recomputed
later.
2. Abort one process at a time, until the deadlock cycle is eliminated: This
method incurs considerable overhead, since after each process is aborted, a
deadlock-detection algorithm must determine whether any processes are still
deadlocked.
b) Resource Preemption: Resources are preempted from the processes that are
involved in the deadlock, and the preempted resources are allocated to other
processes, so that there is a possibility of recovering the system from the deadlock.

Process Synchronization
Process Synchronization means sharing system resources among processes in
such a way that concurrent access to shared data is handled properly, thereby
minimizing the chance of inconsistent data. Maintaining data consistency demands
mechanisms to ensure synchronized execution of cooperating processes. Process
synchronization was introduced to handle problems that arise when multiple
processes execute concurrently. Some of these problems are discussed below.

Critical Section Problem


A Critical Section is a code segment that accesses shared variables and has
to be executed as an atomic action. It means that, in a group of cooperating
processes, at a given point of time only one process must be executing its critical
section. If any other process also wants to execute its critical section, it must wait
until the first one finishes.

Solution to Critical Section Problem


A solution to the critical-section problem must satisfy the following three
conditions:

1. Mutual Exclusion: Out of a group of cooperating processes, only one process can
be in its critical section at a given point of time.
2. Progress: If no process is in its critical section, and one or more processes want to
execute their critical sections, then one of those processes must be allowed to get
into its critical section.
3. Bounded Waiting: After a process makes a request to enter its critical
section, there is a limit on how many other processes can enter their critical
sections before this process's request is granted. After the limit is reached, the
system must grant the process permission to enter its critical section.
Synchronization Hardware
Many systems provide hardware support for critical-section code. The
critical-section problem could be solved easily in a single-processor environment
if we could disallow interrupts while a shared variable or resource is being
modified.
In this manner, we could be sure that the current sequence of instructions
would be allowed to execute in order without pre-emption. Unfortunately, this
solution is not feasible in a multiprocessor environment.
Disabling interrupts on a multiprocessor environment can be time
consuming, as the message must be passed to all the processors. This message
transmission lag delays entry of threads into their critical sections, and system
efficiency decreases.

OPERATING SYSTEM LAB - 2021-2022


INDEX

SNO DATE NAME OF EXPERIMENT


1. 06/05/2022 Creating a file with records
2. 08/05/2022 First Come First Serve Scheduling
3. 12/05/2022 Shortest Job First Scheduling
4. 03/06/2022 Round Robin Scheduling
5. 05/06/2022 Deadlock Detection
6. 08/06/2022 Deadlock Avoidance
7. 10/06/2022 Sequential File Allocation Strategy
8. 18/06/2022 Indexed File Allocation Strategy

1. CREATING A FILE WITH RECORDS

Aim: A program to create a file with records.

PROGRAM:
/* a program to write data into file*/
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
main()
{
int stno,sub1,sub2,n,i;
char stname[10];
FILE *fp;
fp=fopen("bca.txt","w");
if(fp==NULL)
{
printf("Can't open that file!");
exit(0);
}
clrscr();
printf("How many students: ");
scanf("%d",&n);
for(i=1;i<=n;i++)
{
printf("\nEnter Student %d Details \n",i);
printf("Student Number:");
scanf("%d",&stno);
fflush(stdin);
printf("Student Name:");
gets(stname);
printf("Marks in two subjects:");
scanf("%d%d",&sub1,&sub2);
fprintf(fp,"\n%d %s %d %d",stno,stname,sub1,sub2);
}
printf("Record(s) Created Successfully");
fclose(fp);
getch();
}

OUTPUT:
How many students: 2
Enter Student 1 Details
Student Number:101
Student Name:ABC
Marks in two subjects:45 56

Enter Student 2 Details
Student Number:102
Student Name:XYZ
Marks in two subjects:67 55
Record(s) Created Successfully

2. FIRST COME FIRST SERVE SCHEDULING


Aim: A program to implement First Come First Serve (FCFS) scheduling
algorithm
PROGRAM:
/* a program to implement FCFS process scheduling algorithm */
#include<stdio.h>
#include<conio.h>
#include<process.h>
void main()
{
char p[10][5];
int tot=0,wt[10],et[10],pt[10],i,n;
float avg=0;
clrscr();
printf("How many processes? ");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("Enter process%d name: ",i+1);
scanf("%s",p[i]);
printf("Enter process service time: ");
scanf("%d",&pt[i]);
}
wt[0]=0;
for(i=1;i<n;i++)
{
wt[i]=wt[i-1]+pt[i-1];
tot=tot+wt[i];
}
avg=(float)tot/n;
printf("p_name\t P_time\t w_time\n");
for(i=0;i<n;i++)
printf("%s\t%d\t%d\n",p[i],pt[i],wt[i]);
printf("\n Total waiting time=%d",tot);
printf("\n Average waiting time=%f",avg);
getch();
}
OUTPUT:
How many processes? 3
Enter process1 name: A
Enter process service time: 3
Enter process2 name: B
Enter process service time: 6
Enter process3 name: C
Enter process service time: 4
p_name   P_time   w_time
A        3        0
B        6        3
C        4        9

Total waiting time=12
Average waiting time=4.000000

3. SHORTEST JOB FIRST SCHEDULING


Aim: A program to implement Shortest Job First (SJF) scheduling algorithm
PROGRAM:

/* a program to implement Shortest Job First (SJF) process Scheduling algorithm */


#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
main()
{
int temp, i, j, n, sum = 0, position;
float avg_wait_time, avg_turn_time;
int ser_time[20], process[20], wait_time[20], turn_time[20];
clrscr();
printf("\n How many Processes?");
scanf("%d", &n);
for(i = 0; i < n; i++)
{
printf("Enter execution time for Process[%d]:\t", i + 1);
scanf("%d", &ser_time[i]);
process[i] = i + 1;
}
for(i = 0; i < n; i++)
{
position = i;
for(j = i + 1; j < n; j++)
{
if(ser_time[j] < ser_time[position])
{
position = j;
}
}
temp = ser_time[i];
ser_time[i] = ser_time[position];
ser_time[position] = temp;
temp = process[i];
process[i] = process[position];
process[position] = temp;
}
wait_time[0] = 0;
for(i = 1; i < n; i++)
{
wait_time[i] = 0;
for(j = 0; j < i; j++)
{

wait_time[i] = wait_time[i] + ser_time[j];
}
sum = sum + wait_time[i];
}
avg_wait_time = (float)sum / n;
sum = 0;
printf("\nProcess ID\t\tService Time\t Waiting Time\t Turnaround Time\n");
for(i = 0; i < n; i++)
{
turn_time[i] = ser_time[i] + wait_time[i];
sum = sum + turn_time[i];
printf("\nProcess[%d]\t\t%d\t\t %d\t\t %d",process[i],ser_time[i],wait_time[i],
turn_time[i]);
}
avg_turn_time = (float)sum / n;
printf("\n\n Average Waiting Time: %f", avg_wait_time);
printf("\nAverage Turnaround Time: %f", avg_turn_time);
getch();
}
OUTPUT:
How many Processes?3
Enter execution time for Process[1]: 3
Enter execution time for Process[2]: 6
Enter execution time for Process[3]: 4
Process ID Service Time Waiting Time Turnaround Time

Process[1] 3 0 3
Process[3] 4 3 7
Process[2] 6 7 13
Average Waiting Time: 3.333333
Average Turnaround Time: 7.666667

4. ROUND ROBIN SCHEDULING

Aim: A program to implement Round Robin (RR) Scheduling algorithm


PROGRAM:
/* a program to implement Round Robin (RR) process scheduling algorithm */
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
int main()
{
int i, n, total = 0, x, counter = 0, q_time;
char p[10][10];
int wait_time = 0, turn_time = 0, arr_time[10], ser_time[10], temp[10];
float avg_wait_time, avg_turn_time;
clrscr();
printf("How many Processes?");
scanf("%d", &n);
x = n;
for(i = 0; i < n; i++)
{
printf("\n Enter Process %d Name: " ,i+1);
scanf("%s",p[i]);
printf("Arrival Time:");
scanf("%d", &arr_time[i]);
printf("Service Time:");
scanf("%d", &ser_time[i]);
temp[i] = ser_time[i];

}
printf("\nEnter Quantum Time:");
scanf("%d", &q_time);
printf("\nProcess Name\t\tService Time\t Turnaround Time\t Waiting Time\n");
total=0;
i=0;
while (x!=0)
{
if(temp[i] <= q_time && temp[i] > 0)
{
total = total + temp[i];
temp[i] = 0;
counter = 1;
}
else if(temp[i] > 0)
{
temp[i] = temp[i] - q_time;
total = total + q_time;
}
if(temp[i] == 0 && counter == 1)
{
x--;
printf("\nProcess %s\t\t%d\t\t %d\t\t\t %d",p[i],ser_time[i],
total-arr_time[i], total-arr_time[i]-ser_time[i]);
wait_time = wait_time + total - arr_time[i] - ser_time[i];
turn_time = turn_time + total - arr_time[i];
counter = 0;
}
if(i == n - 1)
{
i = 0;
}
else if(arr_time[i+1] <= total)
{
i++;
}
else
{
i=0;
}
}
avg_wait_time = wait_time * 1.0 / n;
avg_turn_time = turn_time * 1.0 / n;
printf("\n\n Average Waiting Time: %f", avg_wait_time);
printf("\n Average Turnaround Time: %f", avg_turn_time);
getch();
}

OUTPUT:
How many Processes?3

Enter Process 1 Name: A


Arrival Time:0
Service Time:3

Enter Process 2 Name: B


Arrival Time:2
Service Time:6

Enter Process 3 Name: C


Arrival Time:4
Service Time:4

Enter Quantum Time:1
Process Name Service Time Turnaround Time Waiting Time
Process A 3 4 1
Process C 4 8 4
Process B 6 11 5
Average Waiting Time: 3.333333
Average Turnaround Time: 7.666667

5. DEADLOCK DETECTION
Aim: A program to implement deadlock detection algorithm
PROGRAM:
#include<stdio.h>
#include<conio.h>
void main()
{
int found,flag,l,p[4][5],tp,tr,c[4][5],i,j,k=1,m[5],r[5],a[5],temp[5],sum=0;
clrscr();
printf("How many processes:");
scanf("%d",&tp);
printf("\n How many resources:");
scanf("%d",&tr);
printf("Enter number of resource units for each resource:\n");
for(i=1;i<=tr;i++)
scanf("%d",&r[i]);
printf("enter maximum resources for each process\n");
for(i=1;i<=tp;i++)
for(j=1;j<=tr;j++)
{
scanf("%d",&c[i][j]);
}
printf("Enter allocated resources for each process\n");
for(i=1;i<=tp;i++)
for(j=1;j<=tr;j++)
{
scanf("%d",&p[i][j]);
}
printf("enter availability vector:\n");
for(i=1;i<=tr;i++)
{
scanf("%d",&a[i]);
temp[i]=a[i];
}
for(i=1;i<=tp;i++)
{
sum=0;
for(j=1;j<=tr;j++)
sum+=p[i][j];
if(sum==0)
{

m[k]=i;
k++;
}
}
for(i=1;i<=tp;i++)
{
found=0;
for(l=1;l<k;l++)
if(i==m[l])
found=1;
if(found==1)
continue;
flag=1;
for(j=1;j<=tr;j++)
if(c[i][j]>temp[j])
{
flag=0;
break;
}
if(flag==1)
{
m[k]=i;
k++;
for(j=1;j<=tr;j++)
temp[j]+=p[i][j];
}
}
printf("deadlock causing processes are:");
for(j=1;j<=tp;j++)
{
found=0;
for(i=1;i<k;i++)
{
if(j==m[i])
found=1;
}
if(found==0)
printf("%d\t",j);
}
getch();
}
OUTPUT:
How many processes:2
How many resources:3
Enter number of resource units for each resource:
333
enter maximum resources for each process
222
222
Enter allocated resources for each process
111
111
enter availability vector:
111
deadlock causing processes are: 1 2
6. DEADLOCK AVOIDANCE
AIM: A program to implement deadlock avoidance algorithm
PROGRAM:
/* A program to implement deadlock Avoidance algorithm */

#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
int allocated[15][15],max[15][15],need[15][15];
int avail[15],tres[15],work[15],flag[15];
int pno,rno,i,j,prc,count,t,total;
count=0;
clrscr();
printf("Enter number of processes:");
scanf("%d",&pno);
printf("Enter number of resources:");
scanf("%d",&rno);
for(i=1;i<=pno;i++)
flag[i]=0;
printf("Enter number of resource units for each resource:");
for(i=1;i<= rno;i++)
scanf("%d",&tres[i]);
printf("Enter Maximum resources for each process:");
for(i=1;i<= pno;i++)
{
printf("\n For process%d :",i);
for(j=1;j<= rno;j++)
scanf("%d",&max[i][j]);
}
printf("Enter allocated resources for each process:");
for(i=1;i<= pno;i++)
{
printf("\n For process%d :",i);
for(j=1;j<= rno;j++)
scanf("%d",&allocated[i][j]);
}
printf("Available resources:\n");
for(j=1;j<= rno;j++)
{
avail[j]=0;
total=0;
for(i=1;i<= pno;i++)
total+=allocated[i][j];
avail[j]=tres[j]-total;
work[j]=avail[j];
printf("%d \t",work[j]);
}
do
{
for(i=1;i<= pno;i++)
{
for(j=1;j<= rno;j++)
need[i][j]=max[i][j]-allocated[i][j];
}
printf("\n Allocated matrix \t Max \t Need");
for(i=1;i<= pno;i++)
{
printf("\n");
for(j=1;j<= rno;j++)
printf("%4d",allocated[i][j]);
printf("\t\t|");
for(j=1;j<= rno;j++)
printf("%4d",max[i][j]);
printf("\t|");

for(j=1;j<= rno;j++)
printf("%4d",need[i][j]);
}
prc=0;
for(i=1;i<= pno;i++)
{
if(flag[i]==0)
{
prc=i;
for(j=1;j<= rno;j++)
{
if(work[j]< need[i][j])
{
prc=0;
break;
}
}
}
if(prc!=0)
break;
}
if(prc!=0)
{
printf("\n Process %d completed",i);
count++;
printf("\n Available matrix:");
for(j=1;j<= rno;j++)
{
work[j]+=allocated[prc][j];
allocated[prc][j]=0;
max[prc][j]=0;
flag[prc]=1;
printf("%d",work[j]);
}
}
} while(count!=pno && prc!=0);
if(count==pno)
printf("\nThe system is in a safe state!!");
else
printf("\nThe system is in an unsafe state!!");
getch();
}

OUTPUT:
Enter number of processes:2
Enter number of resources:3
Enter number of resource units for each resource:3 3 3
Enter Maximum resources for each process:
For process1 :1 1 1
For process2 :1 1 1
Enter allocated resources for each process:
For process1 :1 1 1
For process2 :1 1 1
Available resources:
1 1 1
Allocated matrix    Max        Need
1 1 1             | 1 1 1    | 0 0 0
1 1 1             | 1 1 1    | 0 0 0
Process 1 completed
Available matrix: 2 2 2
Allocated matrix    Max        Need
0 0 0             | 0 0 0    | 0 0 0
1 1 1             | 1 1 1    | 0 0 0
Process 2 completed
Available matrix: 3 3 3
The system is in a safe state!!

7. Sequential File Allocation Strategy


AIM:-
/* Program to simulate Sequential File Allocation Strategy */
Program:
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
int f[50], i, st, len, j, c, k, count = 0;
clrscr();
for(i=0;i<50;i++)
f[i]=0;
printf("Files Allocated are : \n");
x: count=0;
printf("Enter starting block and length of files: ");
scanf("%d%d", &st,&len);
for(k=st;k<(st+len);k++)
if(f[k]==0)
count++;
if(len==count)
{
for(j=st;j<(st+len);j++)
if(f[j]==0)
{
f[j]=1;
printf("%d\t%d\n",j,f[j]);
}
if(j!=(st+len-1))
printf("The file is allocated to disk\n");
}
else
printf("The file is not allocated \n");
printf("Do you want to enter more file(Yes - 1/No - 0)");
scanf("%d", &c);
if(c==1)
goto x;
else
exit(0);
getch();
}

OUTPUT:-

Files Allocated are :


Enter starting block and length of files: 14 3
14 1
15 1
16 1
The file is allocated to disk
Do you want to enter more file(Yes - 1/No - 0)1
Enter starting block and length of files: 14 1
The file is not allocated
Do you want to enter more file(Yes - 1/No - 0)1
Enter starting block and length of files: 14 4
The file is not allocated
Do you want to enter more file(Yes - 1/No - 0)0

8. Indexed File Allocation Strategy


AIM:-
/* Program to simulate Indexed File Allocation Strategy */
Program :
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
int f[50], index[50],i, n, st, len, j, c, k, ind,count=0;
clrscr();
for(i=0;i<50;i++)
f[i]=0;
x:printf("Enter the index block: ");
scanf("%d",&ind);
if(f[ind]!=1)
{
printf("Enter no of blocks needed and no of files for the index %d on the disk : \n", ind);
scanf("%d",&n);
}
else
{
printf("%d index is already allocated \n",ind);
goto x;
}
y: count=0;
for(i=0;i<n;i++)
{
scanf("%d", &index[i]);
if(f[index[i]]==0)
count++;
}
if(count==n)
{
for(j=0;j<n;j++)
f[index[j]]=1;
printf("Allocated\n");
printf("File Indexed\n");
for(k=0;k<n;k++)
printf("%d-------->%d : %d\n",ind,index[k],f[index[k]]);
}

else
{
printf("File in the index is already allocated \n");
printf("Enter another file indexed");
goto y;
}
printf("Do you want to enter more file(Yes - 1/No - 0)");
scanf("%d", &c);
if(c==1)
goto x;
else
exit(0);
getch();
}

OUTPUT:
Enter the index block: 5
Enter no of blocks needed and no of files for the index 5 on the disk :
4
1234
Allocated
File Indexed
5-------->1 : 1
5-------->2 : 1
5-------->3 : 1
5-------->4 : 1
Do you want to enter more file(Yes - 1/No - 0)1
Enter the index block: 4
4 index is already allocated
Enter the index block: 6
Enter no of blocks needed and no of files for the index 6 on the disk :
2
78
Allocated
File Indexed
6-------->7 : 1
6-------->8 : 1
Do you want to enter more file(Yes - 1/No - 0)0

Dear students, no one can predict your future. Don't look back; always step
forward with confidence.

