OS Unit IV Notes
Deadlocks: Resources, Conditions for resource deadlocks, Ostrich algorithm, Deadlock detection and
recovery, Deadlock avoidance, Deadlock prevention.
File Systems: Files, Directories, File system implementation, management and optimization.
Secondary-Storage Structure: Overview of disk structure and attachment, Disk scheduling, RAID
structure, Stable storage implementation.
Deadlocks
When a process requests a resource that is not available at that time, the process enters a
waiting state. Sometimes a waiting process can never change state again, because the resources
it has requested are held by other waiting processes. This situation is called deadlock.
System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources are partitioned into several types, each consisting of some
number of identical instances.
A process must request a resource before using it and must release the resource after
using it. It may request as many resources as it needs to carry out its designated task, but the
number of resources requested may not exceed the total number of resources available in the
system.
A process may utilize a resource in only the following sequence:
Request:- If the request cannot be granted immediately, the requesting process must wait until it
can acquire the resource.
Use:- The process can operate on the resource.
Release:- The process releases the resource.
Deadlock may involve different types of resources. For example, consider a system with one printer
and one tape drive, where process Pi currently holds the printer and process Pj holds the tape drive.
If Pi requests the tape drive and Pj requests the printer, a deadlock occurs.
Multithreaded programs are good candidates for deadlock because multiple threads compete for
shared resources.
Deadlock Characterization:
Necessary Conditions: A deadlock situation can arise only if the following four conditions hold
simultaneously in a system:
1. Mutual Exclusion: Only one process at a time can hold the resource. If any other process
requests the resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently held by other processes.
3. No Preemption: Resources cannot be preempted; a resource can be released only voluntarily
by the process holding it, after that process has completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource
held by Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.
Resource Allocation Graph:
Deadlocks can be described using a directed graph called a system resource-allocation
graph. The graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all active processes, and R = {R1, R2, ..., Rm}, the set consisting of all resource
types in the system.
A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that Pi has
requested an instance of resource type Rj and is currently waiting for it. This edge is called a
request edge.
A directed edge Rj -> Pi signifies that an instance of resource type Rj has been allocated to
process Pi. This is called an assignment edge.
(Example resource-allocation graph with resource types R1, R2, R3 and R4 omitted.)
If the graph contains no cycle, then no process in the system is deadlocked. If the graph
contains a cycle, then a deadlock may exist. If each resource type has exactly one instance, then
a cycle implies that a deadlock has occurred. If each resource type has several instances, then a
cycle does not necessarily imply that a deadlock has occurred.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention
or a deadlock-avoidance scheme.
Deadlock prevention is a set of methods for ensuring that at least one of the necessary
conditions cannot hold.
Deadlock avoidance requires that the OS be given, in advance, additional information about
which resources a process will request and use during its lifetime.
If a system uses neither deadlock avoidance nor deadlock prevention, then a deadlock situation
may occur. In this environment the system can provide an algorithm that examines the state of the
system to determine whether a deadlock has occurred, together with an algorithm to recover from
the deadlock. Undetected deadlocks result in deterioration of system performance.
Deadlock Prevention
For a deadlock to occur each of the four necessary conditions must hold. If at least one of
these conditions cannot hold, then we can prevent the occurrence of deadlock.
Mutual Exclusion: This condition holds for non-sharable resources. For example, a printer can be
used by only one process at a time.
Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be
involved in a deadlock. Read-only files are a good example of a sharable resource: a process never
needs to wait to access one. In general, however, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically non-sharable.
Hold and Wait: This condition can be eliminated by requiring a process to release all the resources
it holds whenever it requests a resource that is not available. One protocol that can be used is to
require each process to be allocated all of its resources before it starts execution.
Eg:- Consider a process that copies data from a tape drive to a disk file, sorts the file and then
prints the results to a printer. If all the resources must be allocated at the beginning, then the tape
drive, disk file and printer are assigned to the process for its entire lifetime. The main problem with
this is low resource utilization: the printer is needed only at the end, yet it is held from the
beginning, so no other process can use it.
Another protocol is to allow a process to request resources only when it holds none. The process is
first allocated the tape drive and disk file, performs the required operation and releases both; it then
requests the disk file and the printer. The problem with this protocol is that starvation is possible.
No Preemption: To ensure that this condition never holds, resources must be preemptible.
The following protocol can be used: if a process is holding some resources and requests another
resource that cannot be immediately allocated to it, then all the resources it is currently holding are
preempted and added to the list of resources for which other processes may be waiting. The process
will be restarted only when it can regain its old resources as well as the new ones it is requesting.
Alternatively, when a process requests resources, we check whether they are available. If they are,
we allocate them; otherwise we check whether they are allocated to some other process that is
itself waiting. If so, we preempt those resources from the waiting process and allocate them to the
requesting process. If the resources are neither available nor held by a waiting process, the
requesting process must wait.
Circular Wait:- The fourth and final condition for deadlock is the circular wait condition. One way
to ensure that this condition never holds is to impose a total ordering on all resource types and to
require that each process request resources in increasing order of enumeration.
Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type a unique
integer value, which allows us to compare two resources and determine whether one precedes the
other in the ordering. For example, we can define a one-to-one function F: R -> N, where N is the
set of natural numbers. A sketch of this protocol is shown below.
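The following Python sketch (not from these notes; the resource names and ranks are hypothetical)
illustrates the resource-ordering protocol: each resource is assigned a rank, and a thread always
acquires the resources it needs in increasing rank order, so a circular wait cannot form.

```python
import threading

# Hypothetical resources with an assumed global ordering: every resource
# gets a unique integer rank, and every thread must acquire resources in
# increasing rank order.
tape_drive = threading.Lock()   # rank 1
disk_file = threading.Lock()    # rank 5
printer = threading.Lock()      # rank 12

RANK = {id(tape_drive): 1, id(disk_file): 5, id(printer): 12}

def acquire_in_order(*resources):
    """Acquire the given locks in increasing rank order.

    Because every thread follows the same ordering, no circular wait
    can form, so this protocol prevents deadlock."""
    for lock in sorted(resources, key=lambda r: RANK[id(r)]):
        lock.acquire()

def release_all(*resources):
    for lock in resources:
        lock.release()

# A job that needs the disk file and the printer always takes the disk
# file (rank 5) before the printer (rank 12), even if it uses the
# printer first.
acquire_in_order(printer, disk_file)
# ... copy, sort and print ...
release_all(printer, disk_file)
```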
Deadlock Avoidance
Deadlock-prevention algorithms may lead to low device utilization and reduced system
throughput. Avoiding deadlocks instead requires additional information about how resources are to
be requested. With knowledge of the complete sequence of requests and releases we can decide,
for each request, whether or not the process should wait.
For each request, the system considers the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process, to decide whether
the current request can be satisfied or must wait in order to avoid a possible future deadlock.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is defined by
the number of available and allocated resources and the maximum demands of the processes.
Safe State:
A state is safe if there exists at least one order in which all the processes can run to
completion without resulting in a deadlock. A system is in a safe state only if there exists a
safe sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if,
for each Pi, the resources that Pi can still request can be satisfied by the currently available
resources plus the resources held by all Pj with j < i.
If the resources that Pi needs are not immediately available, then Pi can wait until all Pj have
finished and released their resources, at which point Pi can obtain all of its needed resources,
complete its designated task and release them in turn.
A safe state is not a deadlocked state.
Whenever a process requests a resource that is currently available, the system must decide
whether the resource can be allocated immediately or whether the process must wait. The request
is granted only if the allocation leaves the system in a safe state.
With this scheme, a process may have to wait even when the resources it requests are currently
available. Thus resource utilization may be lower than it would be without a deadlock-avoidance
algorithm.
Resource Allocation Graph Algorithm:
This algorithm is used only when every resource type has exactly one instance. In addition to the
request edges and assignment edges, a new edge called a claim edge is used. A claim edge Pi -> Rj
indicates that process Pi may request Rj in the future; it is represented by a dashed line. When
process Pi actually requests resource Rj, the claim edge is converted to a request edge. When
resource Rj is released by Pi, the assignment edge Rj -> Pi is converted back to the claim edge
Pi -> Rj.
When a process Pi requests resource Rj, the request is granted only if converting the request edge
Pi -> Rj into an assignment edge Rj -> Pi does not create a cycle in the graph. A cycle-detection
algorithm is used to check this; if no cycle exists, the allocation leaves the system in a safe state,
otherwise Pi must wait. A sketch of such a check is shown below.
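As an illustration of the check described above, here is a minimal cycle-detection sketch in Python
(an assumption for illustration, not part of the original notes): the graph is a dictionary mapping
each vertex to its successors, with processes and resource types both appearing as vertices.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}.

    Nodes may be process names ('P1') or resource names ('R1');
    request edges go P -> R and assignment edges go R -> P."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    colour = {node: WHITE for node in graph}

    def dfs(node):
        colour[node] = GREY
        for succ in graph.get(node, []):
            if colour.get(succ, WHITE) == GREY:   # back edge: a cycle exists
                return True
            if colour.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in list(graph))

# Hypothetical single-instance system: P1 holds R1 and requests R2,
# while P2 holds R2 and requests R1 -> the graph has a cycle (deadlock).
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))   # True
```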
Banker’s Algorithm:
This algorithm is applicable to systems with multiple instances of each resource type, but it is
less efficient than the resource-allocation graph algorithm.
When a new process enters the system it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a request is made, the system must determine whether allocating the resources will
leave the system in a safe state. If so, the resources are allocated; otherwise the process must wait
until other processes release enough resources.
Several data structures are used to implement the banker's algorithm. Let n be the
number of processes in the system and m be the number of resource types. We need the
following data structures:
Available:- A vector of length m indicating the number of available resources of each type. If
Available[j]=k, then k instances of resource type Rj are available.
Max:- An n*m matrix defining the maximum demand of each process. If Max[i,j]=k, then Pi may
request at most k instances of resource type Rj.
Allocation:- An n*m matrix defining the number of resources of each type currently allocated
to each process. If Allocation[i,j]=k, then Pi is currently allocated k instances of resource type Rj.
Need:- An n*m matrix indicating the remaining resource need of each process. If Need[i,j]=k,
then Pi may need k more instances of resource type Rj to complete its task. So
Need[i,j] = Max[i,j] - Allocation[i,j].
Safety Algorithm:
This algorithm is used to find out whether or not a system is in a safe state.
Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available and Finish[i] = false for i = 1, 2, ..., n.
Step 2. Find an i such that both Finish[i] = false and Need(i) <= Work. If no such i exists, go to
step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe state. This algorithm may require
an order of m*n*n operations to decide whether a state is safe.
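A minimal Python sketch of the safety algorithm is shown below. The example data at the bottom is
hypothetical, chosen in the style of the usual textbook example, and is not taken from these notes.

```python
def is_safe(available, allocation, need):
    """Banker's safety test: return (is_safe, safe_sequence).

    available  : list of length m
    allocation : n x m matrix (list of lists)
    need       : n x m matrix, where need = max - allocation"""
    n, m = len(allocation), len(available)
    work = available[:]                  # Step 1: Work = Available
    finish = [False] * n                 #         Finish[i] = false
    sequence = []

    while True:
        for i in range(n):               # Step 2: find i with Finish[i] false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):       # Step 3: Pi finishes, releases resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                        # no such i: fall through to Step 4

    return all(finish), sequence         # Step 4: safe iff every Pi can finish

# Hypothetical data in the textbook style (not taken from these notes):
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # (True, [1, 3, 0, 2, 4])
```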
Resource Request Algorithm: Let Request(i) be the request vector for process Pi. If
Request(i)[j]=k, then process Pi wants k instances of resource type Rj. When a request for
resources is made by process Pi, the following actions are taken:
1. If Request(i) <= Need(i), go to step 2; otherwise raise an error condition, since the process has
exceeded its maximum claim.
2. If Request(i) <= Available, go to step 3; otherwise Pi must wait, since the resources are not
available.
3. If the system decides to allocate the requested resources to Pi, it pretends to do so by modifying
the state as follows:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
If the resulting resource-allocation state is safe, the transaction is completed and Pi is
allocated its resources. If the new state is unsafe, then Pi must wait for Request(i) and the old
resource-allocation state is restored.
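Continuing the sketch above, the resource-request algorithm can be expressed as follows; it calls
the is_safe() function from the previous sketch, and all names are illustrative assumptions rather
than a definitive implementation.

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request algorithm (sketch; uses is_safe() from above).

    Returns True and commits the allocation if it leaves the system safe;
    otherwise restores the old state and returns False (Pi must wait)."""
    m = len(available)

    # Step 1: the request may not exceed the process's remaining need.
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")

    # Step 2: if the resources are not available, the process must wait.
    if any(request[j] > available[j] for j in range(m)):
        return False

    # Step 3: pretend to allocate, then test the resulting state for safety.
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]

    safe, _ = is_safe(available, allocation, need)
    if safe:
        return True                       # allocation committed

    # Unsafe: roll back to the old resource-allocation state.
    for j in range(m):
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False
```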
Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm,
then a deadlock situation may occur. In this environment the system may provide:
o An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
o An algorithm to recover from the deadlock.
The wait-for graph is applicable only when each resource type has a single instance. The
following algorithm applies when there are several instances of a resource type. These data
structures are used:
o Available:- A vector of length m indicating the number of available resources of each type.
o Allocation:- An n*m matrix defining the number of resources of each type currently
allocated to each process.
o Request:- An n*m matrix indicating the current request of each process. If Request[i,j]=k,
then Pi is requesting k more instances of resource type Rj.
Step 1. Let Work and Finish be vectors of length m and n respectively. Initialize Work =
Available. For i = 1, 2, ..., n, if Allocation(i) != 0 then Finish[i] = false; otherwise Finish[i] = true.
Step 2. Find an index i such that both Finish[i] = false and Request(i) <= Work. If no such i
exists, go to step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a deadlocked state;
moreover, each process Pi with Finish[i] = false is deadlocked. This algorithm requires an order
of m*n*n operations to detect whether the system is in a deadlocked state.
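A short Python sketch of this detection algorithm follows; it differs from the safety algorithm only
in using the Request matrix instead of Need and in how Finish is initialized. The example data is
hypothetical.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances.

    allocation and request are n x m matrices. Returns the list of
    indices of deadlocked processes (an empty list means no deadlock)."""
    n, m = len(allocation), len(available)
    work = available[:]
    # Step 1: a process holding no resources cannot be part of a deadlock.
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]

    progress = True
    while progress:                       # Steps 2 and 3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True

    # Step 4: any process still not finished is deadlocked.
    return [i for i in range(n) if not finish[i]]

# Hypothetical example: one resource type with no free instance, and
# P0 and P1 each hold one instance while requesting the other's.
print(detect_deadlock([0], [[1], [1]], [[1], [1]]))   # [0, 1]
```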
Ostrich Algorithm (Deadlock Ignorance)
The ostrich algorithm means that deadlock is simply ignored and it is assumed that it will never
occur. This is done because in some systems the cost of handling deadlocks is much higher than the
cost of simply ignoring them, since they occur very rarely. So it is simply assumed that deadlock
will never occur, and the system is rebooted if it does occur by any chance.
This approach is widely used in general-purpose operating systems. Deadlocks can occur when
processes have been granted exclusive access to devices, files, and so on, and they can also occur
across machines.
Briefly, the ostrich algorithm is about ignoring deadlock if it occurs and simply hoping that it never
will. In general this is a reasonable strategy: a system can run for years without any occurrence of
deadlock. If the operating system has a deadlock prevention or detection mechanism in place, it may
have a negative impact on performance, because whenever a process or thread requests a resource,
the system must check whether granting it could cause a potential deadlock. This check is performed
even when no deadlock is occurring, and it slows the system down.
MASS-STORAGE STRUCTURE INTRODUCTION:
Overview of Mass Storage Structure
Magnetic disks provide the bulk of secondary storage in modern computers
– Drives rotate at 60 to 200 times per second
– Transfer rate is the rate at which data flow between the drive and the computer
– Positioning time (random-access time) is the time to move the disk arm to the desired
cylinder (seek time) plus the time for the desired sector to rotate under the disk head (rotational latency)
– A head crash results from the disk head making contact with the disk surface
Disks can be removable
Drive attached to computer via I/O bus
– Busses vary, including SATA, USB, Fibre Channel, SCSI
– Host controller in computer uses bus to talk to disk controller built into drive or
storage array
• Magnetic tape
– Was early secondary-storage medium
– Relatively permanent and holds large quantities of data
– Access time slow
– Random access ~1000 times slower than disk
– Mainly used for backup, storage of infrequently-used data, transfer medium between systems
– Once data under head, transfer rates comparable to disk
– 20-200GB typical storage
Disk Structure
• Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical
block is the smallest unit of transfer.
• The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially.
– Sector 0 is the first sector of the first track on the outermost cylinder.
– Mapping proceeds in order through that track, then the rest of the tracks in that cylinder, and
then through the rest of the cylinders from outermost to innermost.
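As a small illustration of this sequential mapping, the sketch below converts a (cylinder, track,
sector) address into a logical block number under an assumed, idealized geometry; the constants
are hypothetical, and real disks complicate the mapping with zoned recording and spare sectors.

```python
# Assumed, idealized geometry (real disks use zoned recording, where outer
# tracks hold more sectors, so the real mapping is more complicated).
TRACKS_PER_CYLINDER = 4      # hypothetical number of surfaces/heads
SECTORS_PER_TRACK = 64       # hypothetical sectors per track

def logical_block(cylinder, track, sector):
    """Map a (cylinder, track, sector) address to a logical block number,
    numbering sequentially through the track, then the rest of the tracks
    in the cylinder, then the remaining cylinders from outermost inward."""
    return (cylinder * TRACKS_PER_CYLINDER + track) * SECTORS_PER_TRACK + sector

print(logical_block(0, 0, 0))   # 0: first sector of the outermost cylinder
print(logical_block(2, 1, 5))   # 581 = (2*4 + 1)*64 + 5
```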
Disk Attachment
• Host-attached storage accessed through I/O ports talking to I/O busses
• SCSI itself is a bus, with up to 16 devices on one cable; the SCSI initiator requests an operation
and the SCSI targets perform the tasks
– Each target can have up to 8 logical units (disks attached to the device controller)
• FC is high-speed serial architecture
– Can be switched fabric with 24-bit address space – the basis of storage area networks
(SANs) in which many hosts attach to many storage units
Network-Attached Storage
• Network-attached storage (NAS) is storage made available over a network rather than over a
local connection (such as a bus)
• NFS and CIFS are common protocols
Storage Area Network
Disk Scheduling
FCFS
• Requests are serviced in the order in which they arrive; simple and fair, but it generally does
not provide the fastest service.
SSTF
• Selects the request with the minimum seek time from the current head position.
• SSTF scheduling is a form of SJF scheduling; may cause starvation of some requests.
• Illustration shows total head movement of 236 cylinders.
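Because the illustration itself is not reproduced here, the sketch below assumes the request queue
commonly used with this figure (cylinders 98, 183, 37, 122, 14, 124, 65, 67, head at 53); with that
assumption it reproduces the 236-cylinder total.

```python
def sstf(head, requests):
    """Shortest-Seek-Time-First: repeatedly service the pending request
    closest to the current head position. Returns (service order, total
    head movement in cylinders)."""
    pending = list(requests)
    order, total = [], 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, total

# Assumed textbook queue, head initially at cylinder 53.
print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# ([65, 67, 37, 14, 98, 122, 124, 183], 236)
```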
SCAN
• The disk arm starts at one end of the disk, and moves toward the other end, servicing requests
until it gets to the other end of the disk, where the head movement is reversed and servicing
continues.
• Sometimes called the elevator algorithm.
• Illustration shows total head movement of 208 cylinders.
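Here is a sketch of SCAN using the same assumed queue, with the arm initially moving toward
cylinder 0. Note that a full sweep to cylinder 0 yields 236 cylinders of movement for this queue;
the 208-cylinder figure quoted above appears to correspond to the LOOK variant, which reverses at
the last request rather than at the disk edge.

```python
def scan(head, requests, disk_size=200, towards_zero=True):
    """SCAN (elevator): sweep in one direction to the end of the disk,
    servicing requests on the way, then reverse. Returns (order, movement)."""
    lower = sorted(c for c in requests if c < head)
    upper = sorted(c for c in requests if c >= head)
    order, total = [], 0
    if towards_zero:
        for c in reversed(lower):            # sweep down, servicing requests
            total += head - c
            head = c
            order.append(c)
        total += head                        # continue on to cylinder 0
        head = 0
        for c in upper:                      # reverse and sweep up
            total += c - head
            head = c
            order.append(c)
    else:
        for c in upper:
            total += c - head
            head = c
            order.append(c)
        total += (disk_size - 1) - head      # continue to the last cylinder
        head = disk_size - 1
        for c in reversed(lower):
            total += head - c
            head = c
            order.append(c)
    return order, total

print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# ([37, 14, 65, 67, 98, 122, 124, 183], 236)
```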
C-SCAN
• Provides a more uniform wait time than SCAN.
• The head moves from one end of the disk to the other, servicing requests as it goes. When it
reaches the other end, however, it immediately returns to the beginning of the disk without
servicing any requests on the return trip.
C-LOOK
• Version of C-SCAN
• The arm only goes as far as the final request in each direction, then reverses direction
immediately, without first going all the way to the end of the disk.
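A sketch of C-LOOK on the same assumed queue; whether the long return seek is counted in the
total varies between presentations, and it is counted here.

```python
def c_look(head, requests):
    """C-LOOK: service requests only in the upward direction, go only as far
    as the last request, then jump back to the lowest pending request and
    continue upward. The return jump is counted as head movement here."""
    upper = sorted(c for c in requests if c >= head)
    lower = sorted(c for c in requests if c < head)
    order, total, pos = upper + lower, 0, head
    for c in order:
        total += abs(c - pos)
        pos = c
    return order, total

# Assumed textbook queue, head at cylinder 53.
print(c_look(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# ([65, 67, 98, 122, 124, 183, 14, 37], 322)
```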
Disk Management
• To use a disk to hold files, the operating system records its own data structures on the disk.
– Partition the disk into one or more groups of cylinders.
– Logical formatting, or "making a file system".
• Boot block initializes system.
– The bootstrap is stored in ROM.
– Bootstrap loader program.
• Methods such as sector sparing are used to handle bad blocks.
Swap-Space Management
• Swap-space — Virtual memory uses disk space as an extension of main memory.
• Swap-space can be carved out of the normal file system, or, more commonly, it can be in a
separate disk partition.
• Swap-space management
– 4.3BSD allocates swap space when a process starts; it holds the text segment (the program) and
the data segment.
– Kernel uses swap maps to track swap-space use.
– Solaris 2 allocates swap space only when a page is forced out of physical memory, not when
the virtual memory page is first created.
Data Structures for Swapping on Linux Systems
RAID Structure
• RAID – multiple disk drives provide reliability via redundancy.
• RAID is arranged into six different levels.
• Several improvements in disk-use techniques involve the use of multiple disks working
cooperatively.
• Disk striping uses a group of disks as one storage unit.
• RAID schemes improve performance and improve the reliability of the storage system by
storing redundant data.
– Mirroring or shadowing keeps duplicate of each disk.
– Block interleaved parity uses much less redundancy.
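As an illustration of block-interleaved parity, the sketch below computes a parity block as the
byte-wise XOR of the data blocks and then rebuilds a lost block from the survivors; the block
contents are hypothetical.

```python
from functools import reduce

def parity(blocks):
    """Parity block = byte-wise XOR of the data blocks (as in block-
    interleaved parity schemes such as RAID 4/5)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Hypothetical 4-byte blocks striped across three data disks.
d0, d1, d2 = b"\x10\x20\x30\x40", b"\x01\x02\x03\x04", b"\xaa\xbb\xcc\xdd"
p = parity([d0, d1, d2])                 # stored on the parity disk

# If the disk holding d1 fails, its contents are rebuilt by XOR-ing
# the surviving data blocks with the parity block.
recovered = parity([d0, d2, p])
assert recovered == d1
print(recovered.hex())                   # 01020304
```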
RAID (0 + 1) and (1 + 0)
Stable-Storage Implementation
• Write-ahead log scheme requires stable storage.
• To implement stable storage:
– Replicate information on more than one nonvolatile storage medium with independent failure
modes.
– Update information in a controlled manner to ensure that we can recover the stable data after
any failure during data transfer or recovery.
• An application does not open a file on the tape; it opens the whole tape drive as a raw device.
• Usually the tape drive is reserved for the exclusive use of that application.
• Since the OS does not provide file system services, the application must decide how to use the
array of blocks.
• Since every application makes up its own rules for how to organize a tape, a tape full of data
can generally only be used by the program that created it.
Tape Drives
• The basic operations for a tape drive differ from those of a disk drive.
• locate positions the tape to a specific logical block, not an entire track (it corresponds to seek).
• The read position operation returns the logical block number where the tape head is.
• The space operation enables relative motion.
• Tape drives are "append-only" devices; updating a block in the middle of the tape also
effectively erases everything beyond that block.
• An EOT mark is placed after a block that is written.
File Naming
• The issue of naming files on removable media is especially difficult when we want to write
data on a removable cartridge on one computer, and then use the cartridge in another computer.
• Contemporary OSs generally leave the name space problem unsolved for removable media,
and depend on applications and users to figure out how to access and interpret the data.
• Some kinds of removable media (e.g., CDs) are so well standardized that all computers use
them the same way.
Storage Structure
A disk can be used in its entirety for a file system. But at times, it is desirable to place multiple file
systems on a disk or to use parts of a disk for a file system and other parts for other things. These
parts are known variously as partitions, slices or minidisks. A file system can be created on each
of these parts of the disk. The parts can also be combined to form larger structures known as
volumes, and file systems can be created on these too. Each volume can be thought of as a virtual
disk. Volumes can also store multiple OSs, allowing a system to boot and run more than one.
Each volume that contains a file system must also contain information about the files in the
system. This information is kept in entries in a device directory or volume table of contents.
The device directory/directory records information for all files on that volume.
Directory Overview
The directory can be viewed as a symbol table that translates file names into their directory
entries. The operations that can be performed on the directory are:
Search for a file
Create a file
Delete a file
List a directory
Rename a file
Traverse the file system
Single level directory
The simplest directory structure is the single-level directory. All files are contained in the same
directory, which is easy to support and understand. But this implementation has limitations when
the number of files increases or when the system has more than one user. Since all files are in the
same directory, all file names must be unique. Keeping track of so many files is a difficult task, and
even a single user of a single-level directory may find it hard to remember the names of all the
files as their number increases.
Two level directory
In the two-level directory structure, each user has his own user file directory (UFD). The
UFDs have similar structures, but each lists only the files of a single user. When a user job starts
or a user logs in, the system's master file directory (MFD) is searched. The MFD is indexed by
user name or account number, and each entry points to the UFD for that user. When a user refers
to a particular file, only his own UFD is searched. Different users may have files with the same
name as long as all the file names within each UFD are unique.
The root of the tree is the MFD. Its direct descendants are the UFDs, and the descendants of
the UFDs are the files themselves. The files are the leaves of the tree. The sequence of directories
searched when a file is named is called the search path.
Although the two-level directory structure solves the name-collision problem, it still has
disadvantages. This structure isolates one user from another. Isolation is an advantage when the
users are completely independent, but a disadvantage when the users want to cooperate on some
task and to access one another's files.
Tree structured directory
Here, we extend the two-level directory to a tree of arbitrary height. This generalization allows
users to create their own subdirectories and to organize their files accordingly. A tree is the most
common directory structure. The tree has a root directory and every file in the system has a unique
path name. A directory contains a set of files or sub directories. All directories have the same
internal format. One bit in each directory entry defines the entry as a file (0) or as a subdirectory
(1).
Each process has a current directory. The current directory should contain most of the files that
are of current interest to the process.
Path names can be of two types – absolute and relative. An absolute path name begins at the root
and follows a path down to the specified file giving the directory names on the path. A relative
path name defines a path from the current directory.
Deletion of directory under tree structured directory – If a directory is empty, its entry in the
directory that contains it can simply be deleted. If the directory to be deleted is not empty, then
use one of the two approaches –
User must first delete all the files in that directory
If a request is made to delete a directory, all the directory's files and subdirectories are also
deleted.
A path to a file in a tree structured directory can be longer than a path in a two level directory.
Acyclic graph directory
An acyclic graph (a graph with no cycles) allows directories to share subdirectories and files. The
same file or subdirectory may be in two different directories.
With a shared file, only one actual file exists. Sharing is particularly important for subdirectories.
Shared files and subdirectories can be implemented in several ways. One way is to create a new
directory entry called a link. A link is a pointer to another file or subdirectory. Another approach
in implementing shared files is to duplicate all information about them in both sharing directories.
An acyclic-graph directory structure is more flexible than a tree structure, but it is also more
complex. Several problems may arise, such as multiple absolute path names for the same file, or
deletion.
A problem with using an acyclic-graph structure is ensuring that there are no cycles. The primary
advantage of an acyclic graph is the relative simplicity of the algorithms to traverse the graph and
to determine when there are no more references to a file. If cycles are allowed to exist in the
directory, we want to avoid searching any component twice. A similar problem exists when we are
trying to determine when a file can be deleted. The difficulty is to avoid cycles as new links are
added to the structure.
File System Mounting
A file system must be mounted before it can be available to processes on the system. OS is given
the name of the device and a mount point – the location within the file structure where the file
system is to be attached. This mount point is an empty directory. Next, OS verifies that the device
contains a valid file system. It does so by asking the device driver to read the device directory and
verifying that the directory has the expected format. Finally OS notes in its directory structure that
a file system is mounted at the specified mount point.
File Sharing
File sharing is desirable for users who want to collaborate and to reduce the effort required to
achieve a computing goal.
Multiple users
When an OS accommodates multiple users, the issues of file sharing, file naming and file
protection become preeminent. System mediates file sharing. The system can either allow a user
to access the files of other users by default or require that a user specifically grant access to the
files.
Remote File Systems
Once a remote file system is mounted, file operation requests are sent on behalf of the user across
the network to the server via the DFS protocol.
Distributed Information Systems
To make client server systems easier to manage, distributed information systems also known as
distributed naming services provide unified access to the information needed for remote
computing. The domain name system provides host name to network address translations for the
entire Internet.
Failure Modes
Local file systems can fail for a variety of reasons, including failure of the disk containing the file
system, corruption of the directory structure or other disk-management information, disk-controller
failure, cable failure and host-adapter failure. User or system-administrator error can also cause
files to be lost or entire directories or volumes to be deleted. Many of these failures will cause a
host to crash and an error condition to be displayed, and human intervention will be required to
repair the damage.
Remote file systems have even more failure modes. In the case of networks, the network can be
interrupted between two hosts. Such interruptions can result from hardware failure, poor hardware
configuration or networking implementation issues.
For a recovery from a failure, some kind of state information may be maintained on both the client
and server.
Consistency semantics
These represent an important criterion for evaluating any file system that supports file sharing.
These semantics specify how multiple users of a system are to access a shared file simultaneously.
These are typically implemented as code within the file system.
Protection
When information is stored in a computer system, it should be kept safe from physical damage
(reliability) and improper access (protection).
Types of Access
Complete protection to files can be provided by prohibiting access. Systems that do not permit
access to the files of other users do not need protection. Both these approaches are extreme. Hence
controlled access is required.
Protection mechanisms provide controlled access by limiting the types of file access that can be
made. Access is permitted or denied depending on many factors. Several different types of
operations may be controlled –
i. Read
ii. Write
iii. Execute
iv. Append
v. Delete
vi. List
Access Control
The most common approach to the protection problem is to make access dependent on the identity
of the user. The most general scheme to implement identity-dependent access is to associate with
each file and directory an access-control list (ACL) specifying user names and the types of access
allowed for each user.
This approach has the advantage of enabling complex access methodologies. The main problem
with access lists is their length. To condense the length of the access-control list, many systems
recognize three classifications of users in connection with each file:
i. Owner – the user who created the file.
ii. Group – a set of users who are sharing the file and need similar access.
iii. Universe – all other users in the system.
With this more limited protection classification, only three fields are needed to define protection.
Each field is a collection of bits, and each bit either allows or prevents the access associated with
it. A separate field is kept for the file owner, for the file's group, and for all other users.
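As an illustration of such bit fields, the sketch below uses the UNIX-style permission constants
from Python's stat module (an example chosen here, not something the notes prescribe) to encode
and test owner/group/other read bits.

```python
import stat

# A UNIX-style mode keeps three 3-bit fields: owner, group, others.
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH   # rw-r--r--

def can_read(mode, is_owner, in_group):
    """Test the read bit in whichever field applies to this user."""
    if is_owner:
        return bool(mode & stat.S_IRUSR)
    if in_group:
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)

print(oct(mode))                                        # 0o644
print(can_read(mode, is_owner=False, in_group=False))   # True: others may read
```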
Another approach to the protection problem is to associate a password with each file. If the
passwords are chosen randomly and changed often, this scheme may be effective in limiting access
to a file. However, it has disadvantages:
1. The number of passwords that a user needs to remember may become large, making the
scheme impractical.
2. If only one password is used for all the files, then once it is discovered, all files are
accessible.
SOLVED PROBLEMS:
1. Deadlock avoidance
2. Deadlock detection
3. Disk scheduling algorithms
PART- A QUESTIONS(2Marks):
1. What is a file?
2. What is a single-level directory?
3. What is a tree-structured directory?
4. What are the different allocation methods?
5. Difference between primary storage and secondary storage.
6. List different file attributes and file types.
7. What is file access mechanism? And list out the access mechanisms.
8. When designing the file structure for an operating system, what attributes are considered?
9. What is Free Space Management?
UNIT-4
PART- A QUESTIONS(2Marks):
1. What is a file?
A file is an abstract data type defined and implemented by the operating system. It is a sequence
of logical records.
2. What is a single-level directory?
A single-level directory in a multiuser system causes naming problems, since each file must have
a unique name.
3. What is a tree-structured directory?
A tree-structured directory has a root directory, and every file in the system has a unique path
name. A directory contains a set of files or subdirectories.
PART- B QUESTIONS(10Marks):