Introduction to Operating Systems
An operating system is a program that acts as an interface between the user and
the computer hardware, and controls the execution of all kinds of application
programs and auxiliary system software (i.e., utilities).
Files: A collection of data or information that has a name, called the filename.
Almost all information stored in a computer must be in a file. There are many
different types of files: data files, text files, program files, directory files, and so
on. Different types of files store different types of information. For example,
program files store programs, whereas text files store text.
A system call is a way for programs to interact with the operating system. A
computer program makes a system call when it makes a request to the operating
system's kernel. System calls are used for hardware services, to create or execute
a process, and for communicating with kernel services, including application and
process scheduling.
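To make this concrete, here is a minimal sketch (assuming a POSIX/Linux system and a C compiler) in which a user program requests kernel services through the usual thin library wrappers; write() and getpid() below each correspond to one system call.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* write() is a thin wrapper around the kernel's write system call:
         * the process asks the kernel to send bytes to file descriptor 1. */
        const char msg[] = "hello from user space\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* getpid() asks the kernel for this process's identifier. */
        printf("my process id: %d\n", (int)getpid());
        return 0;
    }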
In the 1960s, IBM was the first computer manufacturer to take on the task of
operating system development and began distributing operating systems with
their computers. However, IBM wasn't the only vendor creating operating
systems during this time. Control Data Corporation, Computer Sciences
Corporation, Burroughs Corporation, GE, Digital Equipment Corporation, and
Xerox all released mainframe operating systems in the 1960s as well.
In the late 1960s, the first version of the Unix operating system was developed.
Later rewritten in C, and freely available during its earliest years, Unix was easily
ported to new systems and rapidly achieved broad acceptance. Many modern
operating systems, including Apple OS X and all Linux flavors, trace their roots
back to Unix.
2. Layered Architecture of Operating System
The layered architecture of operating systems was developed in the 1960s. In this
approach, the operating system is broken up into a number of layers. The bottom
layer (layer 0) is the hardware layer and the highest layer (layer n) is the user
interface layer, as shown in the figure.
The layers are selected such that each layer uses the functions and services of only
lower-level layers. The first layer can be debugged without any concern for the rest
of the system, because it uses only the basic hardware to implement its functions.
Once the first layer is debugged, its correct functioning can be assumed while the
second layer is debugged, and so on. If an error is found during the debugging of a
particular layer, the error must be on that layer, because the layers below it have
already been debugged. Because of this, the design of the system is simplified when
the operating system is broken up into layers. OS/2 is an example of a layered
operating system; another example is the earlier versions of Windows NT.
The main disadvantage of this architecture is that it requires an appropriate
definition of the various layers and careful planning of the proper placement of
each layer.
Client-Server Model
In this model, the main task of the kernel is to handle all the communication
between clients and servers. The operating system is split into a number of parts,
each of which handles only one specific task: e.g., a file server, a process server,
a terminal server and a memory server.
Another advantage of the client-server model is its adaptability to use in
distributed systems. When a client communicates with a server by sending it a
message, the client need not know whether the message is handled locally on its
own machine or was sent across a network to a server on a remote machine. As far
as the client is concerned, the same thing happens in both cases: a request was
sent and a reply came back.
Time-Sharing Systems
Multiple jobs are executed by the CPU by switching between them, but the
switches occur so frequently that the user can receive an immediate response.
For example, in transaction processing, the processor executes each user
program in a short burst or quantum of computation. That is, if n users are
present, then each user can get a time quantum. When the user submits a
command, the response time is a few seconds at most.
The advantages of time-sharing operating systems are as follows:
• Provides the advantage of quick response
• Avoids duplication of software
• Reduces CPU idle time
Network Operating Systems
The advantages of network operating systems are as follows:
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated
into the system.
• Remote access to servers is possible from different locations and
types of systems.
The disadvantages of network operating systems are as follows:
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.
Batch Operating Systems
An OS does the following activities related to batch processing:
• The OS defines a job which has a predefined sequence of commands,
programs and data as a single unit.
• The OS keeps a number of jobs in memory and executes them without any
manual intervention.
• Jobs are processed in the order of submission, i.e., in first-come-first-served
fashion.
• When a job completes its execution, its memory is released and the
output for the job gets copied into an output spool for later printing or processing.
Advantages
• Batch processing takes much of the work of the operator to the computer.
• Increased performance, as a new job gets started as soon as the previous
job is finished, without any manual intervention.
Disadvantages
• Due to the lack of a protection scheme, one batch job can affect other
pending jobs.
Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by
switching between them. Switches occur so frequently that the users may interact
with each program while it is running. An OS does the following activities
related to multitasking:
• A program that is loaded into memory and is executing is commonly
referred to as a process.
• When a process executes, it typically executes for only a very short
time before it either finishes or needs to perform I/O.
• Since interactive I/O typically runs at slower speeds, it may take a
long time to complete. During this time, a CPU can be utilized by
another process.
• The operating system allows the users to share the computer
simultaneously. Since each action or command in a time-shared
system tends to be short, only a little CPU time is needed for each
user.
• As the system switches CPU rapidly from one user/program to the
next, each user is given the impression that he/she has his/her own
CPU, whereas actually one CPU is being shared among many users.
Multiprogramming
Sharing the processor, when two or more programs reside in memory at the same
time, is referred to as multiprogramming. Multiprogramming assumes a single
shared processor. Multiprogramming increases CPU utilization by organizing
jobs so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
An OS does the following activities related to multiprogramming.
• The operating system keeps several jobs in memory at a time.
• This set of jobs is a subset of the jobs kept in the job pool.
• The operating system picks and begins to execute one of the jobs
in the memory.
• Multiprogramming operating systems monitor the state of all
active programs and system resources using memory management programs
to ensure that the CPU is never idle, unless there are no jobs to process.
Advantages
• High and efficient CPU utilization; the CPU is kept busy as long as jobs are available.
Disadvantages
• CPU scheduling is required, and memory management is needed to accommodate many jobs in memory.
Interactivity
Interactivity refers to the ability of users to interact with a computer system. An
Operating system does the following activities related to interactivity:
• Provides the user an interface to interact with the system.
• Manages input devices to take inputs from the user. For example,
keyboard.
• Manages output devices to show outputs to the user. For example,
Monitor.
The response time of the OS needs to be short, since the user submits and waits
for the result.
Real-Time Systems
Real-time systems are usually dedicated, embedded systems. An operating
system does the following activities related to real-time system activity.
• In such systems, Operating Systems typically read from and react to
sensor data.
• The Operating system must guarantee response to events within fixed
periods of time to ensure correct performance.
Distributed Environment
A distributed environment refers to multiple independent CPUs or processors in a
computer system. An operating system does the following activities related to
distributed environment:
• The OS distributes computation logics among several physical
processors.
• The processors do not share memory or a clock. Instead, each processor
has its own local memory.
• The OS manages the communications between the processors. They
communicate with each other through various communication lines.
Spooling
Spooling is an acronym for simultaneous peripheral operations on line. Spooling
refers to putting data of various I/O jobs in a buffer. This buffer is a special area
in memory or hard disk which is accessible to I/O devices.
Advantages
Spooling is capable of overlapping the I/O operations of one job with the
processor operations of another job.
Job control
Job control refers to the control of multiple tasks or jobs on a computer system,
ensuring that they each have access to adequate resources to perform correctly, that
competition for limited resources does not cause a deadlock where two or more jobs
are unable to complete, resolving such situations where they do occur, and terminating
jobs that, for any reason, are not performing as expected.
Command language
Sometimes referred to as a command script, a command language is a
language used for executing a series of commands or instructions that would
otherwise be executed one at a time at the prompt (the text or symbols used to
represent the system's readiness to perform the next command). A good example
of a command language is Microsoft Windows batch files (a batch file, or batch
job, is a collection, or list, of commands that are processed in sequence, often
without requiring user input or intervention). Although command languages are
useful for executing a series of commands, their functionality is limited to what
is available at the command line, which can make them easier to learn than full
programming languages.
Advantages of command languages
• Very easy for all types of users to write.
• Do not require the files to be compiled.
• Easy to modify and make additional commands.
• Very small files.
• Do not require any additional programs or files that are not already found on the
operating system.
PROCESS MANAGEMENT
To put it in simple terms, we write our computer programs in a text file and when
we execute this program, it becomes a process which performs all the tasks
mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image
shows a simplified layout of a process inside main memory:
1. Stack: contains temporary data such as function parameters, return addresses and local variables.
2. Heap: dynamically allocated memory while the process runs.
3. Text: the compiled program code.
4. Data: contains the global and static variables.
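As an illustrative sketch (variable names invented for the example; the exact placement is system-dependent), the following C program puts one object in each of the four sections:

    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 42;  /* data section: global/static variables */
    static int zeroed_global;     /* zero-initialized (BSS) part of data   */

    int main(void)                /* machine code lives in the text section */
    {
        int on_stack = 7;                        /* stack: local variable  */
        int *on_heap = malloc(sizeof *on_heap);  /* heap: dynamic memory   */
        if (on_heap == NULL) return 1;
        *on_heap = 99;

        printf("data=%d bss=%d stack=%d heap=%d\n",
               initialized_global, zeroed_global, on_stack, *on_heap);
        free(on_heap);
        return 0;
    }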
Process Levels
A process hierarchy is defined by its levels and the information given at each level.
It is key to have a well-defined information base at each level (e.g., a process step
is always performed by a specific role rather than an abstract organizational unit).
Within a process, the next lower level of execution is realized by threads.
Threads
Despite of the fact that a thread must execute in process, the process and its
associated threads are different concept. Processes are used to group resources
together and threads are the entities scheduled for execution on the CPU.
A thread is a single sequence stream within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight
processes. Threads allow multiple streams of execution within a process. In many
respects, threads are a popular way to improve application performance through
parallelism. The CPU switches rapidly back and forth among the threads, giving
the illusion that the threads are running in parallel. Like a traditional process
(i.e., a process with one thread), a thread can be in any of several states (Running,
Blocked, Ready or Terminated). Each thread has its own stack, since a thread
will generally call different procedures and thus have a different execution
history. In an operating system that has a thread facility, the basic unit of CPU
utilization is a thread. A thread consists of a program counter (PC), a register
set, and a stack space. Threads are not independent of one another the way
processes are: a thread shares with the other threads of its task (also known as a
process) its code section, data section, and OS resources such as open files and signals.
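A minimal sketch, assuming POSIX threads (compile with -lpthread): both threads share the process's global data section, while each gets its own stack and program counter.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                 /* data section: visible to all threads */

    static void *worker(void *arg)
    {
        int local = *(int *)arg;    /* each thread has its own stack */
        shared += local;            /* unsynchronized: see race conditions later */
        printf("thread %d done\n", local);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int a = 1, b = 2;

        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("shared = %d\n", shared);
        return 0;
    }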
Processes Vs Threads
As mentioned earlier, in many respects threads operate in the same way as
processes. Some of the similarities and differences are:
Similarities
• Like processes, threads share the CPU and only one thread is active (running) at a
time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• And like processes, if one thread is blocked, another thread can run.
Differences
• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. Note that processes
might or might not assist one another, because processes may originate from
different users.
Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a print server.
2. Because threads can share common data, they do not need to use interprocess
communication.
3. By their very nature, threads can take advantage of multiprocessors.
But this cheapness does not come free: the biggest drawback is that there is no
protection between threads.
User-Level Threads
Advantages:
The most obvious advantage of this technique is that a user-level threads package
can be implemented on an operating system that does not support threads. Some
other advantages are:
• User-level threads do not require modification to the operating system.
• Simple Representation:
Each thread is represented simply by a PC, registers, stack and a small control
block, all stored in the user process address space.
• Simple Management:
Creating a thread, switching between threads and synchronizing between
threads can all be done without intervention of the kernel.
• Fast and Efficient:
Thread switching is not much more expensive than a procedure call.
Disadvantages:
• There is a lack of coordination between threads and the operating system kernel.
Therefore, the process as a whole gets one time slice, irrespective of whether it has
one thread or 1000 threads within it. It is up to each thread to relinquish control to
the other threads.
• User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Otherwise, the entire process will block in the kernel even if there are runnable
threads left in the process. For example, if one thread causes a page fault, the
whole process blocks.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime
system is needed in this case. Instead of thread table in each process, the kernel
has a thread table that keeps track of all threads in the system. In addition, the
kernel also maintains the traditional process table to keep track of processes.
The operating system kernel provides system calls to create and manage threads.
Advantages:
• Because the kernel has full knowledge of all threads, the scheduler may decide to
give more time to a process having a large number of threads than to a process
having a small number of threads.
• Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
• Kernel-level threads are slow and inefficient. For instance, thread
operations are hundreds of times slower than those of user-level threads.
• Since the kernel must manage and schedule threads as well as processes, it
requires a full thread control block (TCB) for each thread to maintain
information about it. As a result, there is significant overhead and
increased kernel complexity.
Multithreading Models
Some operating systems provide a combined user-level and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on
multiple processors, and a blocking system call need not block the entire process.
There are three multithreading models: many-to-many, many-to-one and one-to-one.
The following diagram shows the many-to-many threading model, where 6 user-
level threads are multiplexed onto 6 kernel-level threads. In this model,
developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor machine. This model
provides the best level of concurrency: when a thread performs a blocking
system call, the kernel can schedule another thread for execution.
In the many-to-one model, many user-level threads are mapped to a single kernel-
level thread; this model is used when the operating system does not support kernel
threads, with thread management done in user space by the thread library.
In the one-to-one model, each user-level thread maps to its own kernel-level
thread. The disadvantage of this model is that creating a user thread requires
creating the corresponding kernel thread.
OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Advantages of Threads over Multiple Processes
• Context Switching: Threads are very inexpensive to create and destroy, and
they are inexpensive to represent. For example, they require space to store
the PC, the SP, and the general-purpose registers, but they do not require
space for memory-sharing information, information about open files or I/O
devices in use, etc. With so little context, it is much faster to switch between
threads; in other words, a context switch between threads is relatively cheap.
• Sharing: Threads allow the sharing of many resources that cannot be shared
between processes, for example the code section, the data section, and
operating system resources such as open files.
Application that cannot benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit
from threads, as each task would block until the previous one completes. For
example, a program that displays the time of day would not benefit from multiple
threads.
When a new thread is created it shares its code section, data section and
operating system resources like open files with other threads. But it is allocated
its own stack, register set and a program counter.
The creation of a new process differs from that of a thread mainly in the fact that
all the shared resources of a thread are needed explicitly for each process. So
though two processes may be running the same piece of code they need to have
their own copy of the code in the main memory to be able to run. Two processes
also do not share other resources with each other. This makes the creation of a
new process very costly compared to that of a new thread.
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a
hardware clock generates interrupts periodically. This allows the operating
system to schedule all processes in main memory (using scheduling algorithm) to
run on the CPU at equal intervals. Each time a clock interrupt occurs, the
interrupt handler checks how much time the current running process has used. If
it has used up its entire time slice, then the CPU scheduling algorithm (in kernel)
picks a different process to run. Each switch of the CPU from one process to
another is called a context switch.
In a multiprogrammed uniprocessor computing system, context switches occur
frequently enough that all processes appear to be running concurrently. If a
process has more than one thread, the Operating System can use the context
switching technique to schedule the threads so they appear to execute in parallel.
This is the case if threads are implemented at the kernel level. Threads can also
be implemented entirely at the user level in run-time libraries. Since in this case
no thread scheduling is provided by the Operating System, it is the responsibility
of the programmer to yield the CPU frequently enough in each thread so all
threads in the process can make progress.
When the PCB of the currently executing process has been saved, the operating
system loads the PCB of the next process to be run on the CPU. This is a
heavyweight task, and it takes considerable time.
In general, a process can be in one of the following five states at a time:
1. Start/New: the process is being created.
2. Ready: the process is waiting to be assigned to a processor.
3. Running: instructions are being executed.
4. Waiting: the process is waiting for some event to occur (such as I/O completion).
5. Terminated: the process has finished execution.
The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
Inter-process communication
Inter-process communication (IPC) refers specifically to the mechanisms an
operating system provides to allow processes to manage shared data.
Race Conditions
In operating systems, processes that are working together share some common
storage (main memory, files, etc.) that each process can read and write. When two
or more processes are reading or writing some shared data and the final result
depends on who runs precisely when, the situation is called a race condition.
Concurrently executing threads that share data need to synchronize their
operations and processing in order to avoid race conditions on shared data. Only
one 'customer' thread at a time should be allowed to examine and update the
shared variable.
Race conditions are also possible inside operating systems themselves. If the
ready queue is implemented as a linked list and is being manipulated during the
handling of an interrupt, then interrupts must be disabled to prevent another
interrupt from occurring before the first one completes. If interrupts are not
disabled, the linked list could become corrupt.
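The following sketch (POSIX threads assumed; the iteration counts are chosen arbitrarily) demonstrates a race condition: two threads increment a shared counter without synchronization, so the final result depends on who runs precisely when.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared data */

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Often prints less than 2000000, because the two threads'
         * increments interleave and some updates are lost. */
        printf("counter = %ld\n", counter);
        return 0;
    }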
Critical Section
How to avoid race conditions?
The key to preventing trouble involving shared storage is to find some way to
prohibit more than one process from reading and writing the shared data
simultaneously. The part of the program where the shared memory is accessed is
called the critical section. To avoid race conditions and flawed results, one must
identify the code that forms a critical section in each thread.
Here, the important point is that when one process is executing shared modifiable
data in its critical section, no other process is to be allowed to execute in its own
critical section. Thus, the execution of critical sections by the processes is
mutually exclusive in time.
Mutual Exclusion
A way of making sure that if one process is using shared modifiable data, the
other processes will be excluded from doing the same thing.
Formally, while one process executes the shared variable, all other processes
desiring to do so at the same moment should be kept waiting; when that process
has finished executing the shared variable, one of the processes waiting to do so
should be allowed to proceed. In this fashion, each process executing the shared
data (variables) excludes all others from doing so simultaneously. This is called
Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access
shared modifiable data - when processes are performing operations that do not
conflict with one another they should be allowed to proceed concurrently.
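A minimal sketch of mutual exclusion, assuming POSIX threads: a mutex brackets the critical section, so only one thread at a time can update the shared variable, and the lost updates from the earlier race example disappear.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* enter critical section */
            counter++;                   /* shared modifiable data */
            pthread_mutex_unlock(&lock); /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* now always 2000000 */
        return 0;
    }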
Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical
sections simultaneously, we could avoid race conditions. We need four
conditions to hold to have a good solution for the critical section problem
(mutual exclusion).
• No two processes may be inside their critical sections at the same moment.
• No assumptions are made about relative speeds of processes or the number of
CPUs.
• No process running outside its critical section should block other processes.
• No process should wait arbitrarily long to enter its critical section.
Problem
When one process is updating shared modifiable data in its critical section, no
other process should be allowed to enter its critical section.
Proposal 1 -Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and
re-enables them just before leaving it. With interrupts turned off, the CPU cannot
be switched to another process. Hence, no other process will enter its critical
section, and mutual exclusion is achieved.
Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an
operating system, but it is not appropriate as a general mutual exclusion
mechanism for user processes. The reason is that it is unwise to give user
processes the power to turn off interrupts.
Proposal 2 - Lock Variable (Software Solution)
In this proposal, a single shared lock variable, initially 0, is tested by a process
before it enters its critical section: if the lock is 0, the process sets it to 1 and
enters; if it is 1, the process waits.
Conclusion
The flaw in this proposal is best explained by example. Suppose process A sees
that the lock is 0. Before it can set the lock to 1, another process B is scheduled,
runs, and sets the lock to 1. When process A runs again, it will also set the lock
to 1, and two processes will be in their critical sections simultaneously.
Proposal 3 - Strict Alternation (Taking Turns)
In this proposed solution, the integer variable 'turn' keeps track of whose turn it
is to enter the critical section. Initially, process A inspects turn, finds it to be 0,
and enters its critical section. Process B also finds it to be 0 and sits in a loop
continually testing 'turn' to see when it becomes 1. Continuously testing a
variable while waiting for some value to appear is called busy waiting.
Conclusion
Taking turns is not a good idea when one of the processes is much slower than
the other. Suppose process A finishes its critical section quickly, so both
processes are now in their noncritical sections. This situation violates condition 3
mentioned above.
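Below is a runnable sketch of the taking-turns idea (POSIX threads assumed; a volatile int is used here as a stand-in for the proper atomic operations a production implementation would need). Each thread busy-waits on 'turn', wasting CPU cycles exactly as described above.

    #include <pthread.h>
    #include <stdio.h>

    volatile int turn = 0;        /* whose turn to enter the critical section */

    static void *proc(void *arg)
    {
        int me = *(int *)arg;
        for (int i = 0; i < 3; i++) {
            while (turn != me)    /* busy waiting: wastes CPU cycles */
                ;
            printf("process %d in its critical section\n", me);
            turn = 1 - me;        /* hand the turn to the other process */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, proc, &id0);
        pthread_create(&t1, NULL, proc, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        return 0;
    }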
Using the system calls 'sleep' and 'wakeup'
Basically, what the above-mentioned solutions do is this: when a process wants to
enter its critical section, it checks to see if entry is allowed. If it is not, the
process goes into a tight loop and waits (i.e., starts busy waiting) until it is
allowed to enter. This approach wastes CPU time.
Now look at an interprocess communication primitive pair: sleep and wakeup.
o Sleep: a system call that causes the caller to block, that is, be suspended
until some other process wakes it up.
o Wakeup: a system call that wakes up a process.
Both the 'sleep' and 'wakeup' system calls have one parameter that represents a
memory address used to match up sleeps with wakeups.
As an example of how the sleep and wakeup system calls are used, consider the
producer-consumer problem, also known as the bounded-buffer problem. Two
processes share a common, fixed-size (bounded) buffer. The producer puts
information into the buffer and the consumer takes information out.
Statement
To suspend the producers when the buffer is full, to suspend the consumers when
the buffer is empty, and to make sure that only one process at a time manipulates
the buffer, so there are no race conditions or lost updates.
Trouble arises when
1. The producer wants to put new data in the buffer, but the buffer is already full.
Solution: the producer goes to sleep, to be awakened when the consumer has
removed data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: the consumer goes to sleep until the producer puts some data in the buffer
and wakes the consumer up.
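The classic sketch of this scheme (after Tanenbaum) is shown below. Note that it is pseudocode rather than a complete program: sleep(), wakeup(), produce_item(), insert_item(), remove_item() and consume_item() are assumed primitives, not real library calls.

    #define N 100                 /* number of slots in the buffer */
    int count = 0;                /* items currently in the buffer */

    void producer(void)
    {
        for (;;) {
            int item = produce_item();
            if (count == N) sleep();          /* buffer full: block     */
            insert_item(item);
            count = count + 1;                /* unconstrained access!  */
            if (count == 1) wakeup(consumer); /* buffer was empty before */
        }
    }

    void consumer(void)
    {
        for (;;) {
            if (count == 0) sleep();          /* buffer empty: block    */
            int item = remove_item();
            count = count - 1;                /* unconstrained access!  */
            if (count == N - 1) wakeup(producer); /* buffer was full before */
            consume_item(item);
        }
    }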
Conclusion
This approach also leads to the same race conditions we have seen in the earlier
approaches, because access to 'count' is unconstrained. The essence of the
problem is that a wakeup call sent to a process that is not (yet) sleeping is lost.
Semaphore & Monitor
Definition of Semaphore
Being a process synchronization tool, a semaphore is an integer variable S.
This integer variable S is initialized to the number of resources present in the
system. The value of the semaphore S can be modified only by two functions,
wait() and signal(), apart from initialization.
The wait() and signal() operations modify the value of the semaphore S
indivisibly, which means that when a process is modifying the value of the
semaphore, no other process can simultaneously modify it. Further, the operating
system distinguishes between two categories of semaphores: counting semaphores
and binary semaphores.
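As a concrete sketch, POSIX semaphores (assumed available; compile with -lpthread) expose this pair of operations as sem_wait() and sem_post(); initializing S to 1 gives a binary semaphore guarding a critical section.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t s;                 /* the semaphore S */
    long counter = 0;

    static void *work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&s);    /* wait(): decrement S, block if S == 0 */
            counter++;       /* critical section                     */
            sem_post(&s);    /* signal(): increment S, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&s, 0, 1);  /* binary semaphore: one "resource" */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        sem_destroy(&s);
        return 0;
    }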
Definition of Monitor
To overcome the timing errors that occur while using semaphores for process
synchronization, researchers have introduced a high-level synchronization
construct: the monitor type. A monitor type is an abstract data type that is
used for process synchronization.
Being an abstract data type, a monitor type contains the shared data variables
that are to be shared by all the processes, together with some programmer-defined
operations that allow processes to execute in mutual exclusion within the monitor.
A process cannot directly access the shared data variables in the monitor; it has
to access them through the procedures defined in the monitor, which allow only
one process to access the shared variables at a time.
monitor monitor_name
{
    // shared variable declarations
    procedure P1 ( . . . ) {
    }
    procedure P2 ( . . . ) {
    }
    procedure Pn ( . . . ) {
    }
    initialization code ( . . . ) {
    }
}
A monitor is a construct such that only one process is active at a time within the
monitor. If another process tries to access the shared variables in the monitor, it
gets blocked and is lined up in a queue, gaining access to the shared data only
when the previously accessing process releases it.
A condition variable can invoke only two operations: wait() and signal(). If a
process P invokes a wait() operation, it gets suspended in the monitor until
another process Q invokes a signal() operation; i.e., a signal() operation invoked
by a process resumes the suspended process.
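A minimal sketch of this wait()/signal() behaviour using POSIX threads (a mutex plus a condition variable is the closest standard-library analogue of a monitor; the thread names P and Q mirror the text):

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    int data_available = 0;          /* shared state guarded by m */

    static void *process_P(void *arg)   /* waits on the condition */
    {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!data_available)      /* wait() releases m while suspended */
            pthread_cond_wait(&ready, &m);
        printf("P resumed after signal\n");
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *process_Q(void *arg)   /* signals the condition */
    {
        (void)arg;
        pthread_mutex_lock(&m);
        data_available = 1;
        pthread_cond_signal(&ready); /* signal() resumes the suspended process */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, q;
        pthread_create(&p, NULL, process_P, NULL);
        pthread_create(&q, NULL, process_Q, NULL);
        pthread_join(p, NULL);
        pthread_join(q, NULL);
        return 0;
    }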
Key differences between semaphores and monitors:
1. A semaphore is an integer variable used for process synchronization, whereas a
monitor is an abstract data type which allows only one process to execute in its
critical section at a time.
2. The value of a semaphore can be modified by the wait() and signal() operations
only. A monitor, on the other hand, has shared variables and the procedures
through which alone those shared variables can be accessed by the processes.
3. With semaphores, when a process wants to access shared resources it performs
a wait() operation, and when it releases the resources it performs a signal()
operation. With monitors, when a process needs to access shared resources, it
has to access them through procedures in the monitor.
4. A monitor type has condition variables, which a semaphore does not have.
Process scheduling
Definition
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
A newly arrived process is put in the ready queue. Processes wait in the ready
queue for the CPU to be allocated. Once the CPU is assigned to a process, that
process will execute. While executing, any one of the following events can occur:
- The process could issue an I/O request and then be placed in an I/O queue.
- The process could create a new sub-process and wait for its termination.
- The process could be removed forcibly from the CPU as a result of an interrupt,
and be put back in the ready queue.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.).
The OS scheduler determines how to move processes between the ready and
run queues which can only have one entry per processor core on the system; in
the above diagram, it has been merged with the CPU.
Job scheduling is performed using job schedulers. Job schedulers are programs
that enable scheduling and, at times, track computer "batch" jobs, or units of
work like the operation of a payroll program. Job schedulers have the ability to
start and control jobs automatically by running prepared job-control-language
statements or by means of similar communication with a human operator.
Generally, the present-day job schedulers include a graphical user interface
(GUI) along with a single point of control.
In-house developers can write these advanced capabilities; however, these are
usually offered by providers who are experts in systems-management software.
In scheduling, many different schemes are used to determine which specific job
to run. Some parameters that may be considered are as follows:
• Job priority
• Availability of computing resource
• License key if the job is utilizing a licensed software
• Execution time assigned to the user
• Number of parallel jobs permitted for a user
• Projected execution time
• Elapsed execution time
• Presence of peripheral devices
• Number of cases of prescribed events
Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes
from the queue and loads them into memory for execution. Process loads
into the memory for CPU scheduling.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It moves processes
from the ready state to the running state.
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from
memory, reducing the degree of multiprogramming. The medium-term scheduler
is in charge of handling the swapped-out processes.
Context switches are computationally intensive, since register and memory state
must be saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers. When the process is
switched, the following information is stored for later use:
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF.
• This is a non-preemptive scheduling algorithm.
• Best approach to minimize waiting time (a small worked sketch follows this list).
• Easy to implement in Batch systems where required CPU time is known
in advance.
• Impossible to implement in interactive systems where the required CPU
time is not known.
• The processor should know in advance how much time a process will take.
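A small worked sketch of non-preemptive SJN (burst times invented for illustration; all jobs assumed to arrive at time 0): sort the jobs by burst time, and each job's waiting time is the sum of the bursts that run before it.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int burst[] = { 6, 8, 3, 4 };           /* hypothetical CPU bursts */
        int n = sizeof burst / sizeof burst[0];

        qsort(burst, n, sizeof burst[0], cmp);  /* shortest job first */

        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            printf("job with burst %d waits %d\n", burst[i], wait);
            total_wait += wait;
            wait += burst[i];          /* later jobs wait for this one too */
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }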
Round Robin (RR)
• Round Robin is a preemptive process scheduling algorithm. Each process is
provided a fixed time to execute, called a quantum.
• Once a process is executed for a given time period, it is preempted and
other process executes for a given time period.
• Context switching is used to save states of preempted processes.
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm; they make use
of other existing algorithms to group and schedule jobs with common
characteristics. For example, CPU-bound jobs can be scheduled in one queue and
all I/O-bound jobs in another queue. The process scheduler then alternately selects
jobs from each queue and assigns them to the CPU based on the algorithm
assigned to that queue.
Deadlock
Introduction
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not
available at that time, the process enters a wait state. It may happen that waiting
processes will never again change state, because the resources they have
requested are held by other waiting processes. This situation is called deadlock.
Deadlock Characterization
In deadlock, processes never finish executing and system resources are tied up,
preventing other jobs from ever starting.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable
mode; that is, only one process at a time can use the resource. If another process
requests that resource, the requesting process must be delayed until the resource
has been released.
2. Hold and wait: There must exist a process that is holding at least one
resource and is waiting to acquire additional resources that are currently being
held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can
be released only voluntarily by the process holding it, after that process has
completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes
such that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, …., Pn-1 is waiting for a resource that is held by Pn,
and Pn is waiting for a resource that is held by P0.
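These conditions can be exercised with a short sketch (POSIX threads assumed; note that this program may hang by design): each thread holds one mutex and then waits for the other, producing hold-and-wait and a circular wait.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

    static void *p0(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&r1);   /* hold R1 ...            */
        sleep(1);                  /* let p1 grab R2         */
        pthread_mutex_lock(&r2);   /* ... and wait for R2    */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    static void *p1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&r2);   /* hold R2 ...            */
        sleep(1);
        pthread_mutex_lock(&r1);   /* ... and wait for R1: circular wait */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);    /* likely never returns   */
        pthread_join(t1, NULL);
        printf("no deadlock this time\n");
        return 0;
    }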
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a
system resource-allocation graph. The set of vertices V is partitioned into two
different types of nodes: P = {P1, P2, …, Pn}, the set consisting of all the active
processes in the system; and R = {R1, R2, …, Rm}, the set consisting of all
resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it
signifies that process Pi has requested an instance of resource type Rj and is
currently waiting for that resource. A directed edge from resource type Rj to
process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj
has been allocated to process Pi. A directed edge Pi → Rj is called a request edge;
a directed edge Rj → Pi is called an assignment edge.
When process Pi requests an instance of resource type Rj, a request edge is
inserted in the resource-allocation graph. When this request can be fulfilled, the
request edge is instantaneously transformed into an assignment edge. When the
process no longer needs access to the resource, it releases the resource, and as
a result the assignment edge is deleted.
o Request edge: a directed edge Pi → Rj.
o Assignment edge: a directed edge Rj → Pi.
1. Deadlock Prevention
To prevent deadlocks, we can ensure that at least one of the four necessary
conditions cannot hold.
a) Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For
example, a printer cannot be simultaneously shared by several processes.
Sharable resources, on the other hand, do not require mutually exclusive access,
and thus cannot be involved in a deadlock.
b) Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that whenever a process requests a resource, it does not hold any other
resources.
1. One protocol that can be used requires each process to request and be
allocated all its resources before it begins execution.
2. An alternative protocol allows a process to request resources only when
the process has none. A process may request some resources and use them;
before it can request any additional resources, however, it must release all the
resources that it is currently allocated.
There are two main disadvantages to these protocols. First, resource utilization
may be low, since many of the resources may be allocated but unused for a long
period. In the example given, for instance, we can release the tape drive and disk
file, and then again request the disk file and printer, only if we can be sure that
our data will remain on the disk file. If we cannot be assured that they will, then
we must request all resources at the beginning for both protocols.
Second, starvation is possible.
c) No Preemption
If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources it is currently holding are
preempted; that is, these resources are implicitly released. The preempted
resources are added to the list of resources for which the process is waiting. The
process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting.
d) Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a
total ordering of all resource types, and to require that each process requests
resources in an increasing order of enumeration.
Let R = {R1, R2, ..., Rn} be the set of resource types. We assign to each resource
type a unique integer number, which allows us to compare two resources and to
determine whether one precedes another in our ordering. Formally, we define a
one-to-one function F: R → N, where N is the set of natural numbers.
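A sketch of this ordering discipline (POSIX threads assumed): both threads acquire the locks in the same increasing order F(R1) < F(R2), so the circular wait from the earlier deadlock example can never form.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* F(R1) = 1 */
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* F(R2) = 2 */

    static void *worker(void *arg)
    {
        const char *name = arg;
        pthread_mutex_lock(&r1);  /* always lower-numbered resource first */
        pthread_mutex_lock(&r2);
        printf("%s holds R1 and R2\n", name);
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, "thread A");
        pthread_create(&b, NULL, worker, "thread B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }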
2. Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by restraining how requests
can be made. The restraints ensure that at least one of the necessary conditions
for deadlock cannot occur and, hence, that deadlocks cannot hold. Possible side
effects of preventing deadlocks by this method, however, are low device
utilization and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional information
about how resources are to be requested. For example, in a system with one tape
drive and one printer, we might be told that process P will request first the tape
drive, and later the printer, before releasing both resources. Process Q on the
other hand, will request first the printer, and then the tape drive.
With this knowledge of the complete sequence of requests and releases for each
process we can decide for each request whether or not the process should wait.
A deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition. The resource-
allocation state is defined by the number of available and allocated resources,
and the maximum demands of the processes.
a. Safe State
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. More formally, a system is in
a safe state only if there exists a safe sequence. A sequence of processes <P1,
P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the
resources that Pi can still request can be satisfied by the currently available
resources plus the resources held by all the Pj, with j < i. In this situation, if the
resources that process Pi needs are not immediately available, then Pi can wait
until all Pj have finished. When they have finished, Pi can obtain all of its needed
resources, complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so
on.
Fig. Safe, Unsafe & Deadlock State
If no such sequence exists, then the system state is said to be unsafe.
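A minimal sketch of the safety check used by deadlock avoidance (Banker's-style; the matrices below are invented for illustration). It tries to build a safe sequence as defined in the text, repeatedly finding a process whose remaining need fits in the available resources and pretending it runs to completion and releases its allocation.

    #include <stdio.h>
    #include <stdbool.h>

    #define N 3   /* processes      */
    #define M 2   /* resource types */

    int main(void)
    {
        int available[M]     = { 3, 2 };                    /* free resources   */
        int allocation[N][M] = { {1, 0}, {2, 1}, {0, 1} };  /* currently held   */
        int need[N][M]       = { {2, 2}, {1, 1}, {3, 1} };  /* max - allocation */

        bool finished[N] = { false };
        int safe_seq[N], count = 0;

        bool progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < N; i++) {
                if (finished[i]) continue;
                bool fits = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > available[j]) fits = false;
                if (fits) {
                    for (int j = 0; j < M; j++)
                        available[j] += allocation[i][j];   /* release */
                    finished[i] = true;
                    safe_seq[count++] = i;
                    progress = true;
                }
            }
        }

        if (count == N) {
            printf("safe sequence:");
            for (int i = 0; i < N; i++) printf(" P%d", safe_seq[i]);
            printf("\n");
        } else {
            printf("state is unsafe\n");
        }
        return 0;
    }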
3. Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-
avoidance algorithm, then a deadlock situation may occur. In this environment,
the system may provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred.
• An algorithm to recover from the deadlock.
a) Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock
detection algorithm that uses a variant of the resource-allocation graph, called a
wait-for graph. We obtain this graph from the resource-allocation graph by
removing the nodes of type resource and collapsing the appropriate edges.
b) Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.
The detection algorithm uses the following data structures:
• Available: A vector of length m indicates the number of available resources of
each type.
• Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If
Request[i, j] = k, then process Pi is requesting k more instances of resource
type Rj.
c) Detection-Algorithm Usage
If deadlocks occur frequently, then the detection algorithm should be invoked
frequently.
Resources allocated to deadlocked processes will be idle until the deadlock can
be broken.
Starvation
Starvation occurs when a process waits indefinitely because the resources it needs
are repeatedly granted to other processes; for example, if recovery from deadlock
always picks its victim based on cost, the same process may be chosen every time
and never complete its work.
Summary
A deadlocked state occurs when two or more processes are waiting indefinitely
for an event that can be caused only by one of the waiting processes. There are
three principal methods for dealing with deadlocks:
• Use some protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlocked state.
• Allow the system to enter a deadlocked state, detect it, and then recover.
• Ignore the problem altogether and pretend that deadlocks never occur in the
system.
Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold. Deadlock avoidance requires additional
information about how resources are to be requested; a deadlock-avoidance
algorithm dynamically examines the resource-allocation state to ensure that a
circular-wait condition can never exist. Deadlock occurs only when some process
makes a request that cannot be granted immediately.
Diagnosis
System error logs, beeps and other critical errors can occur when your Windows
operating system becomes corrupted. Opening programs will be slower and
response times will lag. When you have multiple applications running, you may
experience crashes and freezes. There can be numerous causes of these errors,
including excessive startup entries, registry errors, hardware/RAM decline,
fragmented files, and unnecessary or redundant program installations.
A scan (approx. 5 minutes) of your PC's Windows operating system detects
problems in 3 categories: hardware, security and stability. At the end of the scan,
you can review your PC's hardware, security and stability in comparison with a
worldwide average, and review a summary of the problems detected during the
scan.
Windows Errors
A Windows error is an error that happens when an unexpected condition occurs
or when a desired operation has failed. When you have an error in Windows, it
may be critical and cause your programs to freeze and crash or it may be
seemingly harmless yet annoying.
Damaged DLLs
One of the biggest causes of DLLs becoming corrupt or damaged is the practice of
constantly installing and uninstalling programs. This often means that DLLs
get overwritten by newer versions when a new program is installed, for
example. This causes problems for those applications and programs that still
need the old version to operate; the program begins to malfunction and
crash.
Freezing Computer
Computer hanging or freezing occurs when either a program or the whole system
ceases to respond to inputs. In the most commonly encountered scenario, a
program freezes and all windows belonging to the frozen program become static.
Almost always, the only way to recover from a system freeze is to reboot the
machine, usually by power cycling with an on/off or reset button.
Virus Damage
Once your computer has been infected with a virus, it's no longer the same. After
removing it with your anti-virus software, you're often left with lingering side
effects. Technically, your computer might no longer be infected, but that doesn't
mean it's error-free. Even simply removing a virus can actually harm your
system.
A repair restores compromised system settings and registry values to their default
Microsoft settings, and you may always return your system to its pre-repair
condition.
Reimage patented technology is the only PC repair program of its kind that
actually reverses the damage done to your operating system. Its online database
comprises over 25,000,000 updated essential components that will replace any
damaged or missing file on a Windows operating system with a healthy version
of the file, so that your PC's performance, stability and security will be restored
and even improved. The repair will deactivate and then quarantine all malware
found, then remove the virus damage. All system files, DLLs and registry keys
that have been corrupted or damaged will be replaced with new healthy files
from the continuously updated online database.