
2. Process : Concept, Description and Control

The Chapter Covers :
 Concept of process
 Process state model
 Process description - PCB
 Process control
 Threads
 Threads in Linux

2.1 HOW THE CONCEPT OF PROCESS CAME INTO EXISTENCE /
WHY DO WE NEED A PROCESS?
In a batch operating system, switching from one program to another is handled by
code written in JCL (Job Control Language), while multiprogramming, time sharing and
real time systems require more frequent switching of the CPU from one program to the
next, which can give the illusion of parallelism. But actually a CPU can execute only
one program at a time. A program is a big entity, and maintaining multiple parallel
programs is very difficult for OS designers. Hence, the designers needed more
modularization so that they could achieve firmer control over the various programs.
Modularization could be achieved by dividing a program into small pieces of code
(called subroutines or subprograms, each of which can perform a specific task). To
achieve multiprogramming some additional information is required about each
subprogram, such as which instruction will be executed next, where the subroutine
resides in memory, etc. The subroutine and the additional information, when loaded in
main memory, are together called a process, which helps the OS to perform work. In
simple words, a process is a unit of work for most OS.

2.2 PROCESSES
Many definitions have been given for the term process, three of these are as follows:
 Process is a program in execution.
 Process is the animated spirit of a program.
 Process is that entity that can be assigned to and executed on a processor.
A process consists of 3 components :
 An executable code
 The associated data needed by the code (variables, workspace, buffers, etc.).
 The execution context.
The execution context contains all the information that the OS needs to manage the
process and that the processor needs to properly execute it: the contents of the various
processor registers (such as the program counter and limit register) together with
information such as the priority of the process, the process state and scheduling data.
A program is not a process. A program is a passive entity, such as the contents of a
file stored on disk, whereas a process is an active entity, which instructs the CPU
what to do.
Figure 2.1 indicates a way in which processes may be implemented. There are two
processes, say A and B. A block of memory is allocated to each of them. Each block
has programs, data and context information. There is one entry per process in a
process table, which also resides in memory. The process table is built and maintained
by the OS. Each entry in the process table includes a pointer to the location of
the block of memory that contains the process and part or all of the execution
context of the process. The remainder of the execution context is stored with the
process itself. The process index register contains the index into the process table
corresponding to the process currently controlling the processor. The program
counter register indicates the next instruction to be executed. The base and limit
registers define the region in memory occupied by the process. The value of the base
register is added to the program counter to form an absolute address, and this address
must not exceed the value in the limit register.

Figure 2.1: An example of a Process Implementation

As shown in fig. 2.1, the process index register has the value k; the pointer at index k
in the process table points to the starting location (4000) of process B in memory.
That means the instructions of process B are currently executing. If an interrupt occurs
and the OS decides to execute process A next, then the execution context of process B
is recorded and the values for process A are loaded into the registers. This is called
context switching. Later the processor can perform a context switch again and resume
execution of process B; that switch consists of storing the context of A and restoring
the context of B. Thus a process can either be executing or awaiting execution, and the
entire “state” of the process is contained in its context. This process model helps
designers develop powerful techniques for ensuring coordination and cooperation among
processes. New features (e.g. priority) can be designed and incorporated into the OS by
expanding the context to include any new information needed to support them.
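As a small illustration of this idea, the following C sketch models a hypothetical process-table entry holding a base register, a limit register and a saved program counter, and forms the absolute address of the next instruction in the way described above. The structure and field names are illustrative assumptions, not part of any real OS.

#include <stdbool.h>
#include <stdint.h>

/* One entry per process in a (hypothetical) process table: where the
   process's block of memory starts, how large it is, and a saved program
   counter standing in for the rest of the execution context. */
typedef struct {
    uint32_t base;    /* start of the process's region in memory      */
    uint32_t limit;   /* size of that region                          */
    uint32_t pc;      /* saved program counter (relative to base)     */
} ProcessTableEntry;

/* Form the absolute address of the next instruction: base + program
   counter, rejected if it would fall outside the region. */
bool next_instruction_address(const ProcessTableEntry *e, uint32_t *out_addr)
{
    if (e->pc >= e->limit)        /* would exceed the limit register */
        return false;
    *out_addr = e->base + e->pc;
    return true;
}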

2.3 A PROCESS MODEL

Since the process is the unit of work in modern systems, we need to understand how
processes are handled by the OS in order to understand the working of any OS.
The behaviour of an individual process can be characterized by listing the sequence
of instructions that execute for that process, and the behaviour of the processor can be
characterized by showing the way in which processes are interleaved.
The main responsibility of the OS is to control the execution of processes. This
includes determining the interleaving pattern that is followed and allocating resources
to processes. To be able to design the OS effectively we need to have a clear model
of the behaviour of a process. The first step in designing a program to control
processes is to describe the behaviour that we would like a process to exhibit. In
the beginning the designers assumed that a process can be in one of two states,
Running or Not Running, as shown in figure 2.2, but this model very soon
evolved into a five state model because all processes are not always ready to execute.
A more natural way to handle this situation is to split the Not Running state into two
states : Ready and Waiting. For good measure, two additional states have been
added. The five states of this model are as follows :
New : A process that has just been created but has not yet been admitted
to the pool of executable processes by the OS.
Ready : The process is waiting to be assigned to a processor.
Waiting : The process is waiting for some event to occur (such as
I/O completion or reception of a signal).
Running : The process whose instructions are being executed.
Exit : The process has finished execution, or has been released from the
pool of executable processes by the OS either because it halted or
because it aborted for some reason.
The New and Exit states are useful constructs for process management. The New
state corresponds to a process that has just been defined.

Fig. 2.2: Two state process model

Fig. 2.3: Five state process model
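The five-state model can be captured in a few lines of C. The sketch below is only illustrative (the enum and the transition names are assumptions drawn from the description above): it encodes which state changes the model allows, e.g. a process can move from Ready to Running (dispatch) but never directly from Waiting to Running.

#include <stdbool.h>

/* The five states of the model described above. */
typedef enum { NEW, READY, RUNNING, WAITING, EXIT } ProcState;

/* Legal transitions of the five-state model; anything else is rejected. */
bool is_valid_transition(ProcState from, ProcState to)
{
    switch (from) {
    case NEW:     return to == READY;                /* admit            */
    case READY:   return to == RUNNING;              /* dispatch         */
    case RUNNING: return to == READY                 /* timeout          */
                      || to == WAITING               /* wait for event   */
                      || to == EXIT;                 /* release          */
    case WAITING: return to == READY;                /* event occurs     */
    case EXIT:    return false;                      /* terminal state   */
    }
    return false;
}

For example, is_valid_transition(WAITING, RUNNING) returns false, reflecting that a waiting process must first be moved back to Ready before it can be dispatched.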
2.4 PROCESS DESCRIPTION

As we know, the OS works as a manager of resources and a controller of processes. To
perform these roles the OS needs information about the current status of each process
and resource. To keep this information the OS maintains 4 different tables: memory,
I/O, file system and process tables. Since we are discussing processes, we will look at
only the process part of these tables in detail.
2.4.1 Process Control Structures
To manage and control a process the OS must know where the process is located
and the attributes of the process that are necessary for its management.
2.4.1.1 Process Location :
As we know, at a minimum a process consists of :
1. A program or set of programs to be executed.
2. User data, consisting of the modifiable part of the user space (a set of data
locations for local and global variables and any defined constants). It may include
program data, a user stack area, and programs that may be modified.
3. A stack, used to store parameters and calling addresses for procedure and
system calls. Each process has one or more system stacks associated with it.

Fig. 2.4 : General structure of OS control system

The location of a process image depends on the memory management scheme being
used. As shown in figure 2.4, a process image is a collection of program, data,
stack and attributes. The figure depicts the structure of the location information in the
following way. There is a primary process table with one entry for each process.
Each entry contains, at least, a pointer to a process image. If the process image
contains multiple blocks, then this information will be contained directly in the
primary process table or by cross reference to entries in memory tables. This depiction
may differ from one OS to another.

Check Your Progress
1. Give any three definitions of the term process.
2. What is process identification?

2.4.1.2 Process Attributes
In modern operating systems a great deal of information about each process is
required for process management. The OS collects all the information that it needs
about a particular process into a data structure called a process descriptor, task
control block, task descriptor or process control block (PCB). Whenever a
process is created the operating system creates a corresponding process control
block to serve as its run-time description during the lifetime of the process. When
the process terminates, its PCB is released to the pool of free cells from which new
PCBs are drawn. A process becomes known to the operating system, and thus
eligible to compete for system resources, only when it has an active PCB associated
with it. The process control block is a data structure with fields for recording various
aspects of process execution and resource usage. Information stored in a PCB can
typically be divided into three general categories:
 Process identification.
 Processor state information
 Process control information
1. Process identification : This is actually a numeric identifier or a mapping
based on process identifiers. It may be stored with the PCB and includes :
 Process ID
 ID of the parent process
 User ID
2. Processor State Information : The processor state is described by the values of
its registers. Hence, processor state information includes the values stored in the
various processor registers. It includes the values of the following registers:
a. User Visible Registers : A user visible register is one that may be referenced
by means of the machine language that the processor executes, such as AH, AX,
BH, BL, etc.
b. Control and Status Registers : These include the registers that help to control
the execution. The values include:
 Program counter : Contains the address of the next instruction to be
accessed.
 Condition codes : Contains the value of flag registers. These values
include the result of the most recent arithmetic or logical operation (e.g.
sign, zero, carry, equal, overflow).
 Status information : Includes information about interrupts enabled/
disabled flags, execution mode.
c. Stack Pointers or Stack Registers : Provide pointers to the stacks that are
used by the OS to control program execution and to keep track of
interrupts.
3. Process Control Information : The operating system needs some additional
information to control and coordinate the various active processes. The most
important information contained is as follows:
a. Scheduling and State Information : This information is needed by the
OS to perform the scheduling function. It may include the process state (running,
waiting, halted), the scheduling algorithm used, and the identity of the event for
which the process is waiting before it can be resumed.

b. Data Structuring : To capture the interdependency and relationships
among processes, all processes must be arranged according to a specific
data structuring technique. For example, a process may be linked to other
processes in a queue, ring or some other structure, and a process may exhibit
a parent-child relationship with another process; the PCB may contain
pointers to other processes to support these structures.
c. Inter Process Communication : What will be the methods for communication?
Various flags, signals, and messages may be associated with communication
between two independent processes. The process control block keeps track of
some or all of this information.
d. Process Privileges : Processes are granted privileges in terms of the
memory that may be accessed and the type of instructions that may be
executed. The PCB also records the privileges given to the process to use
system utilities and services.


e. Memory Management : This section includes pointers to the segment and/or
page tables that describe the virtual memory assigned to this process.
f. Resource Ownership and Utilization : Information about which resources
are being used and which will be required. A history of the use of the
processor or other resources may also be included; this information is needed
by the scheduler.
Figure 2.5: Structure of PCBs in virtual memory

Figure 2.5 suggests the structure of PCBs (or process images) in virtual memory. As
shown in the figure, each process image appears as a contiguous range of addresses.
In an actual implementation the case may differ; it depends on the memory management
scheme and the way in which control structures are organized by the
operating system.
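A minimal sketch of what such a PCB might look like in C is given below, grouping fields under the three categories discussed above (identification, processor state, and process control information). All field names and sizes are illustrative assumptions; a real OS such as Linux uses a far larger structure (task_struct).

#include <stdint.h>

#define MAX_OPEN_FILES 16

typedef enum { NEW, READY, RUNNING, WAITING, EXIT } ProcState;

/* A simplified process control block. */
typedef struct pcb {
    /* 1. Process identification */
    int         pid;
    int         ppid;             /* parent's process ID                        */
    int         uid;              /* owning user                                */

    /* 2. Processor state information */
    uint32_t    regs[16];         /* user-visible registers                     */
    uint32_t    pc;               /* program counter                            */
    uint32_t    flags;            /* condition codes / status word              */
    uint32_t    stack_ptr;        /* stack pointer                              */

    /* 3. Process control information */
    ProcState   state;            /* scheduling and state information           */
    int         priority;
    int         waiting_event;    /* event the process is blocked on, if any    */
    struct pcb *next;             /* data structuring: link in a ready/wait queue */
    struct pcb *parent;           /* parent-child relationship                  */
    void       *page_table;       /* memory management: page/segment tables     */
    int         open_files[MAX_OPEN_FILES];  /* resource ownership              */
    long        cpu_time_used;    /* accounting / utilization history           */
} PCB;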
2.4.2 The Role of the PCB
The PCB contains all the information about a process that is needed by the OS, so
it is the most important control data structure in an OS. We can even say that the
set of PCBs defines the state of the OS.
This brings up an important design issue. A number of routines within the
OS will need access to information in process control blocks. The provision of
direct access to these tables is not difficult: each process has a unique ID which can
be used as an index into a table of pointers to the PCBs. But there are two problems
with this direct access.
 A bug in a single routine, such as an interrupt handler, could damage the
system’s ability to manage the affected processes.
 A change in the design of the structure or semantics of the PCB could affect a
number of modules in the OS.
These problems can be solved by requiring all routines in the OS to go through a
handler routine whose job is to protect the PCB and which controls all reading and
writing of these blocks. The disadvantage of using such a routine is a degradation in
performance.
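A hedged sketch of this handler-routine approach is shown below: the PCB layout is known only to a pair of accessor functions, which validate the process ID before touching the table, so other OS modules never access it directly. The table size, field set and function names are assumptions made purely for illustration.

#include <stddef.h>

#define MAX_PROCESSES 256

/* Only the handler routines below know the PCB layout; the rest of the OS
   identifies a process by its ID and must call these accessors.  (In a real
   kernel the accessors would also serialize concurrent access.) */
typedef struct { int pid; int priority; int state; } PCB;

static PCB pcb_table[MAX_PROCESSES];

/* Mediated read: validate the ID, then copy the field out. */
int pcb_get_priority(int pid, int *out)
{
    if (pid < 0 || pid >= MAX_PROCESSES || out == NULL)
        return -1;                 /* reject bad requests instead of corrupting state */
    *out = pcb_table[pid].priority;
    return 0;
}

/* Mediated write: every update is funnelled through one routine, which is
   where the performance cost mentioned above comes from. */
int pcb_set_priority(int pid, int value)
{
    if (pid < 0 || pid >= MAX_PROCESSES)
        return -1;
    pcb_table[pid].priority = value;
    return 0;
}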
2.5 PROCESS CONTROL
2.5.1 Modes of Execution
It is necessary to protect the OS and key OS tables, such as PCBs, from interference
by user programs. For this reason the instructions executed by the processor are broadly
divided into two modes:
 System Mode : The more privileged mode, also referred to as system mode,
control mode, or kernel mode. This mode covers all the important system
functions such as process management, memory management, I/O
management and support functions that come in the kernel portion
of the OS.
 User Mode : The less privileged mode is the user mode, because user
programs typically execute in this mode.
The mode in which the system is executing can be determined by examining a bit in
the PSW (program status word), which is changed in response to certain events.
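From the user-program side, the switch between the two modes happens at every system call. The tiny C program below (an illustration, not tied to any particular kernel) issues a write() system call; the trap into the kernel changes the processor to the privileged mode, and the return from the call restores user mode.

#include <unistd.h>

int main(void)
{
    const char msg[] = "hello\n";
    /* The write() system call traps into the kernel: the CPU switches from
       user mode to system (kernel) mode, the kernel performs the I/O on the
       process's behalf, and then execution returns to user mode. */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}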
2.5.2 Process Creation
There are four common events that lead to the creation of a process :
1. In a batch environment, a process is created whenever a job is submitted.
2. In an interactive environment, a process is created when a new user attempts
to log on.
3. An OS may also create a process on behalf of an application. For example, if
a user issues a save-file command, the OS will create a process to manage
the saving of the file. This allows the requesting process to proceed independently
of the amount of time that it takes to save the file to disk.
4. A process may be created as a result of the expansion of an existing process. Many
times, to achieve modularity or parallelism, a user program creates a number of
processes.
For cases 1 and 2 the OS is responsible for the creation of the new process.
Once the OS decides to create a new process, the operating system first builds the data
structures that are used to manage the process and allocates the address space to
be used by the process. The OS proceeds as follows :
1. Assign a unique process identifier to the new process, and add a new entry in
the process table, which contains one entry per process.
2. Allocate space for the process.
3. The PCB must be initialized. This includes initialization of the identification
section, the processor state information section and the process control information
section.
4. The appropriate linkages must be set. For example, if the OS manages a scheduling
queue as a linked list, then pointers must be set between PCBs to create a
predecessor and successor relationship.
5. For each process an additional data structure must be created to maintain
some additional information such as accounting information, performance records,
etc.
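The following C sketch walks through these five steps for a hypothetical kernel. The PCB layout, the table, the queue and the function name create_process() are all assumptions made for illustration; they are not the API of any real operating system.

#include <stdlib.h>

#define MAX_PROCESSES 256

typedef enum { NEW, READY, RUNNING, WAITING, EXIT } ProcState;

typedef struct pcb {
    int          pid;             /* identification                          */
    ProcState    state;           /* process control information             */
    void        *address_space;   /* memory allocated to the process         */
    struct pcb  *next;            /* linkage into a scheduling queue         */
    long         cpu_time_used;   /* accounting information                  */
} PCB;

static PCB *process_table[MAX_PROCESSES];  /* one entry per process          */
static PCB *new_queue;                     /* processes awaiting admission   */
static int  next_pid = 1;

PCB *create_process(size_t image_size)
{
    if (next_pid >= MAX_PROCESSES)
        return NULL;

    PCB *p = calloc(1, sizeof *p);           /* step 3: a fresh, zeroed PCB      */
    if (p == NULL)
        return NULL;

    p->pid = next_pid++;                     /* step 1: unique ID + table entry  */
    process_table[p->pid] = p;

    p->address_space = malloc(image_size);   /* step 2: allocate space           */
    if (p->address_space == NULL) {
        process_table[p->pid] = NULL;
        free(p);
        return NULL;
    }

    p->state = NEW;                          /* step 3: initialize the PCB       */

    p->next = new_queue;                     /* step 4: set the linkages         */
    new_queue = p;

    p->cpu_time_used = 0;                    /* step 5: extra (accounting) data  */
    return p;
}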
2.5.3 Termination of a Process
After a process has been created, it starts running and does whatever it is assigned.
But we know, nothing lasts forever. There are many reasons that may cause
a process to terminate. A process will terminate due to one of four major
reasons:
1. Termination due to normal completion of a process.
2. Termination due to error or fault condition.
3. Termination due to Fatal error.
4. Termination by another process.
1. Normal completion : After the execution of its last instruction, the process
voluntarily terminates itself by executing an OS system call. This is the normal way
for a process to terminate.
2. Error or fault condition : Some error or fault conditions that cause the process
to be terminated are as follows :
 Allotted Time violation – The process has not completed in the specified
time interval.
 Memory unavailable – Required amount of memory for the process is not
available.
 Bounds violation – The process tries to access memory location that is not
assigned to this process.
 Protection error – The process attempts to use a resource or a file it is
not allowed to use.
 Arithmetic error – The process tries to perform restricted computation
such as division by zero.
 I/O failure – An error occurs in I/O issued by the process.
 Privileged instruction – The process attempts to use instruction reserved
for the OS.
3. Termination due to Fatal Error : If a process attempts to execute a non-existent
instruction or wants to perform an action that is not possible, such as opening a
non-existent file, it causes a fatal error. As the name suggests (fatal
means subject to death), when this error occurs the process is terminated.
4. Termination by another process : Some possible cases in which a process
can be terminated by another process are:
 Operator or OS intervention – For some reason, the operator or the OS
can terminate the process.
 Parent termination – When a process is terminated, the OS may termi-
nate all the child processes of that process.
 Parent request – A parent process has the authority to terminate any of
its children by executing a system call to terminate the process. In Unix this call
is ‘kill’.
Termination moves the process to the Exit state. At this point, the process may return
data to its parent process (the calling process). All the resources of the process,
including physical and virtual memory, open files, and I/O buffers, are deallocated
by the operating system. The process is no longer eligible for execution. However,
the tables and other information associated with the process are temporarily pre-
served by the OS. This provides time for auxiliary or supporting programs to
extract any needed information.
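The "parent request" case can be seen directly on Unix-like systems. In the hedged example below, a parent fork()s a child, later terminates it with kill(), and then calls waitpid() to collect its exit status; until the parent does so, the kernel keeps the child's table entry around so the information can still be extracted, as described above.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();                 /* create a child process             */
    if (child < 0) { perror("fork"); exit(1); }

    if (child == 0) {                     /* child: run until it is terminated  */
        for (;;)
            pause();                      /* sleep until a signal arrives       */
    }

    sleep(1);                             /* give the child time to start       */
    kill(child, SIGTERM);                 /* parent request: terminate the child */

    int status;
    waitpid(child, &status, 0);           /* collect exit data; until this call
                                             the OS preserves the child's tables */
    if (WIFSIGNALED(status))
        printf("child %d terminated by signal %d\n",
               (int)child, WTERMSIG(status));
    return 0;
}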
2.5.4 Context switching
In modern operating systems the CPU has to frequently switch from one process to
another. When an interrupt occurs, the system decides the next process
to execute. Since the current process is not terminated and will be resumed in the near
future from the position at which it was stopped, the OS has to save the context of that
process in its PCB. This context includes the values of the processor registers, the
process state and memory management information. Hence, switching the CPU from one
process to another requires storing the context of the current process and restoring
the context of the next process so that the CPU can execute the instructions of the next
process. This whole procedure is called context switching.
2.5.5 Process switching
A transition between two memory-resident processes in a multiprogramming system
is called a process switch. In other words, when a running process is interrupted
for some reason and the OS assigns another process to the running state and
turns control over to that process, a process switch occurs. A process switch
may occur due to the following 3 reasons :
1. Interrupt
2. Trap
3. Supervisor call
Interrupt : An interrupt can occur due to some sort of event which is external to and
independent of the currently running process, such as the completion of an I/O
operation. In the case of a simple interrupt, control is first transferred to an interrupt
handler routine, which does some basic housekeeping and then branches to an OS
routine that is concerned with the particular type of interrupt that has occurred.
Some examples are the clock interrupt, I/O interrupt and memory fault.
Trap : This is caused by an error or exception condition generated within the
currently running process, such as an illegal attempt to access a file. With a trap, the
OS determines whether the error is fatal. If so, then the currently running process
is moved to the exit state and a process switch occurs. If not, then the action of
the OS will depend on the nature of the error and the design of the OS. It may
attempt some recovery procedure or it may simply notify the user. It may do a
process switch, or it may resume the currently running process.
Supervisor call : The OS may be activated by a supervisor call from the program
being executed. This call results in a transfer to a routine that is part of the
OS code. Generally, the use of a system call results in placing the user process in
the waiting state.
The steps involved in a full process switch are as follows :
1. Save the context of the processor, including program counter and other regis-
ters.
2. Update the process control block of the process that is currently in the running
state.
3. Move the process control block of this process to the appropriate queue be-
cause the state of the process is changed.
4. Select another process for execution.
5. Update the process control block of the process selected.
6. Update memory management data structures.
7. Restore the context of the processor to that of the selected process so that the
CPU can resume its execution.
Thus the process switch, which involves a state change, requires considerably more
effort than a context switch.
Now it is very clear that process switching and context switching are two totally
different concepts. A context switch may occur without changing the state of the
process that is currently in the running state; this involves less saving overhead.
However, if the currently running process is to be moved to another state, then the
OS must make substantial changes in its environment.
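The seven steps can be summarized in a C-style sketch. Everything here is hypothetical (the PCB fields, the queues and the helper functions are placeholders for hardware and scheduler operations), but the ordering mirrors the list above.

#include <stddef.h>

typedef enum { READY, RUNNING, WAITING, EXIT } ProcState;

typedef struct { unsigned long pc, sp, regs[16]; } Context;  /* saved registers */

typedef struct pcb {
    int         pid;
    ProcState   state;
    Context     context;
    long        cpu_ticks;
    struct pcb *next;
} PCB;

static PCB *ready_queue, *wait_queue;

/* Placeholders standing in for hardware and scheduler operations. */
static void save_context(Context *c)     { (void)c; /* save CPU registers   */ }
static void restore_context(Context *c)  { (void)c; /* reload CPU registers */ }
static void switch_address_space(PCB *p) { (void)p; /* update MMU tables    */ }
static void enqueue(PCB **q, PCB *p)     { p->next = *q; *q = p; }
static PCB *pick_next(void)              { PCB *p = ready_queue;
                                           if (p) ready_queue = p->next;
                                           return p; }

void process_switch(PCB *current, ProcState new_state, long ticks_used)
{
    save_context(&current->context);                      /* 1. save processor context */
    current->state = new_state;                           /* 2. update current's PCB   */
    current->cpu_ticks += ticks_used;
    enqueue(new_state == WAITING ? &wait_queue
                                 : &ready_queue, current); /* 3. move to its queue      */
    PCB *next = pick_next();                               /* 4. select another process */
    if (next == NULL)
        return;
    next->state = RUNNING;                                 /* 5. update its PCB         */
    switch_address_space(next);                            /* 6. memory mgmt structures */
    restore_context(&next->context);                       /* 7. restore and resume     */
}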

2.6 PROCESSES AND THREADS


In the discussion about processes so far, we have found that the process model is based
on two independent concepts :
Unit of resource ownership : A process is allocated an address space to hold a
program text and data, as well as other resources, such as I/O devices, accounting
information etc.
Unit of Dispatching : This unit decides the execution sequence. It has a program
counter that keeps track of which instruction to execute next. It has some registers
that hold its current working variables. It has a stack that is used to contain the
execution history (one entry for each procedure that is called but not yet returned
from).

Fig. 2.6: Single and Multithreaded Processes

In traditional OSs, these two concepts are indeed the essence of a process. However,
in modern OSs these two concepts are treated independently: the unit of
dispatching is usually referred to as a thread, whereas the unit of
resource ownership is usually referred to as a process. So we can say :
``A thread, sometimes called a light weight process (LWP), is a basic unit of CPU
utilization. It comprises a thread ID, a program counter, a register set, and a
stack. It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and signals.''
A heavy weight (or traditional) process (HWP) is a single thread of control. If the
process has multiple threads of control, it can do more than one task at a time. The
difference between a traditional single-threaded process and a multithreaded pro-
cess is shown in Figure 2.6. Many software packages that run on modern desktop
PCs are multithreaded. An application typically is implemented as a separate pro-
cess with several threads of control. In some situations a single application may be
required to perform several similar tasks; for example, a web server accepts client
requests for web pages, images, sound, and so forth. A busy web server may have
several clients concurrently accessing it. If the web server ran as a traditional
single-threaded process, it would be able to service only one client at a time, and the
amount of time that a client has to wait for its request to be serviced could be
enormous. So it is efficient to have one process that contains multiple threads to
serve the same purpose. This approach multithreads the web-server process: the
server creates a separate thread that listens for client requests, and when a request
is made, rather than creating another process it creates another
thread to service the request. In another example of multithreading, a word pro-
cessor may have a thread for displaying graphics, another thread for reading
keystrokes from the user, and a third thread for performing spelling and grammar
checking in the background.
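The web-server scenario can be sketched with POSIX threads. The example below (compile with -pthread) is only a toy: handle_request() stands in for real request processing, but it shows one process creating a thread per request instead of a process per request.

#include <pthread.h>
#include <stdio.h>

/* Each request is handed to its own thread; all threads share the
   server process's address space and open files. */
static void *handle_request(void *arg)
{
    int request_id = *(int *)arg;
    printf("thread serving request %d\n", request_id);
    return NULL;
}

int main(void)
{
    enum { NREQ = 4 };
    pthread_t workers[NREQ];
    int ids[NREQ];

    for (int i = 0; i < NREQ; i++) {
        ids[i] = i;
        /* Creating a thread is much cheaper than fork()ing a new process. */
        pthread_create(&workers[i], NULL, handle_request, &ids[i]);
    }
    for (int i = 0; i < NREQ; i++)
        pthread_join(workers[i], NULL);   /* wait for all request handlers */
    return 0;
}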
Different threads in a process are not quite as independent as different processes.
All threads have exactly the same address space, which means that they also share
the same global variables. Since every thread can access every memory address
within the process address space, one thread can read, write, or even completely
wipe out another thread’s stack. There is no protection between threads because
(1) it is impossible, and (2) it should not be necessary. Unlike different processes,
which may belong to different users and which may be hostile to one another, a process
is always owned by a single user, who has presumably created multiple threads so that
they can cooperate. In addition to sharing an address space, all the threads share
the same set of open files, child processes, alarms, signals, etc., as shown in fig.
2.7.

Per Thread Items          Per Process Items
Program counter           Address space
Stack                     Global variables
Register set              Open files
Child threads             Child processes
State                     Timers
                          Signals and signal handlers
                          Semaphores
                          Accounting information

Figure 2.7: Per Thread and Per Process Concepts

To get advantages like responsiveness, resource sharing, economy and utilization
of multiprocessor architectures, the multithreading concept can be used. The items in the
second column are process properties, not thread properties. Like a traditional
process, a thread can be in any one of several states: Running, Waiting, Ready or Exit.
A thread which currently has the CPU is called active. A blocked (waiting) thread is
waiting for some event to unblock it. Threads can be broadly categorized into user
threads and kernel threads.
User Threads : They are supported above the kernel and are implemented by a
thread library at the user level. Support for thread creation, scheduling, and
management is provided through a library. The kernel does not provide support:
because the kernel is unaware of user-level threads, all thread creation and scheduling
are done in user space without the need for kernel intervention. Thus, user-level
threads are fast to create and manage. However, they have a drawback: for instance,
if the kernel is single-threaded, then any user-level thread performing a waiting
system call will cause the entire process to wait, even if other threads are available
to run within the application.
Kernel Threads : They are supported directly by the operating system. In this case
the kernel performs creation, scheduling, and management of threads in kernel space.
Since thread management is done by the operating system, kernel threads are
generally slower to create and manage than user threads. However, since the kernel
is managing the threads, if a thread performs a waiting system call, the kernel can
schedule another thread in the application for execution. Also, in a multiprocessor
environment, the kernel can schedule threads on different processors.

Check Your Progress
3. What is scheduling and state information under process control information?
4. Write any two common events that lead to the creation of a process.

2.6.1 Advantages of Multithreaded Programming
The advantages of multithreaded programming can be broken down into four major
categories:
1. Responsiveness : Multithreading allows a program to continue running even
if part of it is blocked or is performing a lengthy operation, thereby increasing
responsiveness to the user. For example, a multithreaded web browser could
still allow user interaction in one thread while an image is being loaded in another
thread.
2. Resource sharing : Generally, threads share the memory and the resources of
the process to which they belong. The benefit of code sharing is that it allows
an application to have several different threads of activity all within the same
address space.
3. Economy : Allocating memory and resources for process creation is costly.
Alternatively, because threads share resources of the process to which they
belong, it is more economical to create and context switch threads. In general,
it is much more time consuming to create and manage processes than threads.
4. Utilization of Multiprocessor Architectures : The advantage of multithreading
is increased in a multiprocessor architecture, where each thread may be running
in parallel on a different processor. A single-threaded process can only run on
one CPU, no matter how many are available. Multithreading on a multi-CPU
machine increases concurrency. In a single processor architecture, the CPU
generally moves between each thread so quickly as to create an illusion of
parallelism, but in reality only one thread is running at a time.
2.7 CASE STUDY : THREADS IN LINUX
As we know, threads are “light weight processes” (LWPs). A process has five
fundamental parts: code (“text”), data (VM), stack, file I/O, and signal tables. That
is why switching a process incurs a significant amount of overhead: switching a
process requires all these tables to be flushed from the processor. Also, the only
way to achieve shared information between heavy weight processes (HWPs) is through
pipes and “shared memory”. If a HWP spawns a child HWP using fork(), the only
part that is shared is the text.
Threads reduce overhead by sharing these fundamental parts. By sharing them,
switching happens much more frequently and efficiently. Also, sharing information
is not very “difficult”, and even more can be shared.
Threads in Linux are handled in a different manner than in other operating systems
because of Linux's open source nature. A user or system administrator can configure
any part of Linux according to their needs. There are two types of threads: first we
look at user-level threads, then at kernel-level threads in Linux. Also discussed are
the general topics of whether or not Linux is multithreaded and whether or
not Linux programmers consider Linux threads to be better than other operating
systems’ threads.
Linux, which was developed by Linus Torvalds, is an open-source version
of Unix. Linux is, simply, the kernel of an operating system. It is like a
barebones packaging of what is required; extra services most often come in
libraries. Threads are considered to be a part of the kernel, but from time to time
Linux users create their own libraries, hence there are various libraries in existence.
Since version 1.3.56, Linux has supported kernel-level threading; user-level thread
libraries have been around since as early as version 1.0.9.
User-Level Threads
User-level threading avoids the kernel and manages the tables itself. Often this is
called “cooperative multitasking”, where the task defines a set of routines that get
“switched to” by manipulating the stack pointer. Typically each thread “gives up” the CPU by
calling an explicit switch, sending a signal or doing an operation that involves the
switcher. Also, a timer signal can force switches. User threads can typically switch
faster than kernel threads.
Disadvantages :
User-space threads have the problem that a single thread can monopolize the timeslice,
thus starving the other threads within the task. Also, they have no way of taking
advantage of SMPs (Symmetric MultiProcessor systems, e.g. dual-/quad-Pentiums). Lastly,
when a thread becomes I/O blocked, all other threads within the task lose the
timeslice as well.
Some user-thread libraries have provided solutions to these problems with work-
arounds. Timeslice monopolization was addressed by controlling it with an external
monitor that uses its own clock tick. I/O blocking can be solved by creating special
wrappers over system calls or the task can be written for nonblocking I/O.
Kernel-Level Threads
Kernel-level threads are implemented in the kernel using several tables (each task
getting a table of threads). The kernel schedules each thread within the timeslice of
each process. There is more overhead with switching in kernel-level threads but
Linux’s kernel-level threads perform nearly as well as user-level.
Linux can operate by using either entirely user-level or entirely kernel-level threads
or a combination of both.
Advantages:
1. It is less likely for a thread to monopolize a timeslice.
2. I/O blocking is not a problem. If properly coded, the process can automatically
take advantage of SMPs as well and will run incrementally faster with each
CPU added.
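On Linux, kernel-level threads are ultimately created with the clone() system call, which lets the caller choose exactly which parts of the process are shared. The sketch below is a simplified illustration (thread libraries such as LinuxThreads add far more on top of this); here the child task shares the address space, filesystem information, open files and signal handlers with its parent, which is what makes it a thread rather than a full process.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int shared_counter = 0;            /* visible to the cloned task too */

static int worker(void *arg)
{
    (void)arg;
    shared_counter++;                     /* works because CLONE_VM shares memory */
    printf("cloned task: pid=%d, counter=%d\n", (int)getpid(), shared_counter);
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); exit(1); }

    /* Share address space, filesystem info, open files and signal handlers. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

    pid_t tid = clone(worker, stack + STACK_SIZE, flags, NULL);
    if (tid == -1) { perror("clone"); exit(1); }

    waitpid(tid, NULL, 0);                /* wait for the cloned task to finish */
    printf("parent: counter=%d\n", shared_counter);
    free(stack);
    return 0;
}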
The biggest objection raised against multithreading in Linux is that multithreaded programs
are more likely to crash. Linux prides itself on being one of the most stable, configurable,
and free operating systems on the market. With multithreading, there are several
objects that are hidden from applications and may be shared inadvertently. Addi-
tionally, there are conflicts between some libraries, which is an inherent problem with
open-source solutions (each coder has a vision of what is good and that differs from
coder to coder). For example, SVGAlib & LinuxThreads both use SIGUSR1 and
SIGUSR2, which are application-reserved signals. If an application were to use
these libraries together, the application would, at best, have problems. The more
likely result would be that it would crash. Debugging such conflicts is a nightmare.
There is an attempt within the open source community to get the libraries in sync
with each other.
A user of Linux may ask whether Linux threads are better. This is, simply, a
matter of opinion. Many Linux die-hards believe that the Linux thread libraries are
better than any other implementation out there while mostly keeping the same API.
Then again, these are the same people who believe Bill Gates is the anti-Christ, that
Windows has set the industry back ten years rather than moving forward, and that
all programs should have configurable kernels.
Their argument rests on the belief that Linux is an inherently more stable OS than
many. Linus Torvalds defined a thread as “a context of execution” which means that
only one process/thread table and one scheduler is needed. Obviously, this lends to
stability because there is less risk of conflict.
ANSWER TO CHECK YOUR PROGRESS
1) Many definitions have been given for the term process, three of these are as
follows:
 Process is a program in execution.
 Process is the animated spirit of a program.
 Process is that entity that can be assigned to and executed on a processor.
2) Process identification : This is actually a numeric identifier or a mapping
based on process identifiers. It may be stored with the PCB and includes :
 Process ID
 ID of the parent process
 User ID
3) Scheduling and State Information : This information is needed by the OS
to perform the scheduling function. It may include the process state (running, waiting,
halted), the scheduling algorithm used, and the identity of the event for which the
process is waiting before it can be resumed.
4) Two common events that lead to the creation of a process are as follows :
1. In a batch environment, a process is created whenever a job is submitted.
2. In an interactive environment, a process is created when a new user at-
tempts to log on.
TEST YOURSELF
1. What is the process concept? Explain threads.
2. Define a process. Explain the states of a process with the help of a suitable diagram.
3. What are the various process states ? Explain process control block (PCB).
4. Explain Process status with the help of transition diagram.
5. What is the role of PCB in Operating System ?
6. Discuss the contents of PCB.
7. Explain Process concept and PCB.
8. Give the situation for process creation and deletion.
9. Give the steps of process creation and deletion.
10. What is thread? Compare process and thread.
11. What information is kept in the PCB?
12. What do you understand by context switching and process switching? Give
the difference between them.
