Threads & Semaphore

© Attribution Non-Commercial (BY-NC)

Threads

Thread Definition:
Multiple flows of control within a process are called threads. A thread of execution is the smallest unit of processing that can be scheduled by an operating system. A thread is also called a lightweight process. Each thread has a thread ID, a program counter, a register set, and a stack. Within the same process it shares with the other threads:
1. The code section
2. The data section
3. Other OS resources (such as open files)
A traditional (heavyweight) process has a single thread of control, whereas a multithreaded process has several.

[Figure: single-threaded vs. multithreaded process — code, data, and files are shared by all threads; each thread has its own registers and stack.]

Benefits:
The benefits of multithreading fall into four major categories:

1. Responsiveness
An interactive application can continue running even if part of it is blocked.

2. Resource sharing
Threads share the memory and resources of the process to which they belong. The benefit is that all threads run within the same address space.

3. Economy
Since threads share resources, it is more economical to create and context-switch threads than to create processes.

4. Utilization of multiprocessor architectures
Each thread may run in parallel on a different processor, which increases concurrency.

User and kernel threads

Threads are provided at two levels:

1. User threads
2. Kernel threads

Prepared by Anne

User threads
User threads are implemented by a thread library at user level. The library supports thread creation, scheduling, and management without help from the OS kernel, so user threads are fast to create and manage. The drawback: if the kernel is single-threaded, a blocking system call made by any one user thread causes the entire process to block, even though other threads are ready to run.

Kernel Threads
Kernel threads are implemented directly by the operating system: the kernel creates, schedules, and manages threads in kernel space. They are slower to create and manage than user threads, but if one thread blocks, the kernel can schedule another thread of the same process.
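As a small sketch of the kernel-thread behavior described above (in CPython, `threading.Thread` objects are backed by kernel-level threads), a blocking call in one thread does not block its siblings:

```python
import threading
import time

# CPython's threading module creates kernel-level threads, so a blocking
# call (here simulated with sleep) in one thread does not stop the others.
results = []

def blocking_worker():
    time.sleep(0.2)          # simulates a blocking system call
    results.append("blocked thread done")

def quick_worker():
    results.append("other thread ran while the first was blocked")

t1 = threading.Thread(target=blocking_worker)
t2 = threading.Thread(target=quick_worker)
t1.start()
t2.start()
t1.join()
t2.join()
print(results)
```

With a purely user-level (green-thread) library and a single kernel thread, the sleep-like blocking call could instead have stalled the whole process.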

Threading Issues

1. The fork and exec system calls
The fork system call creates a duplicate process. There are two versions of fork:
o Duplicate all the threads of the process.
o Duplicate only the thread that invoked fork.
The exec system call is typically used after fork to execute a new program.

2. Cancellation
Thread cancellation is the task of terminating a thread before it has completed. Example: if multiple threads are concurrently searching a database and one thread returns the result, the remaining threads may be cancelled before completing their tasks. The thread to be cancelled is called the target thread. Cancellation may occur in two ways:
o Asynchronous cancellation: the target thread terminates immediately.
o Deferred cancellation: the target thread periodically checks whether it should terminate, giving it an opportunity to clean up first.

3. Signal Handling
A signal notifies a process that a particular event has occurred. All signals follow the same pattern:
o A signal is generated by the occurrence of a particular event.
o The generated signal is delivered to a process.
o Once delivered, the signal must be handled.
Signals may be received synchronously or asynchronously:
o Synchronous signals are generated and delivered within the same process.
o Asynchronous signals are generated by an external event and delivered to the running process.
In a multithreaded process, a signal may be delivered using one of the following options:
o Deliver the signal to the thread to which the signal applies.
o Deliver the signal to every thread in the process.
o Deliver the signal to certain threads in the process.
o Assign a specific thread to receive all signals for the process.
A signal may be handled by one of two possible handlers:
o Default signal handler — run by the kernel to handle the signal.
o User-defined signal handler — a user function is called to handle the signal.
Windows 2000 does not support signals, so Asynchronous Procedure Calls (APCs) are used instead: a user thread specifies a function to be called when the thread receives notification of a particular event.

4. Thread Pool
Whenever a server receives a request, it creates a separate thread to service it. Two issues arise with this approach:
o The amount of time it takes to create a thread for every request.
o There is no limit on the number of threads running concurrently.
To avoid these issues, a thread pool is used. A number of threads are created at process start-up and placed in the pool, where they wait for work. When a request arrives, a thread is awakened from the pool to serve it; after completing the request, the thread returns to the pool.

Benefits of using a thread pool:
o Faster service, because an existing thread is used instead of waiting for a new one to be created.
o It limits the number of threads that exist at any one point. The pool size is typically set based on the number of CPUs, the amount of physical memory, etc.

5. Thread specific data
Each thread belongs to a process and shares the data of that process. In some circumstances, however, each thread may need its own copy of certain data; this is called thread-specific data.
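The thread-pool pattern above can be sketched with Python's standard library, which provides a ready-made pool (`concurrent.futures.ThreadPoolExecutor`); the request names here are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

# A fixed pool of worker threads services many requests: threads are
# created once and reused, rather than one new thread per request.
def handle_request(request_id):
    return f"served request {request_id}"

# Pool size is chosen up front (e.g. based on CPU count / memory).
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(10)))

print(results[0])
```

Ten requests are served by at most four threads; idle workers simply wait in the pool for the next request.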

CPU Scheduling

Concept:
Scheduling is a fundamental operating-system function: whenever the CPU becomes idle, a process must be selected to run on it.

CPU–I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states. Execution begins with a CPU burst, followed by an I/O wait, then another CPU burst, and so on; the final CPU burst ends with a system request to terminate execution.

CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue. The selection is carried out by the short-term scheduler, which may use any of the scheduling algorithms described below.

Preemptive Scheduling
Scheduling decisions may take place under four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O).
4. When a process terminates.

Non-Preemptive — once the CPU has been allocated to a process, the process keeps it until it releases the CPU, either by terminating or by switching to the waiting state.
Preemptive — a process's execution may be interrupted prior to completion and resumed later.

Dispatcher
Another component involved in CPU scheduling is the dispatcher, which gives control of the CPU to the process selected by the short-term scheduler. Its functions are:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the user program to restart that program.
The dispatcher should be fast, since it is invoked during every process switch.
Dispatch Latency
o The time taken for the dispatcher to stop one process and start another is called the dispatch latency.

Scheduling Criteria
Many algorithms exist for selecting a process, and each has different properties. The following criteria are used when comparing and choosing among them:

CPU Utilization
o The CPU should be kept as busy as possible.
o Utilization may range from 0 to 100%. In a real system it should range from about 40% (a lightly loaded system) to 90% (a heavily loaded system).

Throughput
o If the CPU is busy executing processes, work is being done.
o One measure of work is the number of processes completed per time unit. For long processes the rate may be one process per hour; for short transactions it may be ten processes per second.

Turnaround Time
o The interval from the time of submission of a process to the time of its completion.

Waiting Time
o The sum of the periods the process spends waiting in the ready queue.

Response Time
o The time from the submission of a request until the first response is produced.

Scheduling Algorithms
These algorithms determine which process in the ready queue is selected for the CPU.

First Come First Serve (FCFS)

The process that requests the CPU first is allocated the CPU first.

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt chart:
| P1 | P2 | P3 |
0    24   27   30

Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms.
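The FCFS figures above can be checked with a short sketch: under FCFS each process waits exactly the total burst time of all processes ahead of it.

```python
# FCFS waiting times for the example above: processes run in arrival
# order, so each waits for the combined bursts of those before it.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # arrival order P1, P2, P3

waiting = {}
elapsed = 0
for name, burst in bursts.items():
    waiting[name] = elapsed    # time spent waiting before first run
    elapsed += burst

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)        # {'P1': 0, 'P2': 24, 'P3': 27}
print(avg_wait)       # 17.0
```

Note how sensitive the average is to arrival order: had P1 arrived last, the average waiting time would have been (0 + 3 + 6) / 3 = 3 ms, the convoy effect in miniature.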

Shortest Job First Scheduling (SJF)

This algorithm is based on the length of each process's CPU burst: the CPU is allocated to the process with the smallest burst time. If two processes have the same burst time, FCFS is used to break the tie.

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt chart:
| P2 | P3 | P1 |
0    3    6    30

Average waiting time = (0 + 3 + 6) / 3 = 9 / 3 = 3 ms.

The difficulty in SJF is knowing the length of each process's next CPU burst.
o If all processes arrive at time 0, the selection is easy; if not, SJF becomes more difficult, so there are two variants of SJF: preemptive and non-preemptive.

Preemptive SJF algorithm

When a new process arrives at the ready queue while a previous process is executing, the scheduler compares the new process's burst time with the remaining time of the running process. If the new process is shorter, the running process is preempted (stopped) and the new process executes; otherwise the running process continues. This is also called shortest-remaining-time-first scheduling.

Non-Preemptive SJF algorithm

When a new process arrives at the ready queue while a previous process is executing, the running process is allowed to finish first; only then is the shortest of the arrived processes selected.

Process   Arrival Time   Burst Time (ms)
P1        0              7
P2        1              4
P3        2              9
P4        3              5

Non-Preemptive:
Gantt chart:
| P1 | P2 | P4 | P3 |
0    7    11   16   25

Average waiting time = ((0 − 0) + (7 − 1) + (11 − 3) + (16 − 2)) / 4 = (0 + 6 + 8 + 14) / 4 = 28 / 4 = 7 ms.

Preemptive:
Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   16   25

Average waiting time = ((10 − 1) + (1 − 1) + (16 − 2) + (5 − 3)) / 4 = (9 + 0 + 14 + 2) / 4 = 25 / 4 = 6.25 ms.
(P1 runs from 0 to 1, is preempted when the shorter P2 arrives, and resumes at 10, so it waits 10 − 1 = 9 ms.)
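The preemptive (shortest-remaining-time-first) schedule can be checked with a tick-by-tick simulation; this is a sketch for verifying the arithmetic, not OS code:

```python
# Tick-by-tick simulation of preemptive SJF (shortest-remaining-time-first)
# for the four-process example above.
procs = {"P1": (0, 7), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # (arrival, burst)

remaining = {p: burst for p, (arr, burst) in procs.items()}
finish = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    current = min(ready, key=lambda p: remaining[p])  # shortest remaining time
    remaining[current] -= 1
    t += 1
    if remaining[current] == 0:
        del remaining[current]
        finish[current] = t

# waiting time = finish - arrival - burst
waiting = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
avg_wait = sum(waiting.values()) / 4
print(waiting)    # {'P1': 9, 'P2': 0, 'P3': 14, 'P4': 2}
print(avg_wait)   # 6.25
```

The preemptive average (6.25 ms) beats the non-preemptive one (7 ms) on the same workload, which is the point of the comparison.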

Priority Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Some systems use low numbers to represent high priority and others use high numbers; here, a lower number means a higher priority.

Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2

Gantt chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 ms.

Priority may be assigned internally or externally:
o Internal priority — computed from measurable quantities such as time limits, memory requirements, etc.
o External priority — set outside the OS, based on the importance of the process, the amount paid for the process, etc.
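The priority example can be verified with a short sketch: with all processes arriving at time 0, non-preemptive priority scheduling simply runs them in priority order.

```python
# Non-preemptive priority scheduling for the example above
# (lower number = higher priority; all processes arrive at time 0).
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

waiting, elapsed = {}, 0
for name, burst, _prio in sorted(procs, key=lambda p: p[2]):
    waiting[name] = elapsed     # waits for all higher-priority processes
    elapsed += burst

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)    # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(avg_wait)   # 8.2
```

P4, with the worst priority, waits 18 ms for a 1 ms burst — a small illustration of why starvation is the characteristic risk of this algorithm.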

Priority scheduling may be preemptive or non-preemptive:
o Preemptive — the running process is preempted if the priority of a newly arrived process is higher.
o Non-Preemptive — a newly arrived higher-priority process is simply put at the head of the ready queue while the current process keeps running.
The problem with priority scheduling is indefinite blocking (starvation): a low-priority process may wait for the CPU for a very long time without executing. To avoid this, the aging technique is used: the priority of a process is gradually increased the longer it waits in the system.

Round Robin Scheduling

Round robin was designed for time-sharing systems. A small unit of time, called a time quantum (time slice), is defined, and the ready queue is treated as a circular queue: the scheduler goes around the ready queue, allocating the CPU to each process for up to one time quantum.

Process   Burst Time (ms)
P1        24
P2        3
P3        3

With a time quantum of 4 ms:

Gantt chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Waiting time = turnaround time − burst time
Average waiting time = ((30 − 24) + (7 − 3) + (10 − 3)) / 3 = (6 + 4 + 7) / 3 = 17 / 3 ≈ 5.66 ms.
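The round-robin schedule above can be reproduced with a small queue-based simulation (a sketch for checking the numbers):

```python
from collections import deque

# Round-robin simulation of the example above with a 4 ms time quantum.
quantum = 4
bursts = {"P1": 24, "P2": 3, "P3": 3}

remaining = dict(bursts)
queue = deque(bursts)            # ready queue, arrival order P1, P2, P3
t, finish = 0, {}
while queue:
    p = queue.popleft()
    run = min(quantum, remaining[p])
    t += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = t
    else:
        queue.append(p)          # unfinished: back to the tail of the queue

waiting = {p: finish[p] - bursts[p] for p in bursts}   # turnaround - burst
avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)    # {'P1': 6, 'P2': 4, 'P3': 7}
print(avg_wait)   # 5.666...
```

Changing `quantum` shows the trade-off: a very large quantum degenerates to FCFS, while a very small one multiplies context switches.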

Multilevel Queue Scheduling

Processes are classified into different groups, commonly divided into:
o Foreground (interactive) processes
o Background (batch) processes
The ready queue is partitioned into several separate queues, such as:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each process is permanently assigned to one queue based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. The foreground queues have higher priority than the background queues; a lower-priority queue executes only when all higher-priority queues are empty.

If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.

Highest priority
  SYSTEM PROCESSES
  INTERACTIVE PROCESSES          (foreground)
  INTERACTIVE EDITING PROCESSES
  BATCH PROCESSES                (background)
  STUDENT PROCESSES
Lowest priority

Multilevel Feedback Queue Scheduling

This scheme allows a process to move between queues. Consider a ready queue divided into three queues, each with its own time quantum:

Level 0 — quantum = 8 ms
Level 1 — quantum = 16 ms
Level 2 — FCFS

A process entering a queue must complete within that queue's time quantum; if it does not, the unfinished process is moved to the tail of the next lower-priority queue. Level 1 executes only when level 0 is empty, which is why unfinished processes in a higher-priority queue are moved down to lower queues. In general, a process that uses too much CPU time is moved to a lower-priority queue, and a process that waits too long for the CPU may be moved to a higher-priority queue.

A multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a process to a higher-priority queue.
4. The method used to determine when to demote a process to a lower-priority queue.
5. The method used to determine which queue a process will enter when that process needs service.

Multi-Processor Scheduling

If multiple CPUs are available, the scheduling problem becomes more complex. Two cases arise:

Homogeneous
o All processors are identical in functionality, so any available processor can run any process in the queue.
o With identical processors, load sharing can occur: a common ready queue is used from which any available processor takes work. Two scheduling approaches are used:
  - Self-scheduling — each processor examines the common ready queue and selects a process to execute.
  - Master–slave structure — one processor acts as the scheduler for the other processors (asymmetric multiprocessing).

Heterogeneous
o The processors differ in functionality; each processor has its own separate queue.

Real-Time Scheduling
Real-time computing is divided into two types:

Hard real-time systems — a critical task must be completed within a guaranteed amount of time. A process is submitted together with a statement of the time in which it needs to complete; the scheduler admits the process only if it can guarantee completion within that time, otherwise the process is rejected. This is called resource reservation.

Soft real-time systems — computing is less restrictive, but implementing soft real-time functionality requires careful design:
o The system must have priority scheduling, and real-time processes must get the highest priority.
o Dispatch latency must be small; the smaller the latency, the faster a real-time process can start executing. To keep dispatch latency low, system calls must be made preemptible. The ways to achieve this are:
  - Preemption points — in long-duration system calls, the kernel checks at certain points whether a higher-priority process needs to run; if so, a context switch takes place, and the interrupted call resumes after the higher-priority process completes. Preemption points can be placed only at safe locations in the kernel.
  - Kernel preemption — the entire kernel is made preemptible.
Sometimes a higher-priority process must wait for a lower-priority process to finish with a resource; this is called priority inversion.

The Critical-Section Problem
Each process has a segment of code, called its critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section: the execution of critical sections by the processes is mutually exclusive. Each process must request permission to enter its critical section; the code implementing this request is called the entry section. The critical section is followed by an exit section, and the remaining code is the remainder section.

General structure:
do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

A solution to the critical-section problem must satisfy the following three requirements:
Mutual Exclusion:
o While process P1 is executing in its critical section, P2 is not allowed to enter its critical section.
Progress:
o If no process is executing in its critical section and some processes wish to enter their critical sections, then only processes not executing in their remainder sections may participate in deciding which enters next, and this selection cannot be postponed indefinitely.
Bounded Waiting:
o There is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Two-Process Solutions
In this section, solutions for two processes are considered. The processes are numbered P0 and P1; Pi and Pj (with j = 1 − i) denote the two processes.

Algorithm 1:
The two processes share a common integer variable turn. If turn == i, process Pi is allowed to execute in its critical section.

do {
    while (turn != i);
        critical section
    turn = j;
        remainder section
} while (1);

Mutual exclusion: satisfied.
Progress: not satisfied, because strict alternation is required — if turn == j and Pj does not wish to enter its critical section, Pi cannot enter even though no process is inside a critical section.
Bounded waiting: satisfied.

Algorithm 2:
Algorithm 1 does not retain enough information about the state of each process; it remembers only whose turn it is. So a flag is introduced to record whether each process is ready to enter its critical section. The variable turn of Algorithm 1 is replaced with

boolean flag[2];

initialized to false. If flag[i] == true, Pi is ready to enter its critical section.

do {
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Mutual exclusion: satisfied.
Progress: not satisfied — if both processes set their flags to true at the same time, each loops forever waiting for the other, and neither ever enters its critical section.
Bounded waiting: satisfied.

Algorithm 3 (Peterson's algorithm):
Algorithms 1 and 2 do not satisfy progress. In this algorithm the processes share two variables:

boolean flag[2];   /* initially flag[0] = flag[1] = false */
int turn;

flag[i] indicates that Pi is ready to enter its critical section, and turn indicates whose turn it is when both are ready.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Mutual exclusion: satisfied.
Progress: satisfied — a process waits only while the other process is both ready and holds the turn; if the other process is not interested, entry is immediate.
Bounded waiting: satisfied — a process waits through at most one entry by the other process.
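Algorithm 3 can be sketched with two Python threads protecting an unsynchronized counter. This is a demonstration that relies on CPython's interpreter behavior (each bytecode executes atomically under the GIL); it is not how real Python code should synchronize.

```python
import threading
import sys

# Peterson's algorithm (Algorithm 3) guarding `counter += 1`.
sys.setswitchinterval(5e-4)   # switch threads often so busy-waits stay short

flag = [False, False]
turn = 0
counter = 0
N = 500

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section
        turn = j
        while flag[j] and turn == j:
            pass                       # busy wait
        counter += 1                   # critical section
        flag[i] = False                # exit section

threads = [threading.Thread(target=worker, args=(k,)) for k in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 1000 — no updates are lost
```

On hardware with weaker memory ordering (and in languages without a GIL), Peterson's algorithm additionally requires memory barriers, which is one reason practical systems use hardware primitives or OS locks instead.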

Multiple-Process Solution (Bakery Algorithm)
The bakery algorithm solves the critical-section problem for n processes. On entering, each process receives a number; the process holding the lowest number is served first. The algorithm cannot guarantee that two processes do not receive the same number; in that case, the process with the lower process id is served first. The common data structures are:

boolean choosing[n];
int number[n];

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);

Synchronization Hardware
Simple hardware instructions can be used to solve the critical-section problem. In a uniprocessor environment, interrupts can be disabled while a shared variable is being modified, so the instruction sequence executes in order without preemption. In a multiprocessor environment, disabling interrupts is time-consuming, because the message must be passed to all processors; this message passing delays entry into each critical section and decreases system efficiency. Instead, many machines provide special atomic hardware instructions:
1. TestAndSet
2. Swap

TestAndSet
The definition of TestAndSet:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

If two TestAndSet instructions are executed simultaneously, they are executed sequentially in some arbitrary order. If a machine supports this instruction, mutual exclusion can be implemented by declaring a boolean variable lock, initialized to false:

do {
    while (TestAndSet(lock));
        critical section
    lock = false;
        remainder section
} while (1);

Swap Instruction
The definition of Swap:

void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

If a machine supports this instruction, mutual exclusion can be implemented with a global boolean variable lock, initialized to false, plus a local boolean variable key in each process:

do {
    key = true;
    while (key == true)
        Swap(lock, key);
        critical section
    lock = false;
        remainder section
} while (1);

Semaphores
A semaphore is a data structure that is shared by several processes. Semaphores are most often used to synchronize operations (to avoid race conditions) when multiple processes access a common, non-shareable resource. A semaphore is a non-negative integer stored in the kernel; access to it is provided by a series of semaphore system calls.
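Python has no user-level TestAndSet, but `threading.Lock.acquire(blocking=False)` atomically tests and sets a lock, returning whether it was free — the same flavor of primitive. A spinlock sketch built on it (illustrative only; real code would just use the Lock directly):

```python
import threading
import sys

sys.setswitchinterval(5e-4)   # switch threads often so the spins stay short

_lock = threading.Lock()

def test_and_set():
    # Returns the OLD value: False if the lock was free (and is now held),
    # True if it was already held — mirroring TestAndSet above.
    return not _lock.acquire(blocking=False)

counter = 0

def worker():
    global counter
    for _ in range(500):
        while test_and_set():   # spin while the lock was already set
            pass
        counter += 1            # critical section
        _lock.release()         # lock = false
        # remainder section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 1000
```

As the text notes for semaphores below, this kind of busy waiting wastes CPU cycles; it is only acceptable when critical sections are very short.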

A semaphore can be accessed only through two standard atomic operations: wait and signal.

The classical definition of wait:
wait(S) {
    while (S <= 0);   // no-op
    S--;
}

The classical definition of signal:
signal(S) {
    S++;
}

Usage
Semaphores can deal with the n-process critical-section problem: the n processes share a semaphore mutex (mutual exclusion), initialized to 1.

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

Semaphores can also enforce ordering. Consider two concurrently running processes: P1 with statement S1 and P2 with statement S2, where S2 must execute only after S1 has completed. The processes share a common semaphore synch, initialized to 0:

P1:                 P2:
S1;                 wait(synch);
signal(synch);      S2;

Since synch = 0, P2 executes S2 only after P1 executes signal(synch), which happens after S1.

Implementation
While a process is in its critical section, any other process trying to enter its critical section must loop continuously in the entry section. A semaphore implemented this way is called a spinlock (the process spins while waiting), and the looping is busy waiting, which wastes CPU cycles. To avoid busy waiting, the definitions of wait and signal are modified: when a process executes wait and finds that the semaphore value is not positive, it blocks itself instead of busy waiting. The block operation places the process into a waiting queue associated with the semaphore. A process blocked on a semaphore S is restarted when some other process executes a signal operation; the process is restarted by a wakeup operation. The semaphore with a waiting queue is defined as:

typedef struct {
    int value;
    struct process *L;
} semaphore;

When a process must wait, it is added to the list L; a signal operation removes one process from the list L.

The wait semaphore operation:
void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

The signal semaphore operation:
void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}

The block operation suspends the process that invokes it, and the wakeup(P) operation resumes the execution of the blocked process P.
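A counting semaphore in this spirit can be sketched in Python on top of a condition variable, which supplies the block/wakeup machinery. Note one deliberate difference: the value here never goes negative — the condition variable's internal queue plays the role of the list L and of the negative count.

```python
import threading

# A counting semaphore mirroring the wait/signal definition above.
class Semaphore:
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.value == 0:
                self.cond.wait()      # block(): join the waiting queue
            self.value -= 1

    def signal(self):
        with self.cond:
            self.value += 1
            self.cond.notify()        # wakeup(): resume one waiting thread

# The synch example: S2 must run only after S1.
results = []
synch = Semaphore(0)

def p2():
    synch.wait()
    results.append("S2")

def p1():
    results.append("S1")
    synch.signal()

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(results)   # ['S1', 'S2']
```

Even if p2 starts first, it blocks in wait() until p1 signals, so the ordering S1-before-S2 is guaranteed.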

Deadlock and Starvation
Deadlock occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. Consider a system of two processes, P0 and P1, each accessing two semaphores S and Q, initialized to 1, and executing concurrently:

P0:            P1:
wait(S);       wait(Q);
wait(Q);       wait(S);
...            ...
signal(S);     signal(Q);
signal(Q);     signal(S);

Suppose P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q) it must wait until P1 executes signal(Q), and when P1 executes wait(S) it must wait until P0 executes signal(S). Since neither signal can ever execute, both processes wait forever: a deadlock. A related problem is starvation (indefinite blocking), in which a process waits indefinitely within a semaphore's queue — for example, if processes are removed from the queue in LIFO order.

Binary Semaphores

There are two types of semaphores:
o Binary semaphore — the value ranges only between 0 and 1.
o Counting semaphore — the value can range over any integer.

A counting semaphore S can be implemented using binary semaphores. The shared data are:

binary-semaphore S1, S2;
int C;

initialized to S1 = 1, S2 = 0, and C = the initial value of the counting semaphore S.

Wait operation:
wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

Signal operation:
wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);

Classic Problems of Synchronization
Several classic synchronization problems arise; some of them are:
o Bounded-buffer problem
o Readers–writers problem
o Dining-philosophers problem

Bounded-Buffer Problem
This problem is commonly used to illustrate the power of synchronization primitives. Assume a pool of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for access to the buffer pool and is initialized to 1. The empty and full semaphores count the number of empty and full buffers, respectively: empty = n and full = 0.

The structure of the producer process:
do {
    ...
    produce an item in nextp
    ...
    wait(empty);     // wait for an empty slot
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);    // announce a newly filled slot
} while (1);

The producer keeps adding items to the buffer until the buffer is full (empty reaches 0), at which point it blocks on wait(empty).

The structure of the consumer process:
do {
    wait(full);      // wait for a full slot
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);   // announce a newly freed slot
    ...
    consume the item in nextc
    ...
} while (1);

The consumer waits until the buffer has a full slot, removes one item under mutual exclusion, and signals empty so the producer can reuse the slot; this repeats until the buffer becomes empty.
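The producer–consumer structure above translates directly to Python's `threading.Semaphore`; this sketch uses a single producer and consumer and a capacity of 4:

```python
import threading
from collections import deque

# Bounded buffer with the three semaphores above: mutex for mutual
# exclusion, empty counts free slots, full counts filled slots.
n = 4
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(n)    # n empty slots initially
full = threading.Semaphore(0)     # no full slots initially
consumed = []

def producer():
    for item in range(10):        # items 0..9 play the role of nextp
        empty.acquire()           # wait(empty)
        mutex.acquire()           # wait(mutex)
        buffer.append(item)       # add nextp to buffer
        mutex.release()           # signal(mutex)
        full.release()            # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()            # wait(full)
        mutex.acquire()           # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()           # signal(mutex)
        empty.release()           # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)   # [0, 1, 2, ..., 9]
```

The producer can never overfill the 4-slot buffer and the consumer can never remove from an empty one, without either side ever polling.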

The Readers–Writers Problem
A data object is to be shared among several concurrent processes. Some processes only want to read the object; they are commonly called readers. Others want to update it; they are commonly called writers. If a writer and some other process access the shared object simultaneously, problems occur. This synchronization problem is called the readers–writers problem. It has several variations:
1. First readers–writers problem: no reader is kept waiting unless a writer has already obtained permission to use the shared object (no reader should wait merely because a writer is waiting).
2. Second readers–writers problem: once a writer is ready, that writer performs its write as soon as possible (if a writer is waiting, no new reader may start reading).
A solution to either variation may result in starvation: in the first, writers may starve; in the second, readers may starve.

Solution for the (first) Readers–Writers problem
The reader processes share the following data structures:

semaphore mutex, wrt;
int readcount;

The semaphores are initialized to mutex = 1 and wrt = 1, and readcount = 0. The wrt semaphore is common to both readers and writers; mutex protects readcount.

The structure of a writer process:
wait(wrt);
...
writing is performed
...
signal(wrt);

The structure of a reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);       // first reader locks out writers
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);     // last reader lets writers back in
signal(mutex);
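The same mutex/wrt/readcount structure can be sketched with Python semaphores (the log entries are illustrative names):

```python
import threading

# First readers-writers solution with the structure above.
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # shared by readers (as a group) and writers
readcount = 0
log = []

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(f"reader {i} reading")
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    log.append("writer writing")
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()
print(len(log))   # 4
```

Multiple readers may hold the object concurrently (only the first and last touch wrt), while the writer's entry through wrt guarantees it runs alone.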

The Dining-Philosophers Problem
Five philosophers sit around a circular table, spending their lives alternately eating and thinking. Each philosopher has a bowl of food, and there are five chopsticks, one between each pair of philosophers. When a philosopher thinks, she does not interact with the others. When a philosopher gets hungry, she tries to pick up the two chopsticks nearest her (two chopsticks are needed to eat). A philosopher cannot pick up a chopstick already in the hand of a neighbor. When both nearby chopsticks are free she picks them up and eats, and after finishing she puts both chopsticks down. The problem: two adjacent philosophers cannot eat simultaneously, and deadlock or starvation can occur. One solution represents each chopstick as a semaphore.


A philosopher grabs a chopstick by executing a wait operation on its semaphore and releases it by executing a signal operation on that semaphore:

semaphore chopstick[5];   // each initialized to 1

This solution guarantees that no two neighbors eat simultaneously, but it can deadlock: if all five philosophers pick up their left chopstick at the same moment, each waits forever for the right one. Several possible remedies exist:
o Allow at most four philosophers to sit at the table simultaneously.
o Allow a philosopher to pick up her chopsticks only if both are available (picking them up inside a critical section).
o Use an asymmetric solution: an odd philosopher picks up first her left chopstick and then her right, while an even philosopher picks up the right first and then the left.
Finally, any satisfactory solution must also guard against starvation, in which one philosopher never gets to eat.
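The asymmetric remedy can be sketched with semaphore-backed chopsticks; breaking the uniform left-then-right order removes the circular wait, so the sketch cannot deadlock:

```python
import threading

# Dining philosophers with the asymmetric remedy above: odd philosophers
# pick up the left chopstick first, even philosophers the right first.
N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = []
meals_lock = threading.Lock()

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = (left, right) if i % 2 else (right, left)
    chopstick[first].acquire()    # wait on the first chopstick
    chopstick[second].acquire()   # wait on the second chopstick
    with meals_lock:
        meals.append(i)           # eating
    chopstick[second].release()   # put both chopsticks down
    chopstick[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(meals))   # [0, 1, 2, 3, 4] — everyone ate, no deadlock
```

If every philosopher instead acquired left-then-right, a simultaneous start could leave all five holding one chopstick each, forever — exactly the deadlock described above.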

Critical Regions
Semaphores provide a convenient and effective mechanism for process synchronization, but when used incorrectly they can still produce timing errors. All processes share a semaphore mutex, initialized to 1; each process must execute wait(mutex) before entering its critical section and signal(mutex) after it. If this sequence is not observed, problems result. Some example situations:

Suppose a process interchanges the order of the wait and signal operations on mutex:
signal(mutex);
...
critical section
...
wait(mutex);
o Here several processes may execute in their critical sections simultaneously, violating the mutual-exclusion requirement.

Suppose a process replaces signal(mutex) with wait(mutex):
wait(mutex);
...
critical section
...
wait(mutex);
o In this case, a deadlock occurs.

Suppose a process omits wait(mutex) or signal(mutex), or both. Then either mutual exclusion is violated or a deadlock occurs.

These are examples of incorrect use of semaphores. To deal with such errors, high-level language constructs have been introduced: the critical region and the monitor.

Critical Region
A process consists of some local data and a sequential program that operates on that data. The local data can be accessed only by the sequential program within the process; other processes cannot access it directly. The critical-region construct requires that a variable v of type T, shared among many processes, be declared as

v: shared T;

The variable v can be accessed only inside a region statement:

region v when B do S;

While statement S is being executed, no other process can access the variable v. The expression B is a boolean guard governing access to the critical region: when a process tries to enter the region, B is evaluated; if B is true, statement S is executed, otherwise the process is delayed until B becomes true.

region v when (true) S1;
region v when (true) S2;

If these two statements are executed concurrently in distinct processes, the result is equivalent to the sequential execution "S1 followed by S2" or "S2 followed by S1".

The critical-region construct guards against the simple errors associated with semaphores. It does not eliminate all errors, but it effectively solves some problems. Consider the bounded buffer: the buffer space and pointers are encapsulated in

struct buffer {
    item pool[n];
    int count, in, out;
};

The producer inserts a new item nextp into the shared buffer by executing

region buffer when (count < n) {
    pool[in] = nextp;
    in = (in + 1) % n;
    count++;
}

The consumer removes an item from the shared buffer and puts it in nextc by executing

region buffer when (count > 0) {
    nextc = pool[out];
    out = (out + 1) % n;
    count--;
}

Monitors
A monitor is a programming-language construct:
o Lock handling is done by the compiler and OS, not by the programmer.
o Invalid accesses to critical sections can be detected at compile time.
o Any process can call a monitor procedure at any time, but only one process can be active inside a monitor at any time (mutual exclusion).
o No process can directly access a monitor's local variables (data encapsulation), and a monitor may access only its own local variables.
The representation of a monitor type consists of declarations of variables whose values define the state of an instance of the type, together with the procedures that operate on them. This representation cannot be used directly by the various processes; it can be accessed only through the monitor's procedures. To allow a process to wait within the monitor, condition variables must be declared:

var x, y: condition;

A condition variable can be used only with the operations wait and signal. The operation

x.wait;

suspends the invoking process until another process invokes

x.signal;

The x.signal operation resumes exactly one suspended process; if no process is suspended on x, the signal operation has no effect.

[Figure: schematic view of a monitor — shared data and procedures inside, an entry queue of waiting processes outside.]

[Figure: monitor with condition variables — each condition x has its own queue of processes suspended by x.wait.]


If the x.signal() operation is invoked by a process P and there is a suspended process Q associated with condition x, then if Q is allowed to resume, the signaling process P must wait; otherwise both P and Q would be active simultaneously within the monitor, which monitors forbid.
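The monitor idea can be sketched in Python: one lock provides the "only one process inside the monitor" guarantee, and `threading.Condition` objects play the role of x.wait/x.signal. (Python conditions are signal-and-continue: the signaler keeps the monitor until it leaves the `with` block.)

```python
import threading

# A monitor-style bounded buffer: the lock is the monitor's mutual
# exclusion; not_full and not_empty are condition variables on it.
class BoundedBufferMonitor:
    def __init__(self, n):
        self.pool, self.n = [], n
        self.lock = threading.Lock()                     # monitor entry
        self.not_full = threading.Condition(self.lock)   # condition variable
        self.not_empty = threading.Condition(self.lock)

    def insert(self, item):
        with self.lock:                  # enter the monitor
            while len(self.pool) == self.n:
                self.not_full.wait()     # x.wait: suspend inside the monitor
            self.pool.append(item)
            self.not_empty.notify()      # x.signal: resume one waiter

    def remove(self):
        with self.lock:
            while not self.pool:
                self.not_empty.wait()
            item = self.pool.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBufferMonitor(2)
out = []
t = threading.Thread(target=lambda: out.extend(buf.remove() for _ in range(5)))
t.start()
for i in range(5):
    buf.insert(i)      # blocks whenever the 2-slot buffer is full
t.join()
print(out)   # [0, 1, 2, 3, 4]
```

Compared with the raw-semaphore bounded buffer earlier, all the synchronization lives inside the class, so callers cannot forget a wait or signal — the error-prevention argument made above for monitors.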
