
UNIT-2

I. Process Concept
1. Process scheduling
2. Operations on processes
3. Inter-process communication
4. Communication in client server systems.

II. Multithreaded Programming


1. Multithreading models
2. Thread libraries
3. Threading issues

III. Process Scheduling


1. Basic concepts
2. Scheduling criteria
3. Scheduling algorithms
4. Multiple processor scheduling
5. Thread scheduling

IV. Inter-process Communication


1. Race conditions
2. Critical Regions
3. Mutual exclusion with busy waiting
4. Sleep and wakeup
5. Semaphores
6. Mutexes
7. Monitors
8. Message passing
9. Barriers

V. Classical IPC Problems


1. Dining philosophers problem
2. Readers and writers problem.

UNIT-2

I. Process Concept

1. Process Scheduling

Process – a program in execution; process execution must progress in sequential fashion
Multiple parts
a. The program code, also called text section
b. Current activity including program counter, processor registers
c. Stack containing temporary data
i. Function parameters, return addresses, local variables
d. Data section containing global variables
e. Heap containing memory dynamically allocated during run time
Process States: As a process executes, it changes state
a. new: The process is being created
b. running: Instructions are being executed
c. waiting: The process is waiting for some event to occur
d. ready: The process is waiting to be assigned to a processor
e. terminated: The process has finished execution

Process Scheduling: maximize CPU use, quickly switch processes onto the CPU for time sharing

Process scheduler selects among available processes for next execution on the CPU. Maintains scheduling queues of processes
a. Job queue – set of all processes in the system
b. Ready queue – set of all processes residing in main memory, ready and
waiting to execute
c. Device queues – set of processes waiting for an I/O device
d. Processes migrate among the various queues

Queuing diagram represents queues, resources, flows

Schedulers

a. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU
i. Sometimes the only scheduler in a system
ii. Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
b. Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue
i. Long-term scheduler is invoked infrequently (seconds, minutes) ⇒
(may be slow)
ii. The long-term scheduler controls the degree of multiprogramming
c. Processes can be described as either:
i. I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
ii. CPU-bound process – spends more time doing computations; few
very long CPU bursts
d. Long-term scheduler strives for good process mix
e. Medium-term scheduler can be added if degree of multiple
programming needs to decrease
i. Remove process from memory, store on disk, bring back in from disk
to continue execution: swapping

2. Operations on processes
System must provide mechanisms for:
a. process creation,
b. process termination

i. Process Creation
ii. A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes
iii. Generally, a process is identified and managed via a process identifier (pid)
iv. Resource sharing options
a. Parent and children share all resources
b. Children share subset of parent’s resources
c. Parent and child share no resources
v. Execution options
a. Parent and children execute concurrently
b. Parent waits until children terminate
vi. Address space
a. Child duplicate of parent
b. Child has a program loaded into it
vii. UNIX examples
a. fork() system call creates new process
b. exec() system call used after a fork() to replace the process’ memory
space with a new program

A Tree of Processes in Linux

C Program Forking Separate Process
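The program referred to above appears only as an image in the original notes; the following is a minimal sketch of a typical fork()/exec()/wait() sequence along the same lines (the command run by the child, /bin/ls, is just an illustrative choice):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {                       /* fork failed */
        fprintf(stderr, "Fork failed\n");
        return 1;
    }
    else if (pid == 0) {                 /* child process */
        execlp("/bin/ls", "ls", NULL);   /* replace the child's memory with a new program */
    }
    else {                               /* parent process */
        wait(NULL);                      /* wait for the child to terminate */
        printf("Child complete\n");
    }
    return 0;
}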

b. Process Termination
1. Process executes last statement and then asks the operating system to delete it
using the exit() system call.
a. Returns status data from child to parent (via wait())
b. Process’ resources are de-allocated by operating system
2. A parent may terminate the execution of its child processes using the abort()
system call. Some reasons for doing so:
a. Child has exceeded allocated resources
b. Task assigned to child is no longer required
3. The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates.
4. Some operating systems do not allow a child to exist if its parent has terminated.
If a process terminates, then all its children must also be terminated; this is
called cascading termination. All children, grandchildren, etc. are terminated.
The termination is initiated by the operating system.
5. The parent process may wait for termination of a child process by using the
wait()system call. The call returns status information and the pid of the
terminated process
pid = wait(&status);
6. If no parent is waiting (it has not yet invoked wait()), the terminated process is a zombie
7. If the parent terminated without invoking wait(), the process is an orphan

3. Inter-process communication
Processes within a system may be independent or cooperating. Cooperating process can
affect or be affected by other processes, including sharing data. Reasons for cooperating
processes:
a. Information sharing
b. Computation speedup
c. Modularity
d. Convenience
Cooperating processes need Inter-process communication (IPC) Two models of IPC
a. Message passing
b. Shared memory

Cooperating Processes
 Independent process cannot affect or be affected by the execution of another process
 Cooperating process can affect or be affected by the execution of another process
 Advantages of process cooperation
o Information sharing
o Computation speed-up
o Modularity
o Convenience
 Paradigm for cooperating processes, producer process produces information that is
consumed by a consumer process
o unbounded-buffer places no practical limit on the size of the buffer
o bounded-buffer assumes that there is a fixed buffer size

[Figure: a producer and a consumer sharing a bounded buffer (slots 1 to 7). The producer writes items into the buffer and the consumer removes them; when the buffer is full the producer must wait(), otherwise data is lost.]
Direct Communication
 Processes must name each other explicitly:
o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q
 Properties of communication link
o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair there exists exactly one link
o The link may be unidirectional, but is usually bi-directional
Indirect Communication
 Messages are directed and received from mailboxes (also referred to as ports)
o Each mailbox has a unique id
o Processes can communicate only if they share a mailbox
 Properties of communication link
o Link established only if processes share a common mailbox
o A link may be associated with many processes
o Each pair of processes may share several communication links
o Link may be unidirectional or bi-directional
 Operations
o create a new mailbox (port)
o send and receive messages through mailbox
o destroy a mailbox
 Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A

Examples of IPC Systems – POSIX


 POSIX Shared Memory
o Process first creates shared memory segment
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
o Also used to open an existing segment to share it
o Set the size of the object
ftruncate(shm_fd, 4096);
o Now the process could write to the shared memory
sprintf(shared_memory, "Writing to shared memory");
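Putting the calls above together, a minimal sketch of a shared-memory producer might look as follows; the segment name "/OS" and the 4096-byte size are illustrative choices, and a consumer would shm_open the same name and mmap it for reading:

#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/OS";        /* name of the shared memory object */
    const int SIZE = 4096;           /* size of the segment in bytes */

    /* create (or open) the shared memory object */
    int shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);

    /* set the size of the object */
    ftruncate(shm_fd, SIZE);

    /* map the object into this process's address space */
    char *shared_memory = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);

    /* now the process can write to the shared memory */
    sprintf(shared_memory, "Writing to shared memory");
    return 0;
}

(On Linux this is typically linked with -lrt; the consumer side opens the same name, maps it with PROT_READ, and finally calls shm_unlink(name) when the segment is no longer needed.)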

Examples of IPC Systems – Windows
 Message-passing centric via advanced local procedure call (LPC) facility
o Only works between processes on the same system
o Uses ports (like mailboxes) to establish and maintain communication channels
o Communication works as follows:
 The client opens a handle to the subsystem’s connection port object.
 The client sends a connection request.
 The server creates two private communication ports and returns the
handle to one of them to the client.
 The client and server use the corresponding port handle to send
messages or callbacks and to listen for replies.
Local Procedure Calls in Windows

4. Communication in client server systems.

A. Sockets
B. Remote Procedure Calls
C. Pipes
D. Remote Method Invocation (Java)

A. Sockets
 A socket is defined as an endpoint for communication
 Concatenation of IP address and port – a number included at start of message packet
to differentiate network services on a host
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Communication takes place between a pair of sockets
 All ports below 1024 are well known, used for standard services
 Special IP address 127.0.0.1 (loopback) to refer to system on which process is
running

Socket Communication

Three types of sockets


o Connection-oriented (TCP)
o Connectionless (UDP)
o MulticastSocket class– data can be sent to multiple recipients
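As a rough illustration of how a connection-oriented socket is used, the sketch below shows a minimal TCP client in C (the Java program referenced further below is shown only as an image in the original notes); the address 127.0.0.1 and port 6013 are arbitrary example values:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* create a TCP socket: an endpoint for communication */
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(6013);                      /* example port on the server */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);  /* loopback address */

    /* connect to the server's socket (IP address + port) */
    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[128];
    ssize_t n = read(sock, buf, sizeof(buf) - 1);       /* read whatever the server sends */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(sock);
    return 0;
}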

Sockets program in Java

B. Remote Procedure Calls


 Remote Procedure Call (RPC) is a client-server mechanism that enables an
application on one machine to make a procedure call to code on another machine.
 The client calls a local procedure—a stub routine—that packs its arguments into a
message and sends them across the network to a particular server process. The client-
side stub routine then blocks.
 Meanwhile, the server unpacks the message, calls the procedure, packs the return
results into a message, and sends them back to the client stub.
 The client stub unblocks, receives the message, unpacks the results of the RPC, and
returns them to the caller. This packing of arguments is sometimes called marshaling.
 On Windows, stub code is compiled from a specification written in the Microsoft
Interface Definition Language (MIDL)
Execution of RPC

C. Pipes
 Acts as a conduit allowing two processes to communicate
 Issues:
o Is communication unidirectional or bidirectional?
o In the case of two-way communication, is it half or full-duplex?
o Must there exist a relationship (i.e., parent-child) between the communicating
processes?
o Can the pipes be used over a network?
 Ordinary pipes – cannot be accessed from outside the process that created it.
Typically, a parent process creates a pipe and uses it to communicate with a child
process that it created.
 Named pipes – can be accessed without a parent-child relationship.
Ordinary Pipes
 Ordinary Pipes allow communication in standard producer-consumer style
 Producer writes to one end (the write-end of the pipe)
 Consumer reads from the other end (the read-end of the pipe)
 Ordinary pipes are therefore unidirectional
 Require parent-child relationship between communicating processes

 Windows calls these anonymous pipes
 See Unix and Windows code samples in textbook
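A minimal sketch of an ordinary (anonymous) pipe between a parent and the child it creates, in the producer-consumer style just described; the message text is only an example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char msg[] = "Greetings from the parent";
    char buf[64];

    if (pipe(fd) == -1) {               /* create the pipe before forking */
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: the consumer */
        close(fd[1]);                   /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("Child read: %s\n", buf); }
        close(fd[0]);
    } else {                            /* parent: the producer */
        close(fd[0]);                   /* close unused read end */
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}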
Named Pipes
 Named Pipes are more powerful than ordinary pipes
 Communication is bidirectional
 No parent-child relationship is necessary between the communicating processes
 Several processes can use the named pipe for communication
 Provided on both UNIX and Windows systems

D. Remote Method Invocation (Java)


 Remote Method Invocation (RMI) is an API that allows an object to invoke a method
on an object that exists in another address space, which could be on the same machine
or on a remote machine.

 Through RMI, an object running in a JVM present on a computer (Client-side) can invoke methods on an object present in another JVM (Server-side).

 RMI creates a public remote server object that enables client and server-side
communications through simple method calls on the server object.

II. Multithreaded Programming

Introduction
Thread
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.

Multithreading
A thread is also known as lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads.
For example,
o in a browser, multiple tabs can be different threads.
o MS Word uses multiple threads: one thread to format the text, another thread
to process inputs, etc.

Process vs Thread?
The primary difference is that threads within the same process run in a shared memory
space, while processes run in separate memory spaces.

Threads are not independent of one another like processes are, and as a result threads
share with other threads their code section, data section, and OS resources (like open
files and signals). But, like process, a thread has its own program counter (PC), register
set, and stack space.

Process vs Thread
 A process is any program in execution, whereas a thread is a segment of a process.
 A process consumes more resources; a thread consumes fewer resources.
 A process requires more time for creation; a thread requires comparatively less time for creation.
 A process is known as a heavyweight process; a thread is known as a lightweight process.
 A process takes more time to terminate; a thread takes less time to terminate.
 Processes have independent data and code segments; a thread shares the data segment, code segment, open files, etc. with its peer threads.
 A process takes more time for context switching; a thread takes less time for context switching.
 Communication between processes needs more time than communication between threads.
 If one process gets blocked, the remaining processes can continue their execution; but if a user-level thread gets blocked, all of its peer threads also get blocked.

Benefits of threads
 Responsiveness – may allow continued execution if part of process is blocked,
especially important for user interfaces
 Resource Sharing – threads share resources of process, easier than shared memory
or message passing
 Economy – cheaper than process creation, thread switching lower overhead than
context switching
 Scalability – process can take advantage of multiprocessor architectures

Multicore Programming

 Multicore or multiprocessor systems are putting pressure on programmers; challenges include:
o Dividing activities
o Balance
o Data splitting
o Data dependency
o Testing and debugging
 Parallelism implies a system can perform more than one task simultaneously
 Concurrency supports more than one task making progress
o Single processor / core, scheduler providing concurrency

Concurrency vs. Parallelism

 Concurrent execution on single-core system:

 Parallelism on a multi-core system:

Single and Multithreaded Processes

User Threads and Kernel Threads

User Threads vs Kernel Threads
 User threads are implemented by users (a thread library); kernel threads are implemented by the operating system.
 User threads are not recognized by the operating system; kernel threads are recognized by the operating system.
 For user-level threads, a context switch requires no hardware support; for kernel-level threads, hardware support is needed.
 User-level threads are mainly designed as dependent threads; kernel-level threads are mainly designed as independent threads.
 If one user-level thread performs a blocking operation, the entire process is blocked; if one kernel thread performs a blocking operation, another thread can continue execution.
 Examples of user-level threads: Java threads, POSIX threads. Examples of kernel-level threads: Windows, Solaris.
 Implementation of user-level threads is done by a thread library and is easy; implementation of kernel-level threads is done by the operating system and is complex.
 User-level threads are generic and can run on any operating system; kernel-level threads are specific to the operating system.

1. Multithreading models
Thread
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.

Multithreading
A thread is also known as lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads.
For example,
o in a browser, multiple tabs can be different threads.
o MS Word uses multiple threads: one thread to format the text, another thread
to process inputs, etc.

Single and Multithreaded Processes

There are three types of models in Multithreading


1. Many-to-One
2. One-to-One
3. Many-to-Many

1. Many-to-One
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block. Multiple threads may not run in parallel on a
multicore system because only one may be in the kernel at a time. Few systems currently
use this model.
Examples: Solaris Green Threads, GNU Portable Threads

2. One-to-One
 Each user-level thread maps to kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes restricted due to overhead
Examples
o Windows , Linux, Solaris 9 and later

3. Many-to-Many
 Allows many user level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
 Windows with the ThreadFiber package

Two-level Model
 Similar to Many-to-Many, except that it allows a user thread to be bound to a kernel thread
Examples
IRIX, HP-UX, Tru64 UNIX, Solaris 8 and earlier

2. Thread libraries

 Thread library provides programmer with API for creating and managing
threads
 Two primary ways of implementing
o Library entirely in user space
o Kernel-level library supported by the OS
Pthreads
 May be provided either as user-level or kernel-level
 A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
 Specification, not implementation
 API specifies behavior of the thread library, implementation is up to development of
the library
 Common in UNIX operating systems (Solaris, Linux, Mac OS X)

Pthreads Example

Pthreads Example (Cont.)

Pthreads Code for Joining 10 Threads
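The Pthreads figures above are images in the original notes; a minimal sketch in the same spirit, in which the main thread creates one worker that sums the integers up to a value given on the command line and then joins it, might look as follows:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                           /* data shared by the threads */

void *runner(void *param)          /* the thread begins control in this function */
{
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[])
{
    pthread_t tid;                 /* thread identifier */
    pthread_attr_t attr;           /* thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return 1;
    }

    pthread_attr_init(&attr);                       /* get the default attributes */
    pthread_create(&tid, &attr, runner, argv[1]);   /* create the thread */
    pthread_join(tid, NULL);                        /* wait for it to exit */
    printf("sum = %d\n", sum);
    return 0;
}

Joining several threads, as in the last figure referenced above, is simply a loop of pthread_join calls over an array of thread identifiers.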




Java Threads
 Java threads are managed by the JVM
 Typically implemented using the threads model provided by underlying OS
 Java threads may be created by:


 Extending Thread class


 Implementing the Runnable interface

3. Threading issues
 Semantics of fork() and exec() system calls
 Signal handling
o Synchronous and asynchronous
 Thread cancellation of target thread
o Asynchronous or deferred
 Thread-local storage
 Scheduler Activations

Semantics of fork() and exec()


 Does fork() duplicate only the calling thread or all threads?
o Some UNIXes have two versions of fork
 exec() usually works as normal – replace the running process including all threads

Signal Handling
 Signals are used in UNIX systems to notify a process that a particular event has
occurred.
 A signal handler is used to process signals
o Signal is generated by particular event
o Signal is delivered to a process
o Signal is handled by one of two signal handlers:
 default
 user-defined
 Every signal has default handler that kernel runs when handling signal
o User-defined signal handler can override default
o For single-threaded, signal delivered to process
 Where should a signal be delivered for multi-threaded?
o Deliver the signal to the thread to which the signal applies
o Deliver the signal to every thread in the process
o Deliver the signal to certain threads in the process
o Assign a specific thread to receive all signals for the process
Thread Cancellation
 Terminating a thread before it has finished
 Thread to be canceled is target thread
 Two general approaches:
o Asynchronous cancellation terminates the target thread immediately
o Deferred cancellation allows the target thread to periodically check if it
should be cancelled
 Pthread code to create and cancel a thread:
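The code referred to above is an image in the original notes; a minimal sketch of creating and then cancelling a thread with Pthreads (deferred cancellation, the default mode) could look like this, where the worker's infinite loop is just a stand-in for real work:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *param)
{
    while (1) {
        /* do some work ... */
        pthread_testcancel();     /* a cancellation point: check for a pending cancellation */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);   /* create the target thread */
    sleep(1);                                   /* let it run for a moment */
    pthread_cancel(tid);                        /* request (deferred) cancellation */
    pthread_join(tid, NULL);                    /* wait until it has actually terminated */
    printf("worker cancelled\n");
    return 0;
}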

Thread-Local Storage
 Thread-local storage (TLS) allows each thread to have its own copy of data
 Useful when you do not have control over the thread creation process (i.e., when
using a thread pool)
 Different from local variables
o Local variables visible only during single function invocation
o TLS visible across function invocations
 Similar to static data
o TLS is unique to each thread
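As a rough illustration of thread-local storage with Pthreads, the sketch below gives every thread its own copy of a value through a key created with pthread_key_create; the names tls_key and worker are arbitrary, and many compilers also offer a __thread / thread_local storage class that achieves the same effect more directly:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t tls_key;             /* one key, but a separate value per thread */

void *worker(void *param)
{
    int *value = malloc(sizeof(int));     /* this thread's private copy */
    *value = (int)(long)param;
    pthread_setspecific(tls_key, value);

    /* any function called by this thread can now retrieve its own value */
    int *mine = pthread_getspecific(tls_key);
    printf("thread sees its own value: %d\n", *mine);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_key_create(&tls_key, free);   /* 'free' runs on each thread's value at exit */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(tls_key);
    return 0;
}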

III. Process Scheduling

1. Basic concepts
To introduce Process(CPU) scheduling, which is the basis for multiprogrammed
operating systems
 Maximum CPU utilization obtained with multiprogramming
 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution
and I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern
 Burst time is the total time taken by the process for its execution on the CPU.

Histogram of CPU-burst Times

Process(CPU) Scheduler

 Short-term scheduler selects from among the processes in ready queue, and allocates
the CPU to one of them
o Queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
o Switches from running to waiting state
o Switches from running to ready state
o Switches from waiting to ready state
o Terminates
 Scheduling under the first and last of these circumstances is non-preemptive; all other scheduling is preemptive
o Consider access to shared data
o Consider preemption while in kernel mode
o Consider interrupts occurring during crucial OS activities
Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start
another running

2. Scheduling criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until
the first response is produced, not output (for time-sharing environment)

3. Scheduling algorithms
Scheduling algorithms maintains the following standards.
 Maximum CPU utilization
 Maximum throughput
 Minimum turnaround time
 Minimum waiting time
 Minimum response time

Scheduling Algorithms are as follows


i. First- Come, First-Served (FCFS) Scheduling
ii. Shortest-Job-First (SJF) Scheduling
iii. Priority Scheduling
iv. Round Robin (RR)
v. Multilevel Queue

i. First- Come, First-Served (FCFS) Scheduling


First come first served (FCFS) scheduling algorithm simply schedules the jobs according
to their arrival time. The job which comes first in the ready queue will get the CPU first. The
lesser the arrival time of the job, the sooner the job will get the CPU. FCFS scheduling can
make short jobs wait a very long time (the convoy effect) if the burst time of the first process
is the longest among all the jobs.

Advantages of FCFS
o Simple
o Easy
Disadvantages of FCFS
 The scheduling method is non-preemptive; each process runs to completion.
 Due to the non-preemptive nature of the algorithm, short processes may be held up for
a long time behind long ones (the convoy effect).
 Although it is easy to implement, it is poor in performance, since the average
waiting time is higher compared to other scheduling algorithms.

Example-1
Let's take an example of The FCFS scheduling algorithm. In the Following schedule, there
are 3 processes with process ID P1, P2, and P3. The processes and their Burst time are
given in the following table.

The Turnaround time and the waiting time are calculated by using the following formula.

Process Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

1. Turn Around Time = Completion Time - Arrival Time


2. Waiting Time = Turnaround time - Burst Time

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17

Example-2

Suppose that the processes arrive in the order:


P2 , P3 , P1

 The Gantt chart for the schedule is:


P2 P3 P1
0 3 6 30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
o Consider one CPU-bound and many I/O-bound processes

Example-3
Let's take an example of The FCFS scheduling algorithm. In the Following schedule, there
are 4 processes with process ID P1, P2,P3 and P4. The processes and their Arrival Time and
Burst time are given in the following table.

Criteria : Non Preemptive.

(AT = Arrival Time, BT = Burst Time, CT = Completion Time, TAT = Turnaround Time, WT = Waiting Time)

Process   AT   BT   CT   TAT = CT - AT   WT = TAT - BT
P1        0    2    2    2               0
P2        1    2    4    3               1
P3        5    3    8    3               0
P4        6    4    12   6               2

Gantt Chart
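As a small worked illustration of the formulas above, the sketch below computes completion, turnaround, and waiting times for the four processes of Example-3 under FCFS; the arrays simply encode the table:

#include <stdio.h>

int main(void)
{
    /* processes of Example-3: arrival and burst times */
    int at[] = {0, 1, 5, 6};
    int bt[] = {2, 2, 3, 4};
    int n = 4, time = 0;

    for (int i = 0; i < n; i++) {            /* processes already sorted by arrival time */
        if (time < at[i]) time = at[i];      /* CPU may sit idle until the process arrives */
        time += bt[i];                       /* run the process to completion */
        int ct = time;                       /* Completion Time */
        int tat = ct - at[i];                /* Turnaround Time = CT - AT */
        int wt = tat - bt[i];                /* Waiting Time = TAT - BT */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct, tat, wt);
    }
    return 0;
}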

ii. Shortest-Job-First (SJF) Scheduling


The Shortest Job First (SJF) scheduling algorithm schedules the processes according to their
burst time.
In SJF scheduling, the process with the lowest burst time, among the list of available
processes in the ready queue, is going to be scheduled next.

However, it is very difficult to predict the burst time needed for a process hence this
algorithm is very difficult to implement in the system.

Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF
1. May suffer with the problem of starvation
2. It is not implementable because the exact Burst time for a process can't be known in
advance.

Example-1
Let's take an example of the SJF scheduling algorithm. In the following schedule, there are
4 processes with process IDs P1, P2, P3 and P4, all available at time 0. The processes and
their respective Burst times are given in the following table.

Criteria : Non Preemptive.

Process Burst Time
P1 6
P2 8
P3 7
P4 3

 SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

Example-2: Shortest-remaining-time-first
 Now we add the concepts of varying arrival times and preemption to the
analysis
Criteria: Preemptive
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
 Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26
 Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec

iii. Priority Scheduling


In Priority scheduling, there is a priority number assigned to each process. In some systems,
the lower the number, the higher the priority.

 A priority number (integer) is associated with each process


 The CPU is allocated to the process with the highest priority (smallest integer ≡
highest priority)
o Preemptive
o Nonpreemptive
 SJF is priority scheduling where priority is the inverse of predicted next CPU
burst time
 Problem ≡ Starvation – low priority processes may never execute
 Solution ≡ Aging – as time progresses, increase the priority of the process

Example-1
Let's take an example of The Priority scheduling algorithm. In the Following schedule, there
are 5 processes with process ID P1, P2, P3, P4 and P5. The processes and their respective
Burst time and Priority are given in the following table.

Criteria : Non Preemptive.


Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

 Priority scheduling Gantt Chart

Process Waiting Time


P1 6
P2 0
P3 16
P4 18
P5 1
Total 41

 Average waiting time = 41/5 = 8.2 msec

iv. Round Robin (RR)


 Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to
the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
 Timer interrupts every quantum to schedule next process
 Performance
o q large ⇒ FIFO
o q small ⇒ q must be large with respect to context-switch time, otherwise
overhead is too high

Example:

Let's take an example of The Round Robin scheduling algorithm. In the Following schedule,
there are 3 processes with process ID P1, P2, and P3. The processes and their respective
Burst time are given in the following table.

Criteria : Preemptive, with time quantum q = 4.

Process Burst Time


P1 24
P2 3
P3 3

 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

 Typically, higher average turnaround than SJF, but better response


 q should be large compared to context switch time
 q usually 10ms to 100ms, context switch < 10 usec

v. Multilevel Queue
 Ready queue is partitioned into separate queues, eg:
o foreground (interactive)
o background (batch)
 Process permanently in a given queue
 Each queue has its own scheduling algorithm:
o foreground – RR
o background – FCFS
 Scheduling must be done between the queues:
o Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
o Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; i.e., 80% to foreground in RR
o 20% to background in FCFS

4. Multiple processor scheduling
 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling, all
processes in common ready queue, or each has its own private queue of ready
processes
o Currently, most common
 Processor affinity – process has affinity for processor on which it is currently
running
o soft affinity
o hard affinity
o Variations including processor sets

NUMA and CPU Scheduling

NUMA= Non Uniform Memory Access


Note that memory-placement algorithms can also consider affinity

Multiple-Processor Scheduling – Load Balancing


 If SMP, need to keep all CPUs loaded for efficiency
 Load balancing attempts to keep workload evenly distributed
 Push migration – periodic task checks load on each processor, and if found
pushes task from overloaded CPU to other CPUs
 Pull migration – idle processors pulls waiting task from busy processor

Multicore Processors
 Recent trend to place multiple processor cores on same physical chip
 Faster and consumes less power
 Multiple threads per core also growing
o Takes advantage of memory stall to make progress on another thread
while memory retrieve happens

Multithreaded Multicore System

5. Thread scheduling

 Distinction between user-level and kernel-level threads


 When threads supported, threads scheduled, not processes
 Many-to-one and many-to-many models: the thread library schedules user-level threads to
run on an LWP (lightweight process)
o Known as process-contention scope (PCS) since scheduling competition is
within the process
o Typically done via priority set by programmer
 Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in system

Pthread Scheduling
 API allows specifying either PCS or SCS during thread creation
o PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
o PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
 Can be limited by OS – Linux and Mac OS X only allow
PTHREAD_SCOPE_SYSTEM

Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

void *runner(void *param);   /* forward declaration */

int main(int argc, char *argv[])
{
    int i, scope;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);

    /* first inquire on the current scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "Unable to get scheduling scope\n");
    else {
        if (scope == PTHREAD_SCOPE_PROCESS)
            printf("PTHREAD_SCOPE_PROCESS");
        else if (scope == PTHREAD_SCOPE_SYSTEM)
            printf("PTHREAD_SCOPE_SYSTEM");
        else
            fprintf(stderr, "Illegal scope value.\n");
    }

    /* set the scheduling algorithm to PCS or SCS */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);

    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    /* do some work ... */
    pthread_exit(0);
}
IV. Inter-process Communication
 Processes frequently need to communicate with other processes.
 For example,
 in a shell pipeline, the output of the first process must be passed to the second
process, and so on down the line.
 Thus there is a need for communication between processes, preferably in a well-
structured way not using interrupts.
 The communication between process is called as InterProcess Communication,
or IPC.
There are three issues in IPC:
1. How one process can pass information to another.
2. How to make sure two or more processes do not get in each other's way;
for example, two processes in an airline reservation system each trying to
grab the last seat on a plane for a different customer.
3. Proper sequencing when dependencies are present: if process A produces data
and process B prints them, B has to wait until A has produced some data
before starting to print.

1. Race conditions

 In some operating systems, processes that are working together may share some
common storage that each one can read and write.
 The shared storage may be in main memory (possibly in a kernel data structure) or
it may be a shared file;
 The location of the shared memory does not change the nature of the
communication or the problems that arise.
 To see how interprocess communication works in practice,
 For example:
 a print spooler. When a process wants to print a file, it enters the file name in a
special spooler directory.
 Another process, the printer daemon, periodically checks to see if there are any
files to be printed, and if there are, it prints them and then removes their names
from the directory.
 Imagine that our spooler directory has a very large number of slots, numbered 0,
1, 2, ..., each one capable of holding a file name.
 Also imagine that there are two shared variables, out, which points to the next file
to be printed, and in, which points to the next free slot in the directory.
 These two variables might well be kept in a two-word file available to all
processes. At a certain instant, slots 0 to 3 are empty (the files have already been
printed) and slots 4 to 6 are full (with the names of files queued for printing).
 More or less simultaneously, processes A and B decide they want to queue a file
for printing. This situation is shown in Fig. 2-21.

 Process B reads in (which is 7), enters its file name in slot 7, and updates in to 8.
 After that, Process A, which had also read in as 7 before being switched out,
enters its file name in slot 7 (erasing B's entry) and sets in to 8.
 The spooler directory is now internally consistent, so the printer daemon will not
notice anything wrong, but process B will never receive any output.
 User B will hang around the printer for years.
 Situations like this, where two or more processes are reading or
writing some shared data and the final result depends on who runs
precisely when, are called race conditions.

2. Critical Regions

 The problem of avoiding race conditions can also be formulated in an abstract way.
 That part of the program where the shared memory is accessed is called the
critical region or critical section.
 If we could arrange matters such that no two processes were ever in their
critical regions at the same time, we could avoid races.
 We need four conditions to hold to have a good solution:
1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block any process.
4. No process should have to wait forever to enter its critical region.

3. Mutual exclusion with busy waiting

 Mutual exclusion: One process is busy updating shared memory in its critical
region, no other process will enter its critical region and cause trouble.
 Various proposals for achieving mutual exclusion are as follows
 Disabling Interrupts
 Lock Variables
 Strict Alternation
 The TSL Instruction
Disabling Interrupts
 On a single-processor system, the simplest solution is to have each process
disable all interrupts just after entering its critical region and re-enable them
just before leaving it.
 With interrupts disabled, no clock interrupts can occur.
 The CPU is only switched from process to process as a result of clock or other
interrupts, after all, and with interrupts turned off the CPU will not be switched to
another process.
 Thus, once a process has disabled interrupts, it can examine and update the
shared memory.
Lock Variables
 As a second attempt, let us look for a software solution. Consider having a single,
shared (lock) variable, initially 0. When a process wants to enter its critical
region, it first tests the lock.
 If the lock is 0, the process sets it to 1 and enters the critical region.
 If the lock is already 1, the process just waits until it becomes 0.
 Thus, a lock variable 0 means that no process is in its critical region,
 The lock variable 1 means that some process is in its critical region.
Strict Alternation
 The integer variable turn, initially 0, keeps track of whose turn it is
 to enter the critical region and examine or update the shared memory.
 Initially, process 0 inspects turn, finds it to be 0, and enters its critical region.
 Process 1 also finds it to be 0 and therefore sits in a tight loop continually testing
turn to see when it becomes 1.
 Continuously testing a variable until some value appears is called busy waiting.
 It should usually be avoided, since it wastes CPU time.
 Only when there is a reasonable expectation that the wait will be short is busy
waiting used.
 A lock that uses busy waiting is called a spin lock.

The TSL Instruction


 TSL (Test and Set Lock) that works as follows.
 It reads the contents of the memory word lock into register RX and then stores a
nonzero value at the memory address lock.
 The operations of reading the word and storing into it are guaranteed to be
indivisible - no other processor can access the memory word until the instruction
is finished.
 The CPU executing the TSL instruction locks the memory bus to prohibit other
CPUs from accessing memory until it is done.
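A rough sketch of busy-wait mutual exclusion in the spirit of TSL, using C11's atomic_flag, whose test-and-set is guaranteed to be indivisible; enter_region spins until the lock is free and leave_region releases it, with the function names being illustrative:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear (0) = free, as with the TSL lock word */

void enter_region(void)
{
    /* atomically read the old value and set the flag; repeat while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;                                      /* busy waiting: this is a spin lock */
}

void leave_region(void)
{
    atomic_flag_clear(&lock);                  /* store 0 back into lock */
}

/* usage: enter_region(); ... critical region ...; leave_region(); */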

4. Sleep and wakeup

 Consider a computer with two processes, H, with high priority, and L, with low
priority.
 The scheduling rules are such that H is run whenever it is in the ready state.
 At a certain moment, with L in its critical region, H becomes ready to run (e.g., an
I/O operation completes).
 H now begins busy waiting, but since L is never scheduled while H is running, L
never gets the chance to leave its critical region, so H loops forever.
 This situation is sometimes referred to as the priority inversion problem.
 Now let us look at some interprocess communication primitives that block instead
of wasting CPU time when they are not allowed to enter their critical regions.
 One of the simplest is the pair sleep and wakeup. Sleep is a system call that causes
the caller to block, that is, be suspended until another process wakes it up.
 The wakeup call has one parameter, the process to be awakened.
 Alternatively, both sleep and wakeup each have one parameter, a memory address
used to match up sleeps with wakeups.

Producer-Consumer Problem
 Paradigm for cooperating processes, producer process produces information
that is consumed by a consumer process
o unbounded-buffer places no practical limit on the size of the buffer
o bounded-buffer assumes that there is a fixed buffer size

[Figure: a producer and a consumer sharing a bounded buffer (slots 1 to 7); the producer writes items into the buffer and the consumer removes them. When the buffer is full the producer must wait(), otherwise data is lost.]

5. Semaphores

 A semaphore is an integer variable that counts the number of wakeups saved for future
use; it can be 0, meaning no wakeups are saved, or some positive value. Two operations,
down and up, are defined on it (generalizations of sleep and wakeup).
 The down operation on a semaphore checks to see if the value is greater than 0.
If so, it decrements the value (i.e., uses up one stored wakeup) and just continues.
 If the value is 0, the process is put to sleep without completing the down for the
moment.
 The up operation increments the value of the semaphore addressed. If one or more
processes were sleeping on that semaphore, unable to complete an earlier down,
one of them is chosen and is allowed to complete its down.
 It is guaranteed that once a semaphore operation has started, no other process
can access the semaphore until the operation has completed or blocked.
 This atomicity is absolutely essential to solving synchronization problems and
avoiding race conditions.
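To make down and up concrete, here is a rough sketch of the bounded-buffer producer-consumer using POSIX semaphores together with a mutex (sem_wait corresponds to down, sem_post to up); N, the item handling, and the producer/consumer functions are illustrative:

#include <pthread.h>
#include <semaphore.h>

#define N 100                     /* number of slots in the buffer */

int buffer[N];
int in = 0, out = 0;

sem_t empty;                      /* counts empty slots, initialized to N */
sem_t full;                       /* counts full slots, initialized to 0  */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* guards the buffer */

void producer(int item)
{
    sem_wait(&empty);             /* down(empty): wait for a free slot */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;            /* put the item into the buffer */
    in = (in + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&full);              /* up(full): one more full slot */
}

int consumer(void)
{
    sem_wait(&full);              /* down(full): wait for an item */
    pthread_mutex_lock(&mutex);
    int item = buffer[out];       /* take an item out of the buffer */
    out = (out + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty);             /* up(empty): one more empty slot */
    return item;
}

/* somewhere in main(): sem_init(&empty, 0, N); sem_init(&full, 0, 0); */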

6. Mutexes

 A mutex is a shared variable that can be in one of two states: unlocked or locked.
 Consequently, only 1 bit is required to represent it, but in practice an integer
often is used, with 0 meaning unlocked and all other values meaning locked.
 Two procedures are used with mutexes. When a thread (or process) needs access
to a critical region, it calls mutex_lock.
 If the mutex is currently unlocked (meaning that the critical region is available),
the call succeeds and the calling thread is free to enter the critical region.
 On the other hand, if the mutex is already locked, the calling thread is blocked
until the thread in the critical region is finished and calls mutex_unlock.
 If multiple threads are blocked on the mutex, one of them is chosen at random
and allowed to acquire the lock.
 Because mutexes are so simple, they can easily be implemented in user space,
provided that a TSL instruction is available.

The code for mutex_lock and mutex_unlock are as follows

Some of the Pthread Calls are as follows

Some of the Pthread Calls relating Condition variables are as follows
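The mutex and condition-variable calls referred to above appear only as tables (images) in the original notes; a rough sketch of how they are typically combined, with one thread waiting until a flag is set and another thread setting it, is shown below (the names ready, cond, and lock are illustrative):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                         /* the shared condition being waited on */

void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);         /* mutex_lock: enter the critical region */
    while (!ready)                     /* re-check the condition after every wakeup */
        pthread_cond_wait(&cond, &lock);   /* releases lock while blocked, re-acquires it on wakeup */
    /* ... use the shared data ... */
    pthread_mutex_unlock(&lock);       /* mutex_unlock: leave the critical region */
    return NULL;
}

void *signaller(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = 1;                         /* make the condition true */
    pthread_cond_signal(&cond);        /* wake up one waiting thread */
    pthread_mutex_unlock(&lock);
    return NULL;
}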

7. Monitors

 Brinch Hansen (1973) and Hoare (1974) proposed a higher-level synchronization


primitive called a monitor.
 A monitor is a collection of procedures, variables, and data structures that are all
grouped together in a special kind of module or package.
 Processes may call the procedures in a monitor whenever they want to, but they
cannot directly access the monitor’s internal data structures from procedures
declared outside the monitor.
 Monitors have an important property that makes them useful for achieving mutual
exclusion: only one process can be active in a monitor at any instant.
 Monitors are a programming-language construct, so the compiler knows they are
special and can handle calls to monitor procedures differently from other
procedure calls.

8. Message passing
 Message Passing in IPC uses two primitives, send and receive, which, like
semaphores and unlike monitors, are system calls rather than language constructs.
 As such, they can easily be put into library procedures, such as
send(destination, &message);
and
receive(source, &message);
 The former call sends a message to a given destination and the latter one
receives a message from a given source (or from ANY, if the receiver does not
care).
 If no message is available, the receiver can block until one arrives.
 Alternatively, it can return immediately with an error code.
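As one concrete realization of send and receive, the sketch below uses POSIX message queues; the queue name "/os_demo" and the message text are example values, and on Linux the program is typically linked with -lrt:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    char buf[128];

    /* create/open the queue; it plays the role of the mailbox */
    mqd_t mq = mq_open("/os_demo", O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(destination, &message) */
    mq_send(mq, "hello", strlen("hello") + 1, 0);

    /* receive(source, &message); blocks until a message is available */
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/os_demo");
    return 0;
}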

9. Barriers
 Barrier is intended for groups of processes rather than two-process producer-
consumer type situations.
 Some applications are divided into phases and have the rule that no process may
proceed into the next phase until all processes are ready to proceed to the next
phase.
 This behavior may be achieved by placing a barrier at the end of each phase.
 When a process reaches the barrier, it is blocked until all processes have
reached the barrier.
 This allows groups of processes to synchronize.
 Barrier operation is illustrated in the following diagram
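Since the diagram is an image in the original notes, the idea can also be shown with the POSIX barrier API: each of the N worker threads finishes its phase and then blocks in pthread_barrier_wait until all N have arrived. N and the worker body are illustrative:

#include <pthread.h>
#include <stdio.h>

#define N 4                                    /* number of processes/threads in the group */

pthread_barrier_t barrier;

void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld finished phase 1\n", id);
    pthread_barrier_wait(&barrier);            /* block until all N threads have reached this point */
    printf("thread %ld starts phase 2\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    pthread_barrier_init(&barrier, NULL, N);   /* the barrier opens when N threads arrive */
    for (long i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}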

V. Classical IPC Problems


1. Dining philosophers problem

 In 1965, Dijkstra posed and then solved a synchronization problem he called the
dining philosophers problem.
 The problem can be stated quite simply as follows.
 Five philosophers are seated around a circular table. Each philosopher has a plate
of food and needs two forks to eat it.
 Between each pair of plates is one fork, so there are five forks in all.
 The layout of the table is illustrated in the diagram

 When a philosopher gets sufficiently hungry, he tries to acquire his left and right
forks, one at a time, in either order.
 If successful in acquiring two forks, he eats for a while, then puts down the forks,
and continues to think.
 The key question is: Can you write a program for each philosopher that does
what it is supposed to do and never gets stuck?
Case-1 : All take their left fork.
 If we write a program that take fork waits until the specified fork is available and
then seizes it.
 Unfortunately, the obvious solution is wrong.
 Suppose that all five philosophers take their left forks simultaneously. None
will be able to take their right forks, and there will be a deadlock.
Case-2 : Waiting to take their right fork.
 Suppose that all five philosophers pick up their left forks simultaneously, see that their
right forks are not available, put down their left forks, wait, pick up
their left forks again simultaneously, and so on, forever.
 A situation like this, in which all the programs continue to run indefinitely but
fail to make any progress, is called starvation.

Case-3 : Protecting the taking of the two forks with a mutex (binary semaphore).
 One improvement to the above program, which has neither deadlock nor starvation,
is to protect the acquisition of the forks with a binary semaphore (mutex).
 Before starting to acquire forks, a philosopher would do a down on mutex.
 After replacing the forks, he would do an up on mutex.
 From a theoretical viewpoint, this solution is adequate.
 Practically, however, it has a performance drawback: only one philosopher can be
eating at any instant, yet with five forks available we should be able to allow two
philosophers to eat at the same time.
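A rough sketch of the Case-3 idea in C with POSIX threads: one binary semaphore (here a pthread mutex) protects the act of taking and releasing the forks, so deadlock is impossible, at the cost of allowing only one philosopher to eat at a time. The fork-handling and think/eat functions are illustrative stubs:

#include <pthread.h>

#define N 5                                  /* number of philosophers */

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* protects fork handling */

void think(int i)     { /* ... */ }
void eat(int i)       { /* ... */ }
void take_fork(int i) { /* ... acquire fork i ... */ }
void put_fork(int i)  { /* ... release fork i ... */ }

void *philosopher(void *arg)
{
    int i = (int)(long)arg;                  /* philosopher number, 0..N-1 */
    for (;;) {
        think(i);
        pthread_mutex_lock(&mutex);          /* down(mutex) before acquiring forks */
        take_fork(i);                        /* left fork */
        take_fork((i + 1) % N);              /* right fork */
        eat(i);
        put_fork(i);
        put_fork((i + 1) % N);
        pthread_mutex_unlock(&mutex);        /* up(mutex) after replacing the forks */
    }
    return NULL;
}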

2. Readers and writers problem.


 The dining philosophers problem is useful for modeling processes that are competing for
exclusive access to a limited number of resources, such as I/O devices.
 Another famous problem is the readers and writers problem (Courtois et al.,1971), which
models access to a database.
 Imagine, for example, an airline reservation system, with many competing processes
wishing to read and write it.

[Figure: a writer and several readers competing for access to a shared database.]

 It is acceptable to have multiple processes reading the database at the same time, but if
one process is updating (writing) the database, no other processes may have access to
the database, not even readers.
 The question is how do you program the readers and the writers? One solution is shown
in Fig. 2-48.

[Figures: (1) only one copy of the database is available and it is shared by all the readers while the writer is excluded; (2) a single writer has exclusive access to the database while the readers are locked out.]

 In this solution, the first reader to get access to the database does a down on the
semaphore db.
 Subsequent readers merely increment a counter, rc.
 As readers leave, they decrement the counter; the last one out does an up on db,
making the database available again to a waiting writer.
 While readers hold the database, they all share the single copy and the writer
cannot access it.
 When one writer is updating the database, the database is locked so that no reader
can access it.
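A rough sketch of the solution described above using POSIX semaphores: rc counts the readers currently inside, mutex protects rc, and db gives exclusive access to the database; read_database and write_database stand in for the actual work:

#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                     /* protects the reader count rc; initialized to 1 */
sem_t db;                        /* controls access to the database; initialized to 1 */
int rc = 0;                      /* number of readers currently reading */

void read_database(void)  { /* ... */ }
void write_database(void) { /* ... */ }

void *reader(void *arg)
{
    sem_wait(&mutex);
    rc = rc + 1;
    if (rc == 1) sem_wait(&db);  /* first reader in locks out writers */
    sem_post(&mutex);

    read_database();             /* many readers may be here at the same time */

    sem_wait(&mutex);
    rc = rc - 1;
    if (rc == 0) sem_post(&db);  /* last reader out lets a writer in */
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&db);               /* exclusive access: no readers, no other writers */
    write_database();
    sem_post(&db);
    return NULL;
}

/* in main(): sem_init(&mutex, 0, 1); sem_init(&db, 0, 1); */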

