Multiple-Processor Scheduling

When multiple processors are available, scheduling becomes more
complicated, because there is more than one CPU that must be kept
busy and in effective use at all times.
Load sharing revolves around balancing the load between multiple
processors. Multiprocessor systems may be heterogeneous (containing
different kinds of CPUs) or homogeneous (all CPUs of the same kind).
1) Approaches to multiple-processor scheduling
a) Asymmetric multiprocessing:
• One processor is the master, controlling all activities and running
all kernel code, while the others run only user code.
b) Symmetric multiprocessing:
• Each processor schedules its own jobs. Each processor may have its
own private queue of ready processes.
i) Symmetrical scheduling with a global queue
ii) Symmetrical scheduling with per-processor queues (a minimal sketch of both variants follows below)
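As an illustration only (the queue and lock names here are made up, not a real kernel API): with a single global ready queue every processor picks work from one shared, lock-protected structure, while per-processor queues avoid that contention at the cost of possible load imbalance.

```python
from collections import deque
from threading import Lock

# i) Symmetrical scheduling with a global queue: every CPU picks work from
#    one shared ready queue, so a single lock protects it.
global_ready = deque()
global_lock = Lock()

def pick_next_global():
    with global_lock:
        return global_ready.popleft() if global_ready else None

# ii) Symmetrical scheduling with per-processor queues: each CPU schedules
#     only from its own private queue, avoiding the shared lock but risking
#     an uneven load (which is what load balancing, below, corrects).
NUM_CPUS = 4
per_cpu_ready = [deque() for _ in range(NUM_CPUS)]

def pick_next_local(cpu_id):
    q = per_cpu_ready[cpu_id]
    return q.popleft() if q else None
```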
2) Processor Affinity:
– Successive memory accesses by a process are often satisfied in cache
memory. What happens if the process migrates to another processor?
The contents of cache memory must be invalidated on the first
processor, and the cache on the second processor must be repopulated.
– Most symmetric multiprocessor systems therefore try to avoid migrating
processes from one processor to another and instead keep a process
running on the same processor.

– This is called processor affinity.


Soft affinity:
• Soft affinity occurs when the system attempts
to keep processes on the same processor but
makes no guarantees.
Hard affinity:
• The process specifies that it is not to be moved between processors
(a minimal sketch of requesting this on Linux follows below).
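As an illustration, Linux lets a user-space process request hard affinity through the sched_setaffinity system call, which Python exposes in the os module (Linux only). A minimal sketch:

```python
import os

pid = 0  # 0 refers to the calling process

# By default the scheduler may run this process on any allowed CPU.
print("currently allowed CPUs:", os.sched_getaffinity(pid))

# Hard affinity: restrict the process to CPU 0 only. The scheduler will not
# migrate it to another processor unless the mask is widened again.
os.sched_setaffinity(pid, {0})
print("after pinning:", os.sched_getaffinity(pid))
```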
3) Load balancing:
• Load balancing keeps one processor from sitting idle while another is
overloaded. Balancing can be achieved through push migration or pull
migration.

• Push migration:
• Push migration involves a separate process that runs periodically
(e.g. every 200 ms) and moves processes from heavily loaded processors
onto less loaded processors.
• Pull migration:
• Pull migration involves idle processors taking processes from the
ready queues of other processors (see the sketch below).
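A toy sketch of pull migration, assuming the per-processor ready queues from the earlier sketch; in a real kernel this logic lives inside the scheduler, not in application code.

```python
from collections import deque

# Illustrative per-processor ready queues (task names only).
ready_queues = {0: deque(["A", "B", "C", "D"]), 1: deque()}

def pull_migration(idle_cpu, queues):
    """An idle processor pulls a task from the most loaded ready queue."""
    busiest = max(queues, key=lambda cpu: len(queues[cpu]))
    if busiest != idle_cpu and queues[busiest]:
        task = queues[busiest].pop()      # take from the tail of the busy queue
        queues[idle_cpu].append(task)
        return task
    return None

# CPU 1 has nothing to run, so it pulls a task from CPU 0's queue.
print(pull_migration(1, ready_queues))    # -> 'D'
print(ready_queues)
```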
• Real-time scheduling:
• Real-time scheduling is generally used in multimedia operating
systems, where multiple processes compete for the CPU. The question is
how to schedule processes A, B, C so that each one meets its deadline.
• The general tendency is to make processes preemptable, so that a
process in danger of missing its deadline can preempt another process.
• When that process has sent its frame, the preempted process can
continue from where it left off. Throughput is not very significant
here; what matters is that tasks start and end as per their deadlines.
RATE MONOTONIC (RM) SCHEDULING ALGORITHM

• The rate monotonic scheduling algorithm works on the principle of
preemption.
• Preemption occurs on a given processor when a higher-priority task
blocks a lower-priority task from executing.
• This blocking occurs due to the priority levels of the different
tasks in a given task set.
• Rate monotonic is a preemptive algorithm, which means that if a task
with a shorter period arrives during execution, it gains a higher
priority and can preempt the currently running task.
• In RM, priorities are assigned according to time period.
• The priority of a task is inversely proportional to its time period.
• The task with the lowest time period has the highest priority, and
the task with the highest period has the lowest priority, as the
sketch below shows.
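Because the RM priority of a task depends only on its period, assigning priorities is just a sort by period. A minimal sketch: T1 and T2 use the periods from the example below, while T3's period is not stated on the slide, so the value here is an assumption for illustration.

```python
# Rate-monotonic priority assignment: the shorter the period, the higher the
# priority. T3's period (20 ms) is an assumed value, not taken from the slide.
tasks = [("T1", 4), ("T2", 5), ("T3", 20)]   # (name, period in ms)

for priority, (name, period) in enumerate(sorted(tasks, key=lambda t: t[1])):
    print(f"{name}: period = {period} ms -> priority {priority} (0 = highest)")
```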
Step-by-Step Execution:
At time = 0 ms, T1 and T2 are available. Since T1 has the shortest period (T=4ms), it
runs first.
At time = 1 ms, T1 completes. Now, T2 starts execution.
At time = 3 ms, T2 completes. Now, T3 starts execution.
At time = 4 ms, T1 is ready again (period = 4ms), so it preempts T3 and executes.
At time = 5 ms, T1 completes, and T3 resumes execution.
At time = 8 ms, T3 is still running, but T2 is ready again (period = 5ms), so T2
preempts T3 and runs.
At time = 10 ms, T2 completes, and T3 resumes until its completion.
• Gantt Chart Representation:
• | T1 | T2 | T3 | T1 | T3 | T2 |
• 0 1 3 4 5 8 10
• Advantages:
• Higher-frequency (shorter-period) tasks get priority.
• Ensures real-time constraints are met as long as CPU utilization stays under a
provable bound (see the utilization check after the limitations below).
• Simple to implement compared to dynamic-priority scheduling such as EDF.

• Limitations of RM Scheduling:
• Not optimal for all task sets – some task sets may not be schedulable with
RM even though they are schedulable with EDF.
• High-priority tasks can starve low-priority tasks if CPU utilization is high.
• Fixed priorities – cannot adapt to changing system conditions dynamically.
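The utilization claims above can be made concrete with the classic Liu and Layland test: a set of n periodic tasks with implicit deadlines is guaranteed schedulable by RM if the total utilization U = sum(e_i / p_i) is at most n(2^(1/n) - 1), whereas EDF only requires U <= 1. A small check, with an illustrative two-task set that EDF can schedule but RM cannot:

```python
def utilization(tasks):
    """Total CPU utilization of periodic tasks given as (execution_time, period)."""
    return sum(e / p for e, p in tasks)

def rm_bound(n):
    """Liu & Layland sufficient bound for RM schedulability: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

# Illustrative task set: U = 1/2 + 2.5/5 = 1.0, so EDF can schedule it, but it
# exceeds the RM bound (~0.828 for two tasks) and in fact misses a deadline
# under fixed RM priorities.
task_set = [(1, 2), (2.5, 5)]
u = utilization(task_set)
print(f"U = {u:.3f}, RM bound = {rm_bound(len(task_set)):.3f}, EDF bound = 1.000")
print("guaranteed under RM:", u <= rm_bound(len(task_set)))
print("schedulable under EDF:", u <= 1.0)
```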
Earliest Deadline First (EDF) Scheduling Algorithm
• EDF is a dynamic algorithm: job priorities are re-evaluated at every
decision point. This re-evaluation is based on the relative deadline
of a job or task; the closer the deadline, the higher the priority.
The EDF has the following advantages:
• Very flexible (arrival times and deadlines do not need to be known
before implementation).
• Moderate complexity.
• Able to handle aperiodic jobs.
The EDF has the following disadvantages:
• Optimality requires pre-emptive jobs.
• Not optimal on several processors.
• Difficult to verify.
Example
• Consider the following task set, where each deadline is relative to
the process's arrival time. Assume that the job priorities are
re-evaluated at the release and deadline of a job.
• Consider three processes with the following properties:
• Process Arrival Time Execution Time Deadline
• P1 0 ms 5 ms 7 ms
• P2 2 ms 1 ms 4 ms
• P3 4 ms 2 ms 6 ms

• Step-by-Step Execution:
• At time = 0 ms: Only P1 is available, so it starts execution.
• At time = 2 ms: P2 arrives with an absolute deadline of 6 ms (2 + 4),
which is earlier than P1's 7 ms. So P1 is preempted, and P2 starts execution.
• At time = 3 ms: P2 completes, and P1 resumes execution.
• At time = 4 ms: P3 arrives with an absolute deadline of 10 ms (4 + 6);
P1's deadline of 7 ms is still the earliest, so P1 continues.
• At time = 6 ms: P1 completes, and P3 starts execution.
• At time = 8 ms: P3 completes execution.
• Gantt Chart Representation:

• | P1 | P2 | P1 | P3 |
• 0 2 3 6 8
• P1 runs from 0-2 ms (until P2 arrives).
• P2 runs from 2-3 ms (has the earliest deadline).
• P1 resumes from 3-6 ms (finishes execution).
• P3 runs from 6-8 ms (only process left).
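The trace above can be reproduced with a minimal time-stepped EDF simulation. It treats each deadline as relative to the process's arrival (absolute deadlines 7, 6 and 10 ms) and uses the execution times from the table; this is a sketch of the policy, not of any particular scheduler implementation.

```python
# Tasks from the example: (name, arrival, execution_time, relative_deadline), in ms.
tasks = [("P1", 0, 5, 7), ("P2", 2, 1, 4), ("P3", 4, 2, 6)]

arrival      = {name: arr      for name, arr, exe, dl in tasks}
remaining    = {name: exe      for name, arr, exe, dl in tasks}
abs_deadline = {name: arr + dl for name, arr, exe, dl in tasks}

schedule, t = [], 0
while any(remaining.values()):
    # Jobs that have arrived and still have work left.
    ready = [n for n in remaining if arrival[n] <= t and remaining[n] > 0]
    if ready:
        # EDF rule: run the ready job with the earliest absolute deadline.
        running = min(ready, key=lambda n: abs_deadline[n])
        remaining[running] -= 1
        schedule.append(running)
    else:
        schedule.append("idle")
    t += 1

print(" ".join(schedule))
# Prints: P1 P1 P2 P1 P1 P1 P3 P3  (P1 0-2 ms, P2 2-3 ms, P1 3-6 ms, P3 6-8 ms)
```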
