Manuscript
CPU Scheduling
Module 6
IT – 21
Reporters:
Gaddi, Dicel D.
Brisenio, Mathew A.
Rapadas, Carl Angelo
Scheduling Algorithms
Scheduling Criteria
Multiple-Processor Scheduling
Multiple-Processor Scheduling
- When multiple processors are available, scheduling becomes more complicated, because there is now more than one CPU that must be kept busy and used effectively at all times.
- Load sharing revolves around balancing the load between multiple processors.
- Virtually all modern operating systems support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.
Processor Affinity
- Processors contain cache memory, which speeds up repeated accesses to the same
memory locations.
- If a process were to switch from one processor to another each time it received a time slice, the data in the cache ( for that process ) would have to be invalidated and reloaded from main memory, negating most of the benefit of the cache.
- Therefore SMP systems attempt to keep processes on the same processor, via processor
affinity. Soft affinity occurs when the system attempts to keep processes on the same
processor but makes no guarantees. Linux and some other OSes support hard affinity, in
which a process specifies that it is not to be moved between processors.
- Main memory architecture can also affect processor affinity, if particular CPUs have faster access to memory on the same chip or board than to memory located elsewhere. ( Non-Uniform Memory Access, NUMA. ) In such systems, if a process has an affinity for a particular CPU, then it should preferentially be assigned memory storage in that CPU's "local" fast-access areas.
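The hard-affinity behavior described above can be requested explicitly on Linux. A minimal sketch, using the Linux-specific sched_setaffinity( ) call ( the CPU number chosen here is just for illustration ):

```c
/* Hard-affinity sketch ( Linux-specific ): pin the calling process to one
 * CPU so its cached data is never invalidated by a migration. */
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process ( pid 0 ) to a single CPU; returns 0 on success,
 * -1 on failure. */
int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);       /* clear the CPU mask           */
    CPU_SET(cpu, &set);   /* allow only the given CPU     */
    return sched_setaffinity(0, sizeof(set), &set);
}
```

After this call succeeds, the scheduler will never move the process to another processor, which is exactly the hard-affinity guarantee the text describes.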
Load Balancing
- Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system. Two general approaches are used:
o Push migration involves a separate process that runs periodically, ( e.g. every 200
milliseconds ), and moves processes from heavily loaded processors onto less loaded
ones.
o Pull migration involves idle processors taking processes from the ready queues of
other processors.
o Push and pull migration are not mutually exclusive.
- Note that moving processes from processor to processor to achieve load balancing works
against the principle of processor affinity, and if not carefully managed, the savings
gained by balancing the system can be lost in rebuilding caches. One option is to only
allow migration when imbalance surpasses a given threshold.
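The threshold option mentioned above can be sketched as a simple decision function. This is a hypothetical helper for illustration, not a real kernel API, and the threshold value is an assumption:

```c
/* Threshold-based push-migration sketch: migrate a process only when the
 * imbalance between the busiest and the idlest ready queue exceeds a
 * threshold, so that processor affinity is preserved in the common case. */

#define IMBALANCE_THRESHOLD 2   /* assumed value, for illustration only */

/* Returns 1 if the periodic push-migration task should move a process
 * from the busiest queue to the idlest one, 0 otherwise. */
int should_migrate(int busiest_queue_len, int idlest_queue_len)
{
    return (busiest_queue_len - idlest_queue_len) > IMBALANCE_THRESHOLD;
}
```

With a small imbalance ( e.g. queue lengths 3 and 2 ) no migration occurs and cached state stays warm; only a large imbalance ( e.g. 5 and 1 ) triggers a move.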
Multicore Processors
- Traditional SMP required multiple CPU chips to run multiple kernel threads
concurrently.
- Recent trends are to put multiple CPUs ( cores ) onto a single chip, which appear to the
system as multiple processors.
- Compute cycles can be blocked by the time needed to access memory whenever the needed data is not already present in the cache ( a cache miss. ) In Figure 5.10, as much as half of the CPU cycles are lost to memory stalls.
- By assigning multiple kernel threads to a single processor, memory stall can be avoided
(or reduced ) by running one thread on the processor while the other thread waits for
memory.
- A dual-threaded dual-core system has four logical processors available to the operating
system. The UltraSPARC T1 CPU has 8 cores per chip and 4 hardware threads per core,
for a total of 32 logical processors per chip.
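The operating system sees each hardware thread as a separate logical processor, and the total count can be queried at runtime. A short sketch using the widely supported sysconf( ) call ( on an UltraSPARC T1 this would report 32 ):

```c
/* Query how many logical processors the OS currently sees, i.e.
 * cores x hardware threads per core on a multithreaded multicore chip. */
#include <unistd.h>

long logical_processors(void)
{
    return sysconf(_SC_NPROCESSORS_ONLN);
}
```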
1. Coarse-grained multithreading switches between threads only when one thread blocks, say on a memory read. Context switching is similar to process switching, with considerable overhead.
2. Fine-grained multithreading switches between threads at a much finer granularity, such as at instruction-cycle boundaries, and the hardware support for switching makes its overhead much lower.
o On a lower level, the hardware schedules logical processors on each physical core
using some other algorithm.
The Intel Itanium is a dual-core chip that uses a 7-level priority scheme ( urgency ) to determine which thread to schedule when one of 5 different events occurs.