Multiprocessor Scheduling
Introduction
 When a computer system contains more than a single processor, several
new issues are introduced into the design of scheduling functions. We will
examine these issues and the details of scheduling algorithms for tightly
coupled multi-processor systems.
Classifications of Multiprocessor Systems
 Loosely coupled or distributed multiprocessor, or cluster
 Each processor has its own memory and I/O channels
 Functionally specialized processors
 Such as I/O processor
 Controlled by a master processor
 Tightly coupled multiprocessing
 Processors share main memory
 Controlled by operating system
[Figure: three multiprocessor organizations. Shared Nothing (loosely coupled): each processor has its own memory and disk, linked by an interconnection network. Shared Memory (tightly coupled): processors access a global shared memory over the interconnection network, each with its own disk. Shared Disk: each processor has its own memory, and all disks are reached through the interconnection network.]
Granularity
 A good metric for characterizing multiprocessors and placing them in context with other architectures is the synchronization granularity, or frequency of synchronization, between processes in a system.
 Five categories of parallelism that differ in the degree of granularity
can be defined:
1. Independent parallelism
2. Very coarse granularity
3. Coarse granularity
4. Medium granularity
5. Fine granularity
Independent Parallelism
 With independent parallelism there is no explicit synchronization among
processes.
 Each process represents a separate, independent application or job.
 This type of parallelism is typical of a time-sharing system.
 The multiprocessor provides the same service as a multiprogrammed uniprocessor; however, because more than one processor is available, average response times to the user tend to be lower.
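As a minimal illustration (assuming a Python environment; the job function and counts are made up), independent parallelism is simply unrelated jobs running on separate processors with no synchronization between them:

# Minimal sketch of independent parallelism: unrelated jobs run in
# separate processes and never synchronize with one another.
from multiprocessing import Pool

def independent_job(job_id):
    # Stand-in for a separate application; no shared state, no communication.
    return sum(i * i for i in range(10_000)) + job_id

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # one worker per available processor
        results = pool.map(independent_job, range(8))
    print(results)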
Coarse and Very Coarse-Grained Parallelism
 With coarse and very coarse-grained parallelism, there is synchronization among processes, but at a very gross level.
 This type of situation is easily handled as a set of concurrent processes running on a multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no change to user software.
 Concurrent processes that need to communicate or synchronize can benefit from the use of a multiprocessor architecture.
 In the case of very infrequent interaction among the processes, a distributed
system can provide good support. However, if the interaction is somewhat
more frequent, then the overhead of communication across the network may
negate some of the potential speedup. In that case, the multiprocessor
organization provides the most effective support.
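A rough sketch of coarse-grained parallelism (the worker function and chunk sizes are illustrative): each process computes a large independent chunk and synchronizes only once, when it reports its result back to the parent.

# Sketch of coarse-grained parallelism: worker processes compute large
# independent chunks and synchronize only occasionally, by reporting a
# result back to the parent through a queue.
from multiprocessing import Process, Queue

def worker(chunk_id, results):
    total = sum(range(chunk_id * 1_000_000, (chunk_id + 1) * 1_000_000))
    results.put((chunk_id, total))           # the only synchronization point

if __name__ == "__main__":
    results = Queue()
    procs = [Process(target=worker, args=(i, results)) for i in range(4)]
    for p in procs:
        p.start()
    collected = [results.get() for _ in procs]   # drain results, then wait
    for p in procs:
        p.join()
    print(sorted(collected))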
Medium-Grained Parallelism
 Single application is a collection of threads
 Threads usually interact frequently
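A brief sketch of medium-grained parallelism (the worker function and thread count are assumptions): threads of one application share an address space and synchronize frequently on shared data.

# Sketch of medium-grained parallelism: threads of one application share
# an address space and synchronize frequently on a common counter.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:                 # frequent synchronization on shared state
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 4000: all updates were coordinated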
Fine-Grained Parallelism
 Highly parallel applications
 Specialized and fragmented area
Design Issues for Multiprocessor Scheduling
 Scheduling on a multiprocessor involves three interrelated
issues:
1. The assignment of processes to processors
2. The use of multiprogramming on individual processors
3. The actual dispatching of a process
 Looking at these three issues, it is important to keep in mind
that the approach taken will depend, in general, on the
degree of granularity of the applications and on the
number of processors available.
1: Assignment of Processes to Processors
 Treat processors as a pooled resource and assign processes to processors on demand
 Permanently assign each process to a processor
 Allows group or gang scheduling
 Dedicated short-term queue for each processor
 Less overhead
 Processor could be idle while another processor has a backlog
 Global queue
 Schedule to any available processor
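As a rough sketch (the Processor class, queue names, and round-robin placement below are illustrative assumptions, not a particular system's implementation), the contrast between dedicating a short-term queue to each processor and scheduling from one global queue looks like this:

# Sketch contrasting static assignment (one short-term queue per processor)
# with a single global queue served by any available processor.
from collections import deque

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.local_queue = deque()   # used only under static assignment

processors = [Processor(i) for i in range(3)]
jobs = [f"job{j}" for j in range(7)]

# Static assignment: each job is bound to one processor's queue (round robin).
# A processor can sit idle while another still has a backlog.
for j, job in enumerate(jobs):
    processors[j % len(processors)].local_queue.append(job)
print([list(p.local_queue) for p in processors])

# Global queue: any processor that becomes idle takes the next ready job.
global_queue = deque(jobs)

def dispatch(cpu_id):
    return global_queue.popleft() if global_queue else None

print(dispatch(0), dispatch(1))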
Assignment of Processes to Processors
 Master/slave architecture
 Key kernel functions always run on a particular processor
 Master is responsible for scheduling
 Slave sends service request to the master
 Disadvantages
 Failure of master brings down whole system
 Master can become a performance bottleneck
 Peer architecture
 Kernel can execute on any processor
 Each processor does self-scheduling
 Complicates the operating system
 Make sure two processors do not choose the same process
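A minimal sketch of the master/slave organization (class and method names are hypothetical): scheduling runs only on the master, and slaves request work from it, which is also why the master is a single point of failure and a potential bottleneck.

# Sketch of a master/slave organization: only the master schedules;
# slaves submit service requests and receive work from the master.
from collections import deque

class Master:
    def __init__(self):
        self.ready_queue = deque()

    def submit(self, process):
        self.ready_queue.append(process)       # slaves send requests here

    def schedule(self):
        # All scheduling decisions are made on the master processor.
        return self.ready_queue.popleft() if self.ready_queue else None

class Slave:
    def __init__(self, cpu_id, master):
        self.cpu_id = cpu_id
        self.master = master

    def request_work(self):
        process = self.master.schedule()
        if process is not None:
            print(f"CPU {self.cpu_id} runs {process}")

master = Master()
for p in ("P1", "P2", "P3"):
    master.submit(p)
slaves = [Slave(i, master) for i in range(1, 3)]
for s in slaves * 2:                            # slaves repeatedly ask for work
    s.request_work()

A peer organization would instead let every processor run this scheduling code itself, at the cost of protecting the shared queue so that two processors never choose the same process.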
Process Scheduling
 Single queue for all processes
 Multiple queues are used for priorities
 All queues feed to the common pool of processors
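A small sketch of multiple priority queues feeding a common pool of processors (the two priority levels and process names are made up for illustration):

# Sketch: multiple priority queues, all feeding the same pool of processors.
from collections import deque

queues = {0: deque(["sys1", "sys2"]),    # higher priority
          1: deque(["user1", "user2"])}  # lower priority

def next_process():
    # Any idle processor scans the queues in priority order.
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None

for cpu in range(4):
    print(f"CPU {cpu} dispatches {next_process()}")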
Multiprocessor Thread Scheduling
 A thread executes separately from the rest of the process
 An application can be a set of threads that cooperate and execute concurrently in the
same address space
 Load sharing
 Processes are not assigned to a particular processor
 Gang scheduling
 A set of related threads is scheduled to run on a set of processors at the same time
 Dedicated processor assignment
 Threads are assigned to a specific processor
 Dynamic scheduling
 Number of threads can be altered during course of execution
Load Sharing
 Load is distributed evenly across the processors
 No centralized scheduler required
 Use global queues
Disadvantages of Load Sharing
 Central queue needs mutual exclusion
 Preempted threads are unlikely to resume execution on the same
processor
 If all threads are in the global queue, all threads of a program will not gain
access to the processors at the same time
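A load-sharing sketch under simplified assumptions (the worker function and counts are illustrative): all ready threads sit in one global queue protected by a lock, and any idle processor pulls the next thread, which shows both the approach and the mutual-exclusion cost noted above.

# Sketch of load sharing: a single global ready queue, protected by a lock,
# from which any idle "processor" (here, a worker thread) pulls work.
import threading
from collections import deque

ready_queue = deque(f"T{i}" for i in range(10))
queue_lock = threading.Lock()          # central queue needs mutual exclusion

def processor(cpu_id):
    while True:
        with queue_lock:
            if not ready_queue:
                return
            thread_name = ready_queue.popleft()
        print(f"CPU {cpu_id} runs {thread_name}")   # may differ run to run

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(3)]
for c in cpus:
    c.start()
for c in cpus:
    c.join()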
Gang Scheduling
 Simultaneous scheduling of threads that make up a single process
 Useful for applications where performance severely degrades when any
part of the application is not running
 Threads often need to synchronize with each other
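Gang scheduling can be sketched as time slices in which all threads of one application occupy the processors at once (the applications and thread counts below are hypothetical):

# Sketch of gang scheduling: in each time slice, all threads of one
# application are scheduled onto the processors at the same time.
applications = {"A": ["A.t0", "A.t1", "A.t2"],
                "B": ["B.t0", "B.t1"]}
num_cpus = 3

for time_slice, app in enumerate(applications):
    threads = applications[app]
    # Every thread of the gang runs in this slice; unused CPUs stay idle.
    assignment = {cpu: (threads[cpu] if cpu < len(threads) else "idle")
                  for cpu in range(num_cpus)}
    print(f"slice {time_slice}: {assignment}")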
Dedicated Processor Assignment
 When an application is scheduled, each of its threads is assigned to a processor
 Some processors may be idle
 No multiprogramming of processors
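A sketch of dedicated processor assignment (the application and its thread names are hypothetical): each thread gets its own processor for the application's lifetime, even if some processors end up idle.

# Sketch of dedicated processor assignment: each thread of the scheduled
# application gets its own processor until the application finishes.
num_cpus = 4
app_threads = ["render", "physics", "audio"]      # application with 3 threads

assignment = {cpu: None for cpu in range(num_cpus)}
for cpu, thread in enumerate(app_threads):
    assignment[cpu] = thread                      # one thread per processor

idle = [cpu for cpu, t in assignment.items() if t is None]
print(assignment)          # {0: 'render', 1: 'physics', 2: 'audio', 3: None}
print("idle CPUs:", idle)  # processor 3 stays idle; no multiprogramming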
Dynamic Scheduling
 The number of threads in a process is altered dynamically by the application
 The operating system adjusts the load to improve utilization
 Assign idle processors
 New arrivals may be assigned to a processor that is used by a job currently using
more than one processor
 Hold request until processor is available
 Assign a processor to each job in the list that currently has no processors (i.e., to all
waiting new arrivals)
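The allocation policy above can be sketched as follows. This is a simplified model (job names and the request/release interface are assumptions, and taking a processor away from a job that holds several is omitted): idle processors go to new requests, unsatisfiable requests are held, and released processors go first to waiting jobs that hold none.

# Sketch of the dynamic-scheduling allocation policy: assign idle processors
# to new requests, hold requests that cannot be met, and when processors are
# freed, serve waiting jobs that currently have no processors first.
idle_cpus = 2
allocation = {}            # job -> number of processors currently held
waiting = []               # jobs whose requests are on hold

def request(job, needed):
    global idle_cpus
    if idle_cpus >= needed:
        allocation[job] = needed
        idle_cpus -= needed
    else:
        waiting.append((job, needed))     # hold request until processors free up

def release(job):
    global idle_cpus
    idle_cpus += allocation.pop(job)
    # Serve waiting jobs (all of which hold no processors) as capacity allows.
    still_waiting = []
    for w_job, needed in waiting:
        if idle_cpus >= needed:
            allocation[w_job] = needed
            idle_cpus -= needed
        else:
            still_waiting.append((w_job, needed))
    waiting[:] = still_waiting

request("J1", 2)           # takes both processors
request("J2", 1)           # held: no idle processor
release("J1")              # J2 now gets a processor
print(allocation, "idle:", idle_cpus)   # {'J2': 1} idle: 1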