CPU scheduling is the process of determining which process owns the CPU for execution while other processes are on hold. The main task of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes in the ready queue for execution.
2. 5: CPU-Scheduling 2
What Is In This Chapter?
• This chapter is about how to get a process attached to a processor.
• It centers on scheduling algorithms that perform well.
• The design of a scheduler is concerned with making sure all users get their fair share of the resources.
CPU Scheduling
3. 5: CPU-Scheduling 3
What Is In This Chapter?
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Multiple-Processor Scheduling
• Real-Time Scheduling
• Thread Scheduling
• Operating Systems Examples
• Java Thread Scheduling
• Algorithm Evaluation
4. 5: CPU-Scheduling 4
CPU SCHEDULING: Scheduling Concepts

Multiprogramming: A number of programs can be in memory at the same time; allows overlap of CPU and I/O. Jobs (batch) are programs that run without user interaction; user (time-shared) programs may have user interaction. "Process" is the common name for both.

CPU - I/O burst cycle: Characterizes process execution, which alternates between CPU and I/O activity. CPU times are generally much shorter than I/O times.

Preemptive scheduling: An interrupt causes the currently running process to give up the CPU and be replaced by another process.
5. 5: CPU-Scheduling 5
CPU SCHEDULING: The Scheduler

The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates

Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.
6. 5: CPU-Scheduling 6
CPU SCHEDULING: The Dispatcher

The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program

Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
7. 5: CPU-Scheduling 7
Note usage of the words DEVICE, SYSTEM, REQUEST, JOB.

UTILIZATION: The fraction of time a device is in use (ratio of in-use time / total observation time).
THROUGHPUT: The number of job completions in a period of time (jobs / second).
SERVICE TIME: The time required by a device to handle a request (seconds).
QUEUEING TIME: Time on a queue waiting for service from the device (seconds).
RESIDENCE TIME: The time spent by a request at a device. RESIDENCE TIME = SERVICE TIME + QUEUEING TIME.
RESPONSE TIME: Time used by a system to respond to a user job (seconds).
THINK TIME: The time spent by the user of an interactive system to figure out the next request (seconds).

The goal is to optimize both the average and the amount of variation (but beware the ogre predictability).

CPU SCHEDULING: Criteria For Performance Evaluation
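As a concrete illustration of these criteria, here is a minimal Python sketch computing them for one hypothetical device; all of the measurements below are invented for the example, not taken from the slides.

```python
# Hypothetical measurements for one device over a 100-second window.
observation_time = 100.0   # seconds the device was observed
busy_time = 80.0           # seconds the device was actually in use
completions = 16           # jobs finished during the window

utilization = busy_time / observation_time   # fraction of time in use
throughput = completions / observation_time  # job completions per second

# Residence time at a device = service time + queueing time.
service_time = 2.0   # seconds the device needs per request
queueing_time = 3.0  # seconds a request waits in the device's queue
residence_time = service_time + queueing_time

print(utilization)     # 0.8
print(throughput)      # 0.16
print(residence_time)  # 5.0
```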
8. 5: CPU-Scheduling 8
Most Processes Don’t Use Up Their Scheduling Quantum!

CPU SCHEDULING: Scheduling Behavior
9. 5: CPU-Scheduling 9
FIRST-COME, FIRST-SERVED (FCFS): same as FIFO.

Simple and fair, but poor performance: the average queueing time may be long.

What are the average queueing and residence times for this scenario? How do the average queueing and residence times depend on the ordering of these processes in the queue?

CPU SCHEDULING: Scheduling Algorithms
10. 5: CPU-Scheduling 10
EXAMPLE DATA:
Process Arrival Service
Time Time
1 0 8
2 1 4
3 2 9
4 3 5
0 8 12 21 26
P1 P2 P3 P4
FCFS
Average wait = ( (8-0) + (12-1) + (21-2) + (26-3) )/4 = 61/4 = 15.25
CPU SCHEDULING Scheduling
Algorithms
Residence Time
at the CPU
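The FCFS numbers above can be reproduced with a short simulation sketch in Python; "wait" here follows the slides' convention of residence time at the CPU, i.e. completion minus arrival.

```python
# FCFS over the slide's example data: (pid, arrival time, service time).
processes = [(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)]

clock = 0
residence = {}
for pid, arrival, service in sorted(processes, key=lambda p: p[1]):
    clock = max(clock, arrival)   # CPU may sit idle until the job arrives
    clock += service              # run the job to completion, no preemption
    residence[pid] = clock - arrival  # residence = completion - arrival

avg = sum(residence.values()) / len(residence)
print(residence)  # {1: 8, 2: 11, 3: 19, 4: 23}
print(avg)        # 15.25
```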
11. 5: CPU-Scheduling 11
SHORTEST JOB FIRST:

Optimal for minimizing queueing time, but impossible to implement exactly; instead, it tries to predict the process to schedule based on previous history.

Predicting the time the process will use on its next burst:

t(n+1) = w * t(n) + (1 - w) * T(n)

where t(n+1) is the predicted time of the next burst, t(n) is the time of the current burst, T(n) is the average of all previous bursts, and w is a weighting factor emphasizing current or previous bursts.
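The prediction formula above can be sketched in a few lines of Python. The burst lengths and the weight w = 0.5 below are invented for illustration; only the formula itself comes from the slides.

```python
# The slides' burst predictor: t(n+1) = w*t(n) + (1-w)*T(n), where t(n)
# is the just-measured burst and T(n) is the mean of all earlier bursts.
def predict_next(w, current_burst, history_avg):
    # Blend the newest burst with the historical average.
    return w * current_burst + (1 - w) * history_avg

bursts = [8.0, 4.0, 6.0]  # observed CPU burst lengths (made up)
w = 0.5

predictions = []
for n, burst in enumerate(bursts):
    history_avg = sum(bursts[:n]) / n if n else burst  # T(n)
    predictions.append(predict_next(w, burst, history_avg))

print(predictions)  # [8.0, 6.0, 6.0]
```

A larger w makes the predictor react faster to the most recent burst; a smaller w makes it lean on long-run history.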
12. 5: CPU-Scheduling 12
PREEMPTIVE ALGORITHMS:

Yank the CPU away from the currently executing process when a higher priority process is ready. Can be applied to either Shortest Job First or Priority scheduling. Avoids "hogging" of the CPU.

On time-sharing machines, this type of scheme is required because the CPU must be protected from a runaway low-priority process.

Give short jobs a higher priority: perceived response time is thus better.

What are the average queueing and residence times? Compare with FCFS.
13. 5: CPU-Scheduling 13
EXAMPLE DATA:

Process  Arrival Time  Service Time
  1           0             8
  2           1             4
  3           2             9
  4           3             5

Preemptive Shortest Job First schedule: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26)

Average wait = ( (5-1) + (10-3) + (17-0) + (26-2) ) / 4 = 52/4 = 13.0
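The preemptive schedule above can be reproduced with a minimal shortest-remaining-time-first sketch in Python, advancing one time unit per step; tie-breaking by lower process id is an assumption of the sketch.

```python
# Shortest-remaining-time-first (preemptive SJF) over the slide's data.
procs = {1: (0, 8), 2: (1, 4), 3: (2, 9), 4: (3, 5)}  # pid: (arrival, service)
remaining = {pid: service for pid, (_, service) in procs.items()}
completion = {}

t = 0
while remaining:
    ready = [pid for pid in remaining if procs[pid][0] <= t]
    pid = min(ready, key=lambda p: (remaining[p], p))  # shortest remaining wins
    remaining[pid] -= 1
    t += 1
    if remaining[pid] == 0:
        del remaining[pid]
        completion[pid] = t

# Residence time (the slides' "wait"): completion minus arrival.
residence = {pid: completion[pid] - arrival for pid, (arrival, _) in procs.items()}
avg = sum(residence.values()) / len(residence)
print(completion)  # {2: 5, 4: 10, 1: 17, 3: 26}
print(avg)         # 13.0
```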
14. 5: CPU-Scheduling 14
PRIORITY-BASED SCHEDULING:

Assign each process a priority and schedule the highest priority first. All processes within the same priority are FCFS.

Priority may be determined by the user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage.

Starvation occurs if a low-priority process never runs. Solution: build aging into a variable priority.

There is a delicate balance between giving favorable response to interactive jobs and not starving batch jobs.
15. 5: CPU-Scheduling 15
ROUND ROBIN:

Use a timer to cause an interrupt after a predetermined time; preempt the task if it exceeds its quantum.

Train of events:
1. Dispatch
2. Time slice occurs OR process suspends on an event
3. Put the process on some queue and dispatch the next

Use the numbers in the last example to find queueing and residence times (use quantum = 4 sec).

Definitions:
– Context switch: changing the processor from running one task (or process) to another. Implies changing memory.
– Processor sharing: use of a small quantum such that each process runs frequently, at speed 1/n.
– Reschedule latency: how long it takes from when a process requests to run until it finally gets control of the CPU.
16. 5: CPU-Scheduling 16
ROUND ROBIN: Choosing a time quantum
– Too short: an inordinate fraction of the time is spent in context switches.
– Too long: reschedule latency is too great. If many processes want the CPU, then it's a long time before a particular process can get the CPU; this then acts like FCFS.
– Adjust so most processes won't use their slice. As processors have become faster, this is less of an issue.
17. 5: CPU-Scheduling 17
EXAMPLE DATA:

Process  Arrival Time  Service Time
  1           0             8
  2           1             4
  3           2             9
  4           3             5

Round Robin, quantum = 4, no priority-based preemption:
P1 (0–4), P2 (4–8), P3 (8–12), P4 (12–16), P1 (16–20), P3 (20–24), P4 (24–25), P3 (25–26)

Average wait = ( (20-0) + (8-1) + (26-2) + (25-3) ) / 4 = 73/4 = 18.25

Note: this example violates the rules for quantum size, since most processes don't finish in one quantum.
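This round-robin timeline can be reproduced with a small simulation sketch in Python. One assumption, matching the timeline above: processes that arrive during a slice enter the ready queue ahead of the process that was just preempted.

```python
from collections import deque

# Round robin with quantum 4 over the slide's example data.
procs = [(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)]  # (pid, arrival, service)
quantum = 4

arrivals = deque(sorted(procs, key=lambda p: p[1]))
queue = deque()
remaining, completion = {}, {}
t = 0

def admit(now):
    # Move every process that has arrived by `now` into the ready queue.
    while arrivals and arrivals[0][1] <= now:
        pid, _, service = arrivals.popleft()
        remaining[pid] = service
        queue.append(pid)

admit(t)
while queue or arrivals:
    if not queue:              # CPU idles until the next arrival
        t = arrivals[0][1]
        admit(t)
    pid = queue.popleft()
    run = min(quantum, remaining[pid])
    t += run
    remaining[pid] -= run
    admit(t)                   # new arrivals queue ahead of the preempted job
    if remaining[pid] == 0:
        completion[pid] = t
    else:
        queue.append(pid)

# "Wait" in the slides' sense: residence time, completion minus arrival.
residence = {pid: completion[pid] - arr for pid, arr, _ in procs}
print(residence)                    # {1: 20, 2: 7, 3: 24, 4: 22}
print(sum(residence.values()) / 4)  # 18.25
```

The four residence terms (20, 7, 24, 22) sum to 73, giving the 18.25 average.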
18. 5: CPU-Scheduling 18
MULTI-LEVEL QUEUES:

Each queue has its own scheduling algorithm; some other algorithm (perhaps priority based) then arbitrates between queues. Feedback can be used to move processes between queues.

The method is complex but flexible. For example, it could separate system, interactive, batch, favored, and unfavored processes.
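The arbitration step can be sketched as a toy in Python. The queue names and process names below are invented for illustration; a real scheduler would also apply each queue's own algorithm to the processes within it.

```python
from collections import deque

# Two-level queue: the foreground queue always wins over the background.
foreground = deque(["editor", "shell"])   # e.g. scheduled round robin
background = deque(["batch-report"])      # e.g. scheduled FCFS

def pick_next():
    """Arbitrate between queues by fixed priority: foreground first."""
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None  # nothing ready to run

order = [pick_next() for _ in range(4)]
print(order)  # ['editor', 'shell', 'batch-report', None]
```

Fixed-priority arbitration like this can starve the background queue, which is why feedback schemes move long-waiting processes upward.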
20. 5: CPU-Scheduling 20
MULTIPLE PROCESSOR SCHEDULING:

Different rules apply for homogeneous and heterogeneous processors.

Load sharing is the distribution of work such that all processors have an equal amount to do.

Each processor can schedule from a common ready queue (equal machines), OR a master/slave arrangement can be used.

Real-Time Scheduling:
• Hard real-time systems are required to complete a critical task within a guaranteed amount of time.
• Soft real-time computing requires that critical processes receive priority over less fortunate ones.
21. 5: CPU-Scheduling 21
CPU SCHEDULING: Linux Scheduling

Two algorithms: time-sharing and real-time.
• Time-sharing
  – Prioritized, credit-based: the process with the most credits is scheduled next.
  – Credit is subtracted when a timer interrupt occurs.
  – When credit = 0, another process is chosen.
  – When all runnable processes have credit = 0, recrediting occurs, based on factors including priority and history.
• Real-time
  – Soft real-time.
  – POSIX.1b compliant, with two scheduling classes: FCFS and RR. The highest priority process runs first.
22. 5: CPU-Scheduling 22
CPU SCHEDULING: Algorithm Evaluation

How do we decide which algorithm is best for a particular environment?
• Deterministic modeling: takes a particular predetermined workload and defines the performance of each algorithm for that workload.
• Queueing models.
23. 5: CPU-Scheduling 23
We’ve looked at a number of different scheduling algorithms; which one works best is application dependent.
• A general-purpose OS will use priority-based, preemptive, round-robin scheduling.
• A real-time OS will use priority scheduling without preemption.

CPU SCHEDULING: WRAPUP