The document discusses processes and process scheduling in operating systems. It defines a process as a program in execution comprising a program counter, stack, and data section, and describes the process states (new, ready, running, waiting, terminated) and the process control block, which records each process's state, program counter, memory allocation, and other bookkeeping. It then covers scheduling criteria, preemptive versus non-preemptive techniques, and worked examples showing how algorithms such as first come first serve (FCFS), shortest job first (SJF), priority, and round robin scheduling optimize CPU utilization, throughput, turnaround time, waiting time, and response time.
1. Process management is an integral part of operating systems for allocating resources, enabling information sharing, and protecting processes. The OS maintains data structures describing each process's state and resource ownership.
2. Processes go through discrete states and events can cause state changes. Scheduling selects processes to run from ready, device, and job queues using algorithms like round robin, shortest job first, and priority scheduling.
3. CPU scheduling aims to maximize utilization and throughput while minimizing waiting times using criteria like response time, turnaround time, and fairness between processes.
CPU scheduling decides which processes run when multiple are ready. It aims to make the system efficient, fast and fair. There are different scheduling algorithms like first come first serve (FCFS), shortest job first (SJF), priority scheduling, and round robin. Multi-level feedback queue scheduling uses multiple queues and allows processes to move between queues based on their CPU usage to prioritize shorter interactive processes.
The document discusses CPU scheduling algorithms. It begins by explaining the basic concepts of CPU scheduling, including that the CPU scheduler selects ready processes to execute on the CPU. This allows for multi-programming by switching the CPU among ready processes instead of waiting for each process to finish. The document then discusses different scheduling algorithms like first come first served and shortest job first, and evaluates them based on criteria like CPU utilization, throughput, turnaround time, and waiting time.
CPU scheduling is the process by which the CPU selects which process to execute next from among the processes in memory that are ready to execute. The CPU scheduler selects processes from the ready queue. The goal of CPU scheduling is to maximize CPU utilization and throughput while minimizing waiting time and response time. Common CPU scheduling algorithms include first come first serve (FCFS), which services processes in the order they arrive, and shortest job first (SJF), which selects the process with the shortest estimated run time to execute next.
Process Scheduling in Operating Systems Ppt (Aryan904173)
Process scheduling is the activity of the operating system that decides which process (program) in the ready queue will execute next on the CPU. Its goal is to maximize CPU utilization, improve response time, and ensure fairness among processes.
* Using SJF preemptive scheduling:
P2 will execute from time 0 to 5 ms.
P3 will execute from time 5 to 8 ms.
P1 will execute from time 8 to 18 ms.
P4 will execute from time 18 to 38 ms.
P5 will execute from time 38 to 40 ms.
Total waiting time = (10-5) + (8-5) + (18-0) + (38-5) + (40-10) = 5 + 3 + 18 + 33 + 30 = 89 ms
Average waiting time = Total waiting time / Number of processes = 89/5 = 17.8 ms
* Using Priority preemptive scheduling
The document discusses operating system concepts including CPU scheduling, process states, and scheduling algorithms. It covers historical perspectives on CPU scheduling and bursts, preemptive vs. nonpreemptive scheduling, and scheduling criteria. Common scheduling algorithms like first-come, first-served (FCFS), shortest-job-first (SJF), priority, and round robin are described. The roles of long-term and short-term schedulers are defined.
Unit II OS process scheduling and synchronization (donny101)
The document discusses process scheduling and synchronization in operating systems. It covers CPU scheduling algorithms like first-come first-served, shortest job first, priority scheduling, and round robin. It also discusses concepts like critical section problem, synchronization methods using semaphores and monitors, and solutions to deadlocks. Process synchronization ensures that processes access shared resources in a mutually exclusive way.
The document discusses various CPU scheduling algorithms used in operating systems. It describes the main objective of CPU scheduling as maximizing CPU utilization by allowing multiple processes to share the CPU. It then explains different scheduling criteria like throughput, turnaround time, waiting time and response time. Finally, it summarizes common scheduling algorithms like first come first served, shortest job first, priority scheduling and round robin scheduling.
The document discusses different CPU scheduling algorithms:
1. First Come First Served scheduling allocates the CPU to the longest-waiting (earliest-arriving) process first, which can result in short processes waiting behind long ones (the convoy effect).
2. Shortest Job First scheduling allocates CPU to the process with the shortest estimated run time, minimizing average wait time. Preemptive SJF allows interrupting the current process if a shorter one arrives.
3. Priority scheduling assigns priority levels and allocates CPU to the highest priority ready process. Preemption and aging policies address starvation of lower priority processes.
4. Round Robin scheduling allocates a time quantum (e.g. 10-100 ms) to each ready process in turn, preempting the process when its quantum expires.
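The round-robin behavior described above can be sketched with a short simulation. This is only an illustration: the process names, burst times, and quantum below are hypothetical, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling for processes that all arrive at time 0.

    bursts  -- dict mapping process name to CPU burst time
    quantum -- time slice given to each process per turn
    Returns (completion_times, context_switches).
    """
    queue = deque(bursts.items())
    time, completion, switches = 0, {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # not finished: back of the queue
        else:
            completion[name] = time
        if queue:  # a context switch happens whenever another process runs next
            switches += 1
    return completion, switches

# Hypothetical workload: three processes, quantum of 4
done, n_switches = round_robin({"P1": 6, "P2": 3, "P3": 5}, quantum=4)
print(done, n_switches)
```

A smaller quantum increases the number of context switches for the same workload, which is the trade-off discussed in the round-robin comparison below.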
Here are the steps to solve this problem:
a) Non-Preemptive Priority Scheduling:
- Process order based on priority: P2, P4, P1, P3
- Number of context switches = 3
b) Round Robin Scheduling with time slice = 2:
- Process order: P1, P2, P1, P4, P1, P3, P1
- Number of context switches = 6
c) With RR, the behavior depends on the time slice size. With a small time slice of 2ms, most processes cannot complete within one time slice. This leads to a larger number of context switches compared to priority scheduling.
d)
The document discusses different CPU scheduling algorithms used in operating systems. It describes first-come, first-served (FCFS) scheduling, which schedules processes in the order they arrive. Shortest job first (SJF) scheduling prioritizes the shortest jobs. Round-robin (RR) scheduling allocates each process a time slice or quantum to use the CPU before switching to another process. The document also covers shortest remaining time next, preemptive priority scheduling, and some of the criteria used to evaluate scheduling algorithms like CPU utilization, throughput, waiting time and response time.
Operating Systems Third Unit - Fourth Semester - Engineering (Yogesh Santhan)
The document describes CPU scheduling concepts in a multiprogramming operating system. It discusses how CPU scheduling depends on CPU bursts and I/O waits as processes alternate between the two states. The scheduler selects processes from the ready queue to run on the CPU. Scheduling can be preemptive, occurring when a process changes states, or non-preemptive. Common scheduling algorithms like first-come, first-served, shortest job first, priority, and round robin are described. Optimization criteria for scheduling like CPU utilization, throughput, turnaround time and waiting time are also covered.
The document discusses various CPU scheduling concepts and algorithms. It covers basic concepts like CPU-I/O burst cycles and scheduling criteria. It then describes common scheduling algorithms like first come first served (FCFS), shortest job first (SJF), priority scheduling, and round robin (RR). It also discusses more advanced topics like multi-level queue scheduling, multi-processor scheduling, and thread scheduling in Linux.
The document discusses different scheduling algorithms used by operating systems. It begins by explaining that the scheduler decides which process to activate from the ready queue when multiple processes are runnable. There are long-term, medium-term, and short-term schedulers that control admission of jobs, memory allocation, and CPU sharing respectively. The goal of scheduling is to optimize system performance and resource utilization while providing responsive service. Common algorithms include first-come first-served (FCFS), shortest job first (SJF), priority scheduling, and round-robin. FCFS schedules processes in the order of arrival while SJF selects the shortest job first. Priority scheduling preempts lower priority jobs.
The document discusses operating system scheduling. It defines key scheduling criteria like CPU utilization, throughput, turnaround time, waiting time, and response time. It also outlines common scheduling algorithms like first-come first-served (FCFS), shortest-job-next (SJN), priority scheduling, shortest remaining time, and round robin. For each algorithm, it provides examples of how they work and how to calculate metrics like waiting time and turnaround time. It also distinguishes between time-sharing systems, which context switch between processes frequently for fast response, and parallel processing systems, which divide programs across multiple processors.
The document discusses various CPU scheduling algorithms used in operating systems. It describes scheduling concepts like processes alternating between CPU and I/O bursts. Common scheduling criteria like CPU utilization, throughput and waiting time are discussed. Specific algorithms covered include First Come First Serve (FCFS), Shortest Job First (SJF), Priority Scheduling, and Round Robin scheduling. More advanced techniques like multilevel queue and multilevel feedback queue scheduling are also summarized.
The document discusses CPU scheduling techniques used in operating systems to improve CPU utilization. It describes how multiprogramming allows multiple processes to share the CPU by switching between processes when one is waiting for I/O. Common scheduling algorithms like first-come first-served (FCFS), priority scheduling, round robin, and shortest job first are explained. The goal of scheduling is to maximize throughput and minimize average wait times for processes.
3. Basic Concept
Imagine you're at a busy cafe: orders must be managed efficiently to ensure prompt customer service. Similarly, modern operating systems rely on CPU scheduling to optimize task execution and resource utilization. Scheduling determines the order in which tasks get the CPU, balancing performance, fairness, and responsiveness.
4. Basic Concept
CPU scheduling is the basis of a multi-programmed OS. The objective of multi-programming is to have some process running at all times, to maximize CPU utilization.
7. CPU Scheduling Decisions
CPU scheduling decisions may take place when a process:
1. Switches from the running state to the waiting state (I/O request or wait) - non-preemptive
2. Switches from the running state to the ready state (interrupt) - preemptive
3. Switches from the waiting state to the ready state (at completion of I/O) - preemptive
4. Terminates - non-preemptive
Scheduling under circumstances 1 and 4 only is non-preemptive; otherwise the scheduling scheme is preemptive.
10. Non-Preemptive
Once the CPU is allocated to a process, the process retains the CPU until it completes or switches to a waiting state; it executes until it finishes.
Example algorithm: First In, First Out scheduling.
Example systems: Windows 3.x, Apple Macintosh.
11. Preemptive Scheduling
The CPU or other resources can be taken away from the running process: it is interrupted and resumes once the higher-priority task finishes. A high-priority process that needs immediate execution preempts the current one, and lower-priority processes must wait their turn. This ensures critical tasks are completed efficiently.
12. Dispatcher
The dispatcher gives control of the CPU to the process selected by the scheduler. Its functions include context switching, switching to user mode, and restarting the program at the proper memory location. Because it is invoked on every process switch, the dispatcher must operate efficiently and minimize delays. The time it takes to stop one process and start another is the dispatch latency.
13. Scheduling Criteria
Criteria for deciding which algorithm to use in a particular situation:
1. CPU utilization: the CPU should be kept as busy as possible.
2. Throughput: the number of processes completed per unit of time.
3. Turnaround time: the interval from submission of a process to its completion, i.e. the amount of time taken to execute a particular process.
Turnaround time = time spent getting into memory + waiting in the ready queue + doing I/O + executing on the CPU
14. Scheduling Criteria
4. Waiting time: the amount of time a process spends waiting in the ready queue to acquire control of the CPU.
5. Response time: the time from the submission of a request to the first response, not the final output.
6. Load average: the average number of processes residing in the ready queue waiting for their turn on the CPU.
15. Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
16. Formula
Turnaround time = Completion time - Arrival time
Waiting time = Turnaround time - Burst time
(equivalently, Turnaround time = Burst time + Waiting time)
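These formulas can be sanity-checked with a minimal helper; the arrival, burst, and completion values in the usage line are hypothetical.

```python
def metrics(arrival, burst, completion):
    """Compute turnaround and waiting time from the formulas above."""
    turnaround = completion - arrival               # Turnaround = Completion - Arrival
    waiting = turnaround - burst                    # Waiting = Turnaround - Burst
    assert completion - arrival == burst + waiting  # cross-check: TAT = BT + WT
    return turnaround, waiting

# Hypothetical process: arrives at 2, needs 5 units of CPU, finishes at 10
print(metrics(arrival=2, burst=5, completion=10))  # turnaround 8, waiting 3
```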
17. First-Come, First-Served (FCFS)
Processes that request the CPU first are allocated the CPU first. It is a non-preemptive scheduling algorithm, implemented with a FIFO queue: processes are allocated the CPU according to their arrival times.
18. First-Come, First-Served (FCFS)
When a process enters the ready queue, its PCB is attached to the tail of the queue. When the CPU is free, it is allocated to the process at the head (front) of the queue. FCFS can lead to the convoy effect, where short processes pile up behind a long one.
19. First-Come, First-Served (Example)
Example: Consider three processes arrive in order
P1, P2, and P3.
P1 burst time: 24
P2 burst time: 3
P3 burst time: 3
Draw the Gantt Chart and compute Average Waiting
Time and Average Turn Around Time.
Sol: Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting times: P1 = 0, P2 = 24, P3 = 27, so Average Waiting Time = (0 + 24 + 27)/3 = 17.
Turnaround times: P1 = 24, P2 = 27, P3 = 30, so Average Turnaround Time = (24 + 27 + 30)/3 = 27.
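A minimal FCFS sketch in Python, assuming all processes arrive at time 0 as in this example (the function name is illustrative). With burst times [24, 3, 3] it gives an average waiting time of 17 and an average turnaround time of 27:

```python
# FCFS: serve processes strictly in arrival order (all arrivals at time 0 here).
def fcfs(burst_times):
    """burst_times: bursts in arrival order. Returns (avg_waiting, avg_turnaround)."""
    waiting, turnaround, clock = [], [], 0
    for bt in burst_times:
        waiting.append(clock)       # time already spent in the ready queue
        clock += bt
        turnaround.append(clock)    # completion time (arrival = 0)
    n = len(burst_times)
    return sum(waiting) / n, sum(turnaround) / n

avg_wt, avg_tat = fcfs([24, 3, 3])  # P1, P2, P3 in arrival order
# avg_wt = 17.0, avg_tat = 27.0
```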
23. (2) Shortest Job First
The process with the least burst time (BT) is selected next; the CPU is assigned to the process with the smallest BT.
SJF has two types:
1. Non-Preemptive:
The CPU stays with the process until it completes.
24. (2) Shortest Job First
2. Preemptive:
When a new process enters the ready queue, the scheduler compares its burst time with the remaining time of the running process.
A shorter process preempts the currently running process.
This variant is also called Shortest Remaining Time First (SRTF) and minimizes the average waiting time.
25. Shortest Job First (Preemptive)
Q1. Consider the following processes with arrival time (A.T) and burst time (B.T):
Process  A.T  B.T
P1       0    4
P2       0    6
P3       0    4
Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF
26. Shortest Job First (Preemptive)
Q1. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       0    9
P2       1    4
P3       2    9
Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF
27. Shortest Job First (Preemptive)
Q1. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       0    5
P2       1    7
P3       3    4
Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF
28. Shortest Job First (Preemptive)
Q1. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       0    5
P2       1    3
P3       2    3
P4       3    1
Calculate the completion time, turnaround time, and average waiting time.
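The preemptive SJF (SRTF) procedure can be sketched as a unit-time simulation in Python; this minimal illustration uses the data of the last exercise (structure and names are my own, and ties fall back to arrival order):

```python
# SRTF: each time unit, run the arrived process with the least remaining time.
def srtf(processes):
    """processes: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = {name: bt for name, at, bt in processes}
    arrival = {name: at for name, at, bt in processes}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock = min(arrival[n] for n in remaining)  # CPU idles until next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])  # least remaining time
        remaining[current] -= 1                           # run for one time unit
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    return completion

ct = srtf([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 3), ("P4", 3, 1)])
# ct = {'P2': 4, 'P4': 5, 'P3': 8, 'P1': 12}
```

Turnaround times follow as completion minus arrival, and waiting times as turnaround minus burst.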
29. Shortest Job First (Non-Preemptive)
P1 burst time: 15
P2 burst time: 8
P3 burst time: 10
P4 burst time: 3
All processes arrived at the same time.
30. Practice: Shortest Job First (Non-Preemptive)
Q1. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       1    7
P2       2    5
P3       3    1
P4       4    2
P5       5    8
Calculate the completion time, turnaround time, and average waiting time.
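A non-preemptive SJF sketch for this kind of exercise (illustrative Python, not from the slides; once chosen, a process runs to completion):

```python
# Non-preemptive SJF: pick the shortest arrived job; no preemption once started.
def sjf_nonpreemptive(processes):
    """processes: list of (name, arrival, burst). Returns {name: completion_time}."""
    pending = list(processes)
    completion, clock = {}, 0
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:
            clock = min(p[1] for p in pending)   # idle until the next arrival
            continue
        name, at, bt = min(ready, key=lambda p: p[2])  # shortest burst among arrived
        clock += bt                                    # runs to completion
        completion[name] = clock
        pending.remove((name, at, bt))
    return completion

ct = sjf_nonpreemptive([("P1", 1, 7), ("P2", 2, 5), ("P3", 3, 1),
                        ("P4", 4, 2), ("P5", 5, 8)])
# ct = {'P1': 8, 'P3': 9, 'P4': 11, 'P2': 16, 'P5': 24}
```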
32. SRTF Example
Consider three processes, all arriving at time zero, with total
execution time of 10, 20 and 30 units, respectively. Each
process spends the first 20% of execution time doing I/O, the
next 70% of time doing computation, and the last 10% of time
doing I/O again. The operating system uses a shortest
remaining compute time first scheduling algorithm and
schedules a new process either when the running process gets
blocked on I/O or when the running process finishes its
compute burst. Assume that all I/O operations can be
overlapped as much as possible. For what percentage of time
does the CPU remain idle?
(A) 0% (B) 10.6% (C) 30.0% (D) 89.4%
33. SRTF Example
Explanation: Let three processes be p0, p1 and p2. Their execution time is 10, 20
and 30 respectively. p0 spends first 2 time units in I/O, 7 units of CPU time and
finally 1 unit in I/O. p1 spends first 4 units in I/O, 14 units of CPU time and finally 2
units in I/O. p2 spends first 6 units in I/O, 21 units of CPU time and finally 3 units in
I/O.
Gantt chart: idle (0-2), p0 (2-9), p1 (9-23), p2 (23-44), idle (44-47)
Total time spent = 47
Idle time = 2 + 3 = 5
Percentage of idle time = (5/47) * 100 = 10.6%. Answer: (B)
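The arithmetic of this solution can be checked with a short Python snippet; the timeline reasoning is encoded in the comments (this is not a general SRTF simulator):

```python
# Check of the worked idle-time computation for the three processes above.
exec_times = [10, 20, 30]                       # p0, p1, p2
io_first = [t * 20 // 100 for t in exec_times]  # first 20% is I/O:  [2, 4, 6]
compute  = [t * 70 // 100 for t in exec_times]  # middle 70% is CPU: [7, 14, 21]
io_last  = [t * 10 // 100 for t in exec_times]  # last 10% is I/O:   [1, 2, 3]

# The CPU idles until p0's initial I/O finishes; after that, each later
# process has finished its initial I/O by the time the CPU becomes free,
# so the compute bursts run back to back under shortest-remaining-compute-first.
initial_idle = min(io_first)                    # 2
busy = sum(compute)                             # 7 + 14 + 21 = 42
# p0's and p1's final I/O overlap with later compute bursts; only p2's
# trailing I/O leaves the CPU idle at the end.
final_idle = io_last[-1]                        # 3
total = initial_idle + busy + final_idle        # 47
idle_pct = 100 * (initial_idle + final_idle) / total
print(round(idle_pct, 1))                       # 10.6
```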
34. Priority Scheduling
A priority is associated with each process.
The CPU is allocated to the process with the highest priority.
If two processes have the same priority, FCFS is used to break the tie.
Disadvantage: Starvation (low-priority processes may wait indefinitely).
Solution to Starvation:
Aging: the priority of a process is gradually increased over time to prevent it from starving (waiting indefinitely).
35. Priority Scheduling (Preemptive)
Process  Arrival Time  Priority  Burst Time
P1       1             5         4
P2       2             7         2
P3       3             4         3
(Completion Time to be calculated. Consider 4 as the highest and 7 as the lowest priority.)
36. Priority Scheduling (Preemptive)
Process  Arrival Time  Priority  Burst Time
P1       0             2         10
P2       2             1         5
P3       3             0         2
P4       5             3         20
(Completion Time to be calculated. Consider 0 as the lowest and 3 as the highest priority.)
37. Priority Scheduling (Preemptive)
Process  Arrival Time  Priority  Burst Time
P1       0             2         10
P2       2             1         5
P3       3             0         2
P4       5             3         20
(Completion Time to be calculated. Consider 3 as the lowest and 0 as the highest priority.)
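Preemptive priority scheduling can be sketched as a unit-time simulation in Python. In this illustration a smaller number means higher priority (one of the two conventions used above) and FCFS breaks ties; names and structure are my own:

```python
# Preemptive priority scheduling: always run the arrived process with the
# highest priority (smallest number here); earlier arrival breaks ties (FCFS).
def priority_preemptive(processes):
    """processes: list of (name, arrival, priority, burst). Returns {name: completion}."""
    remaining = {name: bt for name, at, pr, bt in processes}
    info = {name: (at, pr) for name, at, pr, bt in processes}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if info[n][0] <= clock]
        if not ready:
            clock = min(info[n][0] for n in remaining)   # idle until next arrival
            continue
        current = min(ready, key=lambda n: (info[n][1], info[n][0]))
        remaining[current] -= 1                          # run one time unit
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    return completion

ct = priority_preemptive([("P1", 0, 2, 10), ("P2", 2, 1, 5),
                          ("P3", 3, 0, 2), ("P4", 5, 3, 20)])
# ct = {'P3': 5, 'P2': 9, 'P1': 17, 'P4': 37}
```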
38. Round Robin Scheduling
A time quantum is associated with all processes.
Time Quantum: the maximum amount of time for which a process can run once it is scheduled.
RR scheduling is always preemptive.
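A round-robin sketch in Python, assuming all processes arrive at time 0; the data reuses the earlier FCFS example with a quantum of 4 (names are illustrative):

```python
# Round robin: each process runs for at most one time quantum, then goes to
# the tail of the ready queue if it still has work left.
from collections import deque

def round_robin(burst_times, quantum):
    """burst_times: {name: burst}. Returns {name: completion_time}."""
    queue = deque(burst_times.items())
    completion, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # run for at most one quantum
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
        else:
            completion[name] = clock
    return completion

ct = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
# ct = {'P2': 7, 'P3': 10, 'P1': 30}
```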
42. Multilevel Queue Scheduling
Partition the ready queue into several separate queues.
For example, a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student/user processes
43. Multilevel Queue
A process may move between the various queues.
A multilevel queue scheduler is defined by the following parameters:
1. the number of queues
2. the scheduling algorithm for each queue
3. the method used to determine when to upgrade or demote a process
4. the method used to determine which queue gets service
46. Multilevel Queue
1. System Processes: programs that belong to the OS (system files).
2. Interactive Processes: real-time processes, e.g. playing games online, listening to music online, etc.
3. Batch Processes: many jobs are pooled and one at a time is selected for execution (historically submitted via punch cards).
4. Student/User Processes
47. Multilevel Queue
No process in the batch queue can run unless the higher-priority queues are empty.
If an interactive editing process enters the ready queue while a batch process is running, the batch process is preempted.
48. Multilevel Queue
Processes can be:
Foreground processes: interactive processes the user is currently working with.
Background processes: processes running in the background whose effects are not directly visible to the user.
Processes are permanently assigned to one queue based on some property:
memory size, process priority, or process type.
Each queue has its own scheduling algorithm.
49. Multilevel Queue
Different types of processes exist, so the same scheduling algorithm cannot be applied to all of them.
Disadvantages:
1. As long as a higher-priority queue is non-empty, no process from the lower-priority queues is selected.
2. This leads to starvation of lower-priority processes.
Advantage:
A separate scheduling algorithm can be applied to each queue.
50. Multilevel Feedback Queue
The solution is the Multilevel Feedback Queue.
If a process takes too long to execute, it is preempted and moved to a lower-priority queue.
A low-priority process is not allowed to wait for too long:
after some time, the lowest-priority processes are moved to a higher-priority queue (Aging).
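A minimal multilevel feedback queue sketch in Python with two round-robin levels and a final FCFS level. The quanta (8 and 16) and the omission of aging are my simplifications for illustration, not from the text:

```python
# MLFQ sketch: new processes enter the top queue; a process that uses its
# whole quantum is demoted one level. The bottom level runs jobs to completion.
from collections import deque

def mlfq(burst_times, quanta=(8, 16)):
    """burst_times: {name: burst}. Two RR levels with the given quanta, then FCFS.
    Returns {name: completion_time}. Aging is omitted for brevity."""
    levels = [deque(), deque(), deque()]
    for name, bt in burst_times.items():
        levels[0].append((name, bt))                      # all start at the top
    completion, clock = {}, 0
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty level
        name, remaining = levels[lvl].popleft()
        run = remaining if lvl == 2 else min(quanta[lvl], remaining)
        clock += run
        if remaining > run:
            levels[lvl + 1].append((name, remaining - run))  # demote CPU hogs
        else:
            completion[name] = clock
    return completion

ct = mlfq({"P1": 30, "P2": 6, "P3": 20})
# ct = {'P2': 14, 'P3': 50, 'P1': 56}
```

Short jobs like P2 finish in the top queue, which is how MLFQ favors short interactive processes over long CPU-bound ones.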
53. Multi-processor Scheduling
Concerns:
• Multiple CPUs enable load sharing.
• The focus here is on systems with homogeneous processors, where any processor can run any process.
• Even then, various limitations exist.
• In systems where an I/O device is attached to a specific processor, processes that use that device must run on that processor.
55. Approaches to Multiple-Processor Scheduling
1. Asymmetric multiprocessing
• All scheduling decisions and I/O processing are handled by a single processor, the master server.
• The other processors execute only user code.
• Only one processor accesses the system data structures, reducing the need for data sharing.
• Having a single processor control system activities simplifies data management and reduces conflicts.
56. Approaches to Multiple-Processor Scheduling
2. Symmetric multiprocessing (SMP)
Each processor is self-scheduling.
All processes may share a common ready queue, or each processor may have its own private queue.
Each processor's scheduler examines its queue and selects a process to run.
Virtually all modern operating systems support SMP, including Windows, Linux, and Mac OS X.
57. Issues concerning SMP systems
1. Processor Affinity
(a process has an affinity for the processor on which it is currently
running.)
Consider what happens to cache memory when a process
has been running on a specific processor.
The data most recently accessed by the process populate
the cache for the processor.
As a result, successive memory accesses by the process are
often in cache memory.
58. Issues concerning SMP systems
1. Processor Affinity
If the process migrates to another processor, the contents of cache memory must be invalidated on the first processor, and the cache on the second processor must be repopulated.
59. Issues concerning SMP systems
1. Processor Affinity
Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migrating a process from one processor to another and instead attempt to keep a process running on the same processor.
This is known as processor affinity: a process has an affinity for the processor on which it is currently running.
60. Issues concerning SMP systems
Forms of Processor Affinity
1. Soft affinity
• The OS attempts to keep a process on the same processor, but makes no guarantee.
• The process may still migrate between processors.
2. Hard affinity
• A process can specify the subset of processors on which it is allowed to run.
• The OS will not migrate the process outside that subset.
61. Issues concerning SMP systems
2. Load Balancing
• On SMP systems, it is important to keep the workload balanced across all processors.
• Load balancing attempts to keep every processor utilized, so that no processor sits idle while others are overloaded.
• Idle processors waste capacity; balanced workloads maximize the system's overall performance.
62. Issues concerning SMP systems
2. Load Balancing
Two approaches to load balancing: push migration and pull
migration.
1. Push migration: a specific task periodically checks the
load on each processor and—if it finds an imbalance—evenly
distributes the load by moving (or pushing) processes from
overloaded to idle or less-busy processors.
2. Pull migration: occurs when an idle processor pulls a
waiting task from a busy processor.
63. Issues concerning SMP systems
3. Multicore Processors
• A multicore processor places multiple independent cores on a single chip.
• Each core reads and executes program instructions.
• Each core has its own registers and cache.
• A core performs the tasks of a processor, but is not an entire processor by itself.
64. Issues concerning SMP systems
3. Multicore Processors
Multicore processors may complicate scheduling issues:
When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available. This situation is known as a memory stall.
A memory stall may occur due to a cache miss (accessing data that are not in cache memory).
65. Issues concerning SMP systems
Memory Stall
In a memory stall, the processor can spend up to 50 percent of its time waiting for data to become available from memory.
66. Real-Time Scheduling
Real-time scheduling concerns the analysis and testing of the scheduler and the algorithms used in real-time applications.
a) Soft real-time systems b) Hard real-time systems
a) Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled.
b) Hard real-time systems have stricter requirements: a task must be serviced by its deadline.
67. Rate-Monotonic Scheduling
• Rate-monotonic scheduling (RMS) assigns each task a static priority.
• It uses a preemptive scheduling policy.
• RMS is used in real-time operating systems.
• The static priority of a task depends on the length of its period (cycle duration).
• The shorter the period, the higher the priority.
• This way, periodic tasks with tighter timing constraints get the CPU first.
68. Rate-Monotonic Scheduling
Rate-monotonic scheduling assumes that the processing time of a periodic process is the same for each CPU burst.
For example:
Consider P1 and P2 with time periods 50 and 100 respectively, and burst times 20 and 35 respectively.
Calculate the CPU utilization of each process and the total CPU utilization.
Sol:
CPU utilization = (Burst Time / Time Period) = (Ti / Pi)
For P1: 20/50 = 0.40, i.e. 40%
For P2: 35/100 = 0.35, i.e. 35%
Total CPU utilization is 75%
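The utilization computation can be written out in Python. The schedulability bound n(2^(1/n) - 1) used below is the classic Liu-Layland bound for rate-monotonic scheduling; it is my addition, not stated in the slides:

```python
# CPU utilization for periodic tasks, as in the example: sum of Ti/Pi.
def rms_utilization(tasks):
    """tasks: list of (burst_time, time_period). Returns total CPU utilization."""
    return sum(t / p for t, p in tasks)

tasks = [(20, 50), (35, 100)]        # P1 and P2 from the example above
u = rms_utilization(tasks)           # 0.40 + 0.35 = 0.75
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)       # Liu-Layland bound, ~0.828 for n = 2
print(u <= bound)                    # True: the task set is schedulable under RMS
```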
69. Earliest-Deadline-First Scheduling
Earliest-deadline-first (EDF) scheduling dynamically assigns priorities according to deadline:
the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
Under the EDF policy, when a process becomes runnable, it must announce its deadline requirements to the system.
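EDF can be sketched as a unit-time simulation in Python; job names, deadlines, and structure are illustrative, not from the slides:

```python
# EDF: at each time unit, run the ready job with the earliest deadline.
def edf(jobs):
    """jobs: list of (name, arrival, burst, deadline). Returns {name: completion}."""
    remaining = {name: bt for name, at, bt, dl in jobs}
    meta = {name: (at, dl) for name, at, bt, dl in jobs}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if meta[n][0] <= clock]
        if not ready:
            clock = min(meta[n][0] for n in remaining)   # idle until next arrival
            continue
        current = min(ready, key=lambda n: meta[n][1])   # earliest deadline wins
        remaining[current] -= 1                          # run one time unit
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    return completion

ct = edf([("J1", 0, 4, 10), ("J2", 1, 2, 4), ("J3", 2, 3, 15)])
# ct = {'J2': 3, 'J1': 6, 'J3': 9} -- each job finishes before its deadline
```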