CPU SCHEDULING

Sun Solaris OS Scheduling

CPU scheduling is a key concept in computer multitasking, multiprocessing, and real-time operating system design. Scheduling refers to the way processes are assigned to run on the available CPUs; CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

A fundamental job of the operating system is to arbitrate which processes get access to the system's resources. The process scheduler, also called the dispatcher, is the portion of the kernel that controls allocation of the CPU to processes. The scheduler supports the concept of scheduling classes. Each class defines a scheduling policy that is used to schedule processes within the class. The default scheduler in the Solaris Operating System, the TS scheduler, tries to give every process relatively equal access to the available CPUs.
You can use the fair share scheduler (FSS) to control the allocation of available CPU
resources among workloads, based on their importance. This importance is
expressed by the number of shares of CPU resources that you assign to each
workload.
The FSS consists of a kernel scheduling class module and class-specific versions
of the dispadmin and priocntl commands. Project shares used by the FSS are
specified through the project.cpu-shares property in the project database.
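
Enabling and configuring FSS uses those commands; the fragment below is a minimal sketch, and the project name "web" and the share count 10 are illustrative:

```shell
# Make FSS the default scheduling class for the system.
dispadmin -d FSS

# Assign 10 CPU shares to an existing project named "web"
# via the project.cpu-shares property in the project database.
projmod -s -K "project.cpu-shares=(privileged,10,none)" web

# Move the processes of project 42 into the FSS class without a reboot.
priocntl -s -c FSS -i projid 42
```

The default-class setting recorded by dispadmin -d applies to subsequent boots; priocntl is what reclassifies processes that are already running.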

CPU Share Definition

The term “share” is used to define a portion of the system's CPU resources that is allocated to a project.
CPU shares are not equivalent to percentages of CPU resources. Shares are used to define the relative importance of workloads in relation to other workloads. When you assign CPU shares to a project, the absolute number of shares matters less than how that number compares with the shares of other projects. You must also take into account how many of those other projects will be competing with the project for CPU resources.
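
The relative nature of shares can be sketched in a few lines; the project names and share counts below are invented for illustration, and the sketch assumes only actively competing projects count toward the total:

```python
def cpu_fraction(shares, active):
    """Return each active project's fraction of the CPU.

    Only projects actively competing for CPU count toward the total,
    mirroring how FSS divides CPU among runnable workloads.
    """
    total = sum(shares[p] for p in active)
    return {p: shares[p] / total for p in active}

# Hypothetical projects: "web" holds 2 shares, the others 1 each.
shares = {"web": 2, "batch": 1, "backup": 1}

# All three competing: "web" gets half the CPU (2 of 4 shares).
print(cpu_fraction(shares, ["web", "batch", "backup"]))

# Only two competing: the same 2 shares now yield two-thirds of the CPU.
print(cpu_fraction(shares, ["web", "batch"]))
```

The same project can therefore receive different CPU fractions over time, depending on which other workloads happen to be active.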

Classes vs. Algorithms

 Scheduling Classes are like "teams" in a workplace, each with its own set of
rules and goals.
 Scheduling Algorithms are the specific workflows or methods the team uses
to achieve its objectives.
 In essence, classes define "what to do" (policies), and algorithms define "how to
do it".

Scheduling Classes in Solaris

Solaris supports multiple scheduling classes to handle different types of processes efficiently. Each scheduling class has its own policies and priority levels.

Time-Share (TS) Scheduling:

Designed for general-purpose, interactive processes.

Uses dynamic priorities that change based on the CPU usage and sleep time of the
processes.

Priorities are adjusted to favor interactive processes over CPU-intensive ones.

Interactive (IA) Scheduling:

A subclass of Time-Share designed for desktop environments.

Prioritizes user interface responsiveness, ensuring GUI applications respond quickly to user inputs.

Real-Time (RT) Scheduling:

Meant for time-sensitive tasks.

Fixed priorities that do not change during execution.

Processes in this class are given precedence over all other classes.

Useful for applications requiring deterministic behavior (e.g., industrial control systems).

System (SYS) Scheduling:


Handles kernel-level processes and system-critical operations.

These processes run without preemption at high priority (above the time-sharing classes, though below the real-time class) to ensure system stability.

Fair Share (FSS) Scheduling:

Allocates CPU resources based on shares assigned to projects or users.

Useful in environments where resource usage needs to be distributed proportionally among users or projects.

Fixed-Priority (FP) Scheduling:

Similar to Real-Time but with fixed priorities determined by the user.

Processes with higher priorities run before those with lower priorities, regardless of
their behavior.
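
The classes above map onto a single global priority scale, which determines dispatch order across classes. The sketch below uses the conventional ranges; the exact bounds can vary by Solaris release, so treat them as illustrative:

```python
# Approximate global-priority ranges for the Solaris scheduling classes
# (inclusive bounds; exact values can differ between releases).
PRIORITY_RANGES = {
    "TS":  (0, 59),     # Time-Share
    "IA":  (0, 59),     # Interactive (subclass of TS)
    "FSS": (0, 59),     # Fair Share
    "FX":  (0, 60),     # Fixed-Priority
    "SYS": (60, 99),    # System (kernel threads)
    "RT":  (100, 159),  # Real-Time: above every other class
}

def global_priority(cls, local_prio):
    """Map a class-local priority onto the global scale."""
    lo, hi = PRIORITY_RANGES[cls]
    if not 0 <= local_prio <= hi - lo:
        raise ValueError("priority out of range for class " + cls)
    return lo + local_prio

# Even the lowest real-time priority outranks the best time-share priority.
print(global_priority("RT", 0) > global_priority("TS", 59))  # True
```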

Scheduling Algorithms

1. Round-Robin (RR)

Definition:

Round-Robin scheduling is a preemptive scheduling algorithm in which each process is assigned a fixed time slice (quantum). If a process does not complete within its time slice, it is preempted and placed at the end of the queue.

How It Works:
1. All processes are placed in a queue.

2. The CPU executes each process for a fixed time slice (if it is ready).

3. If a process does not finish within its slice, it goes back to the end of the queue; if it finishes, it exits.

Advantages:

1. Simple and easy to implement.

2. Ensures fairness by giving all processes an equal chance to execute.

3. Good for time-sharing systems and interactive tasks.

Disadvantages:

1. Performance depends on the length of the time quantum.

2. Too small a quantum causes frequent context switching (overhead).

3. Too large a quantum makes Round-Robin behave like FCFS, with poorer responsiveness.
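
The steps above can be simulated in a few lines; the process names and burst lengths are invented for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round-Robin and return the order in which processes finish.

    bursts maps process name -> required CPU time; quantum is the time slice.
    """
    queue = deque(bursts.items())          # all processes start in the queue
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:            # slice used up: back of the queue
            queue.append((name, remaining - quantum))
        else:                              # done within this slice
            finished.append(name)
    return finished

# P2 (burst 3) finishes before P1 (burst 7) even though P1 was queued first.
print(round_robin({"P1": 7, "P2": 3, "P3": 5}, quantum=2))
# -> ['P2', 'P3', 'P1']
```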

2. Multilevel Feedback Queue (MFQ)

Definition:

A multilevel feedback queue uses multiple queues with different priority levels.
Processes can move between these queues based on their execution behavior (e.g.,
CPU burst, waiting time).

How It Works:
1. New processes start in the highest-priority queue.

2. If a process uses its entire time slice, it is moved to a lower-priority queue.

3. Processes that wait a long time or are I/O-bound may move up to a higher-priority queue.

4. Higher-priority queues are served first; lower-priority queues are served only when all higher-priority queues are empty.

Advantages:

Dynamically adapts to process behavior (e.g., CPU-bound vs. I/O-bound).

Provides fairness and prioritization simultaneously.

Effective for systems with diverse workloads.

Disadvantages:

Complex to implement and tune (e.g., number of queues, promotion/demotion criteria).

Can starve lower-priority processes if higher-priority ones dominate.
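
A minimal sketch of the feedback behavior, assuming three levels with growing quanta and demotion on slice exhaustion (promotion for long waits is omitted for brevity; names and bursts are invented):

```python
from collections import deque

# Three priority levels; a higher index means lower priority, larger quantum.
QUANTA = [2, 4, 8]

def mfq(bursts):
    """Simulate a simple multilevel feedback queue.

    New processes enter queue 0. A process that uses its whole quantum is
    demoted one level (the bottom level just requeues). Returns
    (name, level-it-finished-at) tuples in completion order.
    """
    queues = [deque(), deque(), deque()]
    for name, burst in bursts.items():
        queues[0].append((name, burst))
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        if remaining > QUANTA[level]:
            # Exhausted its slice: looks CPU-bound, so demote it.
            nxt = min(level + 1, len(queues) - 1)
            queues[nxt].append((name, remaining - QUANTA[level]))
        else:
            finished.append((name, level))
    return finished

# A short job finishes at level 0; a long one drifts down before finishing.
print(mfq({"short": 2, "long": 20}))
# -> [('short', 0), ('long', 2)]
```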

3. First-Come, First-Served (FCFS)

Definition:

FCFS is the simplest scheduling algorithm where processes are executed in the order
they arrive, without preemption.
How It Works:

1. The process that arrives first is executed first.

2. Once a process starts, it runs to completion before the next one begins.

Advantages:

Simple to understand and implement.

Works well when all processes have similar execution times.

Disadvantages:

Convoy Effect: Long processes can block shorter ones, leading to poor overall
performance.

Not suitable for interactive or time-sharing systems.

No preemption, so responsiveness is low.
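
The convoy effect is easy to quantify with a small sketch (burst lengths are invented for illustration):

```python
def fcfs_waits(bursts):
    """Waiting time for each process under FCFS, all arriving at time 0.

    bursts is a list of (name, burst) in arrival order.
    """
    waits, clock = {}, 0
    for name, burst in bursts:
        waits[name] = clock   # waits for everything queued ahead of it
        clock += burst
    return waits

# One long job at the front makes every short job wait behind it.
long_first = fcfs_waits([("long", 24), ("a", 3), ("b", 3)])
short_first = fcfs_waits([("a", 3), ("b", 3), ("long", 24)])

print(sum(long_first.values()) / 3)   # average wait 17.0
print(sum(short_first.values()) / 3)  # average wait 3.0
```

The same three jobs produce very different average waits depending purely on arrival order, which is the convoy effect in miniature.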

4. Fixed-Priority Preemptive

Definition:

In fixed-priority preemptive scheduling, each process is assigned a priority, and the CPU always executes the highest-priority ready process. If a higher-priority process arrives while a lower-priority process is running, the lower-priority process is preempted.
How It Works:

1. Processes are assigned fixed priorities (usually numeric; a lower number often indicates a higher priority).

2. The scheduler always selects the ready process with the highest priority.

3. Lower-priority processes wait until higher-priority ones finish or block.

Advantages:

Ensures that critical tasks are executed promptly.

Suitable for real-time systems where certain tasks must meet deadlines.

Disadvantages:

Priority inversion: a higher-priority task may wait indefinitely if a lower-priority task holds a resource it needs.

Starvation of low-priority processes can occur.
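
A unit-time simulation makes the preemption visible; the process names, arrival times, and priorities below are invented for illustration:

```python
def fixed_priority(procs):
    """Trace which process runs during each time unit under preemptive
    fixed-priority scheduling.

    Each entry in procs is (name, arrival, burst, priority), with a lower
    number meaning a higher priority. Returns the running process per unit.
    """
    remaining = {name: burst for name, _, burst, _ in procs}
    timeline, t = [], 0
    while any(remaining.values()):
        ready = [(prio, name) for name, arrival, _, prio in procs
                 if arrival <= t and remaining[name] > 0]
        if ready:
            _, name = min(ready)   # re-evaluated every tick => preemptive
            remaining[name] -= 1
            timeline.append(name)
        else:
            timeline.append("idle")
        t += 1
    return timeline

# "hi" (priority 0) arrives at t=2 and immediately preempts "lo" (priority 5).
print(fixed_priority([("lo", 0, 4, 5), ("hi", 2, 2, 0)]))
# -> ['lo', 'lo', 'hi', 'hi', 'lo', 'lo']
```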
