OS Cheatsheet
UMA – Uniform Memory Access – access to any RAM takes the same amount of time.
NUMA – Non-Uniform Memory Access – some parts of memory may take longer to access than others (local vs. remote).
Multiprocessors – multiple processors share the computer bus, clock, memory, and peripherals.

INTERPROCESS COMMUNICATION (IPC)
Processes within a system may be independent or cooperating. A cooperating process can affect or be affected by other processes, including sharing data. Reasons for cooperating processes: information sharing, computation speedup (requires multiple processing cores), modularity, convenience. Cooperating processes need interprocess communication (IPC). Two models of IPC: shared memory and message passing.
Shared memory – an area of memory shared among the processes that wish to communicate. The communication is under the control of the user processes, not the operating system. The goal is to provide a mechanism that will allow the user processes to synchronize their actions when they access shared memory.
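A minimal sketch of the shared-memory model using POSIX shm_open()/mmap(); the object name /demo_shm and the 4096-byte size are illustrative, and error handling is trimmed to the essentials. Once the region is mapped, reads and writes are ordinary memory accesses with no kernel involvement.

/* POSIX shared-memory sketch (writer side). Compile: gcc writer.c -lrt */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";   /* illustrative object name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);  /* create the object */
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, (off_t)size) == -1) { perror("ftruncate"); return 1; }

    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(ptr, "hello from the writer");  /* plain store: no kernel call */

    munmap(ptr, size);
    close(fd);
    /* a reader would shm_open() the same name, mmap(), and read;
       shm_unlink(name) removes the object when done */
    return 0;
}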
MESSAGE PASSING
Message passing – mechanism for processes to communicate and to synchronize their actions. Particularly useful in distributed systems where processes may reside on different computers connected by a network. Message system – processes communicate with each other without resorting to shared variables. The IPC facility provides two operations: send and receive message. The message size is either fixed or variable: variable-size messages are harder for the OS programmer to implement but make the application programmer's life easier; fixed-size messages are easier for the OS programmer to implement but make the application programmer's life harder.
If processes P and Q wish to communicate, they need to: 1) establish a communication link between them; 2) exchange messages via send/receive. Implementation issues: How are links established? Can a link be associated with more than two processes? Implementation of the communication link – physical: shared memory, hardware bus, network; logical: direct or indirect, synchronous or asynchronous, automatic or explicit buffering.
Direct communication – processes must name each other explicitly: send(P, message) – send a message to process P; receive(Q, message) – receive a message from process Q. Properties of the link: links are established automatically; a link is associated with exactly one pair of communicating processes; between each pair there exists exactly one link; the link may be unidirectional but is usually bi-directional. Symmetric addressing: sender and receiver both name their partner – send(P, message), receive(Q, message). Asymmetric addressing: only the sender names its partner – send(P, message), receive(id, message) – receive a message from any process, with id set to the id of the sender. Significant disadvantage of direct communication: if process names or PID values change, all referring processes must be modified.
Indirect communication – messages are directed to and received from mailboxes (also referred to as ports). Each mailbox has a unique id; processes can communicate only if they share a mailbox. Primitives: send(A, message) – send a message to mailbox A; receive(A, message) – receive a message from mailbox A. Properties of the link: 1) a link is established between two processes iff they share a common mailbox; 2) a link may be associated with many processes; 3) each pair of processes may share several communication links, each link corresponding to one mailbox; 4) a link may be unidirectional or bi-directional.
Mailbox sharing – P1, P2, and P3 share mailbox A; P1 sends, P2 and P3 receive. Who gets the message? (Assume a message is consumed when received.) Solutions: allow a link to be associated with at most two processes; allow only one process at a time to execute a receive operation; allow the system to select the receiver arbitrarily (or use an algorithm, e.g., round-robin) and notify the sender who the receiver was.
Synchronization – message passing may be either blocking or non-blocking. Blocking is considered synchronous: a blocking send blocks the sender until the message is received by the receiving process or mailbox; a blocking receive blocks the receiver until a message is available. Non-blocking is considered asynchronous: a non-blocking send sends the message and resumes operation; a non-blocking receive returns either a valid message or a null message. Different combinations are possible; if both send and receive are blocking, we have a rendezvous. The producer-consumer problem becomes trivial: the producer invokes a non-blocking send(), and the consumer invokes a blocking receive(), waiting until a message is available.
Buffering – messages exchanged between communicating processes reside in a temporary queue, implemented in one of two ways. No buffering: zero capacity – no messages are queued on the link; the sender must wait for the receiver (rendezvous). Automatic buffering: bounded capacity – finite length of n messages; the sender must wait if the link is full; unbounded capacity – infinite length; the sender never waits.
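As one concrete sketch of kernel-mediated message passing on a single machine, a POSIX pipe between parent and child gives a blocking receive and a buffered send with no shared variables (the message text is illustrative; a pipe is just one of several message mechanisms).

/* Message-passing sketch: parent sends, child receives through a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: the receiver */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* blocking receive */
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                   /* parent: the sender */
    const char *msg = "ping";
    write(fd[1], msg, strlen(msg)); /* send */
    close(fd[1]);
    wait(NULL);
    return 0;
}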
RACE CONDITIONS AND THE CRITICAL-SECTION PROBLEM
Race condition – a situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the order in which the accesses take place. We need a way to synchronize the processes so that only one process at a time can be manipulating a critical data item.
Requirements for a solution to the critical-section problem: 1. Mutual exclusion – if process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress – if no process is executing in its critical section and some processes wish to enter their critical sections, only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely. 3. Bounded waiting – a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the n processes.
Areas prone to race conditions: 1. Kernel open-files data structure – the list must be modified when a file is opened or closed; if two processes were to open files simultaneously, the separate updates to this structure could cause a race condition. 2. Memory-allocation structures. 3. Process-list structures. 4. Interrupt handling.
Non-preemptive kernel – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU; essentially free of race conditions in kernel mode.
Preemptive kernel – allows preemption of a process while it is running in kernel mode. Hard to design, especially on SMP (Symmetric Multiprocessing) architectures where two kernel-mode processes can run simultaneously on different processors. Why choose preemptive? More responsive, since the risk of long-running processes holding up the works is eliminated; more suitable for real-time programming, as real-time processes can preempt as needed.
Priority inversion – a scheduling problem in which a lower-priority process holds a lock needed by a higher-priority process. Consider processes L, M, and H whose priorities are L < M < H. Assume process H requires resource R, currently being accessed by process L; H would have to wait for L to complete in order to access R. BUT… M becomes runnable and preempts process L. Now M, a process with a lower priority than H, has affected how long H must wait for L to relinquish resource R.
Priority inversion – solved via the priority-inheritance protocol: all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question; when finished, their priorities revert to their original values.
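A sketch of a race condition in code: two threads perform an unsynchronized read-modify-write on a shared counter, so updates are lost depending on the interleaving (the counts and names are illustrative).

/* Race-condition sketch. Compile: gcc race.c -pthread
   The final count is usually less than 2000000. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;                    /* shared data, no lock */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 2000000, got %ld\n", counter);
    return 0;
}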
SYNCHRONIZATION HARDWARE
Many systems provide hardware support for implementing critical-section code. The solutions below are based on the idea of locking – protecting critical regions via locks.
Uniprocessors could simply disable interrupts: currently running code would execute without preemption. This is the approach generally taken by non-preemptive kernels, but it is generally too inefficient on multiprocessor systems. Why? The disable-interrupts message must be passed to all processors, which delays entry into each critical section and decreases system efficiency; it may also impact clock accuracy if the clock is updated via interrupts.
Modern machines instead provide special atomic (non-interruptible) hardware instructions: 1. test and modify the contents of a memory word; 2. swap the contents of two memory words. These instructions can be used to solve the critical-section problem.
MUTEX LOCKS
OS designers build software tools to solve the critical-section problem; the simplest is the mutex lock (mutual exclusion): 1. protect a critical section by first calling acquire() on the lock, then release() when leaving it; 2. a Boolean variable available indicates whether the lock is free; 3. usually implemented via hardware atomic instructions.
DISADVANTAGE: this solution requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(); such a lock is called a spinlock because the process spins while waiting for the lock to become available. In a single-CPU multiprogramming environment, busy waiting wastes CPU cycles that could be used by other processes.
ADVANTAGES: spinlocks do not require a context switch, so they are efficient when locks are expected to be held for short times. Multiprocessor systems are good candidates for spinlocks, since another thread can enter its critical section on another processor.
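The same counter protected with the acquire/release pattern, here sketched with a pthread mutex (POSIX also offers pthread_spin_lock() for the spinlock variant).

/* Mutex sketch. Compile: gcc mutex.c -pthread
   The total is now always 2000000. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* acquire(): entry section */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);  /* release(): exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("got %ld\n", counter);
    return 0;
}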
SEMAPHORES
A synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities. Semaphore S – an integer variable that can only be accessed via two indivisible (atomic) operations: wait() and signal().
Counting semaphore – the integer value can range over an unrestricted domain. Used to provide access to a resource consisting of a finite number of instances: the semaphore is initialized to the number of available resources; each process that wishes to use a resource performs a wait() operation, which decrements the semaphore; when a process releases a resource, it performs a signal() operation, which increments the semaphore. When the semaphore reaches 0, all resources are in use; from then on, processes that wish to use a resource block until the count becomes > 0.
Binary semaphore – the integer value can range only between 0 and 1. Behaves like a mutex lock (and suffers from busy waiting too).
Semaphores can solve various synchronization problems. Consider P1 and P2 requiring that statement S1 happen before statement S2: create a semaphore synch initialized to 0. P1: S1; signal(synch); P2: wait(synch); S2; Because synch starts at 0, P2 executes S2 only after P1 has invoked signal(synch), which happens after statement S1 has been executed.
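A sketch of that ordering constraint with an unnamed POSIX semaphore, where sem_wait()/sem_post() play the roles of wait()/signal() (names are illustrative).

/* Ordering sketch: S1 happens before S2. Compile: gcc order.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t synch;

void *p1(void *arg) {
    (void)arg;
    printf("S1\n");       /* statement S1 */
    sem_post(&synch);     /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    sem_wait(&synch);     /* wait(synch): blocks until P1 signals */
    printf("S2\n");       /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);   /* initial value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}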
SEMAPHORE IMPLEMENTATION
We can modify the definition of wait() and signal(): when a process executes wait() and finds that the semaphore value is not positive, it must wait – but rather than engaging in busy waiting, the process can block itself. The block operation places the process into a waiting queue associated with the semaphore and switches the process state to 'waiting'; control transfers to the CPU scheduler, which selects another process to execute. A blocked process is restarted by a wakeup() operation when some other process executes signal(); this changes the process state to 'ready', and it is scheduled according to the CPU scheduler's priority algorithm.
Semaphore implementation with no busy waiting: 1. With each semaphore there is an associated waiting queue. 2. Each entry in a waiting queue has two data items: value (of type integer) and a pointer to the next record in the list. 3. Two operations: block – place the process invoking the operation on the appropriate waiting queue; wakeup – remove one of the processes from the waiting queue and place it in the ready queue. Semaphore values can be negative; the magnitude = the number of processes waiting. We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time, so the implementation itself becomes a critical-section problem, with the wait and signal code placed in a critical section. This can reintroduce busy waiting, but the wait and signal code is short, so there is little busy waiting if the critical section is rarely occupied. Busy waiting remains a concern for applications that spend a long time in their critical sections; for those, this is not a good solution.
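C-style pseudocode for this no-busy-waiting semaphore, following the standard textbook definition; block(), wakeup(), and dequeue() stand for the scheduler services described above, so this is a sketch, not compilable code.

/* Pseudocode: semaphore with a waiting queue instead of spinning. */
typedef struct {
    int value;
    struct process *list;    /* waiting queue */
} semaphore;

void wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add the calling process to S->list */
        block();             /* suspend the caller: state -> 'waiting' */
    }
}

void signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        struct process *P = dequeue(&S->list);  /* remove one waiter */
        wakeup(P);           /* move P to the ready queue */
    }
}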
DEADLOCK AND STARVATION
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Classic setup: let S and Q be two semaphores initialized to 1.
Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.
FOUR NECESSARY CONDITIONS FOR DEADLOCK TO OCCUR:
1. Mutual exclusion – a resource may be acquired by one and only one process at a time.
2. Hold and wait (wait-for condition) – a process that has acquired an exclusive resource may hold it while waiting to obtain other resources.
3. No preemption – once a process has obtained a resource, the system cannot remove it from the process's control until the process is finished using it.
4. Circular wait – two or more processes are locked in a 'circular chain' in which each process waits for one or more resources held by the next process in the chain.
FACTS: if the resource-allocation graph contains no cycles => no deadlock. If the graph contains a cycle => with only one instance per resource type, deadlock; with several instances per resource type, possibility of deadlock.
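The usual textbook continuation of the S-and-Q example is the interleaving sketched below (pseudocode, not compilable): each process acquires one semaphore and then blocks forever on the other, satisfying all four conditions above.

/* Deadlock sketch for semaphores S and Q, both initialized to 1.
   If P0 holds S while P1 holds Q, each blocks in its second wait(). */
/*      P0                    P1        */
/*   wait(S);              wait(Q);     */
/*   wait(Q);  <-blocks    wait(S);  <-blocks */
/*    ...                   ...         */
/*   signal(S);            signal(Q);   */
/*   signal(Q);            signal(S);   */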
DINING-PHILOSOPHERS PROBLEM – DEADLOCK-HANDLING SOLUTIONS
1. Allow at most 4 philosophers to be sitting simultaneously at the table with five chopsticks available.
2. Allow a philosopher to pick up the chopsticks only if both are available (the picking must be done in a critical section).
3. Use an asymmetric solution – an odd-numbered philosopher picks up first the left chopstick and then the right chopstick; an even-numbered philosopher picks up first the right chopstick and then the left chopstick.
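A sketch of the asymmetric solution with one mutex per chopstick; reversing the acquisition order for even-numbered philosophers breaks the circular wait.

/* Asymmetric dining philosophers. Compile: gcc dine.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define N 5
pthread_mutex_t chopstick[N];

void *philosopher(void *arg) {
    long i = (long)arg;
    int left = i, right = (i + 1) % N;
    int first  = (i % 2) ? left  : right;  /* odd: left first */
    int second = (i % 2) ? right : left;   /* even: right first */
    pthread_mutex_lock(&chopstick[first]);
    pthread_mutex_lock(&chopstick[second]);
    printf("philosopher %ld eats\n", i);   /* critical section */
    pthread_mutex_unlock(&chopstick[second]);
    pthread_mutex_unlock(&chopstick[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&chopstick[i], NULL);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}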
TRANSACTIONAL MEMORY
A memory transaction is a sequence of read-write operations to memory that are performed atomically. Advantages: the system provides atomicity, not the developer; no locks are involved, so no deadlock; a transactional-memory system can detect opportunities for concurrency within an atomic block. Implementations: Software Transactional Memory (STM) – code inserted by the compiler to guarantee atomicity and detect concurrency options; Hardware Transactional Memory (HTM) – uses cache hierarchies and cache-coherency protocols to manage shared-data conflicts, with less overhead than STM.
THREADS
Multiple simultaneous tasks within an application can be implemented by separate threads: update the display, fetch data, spell checking, answer a network request. Threads can simplify code and increase efficiency; kernels are generally multithreaded.
Thread – the basic unit of CPU utilization. Comprises a thread id, program counter, register set, and stack. Threads of a process share the code section, data section, and other OS resources (e.g., open files and signals). Most modern applications are multithreaded; threads run within the application.
Multithreaded server architecture – consider the implications of a single-threaded web server with 9,000 requests! We could create a new process for each received request, but process creation is time consuming and resource intensive: in Solaris, process creation is ~30 times slower than thread creation, and context switching is ~5 times slower.
Asynchronous threading – once the parent creates a child thread, the parent resumes its execution; parent and child execute concurrently, each thread runs independently of every other thread, and the parent is agnostic to child terminations.
Synchronous threading (fork-join strategy) – once the parent creates child threads, it must wait for all of them to terminate; the children work concurrently, but the parent must wait. Typically involves significant data sharing amongst threads.
BENEFITS OF THREADS: 1. Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces. 2. Economy – cheaper than process creation, and thread switching has lower overhead than context switching. 3. Scalability – a process can take advantage of multiprocessor architectures.
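A minimal fork-join sketch with Pthreads; dropping the pthread_join() call (or detaching the thread) would turn it into the asynchronous style.

/* Synchronous (fork-join) threading. Compile: gcc join.c -pthread */
#include <pthread.h>
#include <stdio.h>

void *child(void *arg) {
    (void)arg;
    printf("child thread working\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, child, NULL);  /* fork */
    pthread_join(tid, NULL);                  /* join: parent blocks */
    printf("parent resumes after child terminates\n");
    return 0;
}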
MULTICORE PROGRAMMING
Concurrency supports more than one task making progress via rapid switching; only one task executes at any given instant. In a single-processor/core environment the scheduler provides concurrency; concurrency simulates parallelism. Parallelism implies that a system can perform more than one task simultaneously (requires a multicore or multiprocessor system). Multithreaded programming takes advantage of multicore or multiprocessor systems to achieve parallelism.
Multicore and multiprocessor systems put pressure on both OS and application programmers. Challenges include: 1. dividing activities; 2. balance – ensuring that separate execution is worth the cost; 3. data splitting; 4. data dependency; 5. testing and debugging – concurrent programs are more challenging to debug than single-threaded applications.
Logical cores enable a single core to do 2 or more things simultaneously; this grew out of the early Pentium 4 CPU's ability to do what was termed Hyper-Threading.
TYPES OF PARALLELISM
Data parallelism – distributes subsets of the same data across multiple cores, with the same operation on each. Example: add up the elements of array A using two threads on two cores – subsets of the same data, same operation (see the sketch below).
Task parallelism – distributes threads across cores, each thread performing a unique operation. Example: perform two distinct statistical operations on array A using two threads on two cores – two distinct operations (the data may or may not be the same).
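A data-parallel sketch: two threads apply the same operation (summing) to disjoint halves of one array; the array size and values are illustrative.

/* Data parallelism: two threads sum halves of the same array.
   Compile: gcc sum.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define LEN 1000
int a[LEN];
long partial[2];

void *sum_half(void *arg) {
    long id = (long)arg;
    int lo = id * (LEN / 2), hi = lo + LEN / 2;
    for (int i = lo; i < hi; i++)
        partial[id] += a[i];   /* each thread writes only its own slot */
    return NULL;
}

int main(void) {
    for (int i = 0; i < LEN; i++) a[i] = 1;
    pthread_t t[2];
    for (long id = 0; id < 2; id++) pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++) pthread_join(t[id], NULL);
    printf("total = %ld\n", partial[0] + partial[1]);   /* prints 1000 */
    return 0;
}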
USER THREADS AND KERNEL THREADS
User threads – management is done by a user-level threads library. Three primary thread libraries: POSIX Pthreads (either user-level or kernel-level), Windows threads (kernel-level), Java threads (implemented using the host system's API).
Kernel threads – supported by the kernel. Examples: virtually all general-purpose operating systems, including Windows, Solaris, Linux, Tru64 UNIX, Mac OS X.
MULTI-THREADING MODELS
A relationship must exist between user threads and kernel threads.
Many-to-one – many user-level threads mapped to a single kernel thread. Thread management is done efficiently in user space by the thread library, but one thread blocking causes all to block, and multiple threads may not run in parallel on a multicore system because only one user thread may access the kernel at a time. Few systems currently use this model. Example: Solaris green threads.
One-to-one – each user-level thread maps to a kernel thread, so creating a user-level thread creates a kernel thread (a cost issue). More concurrency than many-to-one – another thread can run when a thread makes a blocking system call – and more parallelism in a multiprocessor environment. The number of threads per process is sometimes restricted due to overhead. Examples: Windows, Linux, Solaris 9 and later.
Many-to-many – allows many user-level threads to be mapped/multiplexed onto many kernel threads, letting the operating system create the appropriate number of kernel threads for the application or hardware requirements. Examples: Solaris prior to version 9, Windows with the ThreadFiber package.
Two-level model – similar to M:M, except that it also allows a user thread to be bound to a kernel thread. Examples: IRIX, HP-UX, Tru64 UNIX, Solaris 8 and earlier.
Impact of the mapping on concurrency: many-to-one allows 'unlimited' user threads but no true concurrency, since only one kernel thread executes at a time; one-to-one allows greater concurrency but limits the number of threads; many-to-many allows 'unlimited' user threads, and the kernel threads can run in parallel on a multiprocessor.
THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing threads. Two primary ways of implementing: a library entirely in user space (less overhead to create a thread), or a kernel-level library supported by the OS (requires a system call to the kernel each time a thread is created).
HARDWARE THREADS (CPU THREADS)
1. A single CPU core is represented to the operating system as two cores. 2. The OS schedules two tasks on the two "logical" cores as it would on two physical cores in a multi-processor system. 3. The single physical CPU core switches between the tasks on the two logical cores as it sees fit. 4. When one task stalls waiting for data to be loaded, the core switches to the other one.
Walkthrough: 1. A program issues a LOAD instruction; if the content of the requested address isn't in cache, it must be fetched from RAM, and there is a delay. 2. Without hardware CPU threading, the CPU is idle during this fetch time. 3. With hardware threading, multiple threads of computation are saved in internal CPU memory. 4. Instead of waiting, the core swaps out the current state, swaps in one that is ready to go, and keeps executing. 5. That swap can start the new thread on the very next CPU cycle.
IMPLICIT THREADING
Growing in popularity as the number of threads increases and program correctness becomes harder to ensure with explicit threads. Creation and management of threads are done by compilers and run-time libraries rather than by programmers. Three methods for designing multithreaded programs that can take advantage of multicore processing through implicit threading: thread pools, OpenMP, and Grand Central Dispatch. Other methods include Intel Threading Building Blocks (TBB) and the java.util.concurrent package.
THREAD POOLS
Issues with creating a fresh thread per task: thread creation still takes time (albeit less than process creation); threads are discarded after they complete their tasks; 'unlimited' threads could exhaust CPU or memory. Instead, create a number of threads at process startup in a pool, where they await work. Advantages: it is usually slightly faster to service a request with an existing thread than to create a new one; the number of threads in the application(s) can be bound to the size of the pool; separating the task to be performed from the mechanics of creating the task allows different strategies for running the task (e.g., tasks could be scheduled to run periodically or after a delay). The pool size can be set heuristically based on system resources and expected concurrent client requests, or adjusted dynamically (e.g., Apple's Grand Central Dispatch).
SIGNALS
Signals are used in UNIX systems to notify a process that a particular event has occurred; a signal handler is used to process signals. A signal is generated by a particular event, delivered to a process, and handled by one of two signal handlers: default or user-defined. Every signal has a default handler that the kernel runs when handling the signal; a user-defined signal handler can override the default. For a single-threaded process, the signal is delivered to the process. Where should a signal be delivered in a multi-threaded process? Options: deliver the signal to the thread to which the signal applies; deliver it to every thread in the process; deliver it to certain threads in the process; or assign a specific thread to receive all signals for the process. Synchronous signals need to be delivered to the thread causing the signal (e.g., a divide-by-0 error); some asynchronous signals should go to all threads (e.g., <Control><C>). OSs vary in their support: Windows asynchronous procedure calls (APCs) can specify a thread as the destination, and some multithreaded UNIX systems let threads selectively block signals. A sketch of a user-defined handler follows.
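A minimal sketch of overriding the default SIGINT handler (the <Control><C> signal) with sigaction(); the flag-variable pattern keeps the handler async-signal-safe.

/* Signal sketch: install a user-defined SIGINT handler. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

volatile sig_atomic_t got_sigint = 0;

void handler(int signo) {
    (void)signo;
    got_sigint = 1;            /* only async-signal-safe work here */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;   /* user-defined handler overrides default */
    sigaction(SIGINT, &sa, NULL);

    printf("press Ctrl-C once...\n");
    while (!got_sigint)
        pause();               /* sleep until a signal is delivered */
    printf("caught SIGINT in the user-defined handler\n");
    return 0;
}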
THREAD CANCELLATION
Terminating a thread before it has finished. Examples: multiple threads searching a database – when the data item is found by one thread, the others can be cancelled; pressing the X button on a browser window causes the multiple threads loading content (each image is loaded by a separate thread) to be cancelled. The thread to be cancelled is the target thread. Two general approaches: asynchronous cancellation terminates the target thread immediately (usually, but this is not guaranteed); problems occur when resources have been allocated to the target thread or it is updating data shared with other threads, and the OS may reclaim system resources but not all resources. Deferred cancellation allows the target thread to periodically check whether it should be cancelled.
THREAD-LOCAL STORAGE
Threads share the data of their process, which reduces the overhead of data sharing, but sometimes threads need their own copy of data. Thread-local storage (TLS) allows each thread to have its own copy of data. Examples: transactions might store their transaction ids in TLS; errno holds a system error code – if not protected, it can be overwritten by another thread. TLS is useful when you do not have control over the thread-creation process (i.e., when using a thread pool). TLS differs from local variables: local variables are visible only during a single function invocation, while TLS is visible across function invocations (useful for saving state). TLS is similar to static data, except that it is unique to each thread.
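A TLS sketch using the GCC/Clang __thread storage class (C11 spells it _Thread_local): each thread writes its own tls_id, while the ordinary global is shared.

/* TLS sketch. Compile: gcc tls.c -pthread */
#include <pthread.h>
#include <stdio.h>

__thread long tls_id;          /* one instance per thread */
long shared_id;                /* one instance for the whole process */

void *worker(void *arg) {
    tls_id = (long)arg;        /* private copy: no race */
    shared_id = (long)arg;     /* shared: last writer wins */
    printf("thread %ld sees tls_id=%ld\n", (long)arg, tls_id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 1; i <= 2; i++) pthread_create(&t[i - 1], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("shared_id ended as %ld\n", shared_id);
    return 0;
}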
CPU SCHEDULING
Maximum CPU utilization is obtained with multiprogramming. CPU–I/O burst cycle – process execution consists of a cycle of CPU execution and I/O wait: a CPU burst followed by an I/O burst. The CPU-burst distribution is the main concern.
SHORT-TERM SCHEDULER – selects from among the processes in the ready queue and allocates the CPU to one of them. The queue may be ordered in various ways (FIFO, priority, tree, unordered) and usually contains PCBs. CPU-scheduling decisions may take place when a process: 1. switches from running to waiting state (I/O request, waiting for child termination); 2. switches from running to ready state (when an interrupt occurs); 3. switches from waiting to ready (when I/O completes); 4. terminates.
DISPATCHER – the module that gives control of the CPU to the process selected by the short-term scheduler. This involves: 1. switching context; 2. switching to user mode; 3. jumping to the proper location in the user program to restart that program. DISPATCH LATENCY – the time it takes for the dispatcher to stop one process and start another running; dispatchers should be fast to minimize dispatch latency.
SCHEDULING CRITERIA – there are many criteria for comparing scheduling algorithms: 1. CPU utilization – keep the CPU as busy as possible (40–90%) (maximize). 2. Throughput – number of processes that complete their execution per time unit (maximize). 3. Turnaround time – amount of time to execute a particular process: the sum of the times spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O; turnaround time is generally a function of the speed of the output device. Why? (minimize). 4. Waiting time – amount of time a process has been waiting in the ready queue (minimize). 5. Response time – amount of time from when a request was submitted until the first response is produced, not the time it takes to output the response (for a time-sharing environment) (minimize).
FIRST-COME, FIRST-SERVED (FCFS) SCHEDULING
(Process, CPU burst time): (P1, 24), (P2, 3), (P3, 3). The Gantt chart for the schedule: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30. Waiting time: P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17. BUT the average wait time is NOT minimal and is sensitive to variance in the CPU burst times of processes. Remember: FCFS scheduling is nonpreemptive – not suitable for timesharing (read: cloud-computing) systems.
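A sketch that reproduces the FCFS arithmetic above: each process's wait is the sum of the bursts queued ahead of it.

/* FCFS waiting-time sketch for the example above. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];              /* the next process also waits this burst */
    }
    printf("average = %.2f\n", (double)total / n);  /* prints 17.00 */
    return 0;
}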
SHORTEST-JOB-FIRST (SJF) SCHEDULING
Associate with each process the length of its next CPU burst. 1. Use these lengths to schedule the process with the shortest time. 2. FCFS breaks the tie if two processes have equal times. SJF is optimal – it gives the minimum average waiting time for a given set of processes. The difficulty is knowing the length of the next CPU request (aka the shortest next CPU burst). SJF is a special case of the general priority-scheduling algorithm.
Example: (Process, Burst time) = (P1, 6), (P2, 8), (P3, 7), (P4, 3). Gantt chart: P4 from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, P2 from 16 to 24. Average waiting time = (3 + 16 + 9 + 0)/4 = 7; average waiting time under FCFS would have been 10.25.
PREEMPTIVE SJF – the SJF algorithm can be preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a process is still executing, and the next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm preempts the currently executing process; a nonpreemptive SJF algorithm allows the currently executing process to finish its CPU burst. Preemptive SJF is sometimes called Shortest-Remaining-Time-First scheduling.
Example (Process, Arrival Time, Burst Time): (P1, 0, 9), (P2, 1, 3), (P3, 2, 10), (P4, 3, 6). Gantt chart: P1 from 0 to 1, P2 from 1 to 4, P4 from 4 to 10, P1 from 10 to 18, P3 from 18 to 28.
Turnaround time (TT) = completion time (CT) – arrival time (AT). Wait time (WT) = turnaround time – burst time.

Process | AT | Burst | CT | TT          | WT
P1      | 0  | 9     | 18 | 18 – 0 = 18 | 18 – 9 = 9
P2      | 1  | 3     | 4  | 4 – 1 = 3   | 3 – 3 = 0
P3      | 2  | 10    | 28 | 28 – 2 = 26 | 26 – 10 = 16
P4      | 3  | 6     | 10 | 10 – 3 = 7  | 7 – 6 = 1

Average WT = (∑ WT)/(# of processes) = (9 + 0 + 16 + 1)/4 = 26/4 = 6.5
PRIORITY SCHEDULING
A priority number (integer) is associated with each process; the CPU is allocated to the process with the highest priority (smallest integer = highest priority; FCFS breaks ties). May be 1. preemptive or 2. nonpreemptive. Major problem: indefinite blocking, or starvation – low-priority processes may never execute. Solution = aging – as time progresses, increase the priority of the process (e.g., increment the process priority by 1 every 15 minutes). SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
Priority types – Internal: use a measurable quantity or quantities to compute priority – 1. time limits; 2. memory requirements; 3. number of open files; 4. ratio of average I/O burst to average CPU burst. External: use criteria outside the operating system – 1. importance of the process; 2. type and amount of funds being paid for computer use; 3. department sponsoring the work; 4. organizational politics.
ROUND ROBIN (RR)
Designed for timesharing systems (read: cloud computing). 1. FCFS + preemption. 2. Each process gets a small unit of CPU time (a time quantum q), usually 10–100 milliseconds; after this time has elapsed, the process is preempted and added to the end of the ready (circular) queue. 3. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n – 1)q time units. 4. A timer interrupts every quantum to schedule the next process. 5. Performance: if q is large, RR degenerates to FIFO/FCFS; if q is small, q must still be large with respect to the context-switch time, or the overhead is too high. RR typically gives a higher average wait time than SJF but better response. q should be large compared to the context-switch time. Why? q is usually 10 ms to 100 ms, while a context switch takes < 10 μsec. Average turnaround time does NOT necessarily improve as the time-quantum size increases.
MULTILEVEL QUEUE
Designed for situations where processes fit into different groups. The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch); a process is permanently assigned to a given queue. Each queue has its own scheduling algorithm: foreground – RR (why?), background – FCFS. Scheduling must also be done between the queues: 1. fixed-priority scheduling (i.e., serve all from foreground, then from background – including preemption!), with the possibility of starvation; 2. time slice – each queue gets a certain amount of CPU time that it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.
MULTILEVEL FEEDBACK QUEUE
Processes are separated according to the characteristics of their CPU bursts, and a process can move between the various queues: use too much CPU and you get demoted. Aging can be implemented this way – a process waiting too long in a lower-priority queue may be moved to a higher-priority queue. A multilevel-feedback-queue scheduler is defined by: 1. the number of queues; 2. the scheduling algorithm for each queue; 3. the method used to determine when to upgrade a process; 4. the method used to determine when to demote a process; 5. the method used to determine which queue a process will enter when it needs service.
PRIORITY-BASED (REAL-TIME) SCHEDULING
For real-time scheduling, the scheduler must support preemptive, priority-based scheduling – but that only guarantees soft real-time. What's that mean? For hard real-time, the scheduler must also provide the ability to meet deadlines, and processes must announce their deadlines to the scheduler. Such processes have new characteristics: periodic ones require the CPU at constant intervals; each has a processing time t, a deadline d, and a period p, with 0 ≤ t ≤ d ≤ p, and the rate of a periodic task is 1/p. Processes announce their deadline requirements to the scheduler, and using an admission-control algorithm the scheduler either admits the process, guaranteeing that it will complete on time, or rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline.
RATE-MONOTONIC SCHEDULING
Assign higher priority to tasks that need the CPU more often: a priority is assigned based on the inverse of the task's period – shorter period = higher priority, longer period = lower priority. If P1's period is shorter than P2's, P1 is assigned the higher priority. RMS is static priority with preemption.
SOLARIS SCHEDULING
Priority-based scheduling with six classes available: 1. time sharing (default) (TS); 2. interactive (IA); 3. real time (RT); 4. system (SYS) – reserved for kernel use; 5. fair share (FSS) – uses shares instead of priorities for scheduling; 6. fixed priority (FP) – priorities not dynamically adjusted. A given thread can be in one class at a time, and each class has its own scheduling algorithm. Time sharing is a multi-level feedback queue that dynamically alters priorities and assigns time slices of different lengths; the dispatch table is loadable and configurable by the sysadmin.
REVIEW QUESTIONS
1. What is a hardware thread and why is it used? A hardware thread is a single CPU core being represented to the OS as two or more cores. The OS schedules the logical cores as it would physical cores in a multi-processor system, and the single CPU core switches between the tasks. It is used for more efficient utilization of resources during execution: with hardware threading, a CPU can switch between multiple threads instead of sitting idle while one thread waits, which eliminates wait delays and improves effective utilization of CPU cycles.
2. What is a thread pool and why is it used? A thread pool creates a number of threads at process startup in a pool, where they await work. The advantages are usually slightly faster request servicing by using an existing thread rather than creating a new one, allowing the number of threads in an application to be bound to the size of the pool, and separating the task to be performed from the mechanics of creating the task, which allows different strategies for running tasks. A thread pool is used to reduce thread-creation cost, to recycle threads rather than discard them, and to prevent the unlimited threads that could exhaust CPU or memory.
3. When two cooperating processes need to share large amounts of data, which of the two methods discussed in class would be the best choice? Justify your answer. Shared memory would be the best choice for cooperating processes that need to share large amounts of data, because it exchanges information by reading and writing data directly: an area of memory is designated for the two processes to communicate, and they can synchronize their actions through their access to the shared memory. In message passing, only messages would be exchanged rather than large amounts of data, and this would not happen concurrently; since message passing uses no shared variables, it is less suited to sharing large amounts of data.
4. A server is being overloaded by client requests. Explain how the use of threads could alleviate this problem. Explain why you would choose synchronous or asynchronous threading in your solution. A thread pool would create a number of threads at process startup in a pool, where they wait to handle client requests. The advantages are usually slightly faster request servicing by using an existing thread rather than creating a new one, and allowing the number of threads in the application to be dynamically bound to the size of the pool; recycling threads rather than creating new ones is less costly and easier on CPU and memory. Asynchronous threading would be the better choice, so that threads can run simultaneously and process requests as quickly as possible; threads running sequentially under synchronous threading could quickly cause a huge, slowly processed backlog of requests.
5. Describe three general methods used to pass parameters to the operating system during system calls. 1. Pass the parameters in registers; this is inefficient when there are more parameters than registers. 2. If there are more parameters than registers, store the parameters in a block or table in memory and pass its address; no limit on the number or length of parameters being passed. 3. Parameters are placed, or pushed, onto the stack by the program and popped off the stack by the OS; no limit on the number or length of parameters being passed.
