Unit 3

Uploaded by

Sandip Thamke
Copyright
© All Rights Reserved

Government Polytechnic Jintur

Department of Computer Engg

Course Name: Operating System (OSY)


Course Code: 22516

M.A. Zahed
Lecturer
Unit III: Process Management
Process Management
⚫ Process Concept
⚫ Process Scheduling
⚫ Interprocess Communication
⚫ Examples of IPC Systems
Process Concept
⚫ Process – a program in execution

⚫ Multiple parts
◦ The program code, also called text section
◦ Current activity including program counter, processor
registers
◦ Stack containing temporary data
● Function parameters, return addresses, local variables
◦ Data section containing global variables
◦ Heap containing memory dynamically allocated during run
time
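These parts can be loosely illustrated in Python. This is a sketch, not a literal memory map (CPython manages memory itself), but globals, locals, and dynamically created objects play roles analogous to the data section, stack, and heap:

```python
# Rough analogy for the parts of a process:
# module-level names ~ data section, locals ~ stack frame,
# objects created at run time ~ heap.

COUNTER = 0            # data section: global variable

def work(n):           # text section: program code
    local_total = 0    # stack: local variable in this activation record
    buffer = [0] * n   # heap: memory allocated dynamically at run time
    for i in range(n):
        local_total += i
        buffer[i] = i
    return local_total

result = work(5)       # 0+1+2+3+4 = 10
```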
Process Concept (Cont.)
⚫ A program is a passive entity stored on disk
(executable file); a process is an active entity
◦ A program becomes a process when its executable
file is loaded into memory
⚫ Execution of a program is started via GUI mouse
clicks, command-line entry of its name, etc.
Process in Memory
Process State

⚫ As a process executes, it changes state


◦ new: The process is being created
◦ running: Instructions are being executed
◦ waiting: The process is waiting for some event
to occur
◦ ready: The process is waiting to be assigned to a
processor
◦ terminated: The process has finished execution
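The legal moves between these states can be sketched as a small transition table; the transitions below follow the five-state diagram (admit, dispatch, I/O wait, event completion, preemption, exit):

```python
# Minimal sketch of the five-state process model.
TRANSITIONS = {
    "new": {"ready"},                               # admitted
    "ready": {"running"},                           # dispatched onto the CPU
    "running": {"ready", "waiting", "terminated"},  # preempt / I/O wait / exit
    "waiting": {"ready"},                           # awaited event occurred
    "terminated": set(),                            # no further transitions
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)   # a typical lifetime, ending in "terminated"
```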
Diagram of Process State
Process Control Block (PCB)
Information associated with each process
(also called task control block)
⚫ Process state – running, waiting, etc.
⚫ Program counter – location of the
next instruction to execute
⚫ CPU registers – contents of all
process-centric registers
⚫ CPU scheduling information –
priorities, scheduling queue pointers
⚫ Memory-management information –
memory allocated to the process
⚫ Accounting information – CPU used,
clock time elapsed since start, time
limits
⚫ I/O status information – I/O devices
allocated to process, list of open files
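The fields above can be mirrored in a toy PCB structure. This is a hypothetical sketch for illustration only; a real kernel's PCB (e.g. Linux's `task_struct`) holds far more information:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"             # process state
    program_counter: int = 0       # location of next instruction
    registers: dict = field(default_factory=dict)   # CPU register contents
    priority: int = 0              # CPU scheduling information
    memory_limits: tuple = (0, 0)  # memory-management information
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"    # updated as the process changes state
```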
CPU Switch From Process to Process
Context Switch
⚫ When CPU switches to another process, the system must
save the state of the old process and load the saved state
for the new process via a context switch
⚫ Context of a process represented in the PCB
⚫ Context-switch time is overhead; the system does no useful
work while switching
◦ The more complex the OS and the PCB,
the longer the context switch
⚫ Time dependent on hardware support
◦ Some hardware provides multiple sets of registers per
CPU → multiple contexts loaded at once
Process Scheduling
⚫ Maximize CPU use, quickly switch processes
onto CPU for time sharing
⚫ Process scheduler selects among available
processes for next execution on CPU
⚫ Maintains scheduling queues of processes
◦ Job queue – set of all processes in the system
◦ Ready queue – set of all processes residing in
main memory, ready and waiting to execute
◦ Device queues – set of processes waiting for an
I/O device
◦ Processes migrate among the various queues
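The migration of processes among queues can be sketched with simple FIFO queues, assuming processes are represented by bare PIDs:

```python
from collections import deque

ready_queue = deque()    # in memory, ready and waiting to execute
device_queue = deque()   # waiting for an I/O device

ready_queue.extend([1, 2, 3])             # three ready processes

running = ready_queue.popleft()           # dispatch: PID 1 gets the CPU
device_queue.append(running)              # PID 1 issues I/O, joins device queue
ready_queue.append(device_queue.popleft())  # I/O done: back to ready queue
```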
Representation of Process Scheduling

 Queueing diagram represents queues, resources, flows


Schedulers
⚫ Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
◦ Sometimes the only scheduler in a system
◦ Short-term scheduler is invoked frequently (milliseconds), so it must be fast
⚫ Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue
◦ Long-term scheduler is invoked infrequently (seconds, minutes), so it may be
slow
◦ The long-term scheduler controls the degree of multiprogramming
⚫ Processes can be described as either:
◦ I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
◦ CPU-bound process – spends more time doing computations; few very
long CPU bursts
⚫ Long-term scheduler strives for good process mix
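The division of labor between the two schedulers can be sketched as follows (a toy model: jobs are PIDs, and the degree of multiprogramming is a fixed limit):

```python
from collections import deque

job_queue = deque(range(10))   # all submitted jobs
ready_queue = deque()

DEGREE = 3   # degree of multiprogramming kept by the long-term scheduler

def long_term_schedule():
    # Invoked infrequently: admits jobs until the multiprogramming limit.
    while job_queue and len(ready_queue) < DEGREE:
        ready_queue.append(job_queue.popleft())

def short_term_schedule():
    # Invoked frequently: picks the next ready process for the CPU.
    return ready_queue.popleft() if ready_queue else None

long_term_schedule()            # admits jobs 0, 1, 2
first = short_term_schedule()   # dispatches job 0
```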
Addition of Medium Term Scheduling
⚫ Medium-term scheduler can be added if the degree of multiprogramming
needs to decrease
⚫ Remove process from memory, store on disk, bring back in from disk to
continue execution: swapping
Interprocess Communication
⚫ Processes within a system may be independent or cooperating
⚫ Cooperating process can affect or be affected by other processes,
including sharing data
⚫ Reasons for cooperating processes:
◦ Information sharing
◦ Computation speedup
◦ Modularity
◦ Convenience
⚫ Cooperating processes need interprocess communication
(IPC)
⚫ Two models of IPC
◦ Shared memory
◦ Message passing
Communications Models
(a) Message passing. (b) Shared memory.
Cooperating Processes
⚫ Independent process cannot affect or be
affected by the execution of another
process
⚫ Cooperating process can affect or be
affected by the execution of another
process
⚫ Advantages of process cooperation
◦ Information sharing
◦ Computation speed-up
◦ Modularity
◦ Convenience
Interprocess Communication – Shared Memory

⚫ An area of memory shared among the
processes that wish to communicate
⚫ The communication is under the
control of the user processes, not the
operating system.
⚫ A major issue is to provide a mechanism
that will allow the user processes to
synchronize their actions when they
access shared memory.
⚫ Synchronization is discussed in great
detail in Chapter 5.
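A minimal shared-memory sketch using Python's `multiprocessing`: two processes increment one counter that lives in a shared-memory segment. The `Lock` is the user-level synchronization the slide warns about; without it, increments from the two processes could interleave and be lost:

```python
from multiprocessing import Process, Value, Lock

def increment(counter, lock, times):
    for _ in range(times):
        with lock:                 # synchronize access to shared memory
            counter.value += 1

def run(n_workers=2, times=1000):
    counter = Value("i", 0)        # C int in a shared-memory segment
    lock = Lock()
    workers = [Process(target=increment, args=(counter, lock, times))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

if __name__ == "__main__":
    print(run())                   # 2 workers x 1000 increments = 2000
```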
Interprocess Communication – Message Passing

⚫ Mechanism for processes to
communicate and to synchronize their
actions
⚫ Message system – processes
communicate with each other without
resorting to shared variables
⚫ IPC facility provides two operations:
◦ send(message)
◦ receive(message)
⚫ The message size is either fixed or
variable
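The two operations map directly onto a message queue: `send(message)` becomes `put` and `receive(message)` becomes `get`, and no variables are shared between the processes. A sketch using `multiprocessing.Queue`:

```python
from multiprocessing import Process, Queue

def producer(q):
    for i in range(3):
        q.put(f"msg-{i}")       # send(message)
    q.put(None)                 # sentinel: no more messages

def consume(q):
    received = []
    while (msg := q.get()) is not None:   # receive(message) blocks
        received.append(msg)
    return received

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    msgs = consume(q)           # ["msg-0", "msg-1", "msg-2"]
    p.join()
```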
Message Passing (Cont.)

⚫ If processes P and Q wish to communicate, they need to:
◦ Establish a communication link between them
◦ Exchange messages via send/receive
⚫ Implementation issues:
◦ How are links established?
◦ Can a link be associated with more than two processes?
◦ How many links can there be between every pair of
communicating processes?
◦ What is the capacity of a link?
◦ Is the size of a message that the link can accommodate
fixed or variable?
◦ Is a link unidirectional or bi-directional?
Message Passing (Cont.)

⚫ Implementation of communication link
◦ Physical:
● Shared memory
● Hardware bus
● Network
◦ Logical:
● Direct or indirect
● Synchronous or asynchronous
● Automatic or explicit buffering
Direct Communication
⚫ Processes must name each other explicitly:
◦ send (P, message) – send a message to process P
◦ receive(Q, message) – receive a message from
process Q
⚫ Properties of communication link
◦ Links are established automatically
◦ A link is associated with exactly one pair of
communicating processes
◦ Between each pair there exists exactly one link
◦ The link may be unidirectional, but is usually bi-
directional
Indirect Communication
⚫ Messages are directed and received from mailboxes
(also referred to as ports)
◦ Each mailbox has a unique id
◦ Processes can communicate only if they share a
mailbox
⚫ Properties of communication link
◦ Link established only if processes share a common
mailbox
◦ A link may be associated with many processes
◦ Each pair of processes may share several
communication links
◦ Link may be unidirectional or bi-directional
Indirect Communication
⚫ Operations
◦ create a new mailbox (port)
◦ send and receive messages through mailbox
◦ destroy a mailbox
⚫ Primitives are defined as:
send(A, message) – send a message to
mailbox A
receive(A, message) – receive a message
from mailbox A
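These primitives can be sketched with mailboxes as named queues within one program (a toy model: real ports are kernel objects shared across processes, and the names `create`/`destroy` here are illustrative):

```python
import queue

mailboxes = {}   # mailbox id -> message queue

def create(mailbox_id):
    mailboxes[mailbox_id] = queue.Queue()

def send(mailbox_id, message):            # send(A, message)
    mailboxes[mailbox_id].put(message)

def receive(mailbox_id):                  # receive(A, message)
    return mailboxes[mailbox_id].get()

def destroy(mailbox_id):
    del mailboxes[mailbox_id]

create("A")
send("A", "hello")
msg = receive("A")    # any process sharing mailbox A could receive this
destroy("A")
```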
Indirect Communication
⚫ Mailbox sharing
◦ P1, P2, and P3 share mailbox A
◦ P1, sends; P2 and P3 receive
◦ Who gets the message?
⚫ Solutions
◦ Allow a link to be associated with at most
two processes
◦ Allow only one process at a time to execute
a receive operation
◦ Allow the system to select the
receiver arbitrarily. The sender is
notified who the receiver was.
Threads
General Introduction

⚫ Most modern applications are
multithreaded
⚫ Threads run within application
⚫ Multiple tasks within the application can
be implemented by separate threads
◦ Update display
◦ Fetch data
◦ Spell checking
◦ Answer a network request
⚫ Process creation is heavy-weight while
thread creation is light-weight
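A sketch of one application running its tasks in separate threads, using Python's `threading` module (the task names follow the list above; the work itself is stubbed out):

```python
import threading
import time

results = {}

def fetch_data():
    time.sleep(0.01)               # stand-in for a blocking I/O request
    results["data"] = "payload"

def spell_check():
    results["spelling"] = "ok"     # stand-in for a CPU task

threads = [threading.Thread(target=fetch_data),
           threading.Thread(target=spell_check)]
for t in threads:
    t.start()                      # far cheaper than creating a process
for t in threads:
    t.join()
# both tasks ran within the same process, sharing `results`
```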
Multithreaded Server Architecture
Benefits
⚫ Responsiveness – may allow continued execution
if part of process is blocked, especially important for
user interfaces
⚫ Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
⚫ Economy – cheaper than process creation, thread
switching lower overhead than context switching
⚫ Scalability – process can take advantage of
multiprocessor architectures
Multicore Programming
⚫ Multicore or multiprocessor systems putting
pressure on programmers, challenges include:
◦ Dividing activities
◦ Balance
◦ Data splitting
◦ Data dependency
◦ Testing and debugging
⚫ Parallelism implies a system can perform more than
one task simultaneously
Multicore Programming (Cont.)
⚫ Types of parallelism
◦ Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
◦ Task parallelism – distributing threads across cores,
each thread performing unique operation
⚫ As # of threads grows, so does architectural support for
threading
◦ CPUs have cores as well as hardware threads
◦ Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core
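Data parallelism can be sketched with a thread pool: the same operation (summing) is applied to disjoint subsets of the data, one subset per worker:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
chunks = [data[i::4] for i in range(4)]   # split the data across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))   # same operation, each subset

total = sum(partial_sums)   # equals sum(data)
```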
Single and Multithreaded Processes
User Threads and Kernel Threads

⚫ User threads – management done by a user-level thread
library
⚫ Three primary thread libraries:
◦ POSIX Pthreads
◦ Windows threads
◦ Java threads
⚫ Kernel threads - Supported by the Kernel
⚫ Examples – virtually all general purpose operating systems,
including:
◦ Windows
◦ Solaris
◦ Linux
Multithreading Models

⚫ Many-to-One
⚫ One-to-One
⚫ Many-to-Many
Many-to-One
⚫ Many user-level threads mapped to
single kernel thread
⚫ One thread blocking causes all to
block
⚫ Multiple threads may not run in
parallel on a multicore system because
only one may be in the kernel at a time
⚫ Few systems currently use this model
⚫ Examples:
◦ Solaris Green Threads
◦ GNU Portable Threads
One-to-One
⚫ Each user-level thread maps to kernel
thread
⚫ Creating a user-level thread creates a
kernel thread
⚫ More concurrency than many-to-one
⚫ Number of threads per process
sometimes restricted due to overhead
⚫ Examples
◦ Windows
◦ Linux
◦ Solaris 9 and later
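CPython's `threading` module follows the one-to-one model on these systems: each `threading.Thread` is backed by its own kernel thread, which can be observed via `threading.get_native_id()` (Python 3.8+). A sketch, using a barrier to keep all three threads alive at once so their kernel thread ids are necessarily distinct:

```python
import threading

barrier = threading.Barrier(3)
native_ids = []
lock = threading.Lock()

def record_id():
    with lock:
        native_ids.append(threading.get_native_id())  # kernel thread id
    barrier.wait()   # keep all three threads alive simultaneously

threads = [threading.Thread(target=record_id) for _ in range(3)]
for t in threads:
    t.start()        # each creates a kernel thread (one-to-one)
for t in threads:
    t.join()
# three user-level threads -> three distinct kernel thread ids
```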
Many-to-Many Model
⚫ Allows many user level threads to
be mapped to many kernel threads
⚫ Allows the operating system to
create a sufficient number of kernel
threads
⚫ Solaris prior to version 9
Process Commands
Thank You..!!!
