Operating System 1.2

Uploaded by

Krishnendu Rarhi

Unit I

Chapter 2

Operating System (22CSH-242)
Presented by:
Dr. Krishnendu Rarhi

02 Process
Roadmap
Process Concept

Scheduling

Interprocess Communications

Examples

Krishnendu Rarhi [E9621] © APEX Institute of Technology
Process Concept
1. An operating system executes a variety of programs:
a. Batch system – jobs
b. Time-shared systems – user programs or tasks
2. Textbook uses the terms job and process almost interchangeably
3. Process – a program in execution; process execution must
progress in sequential fashion
4. Multiple parts
a. The program code, also called text section
b. Current activity including program counter, processor
registers
c. Stack containing temporary data
i. Function parameters, return addresses, local
variables
d. Data section containing global variables
e. Heap containing memory dynamically allocated during run
time
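The four parts above can be seen in a short C sketch (illustrative only; the variable names and values are made up):

```c
#include <stdlib.h>

/* data section: a global, initialized variable */
int global_count = 3;

/* text section: the machine code of sum_regions() itself */
int sum_regions(void) {
    int local = 4;                        /* stack: local variable */
    int *heap_val = malloc(sizeof(int));  /* heap: run-time allocation */
    *heap_val = 5;
    int total = global_count + local + *heap_val;
    free(heap_val);
    return total;
}
```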

Process Concept
1. A program is a passive entity stored on disk (executable file); a process is active
a. A program becomes a process when its executable file is loaded into memory
2. Execution of program started via GUI mouse clicks, command line entry of its name, etc.
3. One program can be several processes
a. Consider multiple users executing the same program

Process State
1. As a process executes, it changes state
a. new: The process is being created
b. running: Instructions are being
executed
c. waiting: The process is waiting for
some event to occur
d. ready: The process is waiting to be
assigned to a processor
e. terminated: The process has finished
execution
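A minimal sketch of the five states and their legal transitions, as a C enum (names and the transition function are illustrative, not from the textbook):

```c
/* the five states from the process-state diagram */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } pstate;

/* returns 1 if the transition is allowed by the five-state model */
int can_transition(pstate from, pstate to) {
    switch (from) {
    case NEW:     return to == READY;           /* admitted */
    case READY:   return to == RUNNING;         /* scheduler dispatch */
    case RUNNING: return to == READY            /* interrupted/preempted */
                      || to == WAITING          /* I/O or event wait */
                      || to == TERMINATED;      /* exit */
    case WAITING: return to == READY;           /* event completed */
    default:      return 0;                     /* terminated: no exits */
    }
}
```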

Process Control Block (PCB)
Information associated with each process
(also called task control block)
1. Process state – running, waiting, etc.
2. Program counter – address of the next instruction to execute
3. CPU registers – contents of all process-centric registers
4. CPU scheduling information – priorities, scheduling queue
pointers
5. Memory-management information – memory allocated to the
process
6. Accounting information – CPU used, clock time elapsed since
start, time limits
7. I/O status information – I/O devices allocated to process, list of
open files
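A PCB could be sketched as a C struct along these lines (field names and sizes are invented for illustration; real kernels differ):

```c
#include <sys/types.h>

#define MAX_OPEN_FILES 16

enum proc_state { PS_NEW, PS_READY, PS_RUNNING, PS_WAITING, PS_TERMINATED };

struct pcb {
    pid_t pid;                       /* process identifier */
    enum proc_state state;           /* process state */
    unsigned long program_counter;   /* next instruction to execute */
    unsigned long registers[16];     /* saved CPU registers */
    int priority;                    /* CPU-scheduling information */
    struct pcb *next_in_queue;       /* scheduling queue pointer */
    void *page_table;                /* memory-management information */
    unsigned long cpu_time_used;     /* accounting information */
    int open_files[MAX_OPEN_FILES];  /* I/O status information */
};
```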

CPU Switch

Threads
1. So far, process has a single thread of execution
2. Consider having multiple program counters per process
a. Multiple locations can execute at once
i. Multiple threads of control -> threads
3. Must then have storage for thread details, multiple program counters in PCB
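A minimal POSIX-threads sketch of two threads of control sharing one process's data section (the counter and iteration count are arbitrary):

```c
#include <pthread.h>

/* each thread gets its own program counter and stack, but both
   share the process's data section (this counter) */
static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);   /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int run_two_threads(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;                  /* 2000 once both threads finish */
}
```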

Process Scheduling
1. Maximize CPU use, quickly switch processes
onto CPU for time sharing
2. Process scheduler selects among available
processes for next execution on CPU
3. Maintains scheduling queues of processes
a. Job queue – set of all processes in the
system
b. Ready queue – set of all processes
residing in main memory, ready and
waiting to execute
c. Device queues – set of processes
waiting for an I/O device
d. Processes migrate among the various
queues

Schedulers
1. Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
a. Sometimes the only scheduler in a system
b. Short-term scheduler is invoked frequently (milliseconds) → must be fast
2. Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue
a. Long-term scheduler is invoked infrequently (seconds, minutes) → may be slow
b. The long-term scheduler controls the degree of multiprogramming
3. Processes can be described as either:
a. I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts
b. CPU-bound process – spends more time doing computations; few very long CPU
bursts
4. Long-term scheduler strives for good process mix

Schedulers
1. Medium-term scheduler can be added if the degree of multiprogramming needs to
decrease
a. Remove process from memory, store on disk, bring back in from disk to continue
execution: swapping

Context Switch
1. When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process via a context switch
2. Context of a process represented in the PCB
3. Context-switch time is overhead; the system does no useful work while switching
a. The more complex the OS and the PCB → the longer the context switch
4. Time dependent on hardware support
a. Some hardware provides multiple sets of registers per CPU → multiple contexts loaded
at once
5. System must provide mechanisms for:
a. process creation,
b. process termination,
c. and so on as detailed next

Process Creation
1. A parent process creates child processes, which, in turn, create other processes, forming a
tree of processes
2. Generally, process identified and managed via a process identifier (pid)
3. Resource sharing options
a. Parent and children share all resources
b. Children share subset of parent’s resources
c. Parent and child share no resources
4. Execution options
a. Parent and children execute concurrently
b. Parent waits until children terminate

A Tree of Processes (in Linux)

Process Creation
1. Address space
a. Child duplicate of parent
b. Child has a program loaded into it
2. UNIX examples
a. fork() system call creates new process
b. exec() system call used after a fork() to replace the process’ memory space with a
new program
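The fork()/exec()/wait() pattern above, as a small UNIX sketch (the /bin/true path is an assumption about the host system; the function name is invented):

```c
#include <sys/wait.h>
#include <unistd.h>

/* fork() a child, exec() a new program in it, wait() in the parent;
   returns the child's exit status (0 for /bin/true) */
int fork_and_exec_demo(void) {
    pid_t pid = fork();
    if (pid < 0) {                       /* fork failed */
        return -1;
    } else if (pid == 0) {               /* child: replace memory image */
        execlp("/bin/true", "true", (char *)NULL);
        _exit(127);                      /* only reached if exec fails */
    } else {                             /* parent: wait for the child */
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
}
```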

Process Termination
1. Process executes last statement and then asks the operating system to delete it using the
exit() system call.
a. Returns status data from child to parent (via wait())
b. Process’ resources are deallocated by operating system
2. Parent may terminate the execution of child processes using the abort() system call.
Some reasons for doing so:
a. Child has exceeded allocated resources
b. Task assigned to child is no longer required
c. The parent is exiting and the operating system does not allow a child to continue if
its parent terminates

Process Termination
1. Some operating systems do not allow a child to exist if its parent has terminated. If such a
process terminates, then all its children must also be terminated.
a. cascading termination. All children, grandchildren, etc. are terminated.
b. The termination is initiated by the operating system.
2. The parent process may wait for termination of a child process by using the wait() system
call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
3. If no parent is waiting (did not invoke wait()), the process is a zombie
4. If the parent terminated without invoking wait(), the process is an orphan
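A sketch of the exit()/wait() handshake: the child terminates with a status that the parent then reaps (the status value 42 is arbitrary; the function name is invented):

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* child calls _exit(42); parent retrieves that status via wait(),
   which also returns the terminated child's pid */
int reap_child_status(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(42);                   /* child terminates with status 42 */
    int status;
    pid_t done = wait(&status);      /* reap: no zombie is left behind */
    if (done != pid || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);
}
```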

Interprocess Communication
1. Processes within a system may be independent or cooperating
2. Cooperating process can affect or be affected by other processes, including sharing data
3. Reasons for cooperating processes:
a. Information sharing
b. Computation speedup
c. Modularity
d. Convenience
4. Cooperating processes need interprocess communication (IPC)
5. Two models of IPC
a. Shared memory
b. Message passing

Communication Models

(a) Message passing. (b) Shared memory.

Producer-Consumer Problem
1. Paradigm for cooperating processes, producer process produces information that is
consumed by a consumer process
a. unbounded-buffer places no practical limit on the size of the buffer
b. bounded-buffer assumes that there is a fixed buffer size

• Shared data:

    #define BUFFER_SIZE 10
    typedef struct {
        ...
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;
    int out = 0;

• Bounded Buffer – Producer:

    item next_produced;
    while (true) {
        /* produce an item in next_produced */
        while (((in + 1) % BUFFER_SIZE) == out)
            ; /* do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
    }

• Solution is correct, but can only use BUFFER_SIZE-1 elements

Producer-Consumer Problem
1. Paradigm for cooperating processes, producer process produces information that is
consumed by a consumer process
a. unbounded-buffer places no practical limit on the size of the buffer
b. bounded-buffer assumes that there is a fixed buffer size

• Shared data:

    #define BUFFER_SIZE 10
    typedef struct {
        ...
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;
    int out = 0;

• Bounded Buffer – Consumer:

    item next_consumed;
    while (true) {
        while (in == out)
            ; /* do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        /* consume the item in next_consumed */
    }

• Solution is correct, but can only use BUFFER_SIZE-1 elements
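To see why only BUFFER_SIZE-1 elements are usable, the producer's full test, (in + 1) % BUFFER_SIZE == out, can be exercised in a single-threaded C sketch (the helper function names are invented):

```c
#define BUFFER_SIZE 10

static int buffer[BUFFER_SIZE];
static int in = 0;
static int out = 0;

/* mirrors the producer's busy-wait test: returns 1 on success,
   0 when the buffer is considered full */
int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out)
        return 0;                  /* "full": one slot deliberately unused */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

/* counts how many items fit before the buffer reports full */
int count_capacity(void) {
    int n = 0;
    while (produce(n))
        n++;
    return n;                      /* BUFFER_SIZE - 1 = 9, not 10 */
}
```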

Interprocess Communication – Shared Memory
1. An area of memory shared among the processes that wish to communicate
2. The communication is under the control of the user processes, not the operating system.
3. The major issue is to provide a mechanism that will allow the user processes to synchronize
their actions when they access shared memory.
Message Passing
4. Mechanism for processes to communicate and to synchronize their actions
5. Message system – processes communicate with each other without resorting to shared
variables
6. IPC facility provides two operations:
a. send(message)
b. receive(message)
7. The message size is either fixed or variable

Message Passing
1. If processes P and Q wish to communicate, they need to:
a. Establish a communication link between them
b. Exchange messages via send/receive
2. Implementation issues:
a. How are links established?
b. Can a link be associated with more than two processes?
c. How many links can there be between every pair of communicating processes?
d. What is the capacity of a link?
e. Is the size of a message that the link can accommodate fixed or variable?
f. Is a link unidirectional or bi-directional?

Message Passing
1. Implementation of communication link
a. Physical:
i. Shared memory
ii. Hardware bus
iii. Network
b. Logical:
i. Direct or indirect
ii. Synchronous or asynchronous
iii. Automatic or explicit buffering

Direct Communication
1. Processes must name each other explicitly:
a. send (P, message) – send a message to process P
b. receive(Q, message) – receive a message from process Q
2. Properties of communication link
a. Links are established automatically
b. A link is associated with exactly one pair of communicating processes
c. Between each pair there exists exactly one link
d. The link may be unidirectional, but is usually bi-directional

Indirect Communication
1. Messages are sent to and received from mailboxes (also referred to as ports)
a. Each mailbox has a unique id
b. Processes can communicate only if they share a mailbox
2. Properties of communication link
a. Link established only if processes share a common mailbox
b. A link may be associated with many processes
c. Each pair of processes may share several communication links
d. Link may be unidirectional or bi-directional

Indirect Communication
1. Operations
a. create a new mailbox (port)
b. send and receive messages through mailbox
c. destroy a mailbox
2. Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
3. Mailbox sharing
a. P1, P2, and P3 share mailbox A
b. P1, sends; P2 and P3 receive
c. Who gets the message?
4. Solutions
a. Allow a link to be associated with at most two processes
b. Allow only one process at a time to execute a receive operation
c. Allow the system to select arbitrarily the receiver. Sender is notified who the receiver
was.
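A user-level sketch of the mailbox primitives send(A, message) / receive(A, message) — purely illustrative (a real system would place the mailbox in the kernel or in shared memory, and the type and capacity here are invented):

```c
#define MBOX_CAP 8

/* a tiny bounded mailbox: any process or thread holding a pointer to it
   may send or receive — the identity of the peer is never named */
typedef struct {
    int slots[MBOX_CAP];
    int head, tail, count;
} mailbox_t;

/* returns 0 on success, -1 if the mailbox is full */
int mbox_send(mailbox_t *m, int msg) {
    if (m->count == MBOX_CAP)
        return -1;
    m->slots[m->tail] = msg;
    m->tail = (m->tail + 1) % MBOX_CAP;
    m->count++;
    return 0;
}

/* returns 0 on success, -1 if the mailbox is empty */
int mbox_receive(mailbox_t *m, int *msg) {
    if (m->count == 0)
        return -1;
    *msg = m->slots[m->head];
    m->head = (m->head + 1) % MBOX_CAP;
    m->count--;
    return 0;
}
```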

Synchronization
1. Message passing may be either blocking or non-blocking
2. Blocking is considered synchronous
a. Blocking send -- the sender is blocked until the message is received
b. Blocking receive -- the receiver is blocked until a message is available
3. Non-blocking is considered asynchronous
a. Non-blocking send -- the sender sends the message and continues
b. Non-blocking receive -- the receiver receives:
i. A valid message, or
ii. Null message
4. Different combinations possible
a. If both send and receive are blocking, we have a rendezvous

Synchronization
1. Producer-consumer becomes trivial

    message next_produced;
    while (true) {
        /* produce an item in next_produced */
        send(next_produced);
    }

    message next_consumed;
    while (true) {
        receive(next_consumed);
        /* consume the item in next_consumed */
    }

Buffering
1. Queue of messages attached to the link.
2. Implemented in one of three ways:
a. Zero capacity – no messages are queued on a link. Sender must wait for receiver
(rendezvous)
b. Bounded capacity – finite length of n messages. Sender must wait if link full
c. Unbounded capacity – infinite length. Sender never waits

Remote Procedure Call
1. Remote procedure call (RPC) abstracts procedure calls between processes on networked
systems
a. Again uses ports for service differentiation
2. Stubs – client-side proxy for the actual procedure on the server
3. The client-side stub locates the server and marshalls the parameters
4. The server-side stub receives this message, unpacks the marshalled parameters, and
performs the procedure on the server
5. On Windows, stub code is compiled from a specification written in the Microsoft Interface
Definition Language (MIDL)

Remote Procedure Call
1. Data representation handled via External Data Representation (XDR) format to account
for different architectures
a. Big-endian and little-endian
2. Remote communication has more failure scenarios than local
a. Messages can be delivered exactly once rather than at most once
3. OS typically provides a rendezvous (or matchmaker) service to connect client and server

Pipes
1. Acts as a conduit allowing two processes to communicate
2. Issues:
a. Is communication unidirectional or bidirectional?
b. In the case of two-way communication, is it half or full-duplex?
c. Must there exist a relationship (i.e., parent-child) between the communicating
processes?
d. Can the pipes be used over a network?
3. Ordinary pipes – cannot be accessed from outside the process that created them. Typically, a
parent process creates a pipe and uses it to communicate with a child process that it
created.
4. Named pipes – can be accessed without a parent-child relationship.

Ordinary Pipes
1. Ordinary pipes allow communication in standard producer-consumer style
a. Producer writes to one end (the write-end of the pipe)
b. Consumer reads from the other end (the read-end of the pipe)
c. Ordinary pipes are therefore unidirectional
d. Require parent-child relationship between communicating processes
2. Windows calls these anonymous pipes
3. See Unix and Windows code samples in textbook
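A sketch of an ordinary UNIX pipe between parent (producer) and child (consumer); the "hello" payload and function name are arbitrary:

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* parent writes to fd[1] (write end); child reads from fd[0] (read end);
   returns 0 if the child received exactly "hello" */
int ordinary_pipe_demo(void) {
    int fd[2];
    if (pipe(fd) < 0)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                  /* child: consumer */
        char buf[16] = {0};
        close(fd[1]);                /* close the unused write end */
        read(fd[0], buf, sizeof(buf) - 1);
        close(fd[0]);
        _exit(strcmp(buf, "hello") == 0 ? 0 : 1);
    }
    close(fd[0]);                    /* parent: producer */
    write(fd[1], "hello", 5);
    close(fd[1]);
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```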

Named Pipes
1. Named Pipes are more powerful than ordinary pipes
2. Communication is bidirectional
3. No parent-child relationship is necessary between the communicating processes
4. Several processes can use the named pipe for communication
5. Provided on both UNIX and Windows systems
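A named-pipe (FIFO) sketch on UNIX: the two sides rendezvous on a filesystem pathname rather than on an inherited descriptor (a fork is used here only for a self-contained demo; the path and function name are assumptions):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

/* creates a FIFO at `path`, writes "ping" through it in one process
   and reads it in another; returns the number of bytes received */
int named_pipe_demo(const char *path) {
    unlink(path);                    /* remove a stale FIFO, if any */
    if (mkfifo(path, 0600) < 0)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                  /* writer: open blocks until a reader opens */
        int wfd = open(path, O_WRONLY);
        write(wfd, "ping", 4);
        close(wfd);
        _exit(0);
    }
    char buf[8] = {0};
    int rfd = open(path, O_RDONLY);  /* reader: blocks until the writer opens */
    int n = read(rfd, buf, sizeof(buf) - 1);
    close(rfd);
    waitpid(pid, NULL, 0);
    unlink(path);
    return n;
}
```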
