Operating System Lect6.1

The document discusses interprocess communication and process management in operating systems. It describes different methods of interprocess communication including message passing, shared memory, pipes, sockets and files. It also discusses process scheduling including scheduling queues, scheduling algorithms and types of schedulers.


Operating System

Lecture #6
Process Management
Objectives
• Cooperating and Independent Processes
• Inter process communication
• Process Management in Unix
• Threads
Inter Process Communication
• Interprocess communication (IPC) is the mechanism provided by the operating
system that allows processes to communicate with each other.
• This communication could involve a process letting another process know
that some event has occurred, or the transfer of data from one process to
another.
Inter Process Communication
• Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
• A cooperating process is one that can affect, or be affected by, other
processes executing in the system.
• An independent process is one that neither affects nor is affected by any
other process in the system.
Cooperating processes
• A cooperating process shares data with other processes in the system,
either directly through shared memory or by exchanging data through files
or messages.
• Cooperating processes require a communication method that allows them to
exchange data and information.
• There are two methods by which cooperating processes can communicate:
• Cooperation by Sharing
• Cooperation by Message Passing
Cooperation by Sharing
• Cooperating processes can communicate
with each other using a shared resource,
which may include data, memory,
variables, files, etc.
• Processes exchange information by
reading and writing data in the shared
region. A critical section can be used to
preserve data integrity and avoid data
inconsistency.
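The sharing idea above can be sketched in a few lines. This is a minimal illustration, not a full IPC implementation: it assumes a POSIX system (it relies on `os.fork` and on the fact that an anonymous `mmap` region is mapped `MAP_SHARED`, so parent and child see the same bytes).

```python
# Sketch of cooperation by sharing (POSIX-only): a child process writes
# into a shared memory region and the parent reads the value back.
import mmap
import os
import struct

# Anonymous mapping with fileno=-1 is MAP_SHARED on Unix, so after fork()
# the parent and child share these 8 bytes.
shared = mmap.mmap(-1, 8)

pid = os.fork()
if pid == 0:
    # Child: write a 64-bit integer into the shared region.
    shared.seek(0)
    shared.write(struct.pack("q", 42))
    os._exit(0)

os.waitpid(pid, 0)           # wait for the child so the write has completed
shared.seek(0)
value = struct.unpack("q", shared.read(8))[0]
print(value)
```

Note that the `waitpid` call stands in for the synchronization a real critical section would provide: without it, the parent could read before the child has written.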
Message passing
• Cooperating processes can also
communicate with each other with the
help of message passing: the producer
process sends a message and the
consumer process receives it.
• There is no shared memory; instead, the
producer process first sends the message
to the kernel, and the kernel then
delivers that message to the consumer
process.
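The producer/consumer exchange above can be sketched as follows. This is an illustrative model, not real kernel message passing: a `queue.Queue` stands in for the kernel's message channel, and threads stand in for the two processes to keep the example self-contained.

```python
# Sketch of message passing: a producer sends messages through a channel
# (standing in for the kernel) and a consumer receives them.
import queue
import threading

channel = queue.Queue()          # plays the role of the kernel's message buffer
received = []

def producer():
    for item in ["msg1", "msg2", "msg3"]:
        channel.put(item)        # send: hand the message over to the channel
    channel.put(None)            # sentinel: no more messages

def consumer():
    while True:
        msg = channel.get()      # receive: take the next message
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)
```

Unlike the shared-memory scheme, neither side touches the other's data directly; everything moves through the channel.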
Task
• Explore the differences between message passing and shared memory.
Inter Process Communication
• To send and receive messages in message passing, a communication link
is required between two processes. There are various ways to implement a
communication link.
• Direct and indirect communication
• Synchronous and asynchronous communication
• Buffering
Direct communication
• In direct communication, each process must explicitly name the recipient or
sender of the communication.
• send(P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
• A link is established between exactly one pair of communicating processes.
• Between each pair of processes there exists at most one direct link.
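The naming scheme above can be sketched with one queue per ordered pair of processes, matching the property that a direct link connects exactly one pair. The three-argument `send`/`receive` signatures are an implementation assumption for this sketch (the slide's primitives name only the peer).

```python
# Sketch of direct communication: sender and receiver explicitly name
# each other, and each ordered pair of processes has its own link.
import queue

links = {("Q", "P"): queue.Queue(), ("P", "Q"): queue.Queue()}

def send(sender, dest, message):
    links[(sender, dest)].put(message)   # e.g. send(P, message) names process P

def receive(sender, dest):
    return links[(sender, dest)].get()   # e.g. receive(Q, message) names process Q

send("Q", "P", "hello")      # process Q sends to process P
msg = receive("Q", "P")      # process P receives from process Q
print(msg)
```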
Indirect communication
• In indirect communication, messages are sent and received through
mailboxes or ports.
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from mailbox A
• Different pairs of processes can use the same indirect link for
communication.
• Also, two processes can use two different indirect links for
communication.
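The contrast with direct communication can be sketched as follows: messages are addressed to a named mailbox rather than to a process, so several different senders can share the same link. The mailbox name "A" mirrors the slide; the dictionary-of-queues layout is an assumption of this sketch.

```python
# Sketch of indirect communication: messages pass through a named mailbox,
# and different processes can all use the same mailbox.
import queue

mailboxes = {"A": queue.Queue()}

def send(box, message):
    mailboxes[box].put(message)      # send(A, message)

def receive(box):
    return mailboxes[box].get()      # receive(A, message)

send("A", "from P1")                 # two different senders share mailbox A
send("A", "from P2")
first, second = receive("A"), receive("A")
print(first, second)
```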
Synchronous and
asynchronous communication
• Message passing may be blocking (synchronous) or non-blocking
(asynchronous).
• Blocking send – blocks the sending process until the receiving process receives
the message. This means that the sending process will not proceed until the
receiving process is there. For example, in a class, the teacher will not
speak until there are students sitting to listen.
• Non-blocking send – the sending process sends the message and resumes
operation. The sending process does not care whether the receiver is
there or not.
Synchronous and
asynchronous communication
• Blocking receive – the receiver blocks until a message is available. This
means that the receiving process will not proceed until the sending
process has sent a message. For example, in a class, the students do not
leave until the teacher comes and delivers the lecture.
• Non-blocking receive – the receiver process does not wait for the sender
to send data. The receiver retrieves either a valid message or
NULL.
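The non-blocking receive above can be sketched with a queue: a non-blocking `get` returns immediately with either a valid message or, in this sketch, Python's `None` standing in for the slide's NULL.

```python
# Sketch of non-blocking receive: return immediately with a message or None.
import queue

mailbox = queue.Queue()

def receive_nonblocking():
    try:
        return mailbox.get_nowait()  # does not wait for a sender
    except queue.Empty:
        return None                  # the "NULL" case from the slide

empty_result = receive_nonblocking() # nothing sent yet -> None
mailbox.put("data")
full_result = receive_nonblocking()  # a valid message is available
print(empty_result, full_result)
```

A blocking receive would instead be `mailbox.get()`, which suspends the caller until a message arrives.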
Buffering
• The sender and receiver can run at different speeds, so it is important to
introduce some kind of temporary storage. For example, if your internet connection is slow,
a YouTube video gets buffered. The messages exchanged by the communicating processes
reside in a temporary queue. There are three ways to implement this queue.
• Zero capacity buffer – there is no buffering in the system; no message can wait in
the queue, so the sender must block until the receiver receives the message.
• Bounded buffer – the memory used for buffering has a fixed bound, so you
cannot buffer unlimited data; the sender blocks when the buffer is full.
• Unbounded buffer – unlimited memory is available for buffering, so the
sender never blocks.
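The bounded case above can be observed directly with a fixed-capacity queue. This sketch uses a non-blocking put so the "buffer full" condition is visible as an exception instead of a block; the capacity of 2 is arbitrary.

```python
# Sketch of a bounded buffer: the queue's capacity is fixed, so a fast
# sender eventually cannot add more messages.
import queue

bounded = queue.Queue(maxsize=2)

bounded.put_nowait("m1")
bounded.put_nowait("m2")
try:
    bounded.put_nowait("m3")     # a third message exceeds the bound
    overflowed = False
except queue.Full:
    overflowed = True            # a blocking send would suspend here instead
print(overflowed)
```

With a blocking `put`, the sender would simply wait here until the receiver drained a slot; a zero-capacity buffer is the limiting case where every send waits for its matching receive.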
Inter Process Communication
• The different approaches to implementing interprocess communication are given as follows:
• Pipe
• A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data channel
between two processes. This uses standard input and output methods.
• Socket
• The socket is the endpoint for sending or receiving data in a network. This is true for data sent between
processes on the same computer or data sent between different computers on the same network. Most of the
operating systems use sockets for interprocess communication.
• File
• A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple
processes can access a file as required. All operating systems use files for data storage.
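The unidirectional pipe described above can be sketched with the POSIX pipe system call. This is a POSIX-only illustration (it relies on `os.pipe` and `os.fork`): the child writes into one end and the parent reads from the other, and each side closes the end it does not use.

```python
# Sketch of a unidirectional pipe between a parent and a child process
# (POSIX-only: uses os.pipe and os.fork).
import os

read_fd, write_fd = os.pipe()    # one read end, one write end

pid = os.fork()
if pid == 0:
    os.close(read_fd)            # child only writes
    os.write(write_fd, b"hello via pipe")
    os.close(write_fd)
    os._exit(0)

os.close(write_fd)               # parent only reads
data = os.read(read_fd, 1024)
os.close(read_fd)
os.waitpid(pid, 0)
print(data.decode())
```

Two-way communication, as the slide notes, would need a second pipe set up in the opposite direction.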
Inter Process Communication
• Signal
• Signals are useful in interprocess communication in a limited way. They are system messages that are sent
from one process to another. Normally, signals are not used to transfer data but to notify a
process that some event has occurred or to issue simple commands between processes.
• Shared Memory
• Shared memory is the memory that can be simultaneously accessed by multiple processes. This is done so
that the processes can communicate with each other.
• Message Queue
• Multiple processes can read and write data to the message queue without being connected to each other.
Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for
interprocess communication and are used by most operating systems.
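The signal mechanism above can be sketched as follows. This is a POSIX-only illustration using `SIGUSR1`, and for simplicity the process signals itself; a real sender would pass another process's PID to `os.kill`. Note the handler receives no data payload, matching the slide's point that signals notify rather than transfer data.

```python
# Sketch of a signal as a lightweight notification (POSIX-only; the
# process signals itself with SIGUSR1 for self-containment).
import os
import signal

events = []

def handler(signum, frame):
    events.append(signum)        # record that the event occurred; no payload

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)   # "send" the signal

while not events:                # handler runs at the next opportunity
    pass

print(events)
```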
Scheduler
• The goal of a multiprogramming system is to keep the CPU busy at all times. For
a time-sharing system, the goal is to switch between user processes so that each user
can interact with the running process.
• To meet these goals, a scheduler selects one out of many available processes for
execution by the CPU. There are three types of schedulers in an operating system:
• long-term scheduler,
• short-term scheduler,
• and medium-term scheduler.
Scheduling Queues
• Job queue – comprises all processes in the system
• Ready queue – comprises all processes residing in main memory that
are ready to run
• Device queue – processes waiting for a particular device
• A new process is first admitted into the ready queue, where it waits until it
is dispatched (allocated to the CPU for execution).
Five possibilities
• The process terminates normally and exits the system.
• The process issues an I/O request and moves into the device queue. Once it
finishes with the I/O device, it moves back into the ready queue.
• The time quantum of the process expires (remember, in a
time-sharing system, every process gets a fixed amount of CPU time).
The process is then added back into the ready queue.
• The process creates a child process and waits for the child process to
terminate; once the child terminates, it is added back to the ready queue.
• A higher-priority process interrupts the currently running process,
forcing the current process to change its state to the ready state.
Types of scheduler
Long term scheduler
• The job queue consists of all processes created in the system.
• These reside in secondary storage.
• Since secondary storage is larger than the RAM, not all the
processes in the job queue can be accommodated in the RAM.
• Hence, selecting a few out of all the processes in the job queue is the task
of the long-term scheduler.
Short term scheduler
• All the processes in the RAM reside in the ready queue.
• Since there is one CPU, the OS selects only one process from the ready
queue to allocate to the CPU. This selection is made by the OS module
known as the short-term scheduler.
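The selection above can be sketched as a toy short-term scheduler. This sketch assumes the simplest policy, first-come-first-served (the slide does not commit to a particular scheduling algorithm), with process names as stand-ins for PCBs.

```python
# Toy short-term scheduler: pick the next process from the ready queue
# (first-come-first-served policy, assumed for this sketch).
from collections import deque

ready_queue = deque(["P1", "P2", "P3"])

def short_term_schedule(ready):
    """Select one process from the ready queue to allocate the CPU to."""
    return ready.popleft() if ready else None

running = short_term_schedule(ready_queue)
print(running, list(ready_queue))
```

A different policy (priority, round robin, shortest job first) would only change how `short_term_schedule` picks from the queue.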
Mid term scheduler
• A process is of two types:
CPU-bound process – if it spends most of its time with the CPU;
I/O-bound process – if it spends most of its time with I/O devices.
• It is important to have a mix of CPU-bound and I/O-bound processes in the RAM (to use
all the resources).
• The system cannot determine whether a process is CPU-bound or I/O-bound when it is
new.
• The selection done by the long-term scheduler might therefore consist of mostly CPU-bound
or mostly I/O-bound processes.
• Hence, if the mix is not good, the OS swaps some of the processes with those in the
job queue. This swapping is the task of the medium-term scheduler.
• Finally, swapping may also be required to lower the degree of multiprogramming or
to accommodate a higher-priority process if sufficient space is not available in the RAM.
Context Switching
• Context switching is the method used by the operating system to switch the CPU
from one process to another.
• Context switching is the mechanism that allows multiple processes to share a single CPU.
• Context switching stores the status of the ongoing process so that the process can be reloaded
from the same point at which it was stopped.
• When a switch is performed, the system stores the old running process's status in the form of
registers and assigns the CPU to a new process to execute its tasks.
• While the new process is running, the previous process waits in a ready queue.
• The execution of the old process later resumes at the point where it was stopped.
Steps for context switching
• First, the context switch saves the state of the running process P1, in the form of the program
counter and the registers, into its PCB (Process Control Block).
• PCB1 of process P1 is then updated, and the process is moved to the appropriate queue, such as
the ready queue, the I/O queue, or the waiting queue.
• After that, another process is selected to enter the running state: a new process is picked from
the ready queue, for example the process with the highest priority.
• Now, the PCB of the selected process P2 is updated. This includes switching its process state
from ready to running, or from another state such as blocked, exit, or suspended.
• If the CPU previously executed process P2, the saved status of P2 is restored so that it resumes
execution at the same point where the earlier interrupt occurred.
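The save/restore steps above can be sketched with PCBs modeled as dictionaries. The specific fields (`pc`, `registers`, `state`) and values are illustrative assumptions; a real context switch manipulates hardware registers and kernel data structures.

```python
# Toy context switch: save the running process's context into its PCB,
# then load another process's saved context onto the CPU.
cpu = {"pc": 100, "registers": {"r0": 7}}            # current CPU state (P1 running)

pcb_p1 = {"state": "running", "pc": None, "registers": None}
pcb_p2 = {"state": "ready", "pc": 200, "registers": {"r0": 3}}

def context_switch(old_pcb, new_pcb, cpu_state):
    # Step 1: save the old process's program counter and registers into its PCB.
    old_pcb["pc"] = cpu_state["pc"]
    old_pcb["registers"] = dict(cpu_state["registers"])
    old_pcb["state"] = "ready"
    # Step 2: restore the new process's saved context onto the CPU.
    cpu_state["pc"] = new_pcb["pc"]
    cpu_state["registers"] = dict(new_pcb["registers"])
    new_pcb["state"] = "running"

context_switch(pcb_p1, pcb_p2, cpu)
print(cpu["pc"], pcb_p1["pc"])   # P2's context is on the CPU; P1's is saved
```

When P1 is later rescheduled, the same function with the arguments swapped would restore it to exactly the point recorded in its PCB.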
Triggers for Context Switching
• Multitasking: In a multitasking environment, when one process is utilizing the CPU and
another process needs the CPU, a context switch is triggered. The context switch saves the
state of the old process and passes control of the CPU to the new process.
• Interrupt Handling: When an interrupt takes place, the CPU needs to handle it. So,
before handling the interrupt, a context switch is triggered, which saves the state of the
running process.
• User and Kernel Mode Switching: User mode is the normal mode, in which a user
application executes with limited access, whereas kernel mode is the mode in which a
process can carry out system-level operations that are not available in user mode. So,
whenever a switch from user mode to kernel mode takes place, the mode switch may
trigger a context switch, which stores the state of the ongoing process.
Any Queries?
