
BCA V SEM

BCA 301
Syllabus
 Processes: Process Concept, Process Scheduling,
Operation on Processes
 CPU Scheduling: Basic Concepts, Scheduling
Criteria, Scheduling Algorithms
 Process Synchronization: Background, The Critical-
Section Problem, Semaphores solution to critical
section problem
 Process related commands in Linux: ps, top, pstree,
nice, renice and system calls
Processes
 A process is basically a program in execution. The
execution of a process must progress in a sequential
fashion.
 A process is defined as an entity which represents the
basic unit of work to be implemented in the system.
 In simple terms, we write our computer programs in a text
file, and when we execute the program it becomes a
process which performs all the tasks mentioned in the
program.
 When a program is loaded into the memory and it becomes
a process, it can be divided into four sections ─ stack, heap,
text and data.
Processes
S.N. Component & Description
1 Stack
The stack is used for local variables; space on the stack is reserved for
them when they are declared. The stack thus contains temporary data
such as method/function parameters, return addresses and local variables.

2 Heap
Memory that is dynamically allocated to the process during its run time.
The heap is used for dynamic memory allocation and is managed via
calls to new, delete, malloc, free, etc.

3 Data
The data section is made up of the global and static variables, allocated
and initialized prior to executing main.

4 Text
The text section is made up of the compiled program code, read in from
non-volatile storage when the program is launched. It also includes the
current activity, represented by the value of the program counter and the
contents of the processor's registers.
Program
 A program is a piece of code which may be a single
line or millions of lines. A computer program is usually
written by a computer programmer in a
programming language.
 A computer program is a collection of instructions that
performs a specific task when executed by a computer.
When we compare a program with a process, we can
conclude that a process is a dynamic instance of a
computer program.
 A part of a computer program that performs a well-
defined task is known as an algorithm. A collection of
computer programs, libraries and related data are
referred to as software.
Comparison Chart of Process and Program
BASIS FOR COMPARISON   PROGRAM                            PROCESS
Basic                  A program is a set of              When a program is executed,
                       instructions.                      it is known as a process.
Nature                 Passive                            Active
Lifespan               Longer                             Limited
Required resources     A program is stored on disk in     A process holds resources such as
                       some file and does not require     the CPU, memory address space,
                       any other resources.               disk, I/O etc.
Attributes of a process
 The attributes of a process are used by the
Operating System to create the process control block
(PCB) for each of them.
 A Process Control Block is a data structure maintained
by the Operating System for every process.
 The PCB is identified by an integer process ID (PID).
It is also called the context of the process.
Attributes of a Process (PCB)
Attributes of a process
 1. Process ID: When a process is created, a unique ID is
assigned to it, which is used for unique identification of the
process in the system.
 2. Program counter: The program counter stores the address
of the next instruction of the process to be executed, i.e., the
point at which the process was suspended. The CPU uses this
address when the execution of the process is resumed.
 3. Process State: The process, from its creation to its
completion, goes through various states: new, ready, running
and waiting.
 4. Priority: Every process has its own priority. The process
with the highest priority among the ready processes gets the CPU
first. This is also stored in the process control block.
Attributes of a process
 5. General Purpose Registers: Every process has its
own set of registers, which hold the data generated
during the execution of the process.
 6. List of open files: During execution, every
process uses some files which need to be present in
main memory. The OS maintains a list of open files in
the PCB.
 7. List of open devices: The OS also maintains the list
of all open devices used during the execution of the
process.
CPU Switch from Process to Process
Process Life Cycle
Process Life Cycle (Process States)
 NEW - The process is being created. A program which
is going to be picked up by the OS into main
memory is called a new process.
 READY - The OS picks new processes from
secondary memory and puts them in main memory.
Processes which are ready for execution and reside in
main memory are called ready-state processes. There
can be many processes present in the ready state. A
process may come into this state after the Start state,
or while running it may be interrupted by the
scheduler when the CPU is assigned to some other
process.
Process Life Cycle (Process States)

 RUNNING - Instructions are being executed. One of
the processes from the ready state is chosen by the
OS, depending upon the scheduling algorithm.
Hence, if we have only one CPU in our system, the
number of running processes at any particular time
will always be one. If we have n processors in the
system, then we can have n processes running
simultaneously.
Process Life Cycle
 WAITING - The process is waiting for some event to occur
(such as an I/O completion or the reception of a signal).
 From the running state, a process can make the transition
to the block or wait state depending upon the scheduling
algorithm or the intrinsic behavior of the process.
 When a process waits for a certain resource to be assigned,
or for input from the user, the OS moves the process
to the block or wait state and assigns the CPU to
other processes.
 A process moves into the waiting state if it needs to wait for a
resource, such as user input or a file to become available.
Process Life Cycle (Process States)
 TERMINATED - The process has finished execution.
When a process finishes its execution, it comes into the
terminated state.
 All the context of the process (its Process Control Block)
is deleted, and the process is terminated by the
operating system.
Process Scheduling
 The act of determining which process is in the ready state,
and should be moved to the running state is known
as Process Scheduling.
 The prime aim of the process scheduling system is to keep
the CPU busy all the time and to deliver minimum response
time for all programs. For achieving this, the scheduler
must apply appropriate rules for swapping
processes IN and OUT of CPU.
 Process scheduling is an essential part of
Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the
executable memory at a time and the loaded process shares
the CPU using time multiplexing.
Process Scheduling Queues
 The OS maintains all PCBs in process scheduling queues.
 The OS maintains a separate queue for each of the process
states; the PCBs of all processes in the same execution state
are placed in the same queue.
 When the state of a process is changed, its PCB is unlinked
from its current queue and moved to its new state queue.
The Operating System maintains the following
important process scheduling queues:
 All processes, upon entering the system, are stored in
the Job Queue.
 Processes in the Ready state are placed in the Ready
Queue.
 Processes waiting for a device to become available are
placed in Device Queues. There is a unique device queue
for each I/O device.
Scheduling Queues
 Job Queue - As processes enter the system, they are put into the job
queue, which consists of all the processes in the system. It is
maintained in secondary memory. The long-term scheduler
(job scheduler) picks some of the jobs and puts them in
primary memory.
 Ready Queue - The processes residing in main memory,
ready and waiting to execute, are kept in a list called the
ready queue. This queue is generally stored as a linked list; the
ready-queue header contains pointers to the first and final PCBs in the
list. The short-term scheduler picks a job from the ready queue
and dispatches it to the CPU for execution.
 I/O Device Queue - The list of processes waiting for a particular
I/O device, such as a dedicated tape drive or a shared device like a disk,
is called a device queue. Each device has its own device queue.
Queuing Diagram representation of
process scheduling
A new process is initially put into the ready queue, where it
waits until it is selected for execution (dispatched). Once the
process is assigned to the CPU and is executing, one of several
events could occur:

 The process could issue an I/O request and then be placed
in an I/O queue.
 The process could create a new subprocess and wait for its
termination.
 The process could be removed forcibly from the CPU as a
result of an interrupt and put back in the ready queue.

A process continues this cycle of switching among the different
queues until it terminates, at which point it is removed from all
queues and its PCB and resources are deallocated.
Types of Schedulers
 Schedulers are special system software which handle
process scheduling in various ways. Their main task is
to select the jobs to be submitted into the system and
to decide which process to run. There are three types
of schedulers:
 Long Term Scheduler
 Short Term Scheduler
 Medium Term Scheduler
Long Term Scheduler
 It is also called a job scheduler.
 A long-term scheduler determines which programs
are admitted to the system for processing.
 It selects processes from the job queue and loads them
into memory for execution, where they await CPU
scheduling.
 The long-term scheduler mainly controls the degree of
multiprogramming.
 Its purpose is to choose a good mix of I/O-bound and
CPU-bound processes from among the jobs present in
the pool.
Short Term Scheduler
 The short-term scheduler is also known as the CPU scheduler.
It selects one of the jobs from the ready queue and
dispatches it to the CPU for execution.
 Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the
change of a process from the ready state to the running state:
the CPU scheduler selects a process from among those that
are ready to execute and allocates the CPU to it.
 The short-term scheduler makes the decision of which
process to execute next. Short-term schedulers are invoked
far more often, and are therefore faster, than long-term
schedulers.
Short Term Scheduler
 A scheduling algorithm is used to select which job will be
dispatched for execution. The job of the short-term
scheduler can be very critical: if it selects a job whose CPU
burst time is very high, then all the jobs after it will have
to wait in the ready queue for a very long time.
 This problem is called starvation, and it may arise if the
short-term scheduler makes poor choices while selecting
jobs.
 The primary aim of this scheduler is to enhance CPU
performance and increase the process execution rate.
Medium Term Scheduler
 The medium-term scheduler takes care of swapped-out
processes. If a running process needs some I/O time for
its completion, its state must change from running to
waiting.
 The medium-term scheduler removes the process from
memory to make room for other processes. These are the
swapped-out processes, and this procedure is called
swapping. The medium-term scheduler is responsible for
suspending and resuming processes.
 It reduces the degree of multiprogramming. Swapping is
necessary to maintain a good mix of processes in the
ready queue.
Comparison among Schedulers

S.N.  Long-Term Scheduler              Short-Term Scheduler               Medium-Term Scheduler
1     It is a job scheduler.           It is a CPU scheduler.             It is a process-swapping scheduler.
2     Speed is lesser than the         Speed is the fastest among         Speed lies between the short-term
      short-term scheduler.            the three.                         and long-term schedulers.
3     It controls the degree of        It provides lesser control over    It reduces the degree of
      multiprogramming.                the degree of multiprogramming.    multiprogramming.
4     It is almost absent or minimal   It is also minimal in a            It is a part of time-sharing
      in time-sharing systems.         time-sharing system.               systems.
5     It selects processes from the    It selects those processes         It can re-introduce a process into
      pool and loads them into         which are ready to execute.        memory, and execution can be
      memory for execution.                                               continued.
What is Context Switch?
 Switching the CPU to another process requires saving the
state of the old process and loading the saved state for the
new process. This task is known as a Context Switch.
 The context of a process is represented in the Process
Control Block (PCB) of a process; it includes the value of
the CPU registers, the process state and memory-
management information. When a context switch occurs,
the Kernel saves the context of the old process in its PCB
and loads the saved context of the new process scheduled
to run.
 Context switch time is pure overhead, because
the system does no useful work while switching. Its
speed varies from machine to machine, depending on the
memory speed, the number of registers that must be
copied, and the existence of special instructions (such as a
single instruction to load or store all registers). Typical
speeds range from 1 to 1000 microseconds.
Operations on Processes
The processes in the system can execute concurrently and they must be created and
deleted dynamically.
 1. Creation
Once the process is created, it comes into the ready queue
(main memory) and is ready for execution.
 2. Scheduling
Out of the many processes present in the ready queue, the operating
system chooses one process and starts executing it. Selecting the
process which is to be executed next is known as scheduling.
 3. Execution
Once the process is scheduled for execution, the processor starts
executing it. The process may move to the blocked or wait state during
execution; in that case the processor starts executing other
processes.
 4. Deletion/killing
Once the purpose of the process has been served, the OS kills the
process. The context of the process (PCB) is deleted and the
process is terminated by the operating system.
System Call
 When a program in user mode requires access to RAM or a hardware
resource, it must ask the kernel to provide access to that resource. This is
done via something called a system call.
 When a program makes a system call, the mode is switched from user mode
to kernel mode. This is called a mode switch.
 The kernel then provides the resource which the program requested. After
that, another mode switch happens, changing from kernel mode back to
user mode.

 Generally, system calls are made by user-level programs in the following
situations:
 Creating, opening, closing and deleting files in the file system.
 Creating and managing new processes.
 Creating a connection in the network, and sending and receiving packets.
 Requesting access to a hardware device, like a mouse or a printer.
 In a typical UNIX system, there are around 300 system calls.
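As an illustration (a minimal sketch; the file name demo.txt and the message are
made up for this example), each call below is a library wrapper that traps into
the kernel:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644); /* file-system call */
    if (fd < 0)
        return 1;
    write(fd, "hello\n", 6);   /* write() runs in kernel mode on our behalf */
    close(fd);
    printf("my PID is %d\n", getpid()); /* getpid() is also a system call */
    return 0;
}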
Process Creation
Fork()
 The fork() system call is used to create processes. When a
process (a program in execution) makes a fork() call, an exact
copy of the process is created. Now there are two processes, one
being the parent process and the other being the child
process.

 The process which called fork() is the parent process,
and the newly created process is the child process. The
child process is an exact copy of the parent: the process
state of the parent, i.e., the address space, variables, open
files etc., is copied into the child process. This means that
the parent and child have identical but physically
separate address spaces; changes to values in the parent
process do not affect the child, and vice versa.
Process Creation
 Let's look at an example:

/* example.c */
#include <stdio.h>
#include <unistd.h>     /* for fork() */

int main(void)
{
    int val;
    val = fork();        /* line A */
    printf("%d\n", val); /* line B */
    return 0;
}

 When the above code runs and line A is executed, a child process is
created. Both processes then continue execution from line B. To differentiate
between the child process and the parent process, we look at the value
returned by the fork() call.

 The difference is that in the parent process, fork() returns a value which is
the process ID of the child process, while in the child process, fork()
returns 0.

 This means that, in the above program, the output of the parent process will be
the process ID of the child process, and the output of the child process will be 0.
Process Creation
Exec()
 The exec() family of system calls is also used in creating
processes, but there is one big difference between fork()
and exec().
 The fork() call creates a new process while preserving
the parent process.
 An exec() call, by contrast, replaces the address space, text
segment, data segment etc. of the current process with a
new program.
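A minimal sketch of how fork() and exec() are typically combined (the choice
of "ls -l" is illustrative, not from the slides): the child replaces its own
image with a new program while the parent survives.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                              /* child */
        execlp("ls", "ls", "-l", (char *) NULL); /* replaces the address space */
        perror("exec failed");                   /* reached only if exec fails */
        return 1;
    }
    wait(NULL);                                  /* parent waits for the child */
    printf("parent still running\n");
    return 0;
}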
Process Termination
 By making the exit() system call, typically returning an integer value,
a process may request its own termination. This integer value is passed
along to the parent if it is doing a wait(), and is typically zero on successful
completion and some nonzero value in the event of a problem.

 Processes may also be terminated by the system for a variety of reasons,
including:
• The inability of the system to deliver the necessary system resources.
• In response to a KILL command or other unhandled process interrupt.
• A parent may kill its children if the task assigned to them is no longer
needed, i.e., if the need for having a child terminates.
• If the parent exits, the system may or may not allow the child to
continue without a parent. (In UNIX systems, orphaned processes are
generally inherited by init, which then proceeds to kill them.)
Process Termination
 When a process ends, all of its system resources are freed up,
open files flushed and closed, etc. The process termination
status and execution times are returned to the parent if the
parent is waiting for the child to terminate, or eventually
returned to init if the process already became an orphan.
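A small sketch of this hand-off (the exit status 7 is an arbitrary illustrative
value): the child requests its own termination with exit(), and the waiting
parent collects the status.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    if (fork() == 0)
        exit(7);               /* child requests its own termination */
    int status;
    wait(&status);             /* termination status returned to the parent */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}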
Inter process Communication
 The concurrent processes in an operating system are of
two types:
 Independent processes
 An independent process is one that cannot affect
or be affected by other processes. Independent
processes do not share any data, temporary or
persistent, with any other process.
 Cooperating processes
 A cooperating process can affect or be affected by other
processes executing in the system. Cooperating
processes share data with other processes.
Inter process Communication
 Inter-process communication (IPC) is a mechanism
which allows processes to communicate with each other
and synchronize their actions. The communication
between these processes can be seen as a method of
cooperation between them. Processes can
communicate with each other in two ways:

 Shared Memory
 Message Passing
Interprocess Communication through Shared memory-
Bounded Buffer/Producer Consumer Problem
 There are two processes: Producer and Consumer. The Producer
produces some item and the Consumer consumes that item. The
two processes share a common space or memory location
known as a buffer, where the item produced by the Producer is
stored and from where the Consumer consumes it when
needed.
 If the number of produced items equals the size of the buffer, the
producer waits for items to be consumed by the Consumer. Similarly,
the consumer first checks for the availability of an item, and if no
item is available, the Consumer waits for the producer to produce one.
If items are available, the consumer consumes one. The
pseudocode is given below:
Interprocess Communication through Shared memory-
Bounded Buffer/Producer Consumer Problem
Producer Process Code

item nextProduced;

while (1) {
    /* busy-wait while the buffer is full */
    while (((in + 1) % buffer_size) == out)
        ;
    buffer[in] = nextProduced;
    in = (in + 1) % buffer_size;
}

Consumer Process Code

item nextConsumed;

while (1) {
    /* busy-wait while the buffer is empty */
    while (in == out)
        ;
    nextConsumed = buffer[out];
    out = (out + 1) % buffer_size;
}
Interprocess Communication through Shared memory-
Bounded Buffer/Producer Consumer Problem
Message Passing Method
 In this method, processes communicate with each other
without using any kind of shared memory. This is the best
approach for inter-process communication when distributed
processes at different locations must communicate over a
network. If two processes P1 and P2 want to communicate
with each other, they proceed as follows:
 Establish a communication link (if a link already exists,
there is no need to establish it again). The link may be
direct or indirect, as described below.
 Start exchanging messages using the basic primitives:
– send(destination, message)
– receive(source, message)
Message Passing Method
 A link has some capacity that determines the number of
messages that can reside in it temporarily. Every link has
a queue associated with it, which can be of zero capacity,
bounded capacity, or unbounded capacity.
 With zero capacity, the sender waits until the receiver
informs it that the message has been received.
 In the non-zero capacity cases, a process does not know
whether a message has been received after the send
operation; for that, the sender must communicate with the
receiver explicitly. Implementation of the link depends on
the situation: it can be either a direct communication link
or an indirect communication link.
Direct Communication
 Direct communication links are implemented when the
processes use a specific process identifier for the
communication. The process which wants to communicate must
explicitly name the recipient or sender of the communication,
for example, a print server.
 Send(P, message) – send a message to process P
 Receive(Q, message) – receive a message from process Q
 In this method of communication, the communication link is
established automatically. It can be either unidirectional or
bidirectional, but one link is used between exactly one pair of
sender and receiver, and a pair should not possess more than
one link. The problem with this method of communication is
that if the name of one process changes, the method will not work.
Indirect Communication
 Indirect communication is done via a shared mailbox
(port), which consists of a queue of messages. The sender keeps
the message in the mailbox and the receiver picks it up.
Processes use mailboxes (also referred to as ports) for
sending and receiving messages.
 Each mailbox has a unique ID, and processes can
communicate only if they share a mailbox. A link is established
only if processes share a common mailbox, and a single link
can be associated with many processes. Each pair of
processes can share several communication links, and these
links may be unidirectional or bidirectional.
 Suppose two processes want to communicate through indirect
message passing; the required operations are: create a
mailbox, use this mailbox for sending and receiving
messages, and destroy the mailbox.
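On Linux, a POSIX message queue behaves much like the mailbox described
above; the sketch below (the mailbox name "/mbox" and the message sizes are
made up) creates a mailbox, sends and receives through it, and destroys it.
Compile with -lrt.

#include <stdio.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mb = mq_open("/mbox", O_CREAT | O_RDWR, 0644, &attr); /* create mailbox */
    mq_send(mb, "hello", 6, 0);             /* sender keeps the message in the mailbox */
    char buf[64];
    mq_receive(mb, buf, sizeof buf, NULL);  /* receiver picks it up */
    printf("got: %s\n", buf);
    mq_close(mb);
    mq_unlink("/mbox");                     /* destroy the mailbox */
    return 0;
}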
Indirect Communication
A mailbox can be made private to a single sender/receiver
pair, or shared between multiple sender/receiver pairs.

    send(A, message)
    receive(A, message)    /* where A is the ID of the mailbox */
CPU scheduling
 In uniprogramming systems like MS-DOS, when a
process waits for any I/O operation to complete, the CPU
remains idle. This is an overhead, since it wastes time
and causes the problem of starvation. In
multiprogramming systems, however, the CPU doesn't remain idle
during the waiting time of a process: it starts
executing other processes. The operating system has to define
which process the CPU will be given.
 In multiprogramming systems, the operating system
schedules the processes on the CPU so as to maximize its
utilization, and this procedure is called CPU
scheduling. The operating system uses various
scheduling algorithms to schedule the processes.
CPU scheduling
 CPU scheduling is a process which allows one process to
use the CPU while the execution of another process is on
hold (in the waiting state) due to the unavailability of some
resource like I/O etc., thereby making full use of the CPU. The aim of
CPU scheduling is to make the system efficient, fast and
fair.
 Whenever the CPU becomes idle, the operating system
must select one of the processes in the ready queue to be
executed. The selection process is carried out by the short-
term scheduler (or CPU scheduler). The scheduler selects
from among the processes in memory that are ready to
execute, and allocates the CPU to one of them.
CPU-I/O Burst Cycle
 Process execution consists of a cycle of CPU execution and
I/O wait.
 Processes alternate between these two states. Process
execution begins with a CPU burst, followed by
alternating I/O bursts and CPU bursts until
termination.
 CPU scheduling algorithms can therefore be selected according
to a process's CPU burst and I/O burst times.
CPU-I/O Burst Cycle
Dispatcher
 Another component involved in the CPU scheduling
function is the Dispatcher. The dispatcher is the module
that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to
restart that program from where it left off.
 The dispatcher should be as fast as possible, given that it
is invoked during every process switch. The time taken by
the dispatcher to stop one process and start another
process is known as the Dispatch Latency.
Types of CPU Scheduling
CPU scheduling decisions may take place under the
following four circumstances:
 When a process switches from the running state to
the waiting state (for I/O request or invocation of
wait for the termination of one of the child
processes).
 When a process switches from the running state to
the ready state (for example, when an interrupt
occurs).
 When a process switches from the waiting state to
the ready state (for example, completion of I/O).
 When a process terminates.
Non-Preemptive Scheduling

 Under non-preemptive scheduling, once the CPU has
been allocated to a process, the process keeps the CPU
until it releases it, either by terminating or by
switching to the waiting state.
Preemptive Scheduling
 In this type of scheduling, tasks are usually
assigned priorities.
 At times it is necessary to run a task with a
higher priority before another task, even though that
task is currently running.
 Therefore, the running task is interrupted for some
time and resumed later, when the higher-priority task has
finished its execution.
CPU Scheduling Decisions
Various Times Related to the Process
1. Arrival Time
 The time at which the process enters into the ready queue
is called the arrival time.

2. Burst Time
 The total amount of time the CPU needs to execute the
whole process is called the burst time. This does not
include waiting time. Since it is difficult to know a
process's execution time before it runs, scheduling
algorithms that depend on burst times are hard to
implement in practice.

3. Completion Time
 The Time at which the process enters into the completion
state or the time at which the process completes its
execution, is called completion time.
Various Times related to the Process
4. Turnaround Time
 The total amount of time spent by the process from its
arrival to its completion is called the turnaround time:
TAT = CT - AT.

5. Waiting Time
 The total amount of time for which the process waits for
the CPU to be assigned is called the waiting time:
WT = TAT - BT.

6. Response Time
 The difference between the arrival time and the time at
which the process first gets the CPU is called the
response time:

RT = First response - AT
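For instance (the values are illustrative): a process with AT = 0 and BT = 4
that completes at CT = 10 has TAT = 10 - 0 = 10 and WT = 10 - 4 = 6; if it
first got the CPU at time 2, its RT = 2 - 0 = 2.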
CPU Scheduling Criteria
There are many different criteria to check when considering
the "best" scheduling algorithm. They are:
 CPU Utilization
 To make the best use of the CPU and not waste any CPU cycle,
the CPU should be working most of the time (ideally 100% of the
time). In a real system, CPU usage should range from
40% (lightly loaded) to 90% (heavily loaded).
 Throughput
 The total number of processes completed per unit time, or rather
the total amount of work done in a unit of time. This may range
from 10 per second to 1 per hour, depending on the specific processes.
 Turnaround Time
 The amount of time taken to execute a particular process, i.e.,
the interval from the time of submission of the process to the time of
its completion (wall-clock time).
CPU Scheduling Criteria
 Waiting Time
The sum of the periods a process spends waiting in the
ready queue to acquire the CPU.
 Load Average
It is the average number of processes residing in the ready
queue waiting for their turn to get into the CPU.
 Response Time
Amount of time it takes from when a request was submitted
until the first response is produced. Remember, it is the
time till the first response and not the completion of process
execution(final response).
Scheduling Algorithms
The purpose of a scheduling algorithm:

 Maximum CPU utilization
 Fair allocation of CPU
 Maximum throughput
 Minimum turnaround time
 Minimum waiting time
 Minimum response time
Scheduling Algorithms

 First Come First Serve (FCFS) Scheduling
 Shortest-Job-First (SJF) Scheduling
 Priority Scheduling
 Round Robin (RR) Scheduling
 Multilevel Queue Scheduling
 Multilevel Feedback Queue Scheduling
First Come First Serve(FCFS) Scheduling

 In the "First come first serve" scheduling algorithm, as the


name suggests, the process which arrives first, gets executed
first, or we can say that the process which requests the CPU
first, gets the CPU allocated first.
 First Come First Serve, is just like FIFO(First in First out)
Queue data structure, where the data element which is
added to the queue first, is the one who leaves the queue
first.
 This is used in Batch Systems.
 It's easy to understand and implement programmatically,
using a Queue data structure, where a new process enters
through the tail of the queue, and the scheduler selects
process from the head of the queue.
 A perfect real life example of FCFS scheduling is buying
tickets at ticket counter.
Calculating Average Waiting Time
 For every scheduling algorithm, average waiting
time is a crucial parameter for judging its performance.
 AWT, or average waiting time, is the average of the
waiting times of the processes in the queue, i.e., how long
each waits for the scheduler to pick it for execution.
 The lower the average waiting time, the better the
scheduling algorithm.
For Example
 Let's take an example of the FCFS scheduling
algorithm. In the following schedule, there are 5
processes with process IDs P0, P1, P2, P3 and P4. P0
arrives at time 0, P1 at time 1, P2 at time 2, P3 at
time 3 and P4 at time 4 in the ready queue. The
processes and their respective arrival and burst times
are given in the accompanying table.
 The average waiting time is determined by summing
the respective waiting times of all the processes and
dividing the sum by the total number of processes.
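The slide's table is an image, so the burst times below are made-up
illustrative values; only the arrival times (0 to 4) come from the text. This
sketch computes the FCFS average waiting time:

#include <stdio.h>

int main(void)
{
    int at[] = {0, 1, 2, 3, 4};   /* arrival times from the slide text */
    int bt[] = {4, 3, 1, 2, 5};   /* hypothetical burst times */
    int n = 5, clock = 0;
    double total_wait = 0;
    for (int i = 0; i < n; i++) {             /* FCFS: serve in arrival order */
        if (clock < at[i])
            clock = at[i];                    /* CPU idle until the next arrival */
        total_wait += clock - at[i];          /* waiting time = start - arrival */
        clock += bt[i];                       /* run the process to completion */
    }
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}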
Advantages of FCFS
 Simple
 Easy
 First come, first served
Problems with FCFS Scheduling
 It is a non-preemptive algorithm, which means
process priority doesn't matter.
If a process with very low priority is being executed, such as
a daily routine backup process which takes a long time,
and all of a sudden some high-priority process
arrives, like an interrupt needed to avoid a system crash, the
high-priority process will have to wait; hence, in this case,
the system may crash, just because of improper process
scheduling.
 The average waiting time is not optimal.
 Using resources in parallel is not possible, which leads
to the Convoy Effect and hence poor resource (CPU, I/O etc.)
utilization.
 Due to the non-preemptive nature of the algorithm, the
problem of starvation may occur.
What is Convoy Effect?

 The Convoy Effect is a situation where many processes
that need to use a resource for a short time are blocked
by one process holding that resource for a long time.
 This essentially leads to poor utilization of resources
and hence poor performance.
Shortest Job First scheduling
 Shortest Job First scheduling selects the process with the
shortest burst time (duration) to run first.
 This is the best approach to minimize waiting time.
 It is used in batch systems.
 It is of two types:
 Non-preemptive
 Preemptive
 To implement it successfully, the burst time of each
process should be known to the processor in
advance, which is practically not feasible all the time.
 This scheduling algorithm is optimal if all the
jobs/processes are available at the same time (either the
arrival time is 0 for all, or the arrival time is the same for all).
Problem with SJF
 If the arrival times of processes differ, meaning all the
processes are not available in the ready queue at time 0
and some jobs arrive later, then sometimes a process with
a short burst time has to wait for the current process's
execution to finish. This is because in non-preemptive SJF,
on the arrival of a process with a short duration, the
existing job's execution is not halted to execute the
short job first.
 This leads to the problem of starvation, where a
shorter process has to wait for a long time until the
current longer process gets executed. This happens if
shorter jobs keep arriving, but it can be solved using
the concept of aging.
Non Pre-emptive Shortest Job First
Example
 In the following example, there are five jobs
named as P1, P2, P3, P4 and P5. Their arrival time
and burst time are given in the table below.
Shortest Remaining Time First (SRTF)
Scheduling Algorithm
 This algorithm is the preemptive version of SJF scheduling.
In SRTF, the execution of a process can be stopped after a
certain amount of time. At the arrival of every process, the short-term
scheduler schedules the process with the least remaining
burst time from among the list of available processes and the running
process.
 Once all the processes are available in the ready queue, no
further preemption is done and the algorithm works as SJF
scheduling. The context of a process is saved in its Process
Control Block when it is removed from execution
and the next process is scheduled; this PCB is accessed on the
next execution of the process.
 Example
 In this example, there are six jobs P1, P2, P3, P4, P5 and P6.
Their arrival times and burst times are given in the accompanying table.
For Example
Priority Based Scheduling
 Priority is assigned for each process.
 Process with highest priority is executed first and so on.
 Processes with same priority are executed in FCFS
manner.
 Priority can be decided based on memory requirements,
time requirements or any other resource requirement.
 Priority scheduling is a non-preemptive algorithm and
one of the most common scheduling algorithms in batch
systems.
For Example

Process    Wait Time = Service Time - Arrival Time
P0         9 - 0 = 9
P1         6 - 1 = 5
P2         14 - 2 = 12
P3         0

Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5


Disadvantage
 A major problem with priority scheduling
algorithms is indefinite blocking, or starvation: a
priority scheduling algorithm can leave some low-priority
processes waiting indefinitely for the CPU.
 Solution - Aging is the technique of gradually
increasing the priority of processes that wait
in the system for a long time.
Round Robin scheduling algorithm
 Round Robin is one of the most popular scheduling
algorithms, and it can actually be implemented in
most operating systems.
 It is the preemptive version of First Come First Serve
scheduling.
 The algorithm focuses on time sharing.
 In this algorithm, every process gets executed in a cyclic way.
 A certain time slice, called the time quantum, is
defined in the system.
 Each process present in the ready queue is assigned the CPU for
that time quantum. If the execution of the process completes
within that time, the process terminates; otherwise the
process goes back to the ready queue and waits for its next
turn to complete its execution.
Round Robin scheduling algorithm
Advantages

 It is actually implementable in a system, because it
does not depend on burst times.
 It doesn't suffer from the problems of starvation or the
convoy effect.
 All jobs get a fair allocation of CPU.

Disadvantages

 The higher the time quantum, the higher the response
time in the system.
 The lower the time quantum, the higher the context-switching
overhead in the system.
 Deciding a perfect time quantum is a genuinely
difficult task.
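A minimal Round Robin sketch (the burst times and the quantum are
illustrative; all processes are assumed to arrive at time 0 to keep the queue
handling short):

#include <stdio.h>

int main(void)
{
    int bt[]  = {5, 3, 8};                   /* hypothetical burst times */
    int rem[] = {5, 3, 8};                   /* remaining time per process */
    int finish[3];
    int n = 3, quantum = 2, clock = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle through the ready queue */
            if (rem[i] == 0)
                continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            clock += slice;                  /* run for at most one quantum */
            rem[i] -= slice;
            if (rem[i] == 0) {
                finish[i] = clock;           /* completion time */
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)              /* TAT = CT - AT, WT = TAT - BT */
        printf("P%d: turnaround = %d, waiting = %d\n",
               i, finish[i], finish[i] - bt[i]);
    return 0;
}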
For Example
Multilevel Queue Scheduling
 Another class of scheduling algorithms has been created for
situations in which processes are easily classified into different
groups.

 For example, a common division is made between
foreground (or interactive) processes and background (or
batch) processes. These two types of processes have different
response-time requirements, and so might have different
scheduling needs. In addition, foreground processes may have
priority over background processes.

 A multilevel queue scheduling algorithm partitions the
ready queue into several separate queues. The processes
are permanently assigned to one queue, generally based
on some property of the process, such as memory size,
process priority, or process type. Each queue has its own
scheduling algorithm.
Multilevel Queue Scheduling
 Multiple-level queues are not an independent scheduling
algorithm. They make use of other existing algorithms to
group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with
common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
 For example, CPU-bound jobs can be scheduled in one
queue and all I/O-bound jobs in another queue. The
Process Scheduler then alternately selects jobs from each
queue and assigns them to the CPU based on the algorithm
assigned to the queue.
Multilevel Queue Scheduling
 For example: separate queues might be used for
foreground and background processes. The foreground
queue might be scheduled by Round Robin algorithm,
while the background queue is scheduled by an FCFS
algorithm.

 In addition, there must be scheduling among the
queues, which is commonly implemented as fixed-priority
preemptive scheduling. For example, the
foreground queue may have absolute priority over the
background queue.
Multilevel Queue Scheduling
 Let us consider an example of a multilevel queue-
scheduling algorithm with five queues:
 System Processes
 Interactive Processes
 Interactive Editing Processes
 Batch Processes
 Student Processes
 Each queue has absolute priority over lower-priority
queues. No process in the batch queue, for example,
could run unless the queues for system processes,
interactive processes, and interactive editing processes
were all empty. If an interactive editing process entered
the ready queue while a batch process was running, the
batch process would be preempted.
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
 In a multilevel queue-scheduling algorithm, processes are
permanently assigned to a queue on entry to the system.
Processes do not move between queues. This setup has the
advantage of low scheduling overhead, but the
disadvantage of being inflexible.
 Multilevel feedback queue scheduling, however, allows a
process to move between queues. The idea is to separate
processes with different CPU-burst characteristics. If a
process uses too much CPU time, it will be moved to a
lower-priority queue. Similarly, a process that waits too
long in a lower-priority queue may be moved to a higher-
priority queue. This form of aging prevents starvation.
Multilevel Feedback Queue Scheduling

 In general, a multilevel feedback queue scheduler is
defined by the following parameters:
 The number of queues.
 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a
process to a higher-priority queue.
 The method used to determine when to demote a
process to a lower-priority queue.
 The method used to determine which queue a process
will enter when it needs service.
Multiple-Processor Scheduling
 In multiple-processor scheduling, multiple CPUs are
available and hence load sharing becomes possible.
 However, multiple-processor scheduling is
more complex than single-processor
scheduling.
 In multiple-processor scheduling there are cases where
the processors are identical, i.e., HOMOGENEOUS, in
terms of their functionality; then we can use any available
processor to run any process in the queue.
Approaches to Multiple-Processor
Scheduling
 One approach is for all scheduling decisions and I/O
processing to be handled by a single processor, called
the master server, while the other processors execute only
user code. This is simple and reduces the need for data
sharing. This scenario is called Asymmetric
Multiprocessing.
 A second approach uses Symmetric Multiprocessing (SMP), where
each processor is self-scheduling. All processes may be in a
common ready queue, or each processor may have its own
private queue of ready processes. Scheduling proceeds
by having the scheduler for each processor examine the
ready queue and select a process to execute.
Processor Affinity
 When a process runs on a specific processor there are certain
effects on the cache memory.
 The data most recently accessed by the process populate the
cache for the processor and as a result successive memory
accesses by the process are often satisfied in the cache memory.
 Now if the process migrates to another processor, the contents
of the cache memory must be invalidated for the first processor
and the cache for the second processor must be repopulated.
 Because of the high cost of invalidating and repopulating
caches, most SMP (symmetric multiprocessing) systems
try to avoid migrating processes from one processor to
another, and instead attempt to keep a process running on the same
processor. This is known as PROCESSOR AFFINITY.
Processor Affinity
 There are two types of processor affinity:
 Soft Affinity – When an operating system has a
policy of attempting to keep a process running on
the same processor but does not guarantee it will do
so, this situation is called soft affinity.
 Hard Affinity – Some systems, such as Linux, also
provide system calls that support hard affinity,
which allows a process to specify that it is not to
migrate to other processors.
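On Linux, hard affinity is exposed through sched_setaffinity(); a minimal
sketch (pinning to CPU 0 is an arbitrary illustrative choice):

#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                             /* allow only CPU 0 */
    if (sched_setaffinity(0, sizeof set, &set)) { /* pid 0 = calling process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0; the scheduler will not migrate this process\n");
    return 0;
}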
Load Balancing
 Load balancing is the practice of keeping the
workload evenly distributed across all processors in
an SMP system. Load balancing is necessary only on
systems where each processor has its own private queue of
processes eligible to execute; on systems with a common
run queue, load balancing is unnecessary, because once a
processor becomes idle it immediately extracts a runnable
process from the common run queue.
 On SMP (symmetric multiprocessing) systems, it is important to
keep the workload balanced among all processors to fully
utilize the benefits of having more than one processor.
Otherwise, one or more processors will sit idle while other
processors have high workloads, with lists of processes
awaiting the CPU.
Load Balancing
There are two general approaches to load balancing:
 Push Migration – In push migration, a specific task routinely
checks the load on each processor, and if it finds an
imbalance it evenly distributes the load by
moving processes from overloaded processors
to idle or less busy ones.
 Pull Migration – Pull migration occurs when an idle
processor pulls a waiting task from a busy processor for
its execution.
Process Synchronization
 Process synchronization means sharing system
resources among processes in such a way that
concurrent access to shared data is handled
properly, thereby minimizing the chance of inconsistent
data.
 Maintaining data consistency demands
mechanisms to ensure synchronized execution
of cooperating processes.
Process Synchronization
On the basis of synchronization, processes are categorized
as one of the following two types:
 Independent process: execution of one process does
not affect the execution of other processes.
 Cooperative process: execution of one process affects
the execution of other processes.
 The process synchronization problem arises in the case of
cooperative processes because resources are shared
among them.
Process Synchronization
 Process synchronization was introduced to handle
problems that arise when multiple processes execute
concurrently. Some of the problems are discussed below:
Cooperating Process
 When two or more processes cooperate with each other,
their order of execution must be preserved; otherwise there
can be conflicts in their execution and inappropriate
outputs can be produced.
 A cooperative process is one which can affect the
execution of another process or be affected by it. Such
processes need to be synchronized so that their order of
execution can be guaranteed.
Cooperating Process
 Cooperating processes may directly share a logical address
space (both code and data), or be allowed to share data only
through files.
 A logical address space is shared through the use of lightweight
processes called threads.
 Concurrent access to shared data may result in data
inconsistency, so process synchronization is required.
Race condition
 Let us discuss this problem with the help of an
example.
 Take the bounded buffer problem, where
there are two cooperating processes: a producer
process and a consumer process.
 These processes share the same variable
counter.
 This variable gets incremented when the producer
process produces an item and decremented when the
consumer process consumes one.
Producer Process Code

item nextProduced;

while (1) {
    /* busy-wait while the buffer is full */
    while (counter == buffer_size)
        ;
    buffer[in] = nextProduced;
    in = (in + 1) % buffer_size;
    counter++;
}

Consumer Process Code

item nextConsumed;

while (1) {
    /* busy-wait while the buffer is empty */
    while (counter == 0)
        ;
    nextConsumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
}
 Suppose the initial value of the variable counter is 5, and the
producer and consumer execute the statements
counter++ and counter-- concurrently.
 The concurrent execution can be done with an
interleaving of statements as follows:
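One such interleaving is the classical one (reconstructed here, since the
slide shows it as a figure); each of counter++ and counter-- is really a
load, an arithmetic step and a store:

T0: producer   register1 = counter          {register1 = 5}
T1: producer   register1 = register1 + 1    {register1 = 6}
T2: consumer   register2 = counter          {register2 = 5}
T3: consumer   register2 = register2 - 1    {register2 = 4}
T4: producer   counter = register1          {counter = 6}
T5: consumer   counter = register2          {counter = 4}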
 We have arrived at the incorrect state counter = 4,
recording that there are four full buffers, when in fact there
are five.
 We have arrived at this incorrect state because both
processes manipulated the variable counter concurrently.
Race Condition
 A situation like this, where several processes
access and manipulate the same data
concurrently and the outcome of execution
depends on the particular order in which the
accesses take place, is called a race condition.
 To guard against it, we require some form of
synchronization among processes.
Critical Section Problem

 A critical section is a code segment that can be accessed
by only one process at a time. A critical section contains
shared variables which need to be synchronized to
maintain the consistency of data.
 This means that in a group of cooperating processes, at a
given point in time, only one process must be
executing its critical section. If any other process also
wants to execute its critical section, it must wait until
the first one finishes.
Critical Section Problem
Solution to Critical Section Problem
A solution to the critical section problem must satisfy the following three
conditions:

 Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its
critical section at a given point of time.

 Progress
If no process is in its critical section, and one or more
processes want to execute their critical section, then one
of them must be allowed to enter it.

 Bounded Waiting
After a process makes a request for getting into its critical section, there
is a limit for how many other processes can get into their critical
section, before this process's request is granted. So after the limit is
reached, system must grant the process permission to get into its
critical section.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution
to the critical section problem.
In Peterson’s solution, we have two shared variables:
 boolean flag[i]: initialized to FALSE; initially no one is
interested in entering the critical section.
 int turn: the process whose turn it is to enter the critical
section.
Peterson’s Solution
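The slide presents the algorithm as a figure; a minimal sketch of the
classical two-process form, for process i with j = 1 - i, is:

int turn;        /* whose turn it is to enter */
int flag[2];     /* flag[i] = process i wants to enter */

void enter_critical(int i)
{
    int j = 1 - i;
    flag[i] = 1;                    /* I am interested */
    turn = j;                       /* but let the other go first */
    while (flag[j] && turn == j)
        ;                           /* busy wait */
}

void leave_critical(int i)
{
    flag[i] = 0;                    /* I am no longer interested */
}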
Peterson’s Solution
 Peterson’s Solution preserves all three conditions:
 Mutual exclusion is assured, as only one process can
access the critical section at any time.
 Progress is also assured, as a process outside the
critical section does not block other processes from
entering it.
 Bounded waiting is preserved, as every process gets a
fair chance.

Disadvantages of Peterson’s Solution
 It involves busy waiting.
 It is limited to 2 processes.
Synchronization Hardware
 Many systems provide hardware support for critical-section
code. The critical section problem could be solved easily in
a single-processor environment if we could prevent
interrupts from occurring while a shared variable or resource is
being modified.
 In this manner, we could be sure that the current sequence
of instructions would be allowed to execute in order
without preemption. Unfortunately, this solution is not
feasible in a multiprocessor environment.
 Disabling interrupts in a multiprocessor environment can
be time-consuming, as the message must be passed to all the
processors.
 This message-transmission lag delays entry of threads into
their critical sections, and system efficiency decreases.
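The hardware support alluded to here is typically an atomic test-and-set
instruction. A spinlock sketch built on C11's atomic_flag (an illustration,
not something the slides prescribe):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                       /* spin until the old value was clear */
}

void release(void)
{
    atomic_flag_clear(&lock);   /* atomically mark the lock free */
}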
Introduction to Semaphores
 In 1965, Dijkstra proposed a new and very significant
technique for managing concurrent processes, using
the value of a simple integer variable to synchronize
the progress of interacting processes. This integer
variable is called a semaphore.
 It is basically a synchronizing tool, accessed
only through two standard atomic
operations, wait and signal, designated
by P(S) and V(S) respectively.
Introduction to Semaphores
 In very simple words, a semaphore is a variable which
can hold only a non-negative integer value, shared
between all the threads, with the
operations wait and signal, which work as follows:

    P(S): if S >= 1 then S := S - 1
          else block the calling process on S's queue;

    V(S): if some process is blocked on S's queue
          then unblock one of them
          else S := S + 1;
Introduction to Semaphores

 The classical definitions of wait and signal are:
 Wait: decrements the value of its argument S as soon as
doing so would leave S non-negative (i.e., as soon as S is
greater than or equal to 1).
 Signal: increments the value of its argument S if no
process is blocked on the queue; otherwise it unblocks
one of the waiting processes.
Properties of Semaphores
 It is simple and always has a non-negative integer
value.
 It works with many processes.
 There can be many different critical sections with different
semaphores.
 Each critical section has unique access semaphores.
 It can permit multiple processes into the critical section
at once, if desirable.
Types of Semaphores
1) Binary Semaphores: These can take only the values 0 or 1.
They are also known as mutex locks, as the locks can
provide mutual exclusion. All the processes can share
the same mutex semaphore, which is initialized to 1.
A process then waits until the semaphore's value
becomes 1, sets it to 0, and enters its critical section.
When it completes its critical section, it resets the
value to 1, and some other process can enter its
critical section.
Types of Semaphores
2) Counting Semaphores: These can take any non-negative value
and are not restricted to a certain domain. They can be used to
control access to a resource that has a limit on the
number of simultaneous accesses.
 The semaphore can be initialized to the number of
instances of the resource. Whenever a process wants to use
that resource, it checks that the number of remaining
instances is more than zero, i.e., that an instance is
available.
 The process can then enter its critical section, thereby
decreasing the value of the counting semaphore by 1. After
the process has finished using the instance of the
resource, it leaves the critical section, thereby adding 1 to
the number of available instances of the resource.
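On Linux, counting semaphores are available through the POSIX API, where
wait maps to sem_wait and signal to sem_post. A sketch guarding a resource
with three instances (the count of 3 is illustrative); compile with -pthread:

#include <stdio.h>
#include <semaphore.h>

sem_t s;

void use_resource(void)
{
    sem_wait(&s);               /* P(S): take one instance (blocks at 0) */
    puts("using one instance");
    sem_post(&s);               /* V(S): return the instance */
}

int main(void)
{
    sem_init(&s, 0, 3);         /* initial value = number of instances */
    use_resource();
    sem_destroy(&s);
    return 0;
}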
Limitations of Semaphores

 Priority inversion is a big limitation of semaphores.
 Their use is not enforced, but is by convention only.
 With improper use, a process may block indefinitely.
Such a situation is called deadlock.
Classical Problems of Synchronization

Below are some of the classical problems depicting flaws of
process synchronization in systems where cooperating
processes are present:
 Bounded Buffer (Producer-Consumer) Problem
 The Readers-Writers Problem
 The Dining Philosophers Problem
Bounded Buffer (Producer-Consumer)
Problem
 The bounded buffer problem, also called the
producer-consumer problem, is one of the
classic problems of synchronization.
 The problem describes two processes, Producer and
Consumer, which share a common fixed-size buffer.
 Producer: the producer's job is to generate a piece of
data, put it into the buffer, and start again.
 Consumer: the consumer consumes the data (i.e.,
removes it from the buffer) one piece at a time.
Bounded Buffer (Producer-
Consumer) Problem
 If the buffer is empty, a consumer should not
try to take a data item from it.
 Similarly, a producer should not produce any data
item if the buffer is full.
 Counter: it counts the data items in the buffer,
tracking whether the buffer is empty or full. The counter is
shared between the two processes and updated by both.
How it works?
 The counter value is checked by the consumer before
consuming.
 If the counter is 1 or greater, the consumer starts
executing, consumes an item, and updates the counter.
 Similarly, the producer checks the value of the
counter before adding data to the buffer.
 If the counter is less than its maximum value, it means
that there is some space in the buffer.
 The producer then executes, produces a data item, and
updates the counter by incrementing it by one.
 Let max = the maximum size of the buffer.
 If the buffer is full, then counter = max, and the consumer is busy
executing other instructions or has not been allotted its
time slice yet.
In this case the buffer is full, so the producer has to wait until the
consumer makes room by decrementing the counter by 1.
In the opposite situation the buffer is empty, that is, counter = 0, and the
producer is busy executing other instructions or has not been
allotted its time slice yet, while the consumer is ready to consume an
item from the buffer.
 The consumer waits until counter = 1.
 When the buffer is empty and the producer is busy filling
data items into the buffer, the consumer goes to SLEEP.
When the counter reaches 1, the system generates a
WAKEUP call to make the consumer wake up and start
executing.
Solution for Bounded Buffer (Producer-
Consumer) Problem

One solution to this problem is to use semaphores. The
semaphores which will be used here are:

 S, a binary semaphore which is used to acquire and
release the lock.

 E, a counting semaphore whose initial value is the
number of slots in the buffer, since initially all slots
are empty.

 F, a counting semaphore whose initial value is 0,
since initially no slot is full.
For Producer

void producer() {
    while (true) {
        produce();        /* entry section */
        wait(E);          /* wait for an empty slot */
        wait(S);          /* acquire the lock */
        append();         /* critical section: insert into the buffer */
        signal(S);        /* exit section: release the lock */
        signal(F);        /* one more full slot */
    }
}
 Looking at the above code for the producer, we can see that the
producer first waits until there is at least one empty slot.
 It decrements the empty semaphore with wait(E),
because there will now be one less empty slot, since the
producer is going to insert data into one of those slots.
 Then it acquires the lock on the buffer with wait(S), so that
the consumer cannot access the buffer until the producer
completes its operation.
 After performing the insert operation, the lock is released
and the value of full is incremented with signal(F),
because the producer has just filled a slot in the buffer.
For Consumer

void consumer() {
    while (true) {
        wait(F);          /* entry section: wait for a full slot */
        wait(S);          /* acquire the lock */
        take();           /* critical section: remove from the buffer */
        signal(S);        /* exit section: release the lock */
        signal(E);        /* one more empty slot */
        use();            /* consume outside the critical section */
    }
}
 The consumer waits until there is at least one full slot in the
buffer.
 It decrements the full semaphore with wait(F),
because the number of occupied slots decreases by
one once the consumer completes its operation.
 The consumer then acquires the lock on the buffer with
wait(S).
 Following that, the consumer completes the removal
operation, so that the data in one of the full slots is
removed.
 Then the consumer releases the lock with signal(S).
 Finally, the empty semaphore is incremented by 1 with
signal(E), because the consumer has just removed data
from an occupied slot, thus making it empty.
Readers-Writers Problem

 There is a shared resource which may be accessed
by multiple processes.
 There are two types of processes: readers and writers.
 Any number of readers can read from the shared
resource simultaneously, but only one writer can write
to it at a time.
 When a writer is writing data to the resource, no
other process can access the resource. A writer cannot
write to the resource if a non-zero number of
readers is accessing the resource at that time.
Readers-Writers Problem

 If we have shared memory or resources, then readers and
writers can create conflicts in these combinations:
writer-writer and writer-reader accessing the critical section
simultaneously, which creates synchronization problems,
loss of data, etc.
 We use semaphores to avoid these problems in the
readers-writers problem.
Readers-Writers Problem

 We follow some conventions, using semaphores, to solve
the readers-writers problem:
 A reader and a writer cannot be in the critical section at
the same time.
 Multiple readers can access the critical section
simultaneously, but multiple writers cannot.
Solution of the Readers-Writers Problem
To solve this problem we use two semaphores and a shared counter:
 wrt, a semaphore common to readers and writers
 mutex, a semaphore used to synchronize the readers
 readcount, an integer counting how many readers are in
the critical section
For Writer

wait(wrt);
/* write operation */
signal(wrt);

For Reader

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);        /* first reader locks out writers */
signal(mutex);

/* read operation */

wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);      /* last reader lets writers back in */
signal(mutex);
 It is the responsibility of the first reader, via wait(wrt),
to ensure that no writer can enter the critical section
while a reader is already there, as that would result in
a conflict.
 The signal(wrt) in the reader's code is executed by the
last reader, so that a writer can enter the critical section
when it is empty.
 wait(mutex) and signal(mutex) are used to make
sure that the readcount variable is updated by one
reader at a time.
The Dining Philosopher Problem
 The Dining Philosopher Problem states that K
philosophers are seated around a circular table with
one chopstick between each pair of philosophers.
 There is one chopstick between each pair of adjacent
philosophers.
 A philosopher may eat if he can pick up the two
chopsticks adjacent to him.
 One chopstick may be picked up by either of its
adjacent philosophers, but not both.
The Dining Philosopher Problem
 There are three states of philosopher : THINKING,
HUNGRY and EATING.
 Here there are two semaphores : Mutex and a
semaphore array for the philosophers.
 Mutex is used such that no two philosophers may
access the pickup or putdown at the same time.
Mutex Solution
void philosopher(int i) {      /* i is the philosopher's number */
    while (true) {
        think();
        take_chopstick(R[i]);  /* pick up the right chopstick */
        take_chopstick(L[i]);  /* pick up the left chopstick  */
        eat();
        put_chopstick(L[i]);
        put_chopstick(R[i]);
    }
}
Semaphore solution
void philosopher(int i) {
    while (true) {
        think();
        take_chopstick(R[i]);      /* pick up the right chopstick */
        if (available(L[i])) {
            take_chopstick(L[i]);  /* pick up the left chopstick too */
            eat();
            put_chopstick(R[i]);
            put_chopstick(L[i]);
        }
        else {
            put_chopstick(R[i]);   /* give up and retry later */
            sleep(T);
        }
    }
}
Semaphore solution
The Dining Philosopher Problem
 Allow at most four philosophers to be sitting
simultaneously at the table.
 Allow a philosopher to pick up his/her chopsticks only
if both chopsticks are available.
 Use an asymmetric solution; that is, an odd
philosopher picks up first his/her left chopstick and
then his/her right chopstick, whereas an even
philosopher picks up his/her right chopstick first and then
his/her left chopstick.
Head command in Linux
 The head command, as the name implies, prints the first
N lines of the given input. By default, it
prints the first 10 lines of the specified files. If more
than one file name is provided, the data from each file
is preceded by its file name.
Tail command in Linux
 The tail command, as the name implies, prints the last
N lines of the given input. By default it
prints the last 10 lines of the specified files. If more
than one file name is provided, the data from each file
is preceded by its file name.
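For instance (the file names are illustrative):

    head file.txt          # first 10 lines (the default)
    head -n 5 file.txt     # first 5 lines
    tail -n 20 log.txt     # last 20 lines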
wc command in Linux

 wc stands for word count. As the name implies, it is


mainly used for counting purpose.
 It is used to find out number of lines, word
count, byte and characters count in the files
specified in the file arguments.
 By default it displays four-columnar output.
 First column shows number of lines present in a file
specified, second column shows number of words
present in the file, third column shows number of
characters present in file and fourth column itself is
the file name which are given as argument.
wc command in Linux: Options
 -l: number of lines in the file
 -w: number of words
 -c: number of bytes
 -m: number of characters
 -L: length of the longest line (in characters)
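For instance (the file name is illustrative):

    wc report.txt          # lines, words, bytes, file name
    wc -l report.txt       # number of lines only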
tee command in Linux
 The tee command reads standard input and writes it
to both standard output and one or more files.
 It basically splits the output of a program so that it
can be both displayed and saved in a file.
 It does both tasks simultaneously: it copies the result
into the specified files or variables and also displays it.
 -a option: do not overwrite the file, but append
to it.
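For instance (the file name is illustrative):

    ls -l | tee listing.txt       # display the listing and save it
    ls -l | tee -a listing.txt    # append instead of overwriting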
