Chapter 2 & 3
Processes
• Note that only one process can be running on the CPU at a time, but many
processes may be ready and waiting.
Process Control Block
• Each process is represented in the OS by a process control block (PCB), also
called a task control block. It contains many pieces of information
associated with a specific process, including these:
• Process state: New, Ready, Running, Waiting, or Terminated.
• Program counter: The address of the next instruction to execute.
• CPU registers: The contents of the various CPU registers.
• CPU-scheduling information: Includes the process priority and pointers to scheduling
queues.
• Memory-management information: Includes information such as the value of the limit
register and the page tables.
• Accounting information: Includes CPU time used and the process number.
• I/O status information: Includes the I/O devices allocated to the process, a list of open
files, and so on.
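As a rough illustration only, this information can be pictured as a C structure. The field
names and sizes below are hypothetical and far simpler than a real kernel's PCB (Linux's
task_struct, for instance, holds many more fields).

```c
/* Hypothetical, heavily simplified sketch of a process control block. */
#include <stdint.h>
#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    pid_t           pid;             /* process number (accounting info)        */
    enum proc_state state;           /* New, Ready, Running, Waiting, ...       */
    uint64_t        program_counter; /* address of the next instruction         */
    uint64_t        registers[16];   /* saved general-purpose CPU registers     */
    int             priority;        /* CPU-scheduling information              */
    struct pcb     *next_in_queue;   /* link used by the scheduling queues      */
    uint64_t        limit;           /* memory-management info (limit register) */
    uint64_t        cpu_time_used;   /* accounting information                  */
    int             open_files[16];  /* I/O status info: open file descriptors  */
};
```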
Process Scheduling
• Multiprogramming – running more than one program on a computer at a
time. The ability to multiprogram is one of the most important aspects of an
OS. Multiprogramming increases CPU utilization by organizing jobs so that
the CPU always has one to execute. The OS keeps several jobs in memory
simultaneously, picks one of them, and begins to execute it. Eventually, the
job may have to wait for some task (such as an I/O operation). In a
multiprogrammed system, the OS simply switches to another job and
executes it.
• To meet these objectives, the process scheduler selects an available
process for program execution.
• For scheduling purposes, queues (scheduling queues) are used to hold the
processes and their information.
Cont.
Scheduling Queues
Job queue: As processes enter the system, they
are put into the job queue.
Ready queue: Processes that are ready
and waiting to execute are kept on a list
called the ready queue.
Device queue: Contains the list of processes
waiting for a particular I/O device; also
called an I/O queue.
(Figure: process scheduling)
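As a small illustration, a scheduling queue can be kept as a linked list of PCB-like nodes:
the OS enqueues a process when it becomes ready and dequeues the next one to run. The C
sketch below is simplified and hypothetical; real kernels use more elaborate structures.

```c
#include <stdio.h>
#include <stddef.h>

struct proc {
    int          pid;   /* process number */
    struct proc *next;  /* link used while the process waits in a queue */
};

struct queue {
    struct proc *head, *tail;
};

/* Put a process at the tail of a scheduling queue (ready or device queue). */
void enqueue(struct queue *q, struct proc *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Take the process at the head of the queue; NULL if the queue is empty. */
struct proc *dequeue(struct queue *q) {
    struct proc *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct queue ready = {NULL, NULL};
    struct proc p1 = {1, NULL}, p2 = {2, NULL};
    enqueue(&ready, &p1);
    enqueue(&ready, &p2);
    printf("next to run: P%d\n", dequeue(&ready)->pid);  /* prints P1 */
    printf("next to run: P%d\n", dequeue(&ready)->pid);  /* prints P2 */
    return 0;
}
```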
Cont.
Schedulers
• A process migrates among the various scheduling queues throughout its
lifetime. The OS must select processes from these queues in some fashion;
the selection is carried out by the appropriate scheduler.
• Long-term scheduler (job scheduler) – selects processes from the pool on
disk and loads them into memory for execution. It executes infrequently,
only when a process leaves the system or free memory becomes available.
It controls the degree of multiprogramming – the number of processes in
memory.
• Short-term scheduler (CPU scheduler) – selects from among the
processes that are ready to execute and allocates the CPU to one of them.
It selects a process frequently (a toy simulation follows below).
• The primary distinction between the two lies in their frequency of execution.
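To make "selects frequently" concrete, here is a toy simulation of a short-term scheduler in
C. The round-robin policy and the fixed set of three processes are purely illustrative
assumptions; the slides do not prescribe any particular algorithm.

```c
/* Toy short-term scheduler: repeatedly pick the next ready "process" and give
 * it one time slice, until every process has finished. Illustration only. */
#include <stdio.h>

int main(void) {
    int remaining[3] = {3, 1, 2};   /* time slices still needed per process */
    int done = 0;

    while (done < 3) {
        for (int p = 0; p < 3; p++) {        /* scan the ready processes */
            if (remaining[p] > 0) {
                printf("dispatch P%d\n", p); /* allocate the CPU for one slice */
                if (--remaining[p] == 0)
                    done++;                  /* the process terminates */
            }
        }
    }
    return 0;
}
```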
Cont.
Context Switching
• Interrupts cause the OS to change the CPU from its current task and to
run a new task. When an interrupt occurs, the system needs to save the
current context of the running process so that it can restore that
context when the process resumes. The context, as represented in the PCB,
includes the values of the CPU registers, the process state, and
memory-management information.
• Switching the CPU to another process requires performing a state
save of the current process and a state restore of a different process.
This task is known as a context switch. Context-switch time is
pure overhead, because the system does no useful work while switching.
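The real state save and restore happen inside the kernel, but the idea can be sketched in
user space with the legacy POSIX ucontext calls (getcontext, makecontext, swapcontext),
which save one register context and resume another. Treat this as an analogy, not as
kernel code.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

/* Runs with its own stack and register context. */
static void task(void) {
    puts("task: running after the context switch");
}   /* returning resumes uc_link, i.e. the saved main context */

int main(void) {
    getcontext(&task_ctx);                        /* start from the current context   */
    task_ctx.uc_stack.ss_sp   = task_stack;       /* give the task its own stack      */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;        /* where to continue when task ends */
    makecontext(&task_ctx, task, 0);

    puts("main: saving my context, restoring the task's");
    swapcontext(&main_ctx, &task_ctx);            /* state save + state restore       */
    puts("main: my context was restored");
    return 0;
}
```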
Interprocess Communication(IPC)
• Processes executing concurrently in the operating system may be
either independent processes or cooperating processes. A process is
independent if it cannot affect or be affected by the other processes
executing in the system; a process that does not share data with
any other process is independent. A process is cooperating if it can
affect or be affected by the other processes.
• Reasons for process communication:
• Information sharing
• Computation speedup
• Modularity
• Cooperating processes require an IPC mechanism that will allow them
to exchange data and information. There are two fundamental
models of IPC: shared memory and message passing.
Cont.
• Shared-memory model – processes exchange information by
reading and writing data in a region of memory that they share.
• Message-passing model – communication takes place by means of
messages exchanged between the cooperating processes. This
model is useful for exchanging smaller amounts of data and is also
easier to implement in a distributed system than shared memory.
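A minimal sketch of the message-passing model, assuming a POSIX system: a parent and a
child process created with fork() exchange a message through a pipe rather than through a
shared region of memory.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;    /* fd[0] = read end, fd[1] = write end */

    pid_t pid = fork();
    if (pid == 0) {                  /* child process: the "sender"   */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                    /* parent process: the "receiver" */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                      /* reap the child */
    return 0;
}
```

For the shared-memory model, the corresponding POSIX route would be shm_open() and mmap(),
with both processes reading and writing the mapped region directly.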
Chapter 3
Threads
Cont.
• The implementation of threads and processes differs from one OS to
another, but in most cases a thread is contained inside a process.
• Multiple threads can exist within the same process and share
resources such as memory, while different processes do not share
these resources.
• Like processes, threads have states:
• New, Ready, Running, Waiting, and Terminated.
• Like processes, the OS switches between threads for CPU usage.
• Creating threads is inexpensive compared to creating processes (see the
pthreads sketch below):
• They don't need a new address space, program code, or OS resources.
• Context switching is faster, as the only things to save/restore are the program
counter, registers, and stacks.
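For example, on a POSIX system new threads can be created with pthread_create(); both
threads below run inside the same process and share its address space. This is a minimal
sketch (compile with -pthread).

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;                                   /* the thread terminates here */
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L); /* cheap compared to fork()   */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);                        /* wait for both to finish    */
    pthread_join(t2, NULL);
    return 0;
}
```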
Processes and Threads
Similarities:
- Both share the CPU, and only one thread/process is active (running) at a time.
- Like processes, threads within a process execute sequentially.
- Like processes, a thread can create child threads.
- Like processes, if one thread is blocked, another thread can run.
Differences:
- Unlike processes, threads are not independent of one another.
- Unlike processes, threads are designed to assist one another.
Examples of Threads
In a word processor
o A background thread may check spelling and grammar, while a
foreground thread processes user input (keystrokes), while yet a
third thread loads images from the hard drive, and a fourth does
periodic automatic backups of the file being edited.
In a spreadsheet program
o One thread could display menus and read user input, while another
thread executes user commands and updates the spreadsheet.
In a web browser
o Multiple threads allow multiple requests to be satisfied
simultaneously, without having to service requests sequentially or to
fork off separate processes for every incoming request.
Multithreading
• Multitasking is the ability of an OS to execute more than one program
at a time. In reality, on a single-processor machine no two programs
execute at exactly the same instant; the CPU switches rapidly between them.
• Multiprocessing is the ability to execute more than one program using
multiple processors.
• Multithreading refers to the ability of an OS to support multiple threads of
execution within a single process.
• If a process has multiple threads of control, it can perform more than one
task at a time:
• Each thread has its own program counter, stack, and registers.
• But the threads share common code, data, and some OS data structures such as open
files (see the sketch after this list).
• Multithreading is thus the ability of an OS to execute different parts of a
program, called threads, simultaneously.
• Implementing multitasking is relatively easier than implementing multithreading.
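The sketch below, assuming POSIX threads, shows this sharing: each thread runs on its own
stack with its own registers, but both update the same global counter. A mutex is used here
only so the concurrent updates do not race (synchronization itself is a later topic).

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared data of the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                              /* `counter` lives in memory both
                                                   threads share; only the stacks
                                                   and registers are private     */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, add_many, NULL);
    pthread_create(&b, NULL, add_many, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);         /* always 200000 */
    return 0;
}
```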
Cont.
(Figure: instruction trace)
Benefits of Multithreading
• There are four major benefits of multithreading:
• Responsiveness
• One thread can keep responding to the user while other threads are blocked or slowed
down doing computations.
• Resource sharing
• Threads share the common code, data, and resources of the process to which they belong.
This allows multiple tasks to be performed within the same address space.
• Economy
• Creating and allocating processes is expensive, while creating threads is cheaper because
they share the resources of the process to which they belong. Hence, it is more economical
to create and context-switch threads.
• Scalability / utilization of multiprocessor architectures
• The benefits of multithreading are increased in a multiprocessor architecture, where
threads can execute in parallel. A single-threaded process can run on only one CPU, no
matter how many are available.
Multithreading models
Threads are supported at two levels in a modern OS:
• 1. Kernel threads
• Are supported and managed by the OS kernel itself.
• Need a user/kernel mode switch to change threads.
• 2. User threads
• Are threads that application programmers put in their programs; they are managed
without kernel support.
• On their own, they cannot take advantage of multiprocessing.
• There are three types of relationships (multithreading models) between user and
kernel threads:
• Many-to-one – maps many user-level threads to one kernel thread.
• One-to-one – a separate kernel thread is created to handle each user-level thread.
• Many-to-many – allows many user-level threads to be mapped to many kernel threads.