OS UNIT-1
• Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.
• The operating system preserves the state of the CPU by storing registers and the program counter.
• Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.
• Data processing jobs are distributed among the processors accordingly.
• The processors communicate with one another through various communication lines.

Functions of an operating System
1. Memory Management
2. Device Management
3. Process Management
4. Storage Management
5. Resource Management

Memory Management
• In a computer system we have primary memory, also called main memory, which is made up of a sequence of bytes.
• Each byte in the main memory has a unique address, which helps the operating system check whether it is occupied or not.
• To execute, each program must be brought into the main memory.
• Operating systems can easily check which bytes of main memory are empty and which are not.
• So the operating system allocates main memory for program execution, and when the program is completed or terminated it deallocates the memory.
• Operating systems also keep a record of which bytes of memory are assigned to which program (a toy sketch of this bookkeeping follows below).
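The bookkeeping described above can be pictured with a toy C sketch; the fixed block size, the bitmap approach and all names here are illustrative assumptions, not something the notes specify. The OS records which blocks of main memory are free, marks blocks as occupied when a program is loaded, and clears them again when the program terminates.

/* Toy sketch: tracking free and allocated blocks of main memory with a bitmap. */
#include <stdio.h>
#include <string.h>

#define NUM_BLOCKS 64                    /* pretend main memory is 64 blocks  */

static unsigned char used[NUM_BLOCKS];   /* 0 = free, 1 = allocated           */

/* Find 'count' consecutive free blocks, mark them used, return the first index. */
int allocate_blocks(int count) {
    for (int start = 0; start + count <= NUM_BLOCKS; start++) {
        int run = 0;
        while (run < count && !used[start + run])
            run++;
        if (run == count) {                 /* found a free run               */
            memset(&used[start], 1, count); /* record: these blocks are taken */
            return start;
        }
    }
    return -1;                              /* no room: allocation fails      */
}

/* When the program terminates, its blocks are handed back (deallocated). */
void free_blocks(int start, int count) {
    memset(&used[start], 0, count);
}

int main(void) {
    int a = allocate_blocks(10);     /* "load" a program needing 10 blocks    */
    int b = allocate_blocks(5);      /* "load" a second, smaller program      */
    printf("program A at block %d, program B at block %d\n", a, b);
    free_blocks(a, 10);              /* program A terminates                  */
    return 0;
}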
Device Management
• The operating system manages the communication between the computer system and the peripheral devices connected to the I/O ports of the system.
• Peripheral devices use their respective drivers to communicate with the system.
• The operating system determines which program or process accesses which connection and device.
• It also makes sure that when a program is executed or terminated, the communication between the device and the computer system is stopped.

Process Management
• A process is a program under execution.
• The operating system manages all the processes so that each process gets the CPU for a specific time to execute itself, and there will be less waiting time for each process.
• This management is also called process scheduling.

Storage Management
• Storage management is a procedure that allows users to maximize the utilization of storage devices while also protecting data.
• The operating system is in charge of storing and accessing files.
• The creation of files, the creation of directories, the reading and writing of data from files and directories, as well as the copying of the contents of files and directories from one location to another, are all included in storage management.

Resource Management
• Resource management in an operating system is the process of managing all the resources efficiently, such as the CPU, memory, input/output devices, and other hardware resources.
• Resource management is important because the resources of a computer are limited, and multiple processes or users may require access to the same resources, like the CPU and memory, at the same time.
• The operating system has to manage and ensure that all processes get the resources they need to execute, without any problems.

Security and Protection
• The operating system uses password protection and similar other techniques to protect user data.
• It also prevents unauthorized access to programs and user data. The operating system provides various techniques which assure the integrity and confidentiality of user data.
• The following security measures are used to protect user data:
• Protection against unauthorized access through login.
• Protection against intrusion by keeping the firewall active.
• Protecting the system memory against malicious access.
System Calls
“System calls provide the interface between a process and the OS.” They are usually available in the form of assembly-language instructions.
System calls can be roughly grouped into 5 major categories:
1. Process Control
2. File Manipulation
3. Device Management
4. Information Maintenance
5. Communication

1. Process Control:
• End, abort: A running program needs to be able to halt its execution either normally (end) or abnormally (abort).
• Load, execute: A process or job executing one program may want to load and execute another program.
• Create process, terminate process: There is a system call specifically for the purpose of creating a new process. We may also want to terminate a job or process that we created.
• Get process attributes, set process attributes: If we create a new job or process, we should be able to control its execution. This control requires the ability to determine and reset the attributes of the job.
• Wait time: After creating new jobs or processes, we may need to wait for them to finish their execution (wait time).
• Wait event, signal event: We may wait for a specific event to occur (wait event). The jobs or processes then signal when that event has occurred (signal event).

2. File Manipulation:
• Create file, delete file: We first need to be able to create and delete files. Both system calls require the name of the file and some of its attributes.
• Open file, close file: Once the file is created, we need to open it and use it. We close the file when we are no longer using it.
• Read, write, reposition file: After opening a file, we may also read, write or reposition it (rewind or skip to the end of the file, for example).
• Get file attributes, set file attributes: For either files or directories, we need to be able to determine the values of various attributes and reset them if necessary.
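For a concrete picture of the process-control and file-manipulation categories, the sketch below uses POSIX system calls (open, write, close, fork, execvp, waitpid). POSIX is an assumption made here for illustration; the notes themselves do not name a particular system-call interface.

/* Sketch: POSIX system calls for process control and file manipulation. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* File manipulation: create/open a file, write to it, close it. */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }
    write(fd, "hello\n", 6);
    close(fd);

    /* Process control: create a new process and load another program into it. */
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: "load, execute" another program (here: ls). */
        char *argv[] = { "ls", "-l", "demo.txt", NULL };
        execvp("ls", argv);
        perror("execvp");        /* reached only if exec fails */
        _exit(1);                /* "abort" the child          */
    }
    /* Parent: "wait" for the child to finish, then "end" normally. */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}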
3. Device Management:
• Request device, release device: If there are multiple users of the system, we first request the device. After we are finished with the device, we must release it.
• Read, write, reposition: Once the device has been requested and allocated to us, we can read, write and reposition the device.

4. Information Maintenance:
• Get time or date, set time or date: Most systems have a system call to return the current date and time or to set the current date and time.
• Get system data, set system data: Other system calls may return information about the system, such as the number of current users, the version number of the OS, the amount of free memory, etc.
• Get process attributes, set process attributes: The OS keeps information about all its processes, and there are system calls to access this information.

5. Communication:
There are two modes of communication:
• Message-passing model: Information is exchanged through an inter-process communication facility provided by the operating system. The source of the communication, known as the client, and the receiver, known as the server, exchange messages by read-message and write-message system calls.
• Shared-memory model: Processes use map-memory system calls to access regions of memory owned by other processes. They exchange information by reading and writing data in the shared areas.

Operating Systems Structures
• Operating systems have complex structures.
• We want a structure that is easy to understand so that we can adapt an operating system to meet our specific needs.
• Similar to how we break down larger problems into smaller, more manageable subproblems, building an operating system in pieces is simpler.
• Each of these pieces is then a component of the operating system.
• The strategy for integrating the different operating system components within the kernel can be thought of as an operating system structure:
1. Simple structure
2. Monolithic structure
3. Layered structure
4. Micro-kernel structure
5. Modular structure
Simple structure
• Such operating systems do not have well-defined structures and are small, simple, and limited. The interfaces and levels of functionality are not well separated.
• MS-DOS is an example of such an operating system.
• In MS-DOS, application programs are able to access the basic I/O routines.
• These types of operating systems cause the entire system to crash if one of the user programs fails.
Advantages:
• It delivers better application performance because of the few interfaces between the application program and the hardware.
• It is easy for kernel developers to develop such an operating system.
Disadvantages:
• The structure is very complicated, as no clear boundaries exist between modules.
• It does not enforce data hiding in the operating system.

Monolithic structure
• The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process management.
• The kernel is the core of a computer's operating system (OS).
• All other system components are provided with fundamental services by the kernel.
• The operating system and the hardware use it as their main interface.
• Because the operating system is built as a single piece, the kernel can directly access all of the system's resources, such as a keyboard or mouse.
Advantages:
• Because layering is unnecessary and the kernel alone is responsible for managing all operations, it is easy to design and execute.
• Due to the fact that functions like memory management, file management, process scheduling, etc., are implemented in the same address space, the monolithic kernel runs rather quickly.
• Utilizing the same address space speeds up and reduces the time required for address allocation for new processes.
Disadvantages:
• The monolithic kernel's services are interconnected in address space and have an impact on one another, so if any of them malfunctions, the entire system does as well.
• It is not adaptable; therefore, launching a new service is difficult.

Layered structure
• An OS can be broken into pieces and retain much more control over the system.
• In this structure, the OS is broken into a number of layers (levels).
• The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface.
• These layers are designed so that each layer uses only the functions of the lower-level layers.
• This simplifies the debugging process: if the lower-level layers have already been debugged and an error occurs, then the error must be on the layer being debugged, as the lower-level layers have already been verified.
Advantages
• Layering makes it easier to enhance the operating system, as the implementation of a layer can be changed easily without affecting the other layers.
• It is very easy to perform debugging and system verification.
Disadvantages
• In this structure, the application's performance is degraded as compared to the simple structure.
• It requires careful planning when designing the layers, as a higher layer may use the functionalities of only the lower layers.

Micro-kernel structure
• This structure designs the operating system by removing all non-essential components from the kernel.
• This results in a smaller kernel called the micro-kernel.
• An advantage of this structure is that all new services are added to user space and do not require the kernel to be modified.
• Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched.
Advantages:
• It makes the operating system portable to various platforms.
• As microkernels are small, they can be tested effectively.
Disadvantages:
• The increased level of inter-module communication degrades system performance.

Modular structure
• The kernel has only a set of core components; other services are added as dynamically loadable modules to the kernel, either at boot time or at runtime (a sketch of such a module is given below).
• It resembles the layered structure, in that each kernel module has defined and protected interfaces, but it is more flexible than a layered structure because a module can call any other module.
Advantages
• Efficient
• Easy to maintain
• Easy to debug and update
Disadvantages
• It may lose stability
• Less security
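The "dynamically loadable module" idea can be illustrated with a minimal Linux kernel module skeleton. Linux and its module_init/module_exit interface are assumptions made only for this sketch; the module does nothing beyond logging a message when it is loaded and unloaded.

/* Minimal Linux loadable kernel module: added to the running kernel with
 * insmod and removed with rmmod, without rebuilding the kernel itself.   */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Toy module illustrating the modular kernel structure");

static int __init toy_init(void)
{
    pr_info("toy module loaded\n");   /* runs when the module is inserted */
    return 0;
}

static void __exit toy_exit(void)
{
    pr_info("toy module removed\n");  /* runs when the module is removed  */
}

module_init(toy_init);
module_exit(toy_exit);

Such a module is built against the kernel headers and then inserted and removed at runtime, which is exactly the flexibility the modular structure is meant to provide.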
UNIT 1 – Chapter 2
Process Management

Process
“A process is basically a program in execution.” The execution of a process must progress in a sequential fashion.
A process defines the fundamental unit of computation for the computer. Its components are:
• Object: the program code to be executed.
• Data: the data being used in the execution of the program.
• Resources: the resources the program may require while executing.
• Status of the process execution: used to verify the status of program execution.

1 Start
This is the initial state, when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4 Waiting
The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates. A PCB keeps all the information needed to keep track of a process, as listed below (a schematic C sketch follows the list):
6 CPU registers
The various CPU registers in which the state of the process is stored for execution in the running state.
8 Memory management information
This includes the information of the page table, memory limits and segment table, depending on the memory used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.
10 IO status information
This includes a list of I/O devices allocated to the process.
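As an illustration, the PCB can be pictured as a C structure. The sketch below is schematic; the field names and sizes are assumptions, but each field corresponds to one kind of information listed above.

/* Illustrative sketch of a Process Control Block (PCB). */
#include <stdint.h>

enum proc_state { STATE_START, STATE_READY, STATE_RUNNING,
                  STATE_WAITING, STATE_TERMINATED };

struct pcb {
    int              pid;              /* process identifier                 */
    enum proc_state  state;            /* current process state              */
    uint64_t         program_counter;  /* address of the next instruction    */
    uint64_t         registers[16];    /* saved CPU registers                */
    int              priority;         /* scheduling information             */
    void            *page_table;       /* memory-management information      */
    uint64_t         cpu_time_used;    /* accounting information             */
    int              open_devices[8];  /* I/O status information             */
    struct pcb      *next;             /* link for the ready/waiting queues  */
};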
The OS scheduler determines how to move processes between the ready and run queues, which can have only one entry per processor core on the system; in the above diagram, it has been merged with the CPU.

Two-State Process Model
The two-state process model refers to the running and non-running states, which are described below.
“Their main task is to select the jobs to be submitted into the system and to decide which process to run.”
Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
The medium-term scheduler handles swapped-out processes: to remove a suspended process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out.

Comparison among Schedulers
3. The operating system does not allow a child to continue if its parent terminates.

Co-operating Processes
“A co-operating process is a process that can affect or be affected by the other processes while executing. If any process is sharing data with other processes, it is called a co-operating process.”
The benefits of co-operating processes are:
1. Sharing of information
2. Increased computation speed
3. Modularity
4. Convenience
• Co-operating processes share information, such as a file or memory. The system must provide an environment that allows concurrent access to these types of resources.
• Computation speed will increase if the computer has multiple processing elements connected together.
• The system is constructed in a modular fashion; the system functions are divided into a number of modules.
• The behaviour of co-operating processes is nondeterministic, i.e. it depends on the relative execution sequence. For example, if Process 1 executes printf("abc") and Process 2 executes printf("CBA"), the interleaved output may be CBAabc, abCcBA, abcCBA, and so on.

A process that can affect or be affected by the other processes executing in the system is a cooperating process. There are several reasons for providing an environment that allows process cooperation:
Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel. Co-operating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.
There are two fundamental models of interprocess communication: (1) shared memory and (2) message passing.
In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
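A small sketch of the shared-memory model, assuming POSIX mmap and fork (the notes do not prescribe a particular interface): a system call is needed once to establish the shared region, after which the two processes exchange data through ordinary memory reads and writes.

/* Shared-memory IPC sketch: parent and child share one anonymous region. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* System call needed only to establish the shared region. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {
        /* Child: write into the shared region with a plain memory store. */
        strcpy(shared, "message written by the child");
        return 0;
    }
    wait(NULL);                           /* crude synchronisation: wait for the child */
    printf("parent read: %s\n", shared);  /* plain memory load, no kernel help needed  */
    munmap(shared, 4096);
    return 0;
}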
• Message passing is also easier to implement than shared memory for inter-computer communication.
• Shared memory allows maximum-speed and convenient communication, since it can be done at memory speeds within a computer.
• Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
• In contrast, in shared-memory systems, system calls are required only to establish the shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses and no assistance from the kernel is required.

UNIT 1 – Chapter 3
CPU Scheduling

Basic concept:
CPU scheduling is a process which allows one process to use the CPU while the execution of another process is on hold due to the unavailability of some resource, such as I/O.
“Select a process from the ready queue and allocate the CPU to that process.”
The aim of CPU scheduling is to make the system efficient and fast.
SCHEDULING CRITERIA:
Many algorithms exist for CPU scheduling. Various criteria have been suggested for comparing these
CPU scheduling algorithms. Common criteria include:
1. CPU utilization: This may range from 0% to 100% ideally. In real systems it ranges from 40% for lightly-
loaded systems to 90% for heavily-loaded systems.
2. Throughput: The number of processes completed per time unit is throughput. Long processes may be of
the order of one process per hour whereas in case of short processes, throughput may be 10 or 12 processes
per second.
3. Turnaround time: The interval of time between submission and completion of a process is called
turnaround time. It includes execution time and waiting time.
4. Waiting time: Sum of all the times spent by a process at different instances waiting in the ready queue is
called waiting time.
5. Response time: In an interactive system, the user may already be using some of the output while the process continues to generate new results.
Instead of using the turnaround time that gives the difference between time of submission and time of
completion, response time is sometimes used.
Response time is thus the difference between time of submission and the time the first response occurs.
Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms.
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Priority Scheduling
• Round Robin (RR) Scheduling
First Come First Serve (FCFS)
FCFS serves requests and processes in the order of their arrival. For the processes shown below, calculate the average waiting time.

Process   Burst time (msecs)
P1        24
P2        3
P3        3

If the processes arrive in the order P2, P3, P1:
Average turnaround time = (3 + 6 + 30) / 3 = 39 / 3 = 13 msecs.
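The figures above can be checked with a short sketch, assuming the processes are served in the order P2, P3, P1 (the order implied by the turnaround numbers); under FCFS each process simply waits for the bursts scheduled before it.

/* FCFS: each process waits for all bursts that were scheduled before it. */
#include <stdio.h>

int main(void) {
    const char *name[]  = { "P2", "P3", "P1" };   /* assumed service order   */
    int         burst[] = { 3, 3, 24 };           /* burst times in msecs    */
    int n = 3, clock = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = clock;            /* time spent in the ready queue  */
        int turnaround = clock + burst[i]; /* completion - arrival (0 here)  */
        printf("%s: waiting=%d turnaround=%d\n", name[i], waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        clock      += burst[i];
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           total_wait / (double)n, total_tat / (double)n);
    return 0;
}

With this order the average waiting time comes out as 3 msecs and the average turnaround time as 13 msecs, matching the figure above.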
Shortest Job First (SJF)
Consider the following processes and burst times:

Process   Burst time (msecs)
P1        6
P2        8
P3        7
P4        3

Using the SJF algorithm, the processes would be scheduled as shown below.

P4   P1   P3   P2
0    3    9    16   24

Average waiting time = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 msecs.

Now consider the same algorithm when the processes have arrival times:

Process   Arrival time   Burst time (msecs)
P1        0              8
P2        1              4
P3        2              9
P4        3              5

1. If preemptive SJF (shortest-remaining-time-first) scheduling is used, the result is as follows:

P1   P2   P4   P1   P3
0    1    5    10   17   26

Average waiting time = ((10 – 1) + 0 + (17 – 2) + (5 – 3)) / 4 = 26 / 4 = 6.5 msecs.

2. If non-preemptive SJF scheduling is used, the result is as follows:

P1   P2   P4   P3
0    8    12   17   26

Average waiting time = (0 + (8 – 1) + (12 – 3) + (17 – 2)) / 4 = 31 / 4 = 7.75 msecs.
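The first SJF result (all processes available at time 0) can be verified with a similar sketch: sort the processes by burst time, then accumulate the waiting times.

/* Non-preemptive SJF with all arrivals at time 0: run the shortest burst first. */
#include <stdio.h>

int main(void) {
    int id[]    = { 1, 2, 3, 4 };      /* P1..P4                  */
    int burst[] = { 6, 8, 7, 3 };      /* burst times in msecs    */
    int n = 4;

    /* Selection sort by burst time (shortest job first). */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (burst[j] < burst[i]) {
                int t = burst[i]; burst[i] = burst[j]; burst[j] = t;
                t = id[i]; id[i] = id[j]; id[j] = t;
            }

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d msecs\n", id[i], clock);
        total_wait += clock;
        clock += burst[i];
    }
    printf("average waiting time = %.2f msecs\n", total_wait / (double)n);
    return 0;
}

This prints an average waiting time of 7.00 msecs, matching the result above.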
Priority Scheduling
1) In the following example, we will assume lower numbers to represent higher priority.

Process   Priority   Burst time (msecs)
P1        3          10
P2        1          1
P3        3          2
P4        4          1
P5        2          5

Using priority scheduling, the processes are scheduled as shown below:

P2   P5   P1   P3   P4
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 msecs.

2) For a second example:
Total Turn Around Time = 7 + 4 + 11 + 14 + 14 = 50 ms
Average Turn Around Time = (Total Turn Around Time) / (no. of processes) = 50 / 5 = 10.00 ms
Total Waiting Time = 4 + 0 + 5 + 10 + 12 = 31 ms
Average Waiting Time = (Total Waiting Time) / (no. of processes) = 31 / 5 = 6.20 ms
Total Response Time = 0 + 0 + 5 + 10 + 12 = 27 ms
Average Response Time = (Total Response Time) / (no. of processes) = 27 / 5 = 5.40 ms
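The first priority-scheduling result can be checked the same way, assuming all processes are available at time 0, lower numbers mean higher priority, and ties keep their original order (P1 before P3).

/* Non-preemptive priority scheduling: highest priority (lowest number) first. */
#include <stdio.h>

int main(void) {
    int id[]       = { 1, 2, 3, 4, 5 };     /* P1..P5                          */
    int priority[] = { 3, 1, 3, 4, 2 };     /* lower number = higher priority  */
    int burst[]    = { 10, 1, 2, 1, 5 };    /* burst times in msecs            */
    int n = 5;
    int done[5] = { 0 };
    int clock = 0, total_wait = 0;

    for (int scheduled = 0; scheduled < n; scheduled++) {
        /* Pick the not-yet-run process with the smallest priority number;
         * ties are broken by original order (so P1 runs before P3).        */
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (best == -1 || priority[i] < priority[best]))
                best = i;
        printf("P%d starts at %d (waiting %d msecs)\n", id[best], clock, clock);
        total_wait += clock;
        clock += burst[best];
        done[best] = 1;
    }
    printf("average waiting time = %.2f msecs\n", total_wait / (double)n);
    return 0;
}

This reproduces the Gantt chart order P2, P5, P1, P3, P4 and an average waiting time of 8.20 msecs.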
Round Robin (RR)
Consider the following processes and burst times:

Process   Burst time (msecs)
P1        24
P2        3
P3        3

Let the duration of a time slice be 4 msecs, which is to say the CPU switches between processes every 4 msecs in a round-robin fashion. The Gantt chart below shows the scheduling of the processes.

P1   P2   P3   P1   P1   P1   P1   P1
0    4    7    10   14   18   22   26   30

Average waiting time = (4 + 7 + (10 – 4)) / 3 = 17 / 3 = 5.66 msecs.
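The round-robin schedule above can be reproduced with a small simulation (time quantum of 4 msecs, all three processes in the ready queue at time 0); the simple loop below is sufficient for this particular example, where no new processes arrive while it runs.

/* Round-robin simulation with a fixed time quantum. */
#include <stdio.h>

#define QUANTUM 4

int main(void) {
    int burst[]     = { 24, 3, 3 };          /* P1, P2, P3                     */
    int remaining[] = { 24, 3, 3 };
    int n = 3, clock = 0, finished = 0;
    int waiting[3] = { 0 };

    while (finished < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock += slice;                  /* process i runs for 'slice'     */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finished++;
                /* waiting = completion time - burst time (all arrivals are 0) */
                waiting[i] = clock - burst[i];
                printf("P%d finishes at %d, waited %d msecs\n",
                       i + 1, clock, waiting[i]);
            }
        }
    }
    printf("average waiting time = %.2f msecs\n",
           (waiting[0] + waiting[1] + waiting[2]) / 3.0);   /* 17/3 msecs      */
    return 0;
}

The simulation matches the Gantt chart above and prints 17/3 ≈ 5.67 msecs (truncated to 5.66 in the figure).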