
Swami Keshvanand Institute of Technology, Management & Gramothan,

Ramnagaria, Jagatpura, Jaipur-302017, INDIA

Approved by AICTE, Ministry of HRD, Government of India
Recognized by UGC under Section 2(f) of the UGC Act, 1956
Tel.: +91-0141-5160400  Fax: +91-0141-2759555
E-mail: [email protected]  Web: www.skit.ac.in

A
Course File
on
5IT4-03: Operating System
Programme: B.Tech.(IT)
Semester: V
Session: 2021-22

Vipin Jain
Associate Professor
Department of Information Technology


Contents
1. Institute Vision/Mission/Quality Policy
2. Departmental Vision/Mission
3. RTU Scheme & Syllabus
4. Prerequisite of Course
5. List of Text and Reference Books
6. Time Table
7. Syllabus Deployment: Course Plan & Coverage*
8. PO/PSO-Indicator-Competency
9. COs Competency Level
10. CO-PO-PSO Mapping Using Performance Indicators (PIs)
11. CO-PO-PSO Mapping: Formulation & Justification
12. Attainment Level (Internal Assessment)
13. Learning Levels of Students Through Marks Obtained in 1st Unit Test/Quiz
14. Planning for Remedial Classes for Average/Below Average Students
15. Teaching-Learning Methodology
16. RTU Papers (Previous Years)
17. Mid Term Papers (Mapping with Bloom’s Taxonomy & COs)
18. Tutorial Sheets (with EMD Analysis)**
19. Technical Quiz Papers
20. Assignments (As Per RTU QP Format)
21. Details of Efforts Made to Fill Gap Between COs and POs (Expert Lecture/ Workshop/ Seminar/
Extra Coverage in Lab etc.)
22. Course Notes

Note:
1. *The 1st lecture of the course should cover the prerequisites.
2. **E: Easy, M: Moderate, D: Difficult
3. The format for Points 8-11 should be taken from AICTE's Recommendations for Examination
Reforms.


INSTITUTE VISION / MISSION / QUALITY POLICY


OUR INSPIRATION
"Mass illiteracy is the root cause behind backwardness of India. If we want
speedy progress of nation we need to root it out as early as possible.”
– Swami Keshvanand 

VISION
“To promote higher learning in advanced technology and industrial
research to make our country a global player.”

MISSION
“To promote quality education, training and research in field of
Engineering by establishing effective interface with industry and to
encourage faculty to undertake industry sponsored projects for students.”

QUALITY POLICY
"We are committed to 'achievement of quality' as an integral part of our
institutional policy by continuous self-evaluation and striving to improve
ourselves. The Institute would pursue quality in:
• All its endeavours like admissions, teaching-learning processes,
examinations, extra and co-curricular activities, industry-institution
interaction, research & development, continuing education, and
consultancy.
• Functional areas like teaching departments, Training & Placement Cell,
library, administrative office, accounts office, hostels, canteen, security
services, transport, maintenance section and all other services."


DEPARTMENTAL VISION/MISSION

VISION

V1: Produce quality computer engineers trained in the latest tools and
technologies.
V2: Be a leading department in the state and region by imparting in-depth
knowledge to the students in emerging technologies in computer science &
engineering.

MISSION

Deliver resources in the IT-enabled domain through:

M1: Effective industry interaction and project-based learning.
M2: Motivating our students for employability, entrepreneurship, research
and higher education.
M3: Providing excellent engineering skills in a state-of-the-art
infrastructure.


RTU SCHEME & SYLLABUS


PREREQUISITE OF COURSE

This course has no special prerequisites. We assume that you are already
familiar with basic concepts such as computer programs, software, hardware
and processes.

LIST OF TEXT AND REFERENCE BOOKS

Text Books:
• A. Silberschatz and Peter B. Galvin: Operating System Principles, Wiley India
Pvt. Ltd.
• A. S. Tanenbaum: Modern Operating Systems, Prentice Hall.

Reference Books:
• D. M. Dhamdhere: Operating Systems: A Concept-Based Approach, Tata
McGraw Hill.
• Charles Crowley: Operating Systems: A Design-Oriented Approach, Tata
McGraw Hill.
• Achyut S. Godbole: Operating Systems, Tata McGraw Hill.

TIME TABLE


SYLLABUS DEPLOYMENT: COURSE PLAN & COVERAGE

Course Plan
Lecture Number   Topics
Lecture 0:  Scope of the course, Course Outcomes and Structure of the syllabus
Lecture 1:  Definition of an Operating System, Layered Structure, Structure of OS: Kernel, Shell, Responsibility & Need
Lecture 2:  OS Functions, Types of OS
Lecture 3:  System Calls
Lecture 4:  Process: Creation, Termination, States and PCB
Lecture 5:  Process Scheduling Algorithms & Criteria
Lecture 6:  Process Scheduling Algorithms contd.
Lecture 7:  Process Scheduling Algorithms contd.
Lecture 8:  Process Scheduling Algorithms contd.
Lecture 9:  Inter-process communication
Lecture 10: Race Condition, Critical Section, Mutex and Semaphores
Lecture 11: Classical Problems of IPC and their solutions
Lecture 12: Threads: Types, Thread Models
Lecture 13: Introduction to memory management
Lecture 14: Contiguous Memory Allocation
Lecture 15: Paging & TLB - Non-Contiguous Memory Allocation
Lecture 16: Page Table Structures
Lecture 17: Virtual Memory, Demand Paging and Page Replacement Algorithms
Lecture 18: Page Replacement Algorithms contd.
Lecture 19: Segmentation, Segmentation with Paging
Lecture 20: Deadlock, Deadlock conditions, Shared and Non-shared Resources
Lecture 21: Resource Allocation Graph, Deadlock detection methods, Deadlock Prevention
Lecture 22: Deadlock Avoidance & Banker's Algorithm
Lecture 23: Banker's Algorithm with example
Lecture 24: Deadlock Detection and Recovery
Lecture 25: Devices, Drivers, Device Handling and Structure of a Disk
Lecture 26: Disk Scheduling Algorithms
Lecture 27: Disk Scheduling Algorithms contd.
Lecture 28: File: Types and Structure
Lecture 29: File Access Methods & Access Control Matrix
Lecture 30: File Allocation and Inode
Lecture 31: Directory and Types of Directories
Lecture 32: File Security and User Authentication


Lecture 33: Unix & Linux Case Study
Lecture 34: Case Study: Time OS
Lecture 35: Case Study: Mobile OS

COURSE COVERAGE

Lect. No.   Date         Topic Covered
1           2020-07-02   Operating System: Objective, scope and outcome
2           2020-07-03   Syllabus and Books Discussion
3           2020-07-07   Introduction to Operating System
4           2020-07-09   Computer System & OS: Structure and operations
5           2020-07-10   OS Functions
6           2020-07-14   Types of OS
7           2020-07-16   System Calls
8           2020-07-17   Process: Creation, Termination, States and PCB
9           2020-07-20   Process Scheduling Algorithms & Criteria
10          2020-07-21   Process Scheduling Algorithms contd.
11          2020-07-25   Process Scheduling Algorithms contd.
12          2020-07-27   Process Scheduling Algorithms contd.
13          2020-07-31   Process Scheduling Algorithms contd.
14          2020-08-01   Inter-process communication, Race Condition
15          2020-08-05   Critical Section, Mutex and Semaphores
16          2020-08-08   Classical Problems of IPC and their solutions
17          2020-08-10   Threads: Types, Thread Models
18          2020-08-11   Introduction to memory management, Contiguous Memory Allocation
19          2020-08-17   Paging & TLB - Non-Contiguous Memory Allocation
20          2020-08-19   Page Table Structures
21          2020-08-22   Virtual Memory, Demand Paging
22          2020-08-24   Page Replacement Algorithms
23          2020-08-26   Page Replacement Algorithms contd.
24          2020-08-29   Segmentation, Segmentation with Paging
25          2020-08-31   Deadlock, Deadlock conditions
26          2020-09-02   Shared and Non-shared Resources, Resource Allocation Graph
27          2020-09-05   Deadlock detection methods, Deadlock Prevention
28          2020-09-07   Deadlock Avoidance & Banker's Algorithm
29          2020-09-09   Banker's Algorithm with example
30          2020-09-12   Deadlock Detection and Recovery
31          2020-09-14   Devices, Drivers, Device Handling and Structure of a Disk
32          2020-09-16   Process Management Quiz
33          2020-09-19   Memory Management Quiz
34          2020-09-21   Process Management Quiz Solution
35          2020-09-23   Deadlock Quiz
36          2020-09-26   Memory Management Quiz Solution
37          2020-09-28   Deadlock Quiz Solution
38          2020-09-30   Numerical Examples
39          2020-10-10   Midterm Paper Solutions Discussion
40          2020-10-12   Midterm Paper Solutions Discussion
41          2020-10-14   Answer Sheet Discussion
42          2020-10-17   File Management Introduction


PROGRAM OUTCOMES / PROGRAM SPECIFIC OUTCOMES - INDICATORS - COMPETENCIES

PO 1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialisation for the solution of complex engineering problems.
1.1  Apply mathematical techniques such as calculus, linear algebra, and statistics to solve problems
1.2  Apply advanced mathematical techniques to model and solve computer science & engineering problems
1.3  Apply laws of natural science to an engineering problem
1.4  Apply fundamental engineering concepts to solve engineering problems
1.5  Apply computer science & engineering concepts to solve engineering problems

PO 2: Problem analysis: Identify, formulate, research literature, and analyse complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
2.1  Articulate problem statements and identify objectives
2.2  Identify engineering systems, variables, and parameters to solve the problems
2.3  Identify the mathematical, engineering and other relevant knowledge that applies to a given problem
2.4  Reframe complex problems into interconnected sub-problems
2.5  Identify, assemble and evaluate information
2.6  Identify existing processes/solution methods for solving the problem, including forming justified approximations and assumptions
2.7  Compare and contrast alternative solution processes to select the best process
2.8  Combine scientific principles and engineering concepts to formulate model/s (mathematical or otherwise) of a system or process that is appropriate in terms of applicability and required accuracy
2.9  Identify assumptions (mathematical and physical) necessary to allow modeling of a system at the level of accuracy required
2.10 Apply engineering mathematics and computations to solve mathematical models
2.11 Produce and validate results through skilful use of contemporary engineering tools and models
2.12 Identify sources of error in the solution process, and limitations of the solution
2.13 Extract desired understanding and conclusions consistent with objectives and limitations of the analysis

PO 3: Design/Development of Solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
3.1  Recognize that need analysis is key to good problem definition
3.2  Elicit and document engineering requirements from stakeholders
3.3  Synthesize engineering requirements from a review of the state-of-the-art
3.4  Extract engineering requirements from relevant engineering Codes and Standards such as IEEE, ACM, ISO etc.
3.5  Explore and synthesize engineering requirements considering health, safety risks, environmental, cultural and societal issues
3.6  Determine design objectives, functional requirements and arrive at specifications
3.7  Apply formal idea generation tools to develop multiple engineering design solutions
3.8  Build models/prototypes to develop a diverse set of design solutions
3.9  Identify suitable criteria for evaluation of alternate design solutions
3.10 Apply formal decision making tools to select optimal engineering design solutions for further development
3.11 Consult with domain experts and stakeholders to select candidate engineering design solution for further development
3.12 Refine a conceptual design into a detailed design within the existing constraints (of the resources)
3.13 Generate information through appropriate tests to improve or revise design

PO 4: Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
4.1  Define a problem, its scope and importance for purposes of investigation
4.2  Examine the relevant methods, tools and techniques of experiment design, system calibration, data acquisition, analysis and presentation
4.3  Apply appropriate instrumentation and/or software tools to make measurements of physical quantities
4.4  Establish a relationship between measured data and underlying physical principles
4.5  Design and develop experimental approach, specify appropriate equipment and procedures
4.6  Understand the importance of statistical design of experiments and choose an appropriate experimental design plan based on the study objectives
4.7  Use appropriate procedures, tools and techniques to conduct experiments and collect data
4.8  Analyze data for trends and correlations, stating possible errors and limitations
4.9  Represent data (in tabular and/or graphical forms) so as to facilitate analysis and explanation of the data, and drawing of conclusions
4.10 Synthesize information and knowledge about the problem from the raw data to reach appropriate conclusions

PO 5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modelling to complex engineering activities with an understanding of the limitations.
5.1  Identify modern engineering tools, techniques and resources for engineering activities
5.2  Create/adapt/modify/extend tools and techniques to solve engineering problems
5.3  Identify the strengths and limitations of tools for (i) acquiring information, (ii) modelling and simulating, (iii) monitoring system performance, and (iv) creating engineering designs
5.4  Demonstrate proficiency in using discipline specific tools
5.5  Discuss limitations and validate tools, techniques and resources
5.6  Verify the credibility of results from tool use with reference to the accuracy and limitations, and the assumptions inherent in their use

PO 6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal, and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
6.1  Identify and describe various engineering roles; particularly as pertains to protection of the public and public interest at global, regional and local level
6.2  Interpret legislation, regulations, codes, and standards relevant to your discipline and explain its contribution to the protection of the public

PO 7: Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable development.
7.1  Identify risks/impacts in the life-cycle of an engineering product or activity
7.2  Understand the relationship between the technical, socio economic and environmental dimensions of sustainability
7.3  Describe management techniques for sustainable development
7.4  Apply principles of preventive engineering and sustainable development to an engineering activity or product relevant to the discipline

PO 8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
8.1  Identify situations of unethical professional conduct and propose ethical alternatives
8.2  Identify tenets of the ASME professional code of ethics
8.3  Examine and apply moral & ethical principles to known case studies

PO 9: Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
9.1  Recognize a variety of working and learning preferences; appreciate the value of diversity on a team
9.2  Implement the norms of practice (e.g. rules, roles, charters, agendas, etc.) of effective team work, to accomplish a goal
9.3  Demonstrate effective communication, problem solving, conflict resolution and leadership skills
9.4  Treat other team members respectfully
9.5  Listen to other members and maintain composure in difficult situations
9.6  Present results as a team, with smooth integration of contributions from all individual efforts

PO 10: Communication: Communicate effectively on complex engineering activities with the engineering community and with the society at large, such as, being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
10.1 Read, understand and interpret technical and non-technical information
10.2 Produce clear, well-constructed, and well-supported written engineering documents
10.3 Create flow in a document or presentation
10.4 Listen to and comprehend information, instructions, and viewpoints of others
10.5 Deliver effective oral presentations to technical and non-technical audiences
10.6 Create engineering-standard figures, reports and drawings to complement writing and presentations
10.7 Use a variety of media effectively to convey a message in a document or a presentation

PO 11: Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
11.1 Describe various economic and financial costs/benefits of an engineering activity
11.2 Analyze different forms of financial statements to evaluate the financial status of an engineering project
11.3 Analyze and select the most appropriate proposal based on economic and financial considerations
11.4 Identify the tasks required to complete an engineering activity, and the resources required to complete the tasks
11.5 Use project management tools to schedule an engineering project so it is completed on time and on budget

PO 12: Life-long learning: Recognise the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.
12.1 Describe the rationale for requirement for continuing professional development
12.2 Identify deficiencies or gaps in knowledge and demonstrate an ability to source information to close this gap
12.3 Identify historic points of technological advance in engineering that required practitioners to seek education in order to stay current
12.4 Recognize the need and be able to clearly explain why it is vitally important to keep current regarding new developments in your field
12.5 Source and comprehend technical literature and other credible sources of information
12.6 Analyze sourced technical and popular information for feasibility, viability, sustainability, etc.

PSO1: Core Engineering Skills: Exhibit fundamental concepts of Data Structures, Databases, Operating Systems, Computer Network, Theory of Computation, Advanced Programming and Software Engineering.
PSO1.1 Possess the concepts of Data Structure and Database Management System
PSO1.2 Possess the concepts of core engineering subjects including Operating System, Computer Networks and Software Engineering
PSO1.3 Apply basic programming skills to solve real world problems

PSO2: Standard Software Engineering practices: Demonstrate an ability to design, develop, test, debug, deploy, analyze, troubleshoot, maintain, manage and secure a software.
PSO2.1 Apply fundamental software engineering concepts to solve real world problems
PSO2.2 Possess conceptual knowledge for designing, analysing and testing a software
PSO2.3 Estimate and evaluate the cost related to a software

PSO3: Future Endeavors: Recognize the need to have knowledge of higher education institutions/organizations/companies related to computer science & engineering.
PSO3.1 Identify the requirement of continuing education through postgraduation like M.Tech., MS, MBA etc.
PSO3.2 List various higher education institutes and organizations related to computer science & engineering.

COs Competency Level

Syllabus


5IT4-03: Operating System

Credit: 3                                    Max. Marks: 150 (IA: 30, ETE: 120)
3L+0T+0P                                     End Term Exam: 3 Hours

S.No.  Contents                                                                Hours
1      Introduction and History of Operating Systems: Structure and              4
       operations; processes and files. Processor management: inter-process
       communication, mutual exclusion, semaphores, wait and signal
       procedures, process scheduling and algorithms, critical sections,
       threads, multithreading.
2      Memory management: contiguous memory allocation, virtual memory,          5
       paging, page table structure, demand paging, page replacement
       policies, thrashing, segmentation, case study.
3      Deadlock: Shared resources, resource allocation and scheduling,          15
       resource graph models, deadlock detection, deadlock avoidance,
       deadlock prevention algorithms.
       Device management: devices and their characteristics, device drivers,
       device handling, disk scheduling algorithms and policies.
4      File management: file concept, types and structures, directory            7
       structure, case studies, access methods and matrices, file security,
       user authentication.
5      UNIX and Linux operating systems as case studies; Time OS and case        8
       studies of Mobile OS.

Bloom’s Taxonomy


Level 1 - Remembering (Recalling from memory): list, define, tell, describe, recite, recall, identify, show, label, tabulate, quote, name, who, when, where, etc.
Level 2 - Understanding (Explaining ideas or concepts): describe, explain, paraphrase, restate, associate, contrast, summarize, differentiate, interpret, discuss
Level 3 - Applying (Using information in another familiar situation): calculate, predict, apply, solve, illustrate, use, demonstrate, determine, model, experiment, show, examine, modify
Level 4 - Analysing (Breaking information into parts to explore understandings and relationships): classify, outline, break down, categorize, analyze, diagram, illustrate, infer, select
Level 5 - Evaluating (Justifying a decision or course of action): assess, decide, choose, rank, grade, test, measure, defend, recommend, convince, select, judge, support, conclude, argue, justify, compare, summarize, evaluate
Level 6 - Creating (Generating new ideas, products or views to do things): design, formulate, build, invent, create, compose, generate, derive, modify, develop, integrate
** It may be noted that some of the verbs in the above table are associated with multiple
Bloom's Taxonomy levels. These verbs are actions that could apply to different activities. We
need to keep in mind that it is the skill, action or activity we need our students to demonstrate
that will determine the contextual meaning of the verb used in the assessment question.

Course Code: 5IT4-03        Course Name: Operating System (OS)

Upon successful completion of this course, students should be able to:

5IT4-03.1: Describe the basics of operating system, mechanisms of OS to handle processes, threads
and their communication.
    Unit Mapping: Unit 2    Bloom's Level: 1
    PO Indicators: 1.4, 1.5, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.12, 2.13, 4.1, 4.2, 4.3, 4.4, 4.8, 4.9, 4.10,
    5.1, 5.2, 5.3, 5.5, 7.1, 7.2, 7.4, 12.1, 12.2, 12.3, 12.4, PSO(1.1, 1.2, 1.3, 2.1, 2.2, 3.1)

5IT4-03.2: Analyze the memory management and its allocation policies.
    Unit Mapping: Unit 3    Bloom's Level: 4
    PO Indicators: 1.4, 1.5, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.12, 2.13, 4.1, 4.2, 4.3, 4.4, 4.8, 4.9, 4.10,
    5.1, 5.2, 5.3, 5.5, 7.1, 7.2, 7.3, 7.4, 12.1, 12.2, 12.3, 12.4, PSO(1.1, 1.2, 1.3, 2.1, 2.2, 3.1)

5IT4-03.3: Illustrate different conditions for deadlock and their possible solutions.
    Unit Mapping: Unit 4    Bloom's Level: 3
    PO Indicators: 1.4, 1.5, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.12, 2.13, 4.1, 4.2, 4.3, 4.4, 4.8, 4.9, 4.10,
    5.1, 5.2, 5.3, 5.5, 7.1, 7.2, 7.3, 7.4, 12.1, 12.2, 12.3, 12.4, PSO(1.1, 1.2, 1.3, 2.1, 2.2, 3.1)

5IT4-03.4: Discuss the storage management policies with respect to different storage management
technologies.
    Unit Mapping: Unit 5    Bloom's Level: 2
    PO Indicators: 1.4, 1.5, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.12, 2.13, 4.1, 4.2, 4.3, 4.4, 4.8, 4.9, 4.10,
    5.1, 5.2, 5.3, 5.5, 7.1, 7.2, 7.4, 12.1, 12.2, 12.3, 12.4, 12.6, PSO(1.1, 1.2, 1.3, 2.1, 2.2, 3.1)

5IT4-03.5: Evaluate the concept of operating system with respect to Unix, Linux, Time and Mobile OS.
    Unit Mapping: Unit 6    Bloom's Level: 5
    PO Indicators: 1.4, 1.5, 5.1, 5.2, 5.3, 5.5, 5.6, 7.1, 7.2, 7.4, PSO(1.1, 1.2, 1.3, 2.1, 2.2, 3.1)

10. CO-PO-PSO Mapping Using Performance Indicators (PIs)


        PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3
CO1      2    2    -    3    3    -    3    -    -    -     -     3     3     3     2
CO2      2    2    -    3    3    -    3    -    -    -     -     3     3     3     2
CO3      2    2    -    3    3    -    3    -    -    -     -     3     3     3     2
CO4      2    2    -    3    3    -    3    -    -    -     -     3     3     3     2
CO5      2    -    -    -    3    -    3    -    -    -     -     -     3     3     2


11. CO-PO-PSO Mapping: Formulation & Justification

The CO-PO/PSO mapping is based on the correlation of course outcome (CO) with Program
Outcome Indicators. These indicators are the breakup statements of broad Program Outcome
statement.

The correlation is calculated as number of correlated indicators of a PO/PSO mapped with CO


divided by total indicators of a PO/PSO. The calculated value represents the correlation level
between a CO & PO/PSO. Detailed formulation and mathematical representation can be seen below
in equation 1:

Input:
COi: the ith course outcome of the course
POj: the jth Program Outcome
Ijk: the kth indicator of the jth Program Outcome

ρ(Ijk, COi) = (number of indicators of POj correlated with COi) / (total number of indicators of POj)

Level of CO-PO mapping
= 1, if 0 < ρ(Ijk, COi) < 0.33
= 2, if 0.33 ≤ ρ(Ijk, COi) < 0.66
= 3, if 0.66 ≤ ρ(Ijk, COi) ≤ 1

where ρ(Ijk, COi) is the degree of correlation between COi and POj.

Equation 1
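
To make the formulation above concrete, the following is a minimal sketch in C (not part of the original course file) of how the mapping level of Equation 1 could be computed from indicator counts; the function name and the handling of the interval boundaries are assumptions based on the reconstructed thresholds.

#include <stdio.h>

/* Sketch (assumed helper, not from the course file): compute the CO-PO
 * mapping level of Equation 1 from indicator counts. The treatment of the
 * interval boundaries is an assumption based on the thresholds above. */
int mapping_level(int correlated_indicators, int total_indicators)
{
    if (total_indicators <= 0 || correlated_indicators <= 0)
        return 0;                                   /* CO not mapped to this PO */

    double ratio = (double)correlated_indicators / (double)total_indicators;

    if (ratio < 0.33)
        return 1;                                   /* low correlation */
    else if (ratio < 0.66)
        return 2;                                   /* medium correlation */
    else
        return 3;                                   /* high correlation */
}

int main(void)
{
    /* CO1 maps 8 of the 13 indicators of PO2 -> ratio of about 0.62 -> level 2 */
    printf("CO1-PO2 level = %d\n", mapping_level(8, 13));
    /* CO1 maps 7 of the 10 indicators of PO4 -> ratio of 0.70 -> level 3 */
    printf("CO1-PO4 level = %d\n", mapping_level(7, 10));
    return 0;
}

For instance, CO1 maps 8 of the 13 indicators of PO2, giving a ratio of about 0.62 and hence level 2, which agrees with the CO-PO-PSO mapping table above.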


RTU Papers (Previous Years)


Mid Term Papers


Technical Quiz Papers


Deadlock Quiz Result

Roll Number   Name of Student   Score
54 Nishant 32 / 50
55 Nishant kumar 34 / 50
56 Nitesh goyal 48 / 50
57 pankaj Sharma 46 / 50
58 poonam vijay 42 / 50
59 Prachi Behl 28 / 50
60 Prafull 28 / 50
61 PRAGYA GAGGAR 50 / 50
62 Prakriti Agrawal 40 / 50
63 Pranav Parashar 38 / 50
65 Prateek Baheti 50 / 50
66 Prateek Goyal 46 / 50
67 Rahul chandnani 34 / 50
68 Rishi khandelwal 36 / 50
69 RITIK GARG 48 / 50
71 Ritika jalewa 48 / 50
72 Rohit Singh Foujdar 42 / 50
73 Ronit Gupta 48 / 50
74 Sachin Sharma 50 / 50
76 saksham agrawal 46 / 50
77 Saksham Kalavtia 50 / 50
78 Saurabh Gupta 46 / 50
79 shreyas 22 / 50
80 Shruti Dubey 50 / 50
81 Shruti Gupta 48 / 50
82 Shubham Sharma 26 / 50
83 Skand Gupta 40 / 50
85 Tanisha Dhemla 32 / 50
86 Tanmay Bhargava 50 / 50
87 Tanu Mehra 48 / 50
88 Tarun Sharma 42 / 50
89 Tushar Saxena 50 / 50
91 Vibhor Jain 32 / 50
92 Vinay bansal 14 / 50
93 Vinod Kalwani 22 / 50
94 Vishesh Sharma 50 / 50
300 Adarsh Dixit 46 / 50


301 VINAY VYAS 50 / 50
302 Ranu Goyal 48 / 50
303 Vishal Bothra 38 / 50
304 prachi munot 50 / 50
305 bhavika samdani 48 / 50
306 Uma agarwal 30 / 50
307 Lakshya Dewani 28 / 50
308 vibhuti sharma 34 / 50

Assignments (As Per RTU QP Format)


Assignment Midterm-1
PART A
Q.1 Write short notes on the following:
1. Thrashing
2. Multiprogramming system
3. Time sharing system
4. Process State Model
5. Address binding
6. Types of scheduling
7. Threads vs. Process
8. Internal and External fragmentation
9. Race condition and Critical Section
10. Operating system: Definition and Structure
11. Logical and physical address space

PART B
Q.1 What is contiguous memory allocation? How is memory protected from unintended access?
Q.2 Solve the following using the First Fit, Worst Fit and Best Fit allocation algorithms:
Given free memory partitions of 10KB, 30KB, 5KB and 20KB (in order), how would each of
the algorithms place processes of P1 = 20KB, P2 = 10KB and P3 = 5KB (in order)? Which algorithm makes the
most efficient use of memory?
Q.3 What is segmentation? How is address translation performed in it?

Q.4 Consider the following scenario and find the average waiting time and average turnaround time
for FCFS and Round Robin (T.Q. = 1):
Process   Arrival Time (A.T)   Burst Time (B.T)
P1        0                    3
P2        3                    1
P3        5                    4
P4        2                    2

Q.5 Consider the following scenario and find the average waiting time and average turnaround time
for SRTF and Priority (Both):
Process   Arrival Time (A.T)   Burst Time (B.T)   Priority
P1        5                    6                  3
P2        1                    4                  1
P3        0                    3                  0
P4        3                    5                  2

PART C
Q.1 Explain the page replacement algorithms (FIFO, LRU, Optimal) with an example.
Q.2 What do you understand by semaphores and the wait and signal operations? How are they useful in solving the
critical section problem?
Q.3 What are two differences between user-level threads and kernel-level threads? Under what
circumstances is one type better than the other?
Q.4 What is the hardware support for paging? Explain with a diagram and find the effective access time for the
following scenario:
the hit ratio is 90%, memory access time is 100ns and TLB access time is 20ns.

Assignment Midterm-1


PART A

Write short answers for the following:

1. Operating system: Definition and Structure
2. Process State Model
3. Process Control Block
4. Memory compaction
5. Process Schedulers
6. Context Switching
7. Race condition and Critical Section
8. Contiguous & Non-Contiguous Memory Allocation
9. Address Binding, Logical & Physical Address Space
10. Thrashing

PART B
1. Demonstrate the First Fit, Best Fit, Worst Fit and Next Fit algorithms, with an example, in:
(a) Fixed-Sized Partitioning
(b) Variable-Sized Partitioning
2. Explain the functions of an Operating System. Also explain the system calls used to implement these functions.
3. (a) What is process synchronization?
(b) What are semaphores? Give the solution of any one IPC problem using semaphores.
4. What is a thread? How does it differ from a process? Explain the multithreading models.
5. Explain Peterson's solution for the critical section problem.

PART C
Q.1 How is a logical address translated into a physical address in non-contiguous memory
allocation (paging)? What is the significance of using a TLB? Explain in detail.
Q.2 Explain the page replacement algorithms (FIFO, LRU, Optimal) with an example.
Q.3 What is an operating system? Explain multi-tasking, multi-programming and
multiprocessing operating systems.
Q.4 Define the terms waiting time and turnaround time. Consider any one scenario and find
the average waiting time and average turnaround time for FCFS, non-preemptive Priority,
SRTF and Round Robin (T.Q. = 2).

Introduction


An Operating System (OS) is an interface between the hardware and the user. An OS is responsible for the
management and coordination of activities and the sharing of the resources of the computer.
"An operating system acts as an intermediary between the computer user and the computer hardware."
A computer system can be divided into four components:
1. The Hardware
2. The Operating System
3. The Application Programs
4. The User
• The hardware, comprising the Central Processing Unit (CPU), the memory and the input/output devices,
provides the basic computing resources for the system.
• The operating system controls and coordinates the use of the hardware among the various application
programs for the various users.
• The application programs, such as word processors, compilers and web browsers, define
the ways in which these resources are used to solve the users' computing problems.
• A user is an agent or end user who uses the computer.

Basically, an operating system can be defined as:

1. A layer of software which takes care of the technical aspects of a computer's operation.
2. A system software platform responsible for background support to resources and application
software (e.g. compilers, interpreters, debuggers).


Some famous operating system providers are Microsoft (Windows), Apple (Mac OS X) and
Red Hat (Linux).
Operating System Views
Operating System has two types of views:
1. User view
2. System view
1. User view
The user's view of the computer varies according to the interface being used.
2. System view
From the computer's point of view, the operating system is the program most closely involved with the hardware.
We can view an operating system as a "Resource Manager".
Responsibilities of an Operating System
The following are the main responsibilities of an operating system:
1. Initializing the computer hardware.
2. Interaction with the user and the computer hardware.
3. Time sharing among different programs.
4. Providing multitasking, i.e. running more than one program at a time.
5. Providing multiprocessing, i.e. dealing with more than one processor at a time.
6. Multithreading.
7. Error messaging between the user and the computer hardware.
8. Configuring different hardware and software and keeping them in contact with each other.
9. Management and allocation of files and directories.
10. Memory allocation and management.
Need of Operating System
Operating System as Resource Manager
The operating system controls various resources of the system, like memory, input/output devices, the processor
and secondary storage. These resources are required by a program during its execution. If multiple
programs are in execution, then their resource requirements can cause conflicts, resulting in decreased
system utilization. Thus, in order to avoid such conflicts, a control program is needed which can
monitor and manage all the resource allocation activities. This is where the operating system comes
into play. The job of the operating system is to provide an orderly and controlled allocation of the
processors, memories and input/output devices among the various programs competing for them.
When a computer (or network) has multiple users, the need for managing and protecting the
memory, input/output devices, and other resources is even greater, since the users might otherwise
interfere with one another. In addition, users often need to share not only hardware, but information
(files, databases, etc.) as well. In this view, the operating system keeps track of which users are using which
resources, grants resource requests, accounts for usage, and mediates conflicting requests
from different programs and users.
Operating System as Virtual Machine
Different classes of users need different kinds of user services. Hence, running a single operating
system on a computer system cannot satisfy all users. This problem is solved using a
virtual machine operating system (VMOS). The VMOS creates several virtual machines, each of which
is allocated to one user, who can use any operating system of his own choice on the virtual machine
and run his programs under that operating system. In this way, users of the computer system can use
different operating systems at the same time on a single hardware machine. Each of these
operating systems is called a guest operating system, while the operating system controlling the virtual machines acts as the host.
The goal of virtualization technology is to create an independent environment for different
applications on a single hardware machine. This is done by creating virtual instances of operating
systems, applications etc. designed to run directly on hardware. The software extends the capabilities
of your existing machine so that it can run multiple applications (especially operating systems inside
multiple virtual machines) at the same time. A Virtual Machine (VM) is the application software which
is responsible for creating these virtual instances. End users have the same look and feel on a
VM as they would have on actual physical hardware. These VMs are portable, platform independent
and sandboxed from the host system. A host can even run multiple VMs simultaneously on a single
hardware configuration.
Concepts of Operating System:
Basically there are three elements in an operating system.
1. Kernel
2. Shell
3. BIOS (Basic Input Output System)

The Kernel
The kernel is the core of the operating system. It is a collection of routines, mostly written in C, that communicate with the hardware directly. The kernel is loaded into memory when the system is booted and remains in memory.
An application program accesses the kernel through a set of system calls. The kernel is thus the basic component of an operating system. It provides the lowest level of abstraction layer for various resources. The kernel is responsible for basic operating system functions like process management, device management, system calls, disk management etc.
The Shell
Shell is the interface between user and the kernel. A shell is a piece of software that provides an
interface for users of an operating system which provides access to the services of a kernel. The
name shell originates from shells being an outer layer of interface between the user and the internals
of the operating system (the kernel). Operating system shells generally fall into one of two
categories: command-line and graphical. Command-line shells provide a command-line interface
(CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). In
either category the primary purpose of the shell is to invoke or "launch" another program. A computer does not have the inherent capability of translating commands into actions; that requires an interpreter, and in UNIX this job is done by the outer part of the operating system known as the shell.
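To make this concrete, here is a minimal sketch (not the source of any real shell) of a command-line shell loop in C for a POSIX system: it prints a prompt, reads a command, creates a child process to run it, and waits for the child to finish. The prompt string "mysh>" and the restriction to single-word commands are simplifications chosen only for illustration.

/* Minimal command-line shell sketch (POSIX, single-word commands only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    while (1) {
        printf("mysh> ");                     /* prompt the user */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                            /* end of input: leave the shell */
        line[strcspn(line, "\n")] = '\0';     /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                         /* empty command: prompt again */
        if (strcmp(line, "exit") == 0)
            break;

        pid_t pid = fork();                   /* create a child process */
        if (pid == 0) {
            execlp(line, line, (char *)NULL); /* replace the child's image with the program */
            perror("exec failed");            /* reached only if exec fails */
            exit(1);
        } else if (pid > 0) {
            wait(NULL);                       /* the shell waits for the command to finish */
        } else {
            perror("fork failed");
        }
    }
    return 0;
}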
Types of Operating System
Mainframe Operating System
Mainframes are large computers found in major corporate data centers. The operating systems for mainframes are heavily oriented toward processing many jobs at once, most of which need prodigious amounts of I/O. They typically offer three kinds of services: batch, transaction processing and timesharing.
Example: OS/390, OS/360
Network Operating System
A network operating system (NOS) is the software that runs on a server and enables the server to manage data, users, groups, security, applications, and other networking functions. The network operating system is designed to allow shared file and printer
access among multiple computers in a local area network (LAN), a private network or other networks.
Example: Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X and Novell NetWare.
There are two types of Network Operating system:
1. Peer to Peer NOS
2. Client Server NOS

Peer to Peer NOS


In a peer to peer network operating system users are allowed to share resources and files located on their computers and to access shared resources from other computers. There is no centralized file server or repository to manage. A peer to peer network sets all connected computers as equals, and all share the same ability to utilize the resources available on the network.
Client / Server NOS
Client/server network operating systems allow the network to centralize functions and applications
in one or more dedicated file servers. The file servers become the heart of the system, providing
access to resources and providing security. Individual workstations (clients) have access to the
resources available on the file servers. The network operating system provides the mechanism to
integrate all the components of the network and allow multiple users to simultaneously share the
same resources irrespective of physical location.
Example: Novell Netware, Windows server.
Multiprocessor Operating System
Most systems are single processor systems, i.e. they have only one main CPU. In a multiprocessor operating system, more than one processor works in close communication, sharing the computer bus, the clock and sometimes memory and peripheral devices. Such systems are called parallel computers, multicomputers or multiprocessors.
Personal Operating System
Usually referred to as PCs, these systems provide an easy graphical interface to the user. They are single user systems and perform some basic tasks, such as:
• Recognizing information from keyboard and mouse.
• Sending information to the monitor.
• Storing and retrieving information from memory.
Example: Windows 95, 98, 2000, NT, XP, etc.

Advantages
• Low cost.
• Efficient for small applications.
• Reliable and easy to maintain.
Applications
• Word processing
• Spreadsheets
• Graphics presentation
Real Time Operating System:

A real-time operating system works on deadlines. It is used to control machinery, scientific


instruments and industrial systems. An RTOS typically has very little user interface capability, and
no end-user utilities, since the system will be a "sealed box" when delivered for use. A very
important part of an RTOS is managing the resources of the computer so that a particular operation
executes precisely on its deadline, every time it occurs. Such an operating system cannot afford any delay or give priority to other processes over deadlines; the result of such actions can be catastrophic and can risk human lives.

Real time operating systems are of two types:

1. Hard Real Time System

If an action absolutely must occur at a certain moment (or within a certain range), we have a hard real time system. The features of such systems are:

• Secondary storage limited or absent.

• Data stored in short term memory, or read only memory (ROM).

• Conflicts with time sharing system.

• Not supported by general purpose operating systems.

Applications

• Air Traffic Control

• Railway Tracking Systems

• Boiler Control Systems

Example: car engine control system, robot satellite, Industrial process controllers.

2. Soft Real Time System

Another kind of real time system is a soft real time system, in which missing an occasional deadline
is acceptable.

• Limited utility in industrial control or robotics.

• Useful in applications like multimedia, virtual reality, requiring advanced operating system
features.

Applications

• System that control scientific experiments.

• Fuel injection system.

• Medical imaging system.

• Control system.

• Games, live audio video systems.

Advantages

• Maximum utilization of system resources.

• There is almost no or very little downtime of the system.

• All resources are used efficiently.

Examples of Real Time Operating Systems

• VxWorks

• RTX

• Micro Controller OS (freeware)

Embedded Operating System:

An embedded operating system is the operating system of a special purpose computer system designed to perform one or a few dedicated functions, usually embedded as a part of a complete device.

This operating system is designed to be compact, efficient and reliable, forsaking many functions that non-embedded operating systems provide and which may not be used by the specialized applications it runs.

Smart Card Operating Systems:

The smart card's Chip Operating System (COS) is a sequence of instructions permanently embedded in the ROM of the smart card. Unlike the familiar PC DOS or Windows operating systems, COS instructions are not dependent on any particular application, but are frequently used by most applications.

Distributed Operating System:

The development of networked computers that could be linked and could communicate with each other gave rise to distributed computing. A distributed operating system manages a group of independent computers and makes them appear to be a single computer. Distributed systems depend on networking for their functionality. Distributed systems are able to share computational tasks and provide a rich set of features to users. Distributed computations are carried out on more than one machine.

Batch Operating System:

This type of operating system requires the program, data and appropriate system commands to be submitted together in the form of a batch job. It allows little or no interaction between users and executing programs, but offers greater potential for resource utilization. Programs that do not require interaction and programs with long execution times may be served well by a batch operating system. Example: payroll, statistical analysis, forecasting.

Time Sharing Operating System

These types of operating systems share a quantum of time among many users, so that they can handle multiple tasks simultaneously. They are also known as multiprogramming/multiuser systems. For example, mainframe computers that have many users logged in together are time sharing systems.

Operating System Services

The operating system provides a number of services to application programs and users. Applications access these services through the application program interface (API). The operating system also provides certain services to programs and to the users of those programs for accessing various system resources. Some common services provided by the operating system are:

1. Program Execution

This is the first and foremost service that is provided by the operating system to users. The main
purpose of a computer system is to allow the user to execute programs. So the operating system
provides an environment where the user can conveniently run programs. The operating system must provide the means for loading a program into memory, executing it, and ending its execution, either normally or abnormally.

2. Input/Output operations

When a program is in the running state it needs to perform input and output operations. It is the responsibility of the operating system to provide input data from input devices and to send output to output devices or data files.

3. File system manipulation

The user of the computing system may definitely need to create the files on secondary storage to
record their work permanently for future use or delete the file that is already in existence. Operating
system must provide a mechanism that carefully handles all these file manipulation operations on the
respective devices.

4. Communication

There are some instances where processes need to communicate with each other to exchange
information. The program may be running on a single machine or may be run on several machines.
This phenomenon is known as “Inter process communication”. To maintain data security and
integrity, an operating system must handle inter process communication.

5. Error detection and recovery

Errors may occur during the execution of a program, such as divide-by-zero or memory access violation errors. The operating system constantly monitors the system for detecting errors.

6. Resource sharing and allocation

When there are several users in the system or there are many programs competing for limited
resources, then the operating system guarantees optimal allocation and de-allocation of resources to
ensure increased throughput.

7. Accounting

The operating system keeps track of number of system resources used by user in multiuser
environment. It also keeps track of duration for which resources are used by user.

8. System protection

The operating system provides protection by ensuring that access to important data is restricted to authenticated users. It also ensures that user processes are isolated from each other. Along with the above-mentioned services, the operating system should also provide the following services:

• User Interface (UI): A UI is the part of operating system that allows a user to enter and receive
information. UI can be Command Based (CLI) using a keyboard or Graphical User Interface (GUI).

• Memory Management (MM): A MM is the module of operating system which is responsible for
controlling and coordinating memory units, memory blocks for various running programs and
optimizing overall memory utilization.

• Device Management: The Device Management module controls the peripheral devices in a
computer. The OS interacts with these devices using device drivers which are small programs that
use system calls to interact with hardware devices.

• Process Management: This is the most important module of an operating system. The OS is
responsible for allocation of resources to processes, enable sharing/exchange of information and
synchronized interaction among the processes.

System Calls / Monitor Calls

The way that programs talk to the operating system is via "system calls". In short, system calls provide the interface between a process and the operating system. These system calls are generally available as routines written in C and C++, although certain low-level tasks (e.g. tasks where hardware must be accessed directly) may need to be written using assembly language instructions.

A system call is invoked to perform certain privileged operations by switching context from user
mode to kernel/supervisor mode. In user mode, the operating system acts as a general purpose service provider and allows users to execute their programs (no privileged instructions are allowed to execute in this mode), while in supervisor mode the operating system is allowed to execute privileged instructions as well.

On Unix, Unix-like and other POSIX-compatible operating systems, popular system calls are open,
read, write, close, wait, fork, exit and kill. Many of today's operating systems have hundreds of
system calls.
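As a small illustration (the file names are placeholders, not taken from the notes), the C program below invokes the open, read, write and close system calls named above to copy up to 4 KB from one file to another on a UNIX-like system.

/* Copies up to 4 KB from in.txt to out.txt using system calls directly. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    int in = open("in.txt", O_RDONLY);                              /* open an existing file for reading */
    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create/truncate the output file */
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(in, buf, sizeof(buf));   /* read: transfer data from the file into memory */
    if (n > 0)
        write(out, buf, (size_t)n);           /* write: transfer data from memory to the file */

    close(in);                                /* close: release the file descriptors */
    close(out);
    return 0;
}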

Table: Popular system calls

Introduction to Process

Originally, computers could run only one program at a time, and that program had total access to all of the computer's resources. Modern systems run many programs concurrently, with the operating system running its own tasks to do its own work. Multitasking and multiuser systems need to distinguish between the different programs being executed by the system. This is accomplished with the help of a process. Much of the complexity of the operating system arises from the need for multiple processes to share the hardware at the same time: CPU sharing or scheduling, process synchronization mechanisms and a deadlock strategy. The process manager plays a major role here
and hides all these complexities from the users; this is called the process abstraction. In addition, the process manager is also responsible for the operating system's protection and security.

Program vs Process:

1. A program resides on the disk. It is compiled or tested but still does not compete for the CPU and other resources. Once a user wants to execute the program, it is located on the disk and loaded into main memory at that time.

2. A program in execution is a “Process”.

3. A process is an abstract model of a sequential program in execution. It forms a schedulable unit of work. It is an identifiable object in the system having the following components.

• The object program (or code) to be executed.

• The data on which the program will execute.

• The status of the process execution.

• A dedicated address space in the memory.

The operating system keeps its processes separate and allocates the resources they need so that they are less likely to interfere with each other and cause problems like deadlock and thrashing. The operating system may also provide mechanisms for interprocess communication to enable processes to interact in safe and predictable ways.

There are several definitions of a process:

• A program in execution.

• An asynchronous activity.

• The “animated spirit” of a procedure.

• The “locus of control” of a procedure in execution.

• That entity to which processors are assigned.

• The “dispatchable” unit.

A program is a passive entity, while the process is an active entity.

The components of a process are:

• The object program to be executed (called the program text in UNIX).

• The data on which the program will execute.

• Resources required by the program (for example, files).

• The status of the process’s execution.

Structure of a Process

A Process Includes:

• Stack

 Local variables

 Function Parameters

• Heap

 Dynamically allocated memory

• Data

 Global variables

• Text

 Program Instructions

As shown in the figure, a process is more than the program code (text section). It also includes the current activity, represented by the program counter and the contents of the processor's registers. It has a stack which holds temporary data such as function parameters, local variables, return addresses, etc. The heap is memory that is dynamically allocated during process run time. The data section contains the global variables.
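The tiny C program below is a sketch that marks which section of the process each kind of variable lives in; the names are arbitrary and chosen only for illustration.

/* Marks the text, data, stack and heap sections of a running process. */
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;               /* data section: global variable */

void work(int param)                  /* the compiled code of work() sits in the text section */
{
    int local = param * 2;            /* stack: parameter and local variable */
    int *dyn = malloc(sizeof(int));   /* heap: dynamically allocated during run time */
    *dyn = local + global_counter;
    printf("result = %d\n", *dyn);
    free(dyn);                        /* heap memory must be released explicitly */
}

int main(void)
{
    global_counter = 5;
    work(10);                         /* the return address is pushed on the stack */
    return 0;
}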

Some terms which are commonly used in process management:

1. Task: On some systems processes are called as tasks. The task is like an envelope for the program.

2. Job: A job is the unit of work that a computer operator gives to the OS. Batch processes are often
called as jobs.

3. Thread: Threads are sometimes called 'lightweight processes'. The difference is that a thread by itself is not enough to execute a whole program; a thread is a kind of stripped down process, essentially just a unit of CPU assignment. Several threads contribute to a single task, therefore the information about one process or task is shared by many threads. Each task must have at least one thread in order to do any work. An example of a thread is a small activity inside an application, such as Microsoft Word's automatic spelling checker thread.

4. CPU Burst Time: A period of uninterrupted CPU activity.

5. Input/Output Burst Time: A period of uninterrupted Input/Output activity.

6. Context: There are certain things that the operating system keeps track of as a process is running.
The information that the operating system is keeping track of is referred to as the process context.

Process Operation

1. Process model

2. Process creation

3. Process termination

Process Model

In this model all the runnable software is organized into a number of sequential processes. Conceptually, each process has its own virtual CPU. In reality the CPU switches back and forth from process to process. The model provides a view of a collection of processes running in parallel while the CPU switches from one process to another. This rapid back and forth switching is called multiprogramming.

Figure (a) Multiprogramming of four programs. (b) Conceptual model of four independent,
sequential processes. (c) Only one program is active at once

Example: Program-Recipe

Input – Flour, eggs, sugar etc. (ingredients).

Output – Cake.

In this case, the process is the activity consisting of our baker reading the recipe, fetching the
ingredients and baking the cake. The process state is stored in the state field of the process descriptor. According to the five-state process model, a process can be in one of the following five states: New, Ready, Running, Blocked, Exit (a short code sketch of these states and their transitions is given after the list below).

• New : A process has been created but has not yet been admitted to the pool of executable
processes.

• Ready : Processes that are prepared to run if given an opportunity. That is, they are not waiting on
anything except the CPU availability.

• Running : The process that is currently being executed. (Assume single processor for simplicity.)

• Blocked : A process that cannot execute until a specified event such as an IO completion occurs.

• Exit : A process that has been released by OS either after normal termination or after abnormal
termination (error).
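As a compact sketch (illustrative only; real kernels encode states quite differently), the five states listed above and the transitions allowed between them can be written in C as an enumeration and a checking function:

/* Five-state process model: states and the transitions allowed between them. */
#include <stdio.h>

enum proc_state { ST_NEW, ST_READY, ST_RUNNING, ST_BLOCKED, ST_EXIT };

/* Returns 1 if the five-state model allows the transition, 0 otherwise. */
int valid_transition(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case ST_NEW:     return to == ST_READY;                  /* admit */
    case ST_READY:   return to == ST_RUNNING;                /* dispatch */
    case ST_RUNNING: return to == ST_READY                   /* preempt / time slice over */
                         || to == ST_BLOCKED                 /* wait for an event */
                         || to == ST_EXIT;                   /* terminate */
    case ST_BLOCKED: return to == ST_READY;                  /* awaited event occurs */
    default:         return 0;                               /* Exit has no outgoing transitions */
    }
}

int main(void)
{
    printf("Running -> Blocked allowed? %d\n", valid_transition(ST_RUNNING, ST_BLOCKED));
    printf("Blocked -> Running allowed? %d\n", valid_transition(ST_BLOCKED, ST_RUNNING));
    return 0;
}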

Process Creation

A process may create several new processes, via a create-process () system call, during the course of
its execution. The creating process is called a parent process, and the new processes are called the
children of that process. Each of these new processes may in turn create other processes, forming a
tree structure of processes. Most operating systems (including UNIX and the Windows family of
operating systems) identify processes according to a unique process identifier (or pid), which is
typically an integer number.

In general, a process will need certain resources (CPU time, memory, files, I/O devices) to
accomplish its task. When a process creates a subprocess, that subprocess may be able to obtain its
resources directly from the operating system, or it may be constrained to a subset of the resources of
the parent process. The parent may have to partition its resources among its children, or it may be
able to share some resources (such as memory or files) among several of its children. Restricting a
child process to a subset of the parent’s resources prevents any process from overloading the system
by creating too many subprocesses.

There are four principal events that cause processes to be created:

1. System initialization.

2. Execution of a process creation system call by a running process.

3. A user request to create a new process.

4. Initiation of a batch job.

When an operating system is booted, typically several processes are created. Some of these are foreground processes, while others are background processes, which are not associated with a particular user but instead have some specific function.

In UNIX: To create a new process, the fork() system call is used. This call creates an exact clone of the calling process.
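A minimal sketch of fork() on a UNIX-like system: after the call there are two processes running the same program, distinguished only by the value fork() returned.

/* fork() returns 0 in the child and the child's pid in the parent. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create an exact clone of the calling process */

    if (pid < 0) {
        perror("fork");                 /* creation failed (e.g., process limit reached) */
        return 1;
    } else if (pid == 0) {
        printf("child : pid=%d parent=%d\n", getpid(), getppid());
    } else {
        printf("parent: pid=%d child=%d\n", getpid(), pid);
        wait(NULL);                     /* wait until the child terminates */
    }
    return 0;
}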

In Windows: A single Win32 function call, CreateProcess(), handles both process creation and loading of the correct program into the new process.

Process Termination

After a process has been created it starts running. Sooner or later the new process will terminate,
usually due to the following conditions:

1. Normal exit (Voluntary): Most processes terminate because they have done their work.

In UNIX – exit is the call used for the normal exit of a process.

In Windows – ExitProcess.

2. Error exit: A reason for termination is that the process discovers an error.

If a user types the command gcc ramesh.c to compile the program ramesh.c and no such file exists, the compiler simply exits.

3. Fatal error: Termination of the process is due to an error caused by the process, often by a program bug.

Examples include executing an illegal instruction, referencing non-existent memory, or dividing by zero.

4. Killed by another process: Another reason a process might terminate is that a process executes a
system call telling the operating system to kill some other process.

Commands:

In UNIX – Kill.

In Windows – TerminateProcess.
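The sketch below shows the normal-exit path on a UNIX-like system: a child terminates voluntarily with an exit status and the parent collects it with waitpid(); the status value 7 is arbitrary. A process could instead be terminated from outside with kill(pid, SIGTERM).

/* Normal exit of a child and collection of its exit status by the parent. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        printf("child %d exiting normally\n", getpid());
        exit(7);                                   /* normal (voluntary) exit with status 7 */
    }

    int status;
    waitpid(pid, &status, 0);                      /* parent waits for the child to terminate */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}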

2.4 Process States and Transitions

In order to determine whether the process should be executed or not, the process scheduler associates a 'state' with each process. The current activity of a process is known as its state. The state can be an integer variable, which saves the scheduler time in deciding what to do with the process.

Broadly speaking the state of process can be one of the following:

Figure: Five state model for process execution (State Transition Diagram)

States of the process:

• New: Process being created.

• Running: Instruction are being executed.

• Blocked: Process is waiting for some event to occur (such as an input/output completion or the reception of a signal).

• Ready: The process is waiting to be assigned to a processor.

• Terminated: The process has finished execution.

Only one process can be running on any given processor at any instant.

A new process is created to execute a program. This event occurs because of various reasons listed
in PROCESS CREATION.

The operating system will move a process from the NEW state to the READY state when it is
prepared to take on an additional process. Most systems set some limit based on the number of
existing processes or the amount of virtual memory committed to existing processes. This limit
assures that there are not so many active processes as to degrade performance.

When it is time to select a process to run, the operating system chooses one of the processes in the
ready state. This is the job of the scheduler or dispatcher using some predefined algorithm, such as
FCFS, SJN, priority scheduling, SRT or round robin, to determine which process will get the CPU,
when, and for how long.

The process is preempted because the operating system decides to schedule some other process. This
transition occurs either because a higher priority process becomes ready, or because the time slice of
the process elapses.

The program being executed makes a system call to indicate that it wishes to wait until some
resource request made by it is satisfied, or until a specific event occurs in the system.

Five major causes of blocking are:

• Process requests an I/O operation.

• Process requests memory or some other resource.

• Process wishes to wait for a specified interval of time.

• Process waits for message from another process.

• Process wishes to wait for some action by another process.

Execution of the program is completed or terminated. Five primary reasons for process
termination are as follows:

• Self termination: The program being executed either completes its task or realizes that it cannot
execute meaningfully and makes a ‘terminate me’ system call.

• Termination by a parent: A process makes a ‘terminate Pi’ system call to terminate a child process
Pi, when it finds the execution of the child process is no longer necessary or meaningful.

• Exceeding resource utilization: An OS may limit the resources that a process may consume. A
process exceeding a resource limit would be terminated by the kernel.

• Abnormal conditions during execution: The kernel cancels a process if an abnormal condition
arises in the program being executed.

• Incorrect interaction with other processes: The kernel may cancel a process for incorrect interaction with other processes.

A request made by the process is satisfied or an event for which it was waiting occurs.

Process Table and Process Control Block (PCB)

The operating system typically performs several operations when it creates a process. First it must be
able to identify each process; therefore it assigns a process identification number (PID) to each
process. Next the operating system creates the PCB/Process Descriptor which maintains information
about the process that the operating system needs.

If the operating system supports multiprogramming, then it needs to keep track of all the processes.
For each process, its process control block (PCB) is used to track the process’s execution status. This
block of memory contains information about process state, program counter, stack pointer, status of
opened files, scheduling algorithms etc. All this information is required and must be saved when the process is switched from one state to another.

1. Pointer: This is the stack pointer which is required to be saved when the process is switched from
one state to another to retain the current position of the process.

2. Process State: As discussed above, a process can appear in any of the five states. This field stores
that respective state of process.

3. Process Number: Every process when newly created is assigned with a unique process identifier
called ‘processid’ or usually ‘pid’.

4. Program Counter: This field stores the counter which contains the address of the next instruction
that is to be executed for this process.

5. Registers: These are the CPU registers which includes: Accumulator, Base Registers, Index
registers, stack pointers and general purpose registers.

6. Memory Limits: This field contains the information about memory management system used by
operating system. This may include the page tables, segment tables etc.

7. Open Files List: This information includes the list of files opened for a process. It also includes
the list of all I/O devices that are required for those files.

8. Miscellaneous Accounting and Status Data: This field includes the information about the amount
of CPU used, time constraints, jobs or process numbers etc.

The PCB also stores the register contents (called the execution context) of the processor from when the process was blocked from running. This execution context enables the operating system to restore a process's execution context when the process returns to the running state. When one process is running on a CPU and another needs to run on the same CPU, we need to switch between the processes. This is called context switching (or a "state" save and "state" restore, or a CPU switch). When the process makes a transition from one state to another, the operating system must update the information in the process's PCB. The operating system typically maintains pointers to each process's PCB in a process table so that it can access the PCB quickly.
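As a rough sketch only (the field names and sizes are invented for illustration and do not come from any real kernel), a PCB could be declared in C along these lines:

/* Hypothetical PCB layout; real operating systems keep far richer structures. */
#include <stdio.h>

#define MAX_OPEN_FILES 16

enum pstate { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_TERMINATED };

struct pcb {
    int            pid;                          /* process number */
    enum pstate    state;                        /* current process state */
    unsigned long  program_counter;              /* address of the next instruction */
    unsigned long  registers[16];                /* saved CPU registers (execution context) */
    unsigned long  stack_pointer;                /* saved stack pointer */
    void          *page_table;                   /* memory-management information */
    int            open_files[MAX_OPEN_FILES];   /* descriptors of opened files */
    unsigned long  cpu_time_used;                /* accounting information */
    struct pcb    *next;                         /* link used by the process table and queues */
};

int main(void)
{
    struct pcb p = { .pid = 42, .state = P_READY };  /* one entry as it might sit in the process table */
    printf("pid %d is in state %d\n", p.pid, p.state);
    return 0;
}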

Figure: Process table and Process control block (PCB)

The PCB maintains the context for the process when it is not running. The context includes CPU registers, memory management information, etc. When a process 'loses' the CPU, the context needs to be saved so that the process can be resumed later.

Figure: CPU Switch From Process to Process

Threads

When computers were first invented, they were capable of executing one program at a time. Thus once a program was completely executed, they picked the second one to execute, and so on. With time, the concept of timesharing was developed, whereby each program was given a specific amount of processor time and when its time was over the second program standing in the queue was called upon (this is called Multitasking). Each running program (called the process) had its own memory space, its own stack, heap and its own set of variables. One process could spawn another process, but once that occurred the two behaved independently of each other. Then programs wanted to do more than one thing at the same time (this is called Multithreading). For example, a browser might want to download one file in one window, while it is trying to upload another and print some other file. This ability of a program to do multiple things simultaneously is implemented through threads.

A thread is a lightweight process (LWP) that may be connected to other threads to perform concurrent parallel processing. This phenomenon of connecting or communicating is known as Multithreading. A thread is nothing but a collection of instructions for a specific task, contributing its part to a multithreading process.

Some examples of threads are:

• Win32 threads (in Windows)

• C threads (in OS X)

• POSIX threads (Solaris, Linux etc.)

Multithreading

Threads, sometimes called as Light Weight Processes (LWPs) are independent part of a process. A
thread is lightweight, since the operating system doesn’t have to give it its own memory space. A
thread shares memory with the other threads in the process.

A process is the integration of many parallel executing threads which enables us to split a program
into logically separate pieces. All these independent pieces do not interact with each other until
required.

A task is multithreaded if it is composed of several independent sub-processes which do not work on common data, and each of those pieces runs in parallel to the others.

As explained above, Multitasking is the ability of an operating system to execute more than one
program simultaneously. Though we say so, in reality no two programs on a single processor machine can be executed at exactly the same time. The CPU switches from one program to another so quickly that it appears as if all the programs are executing in parallel. Multithreading is the ability of an
operating system to execute the different parts of the program, called threads, simultaneously. The
program has to be designed well so that the different threads do not interfere with each other. This
concept helps to create scalable applications because you can add threads as and when needed.
Individual programs are all isolated from each other in terms of their memory and data, but
individual threads are not as they all share the same memory and data variables.

Thread has a program counter that keeps track of which instruction to execute next. It has registers,
which hold its current working variables. It has a stack, which contains the execution history, with
one frame for each procedure called but not yet returned from.

Although a thread must execute in some process, the thread and its process are different concepts
and can be treated separately. Processes are used to group resources together, threads are the entities
scheduled for execution on the CPU as shown in figure.

Each process provides the resources needed to execute a program. A process has a private virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at

least one thread of execution. Each process is started with a single thread, often called the primary
thread, but can create additional threads.

A thread is the entity within a process that can be scheduled for execution. All threads of a process
share its virtual address space and system resources. In addition, each thread maintains exception
handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of
structures. The thread context includes the thread’s set of machine registers, the kernel stack, a
thread environment block, and a user stack in the address space of the process.

We take the example of a kitchen. If the kitchen is a process, then the household appliances that are used in the kitchen are the threads. In order to work, a thread needs power. The power sockets are like kernel threads or CPUs. Since there are more appliances than power points, we need to schedule the appliances as per their individual power requirements. When one part of the program needs to share data with another, they must be synchronized or be made to wait for each other.

You would have all noticed that the database or a web server interacts with a number of users
simultaneously. How are they able to do that? It is possible because they maintain a separate thread
for each user and hence can maintain the state of all the users. If the program is running as one
sequence then it is possible that failure in some part of the program will disrupt the functioning of
the entire program. But, if the different tasks of the program are in separate threads then even if
some part of the program fails, the other threads can execute independently and will not halt the
entire program.

Multithreading allows a server, such as a web server, to serve requests from several users
concurrently. Thus, we can avoid requests being left unserved while the server is busy processing another request. One simple solution to that problem is one thread that puts every incoming request in a queue, and a second thread that processes the requests one by one in a first-come first-served manner. However, if the processing time is very long for some requests (such as large file requests or requests from users with slow network access data rates), this approach would result in long response times also for requests that do not require long processing time, since they may have to wait

in queue. One thread per request would reduce the response time substantially for many users and
may reduce the CPU idle time and increase the utilization of CPU and network capacity.

Multi-threaded processes have the advantage over multi-process systems that they can perform
several tasks concurrently without the extra overhead needed to create a new process.

When multiple threads in a single process execute concurrently, this is called "multithreading". When there is a one-to-one correspondence between a thread and a process (i.e. a process contains a single thread) then this is called "single threading", or simply threading. In the figure two processes are shown; the process in figure (a) is a single threaded process having a single thread of execution. This thread shares the address space allocated to the process and various other parameters like code, data and files.

Figure 2 (a) Single-threaded Process and (b) Multithreaded Process


Thread also has some private data of its own like registers and a separate stack. Whereas the process
shown in figure(b) is a process having multiple threads (3 threads) running independently. These
threads share the address space, code, data and files. Also, each thread has its separate register and
stack as shown in figure.
Advantages of using Threads
Subsequent trends in software and hardware design indicated that systems could benefit from
multiple threads of execution per process. Some advantageous factors are:

1. Software Design: Due to the modularity of modern software design, many of today's applications contain segments and separate modules that can be executed independently. Separating these modules into individual threads can improve application performance.
2. Performance: In a multi-threaded application, threads can share a processor so that multiple tasks
are performed in parallel. Concurrent parallel execution can significantly reduce the time required
for execution.
3. Cooperation: Many applications rely on independent components to communicate and
synchronize activities. Before threads, these components were executed as multiple processes that
established interprocess communication channels via the kernel. Performance with a thread is often
much better than that of a process because a process’s threads can communicate using their shared
address space.
Threads and Processes
Since, thread can access every memory address within the process address space, one thread can
read, write, or even completely wipe out another thread's stack. In addition to sharing an address space, all the threads share the same set of open files, child processes, alarms, signals, etc. In figure (a) we see three traditional processes, each with one thread. Each process has its own address space and a single thread of control, and each operates in a different address space. In contrast, in figure (b) we see a single process with three threads, all three of which share the same address space.

Figure (a) Three processes each with one thread


(b) One process with three threads
The CPU switches rapidly back and forth among the threads providing the illusion that the threads
are running in parallel. We can always run a separate program using process mechanism to
implement a task. The reason to use threads, is that they are cheaper than the normal processes and

they can be scheduled for execution in a user-dependent way, with less overhead. Threads are
cheaper than a whole process because:
1. They do not require the full set of resources to be implemented.
2. The PCBs for threads are much smaller, since each thread has only one stack and some registers.
3. It has no open files lists or resources lists, no accounting structures to update.
4. CPU time for a process reduces while using threads.
All of these resources are shared by the thread within the process. Threads can be assigned priorities
as well for their execution. Likewise a process, thread can also be in any of the one state: running,
blocked, ready or terminated.
It is important to realize that each thread has its own stack. When a thread has finished its work, it
can exit by calling a library procedure, say, thread_exit. It then vanishes and is no longer
schedulable. One thread can wait for a (specific) thread to exit by calling a procedure, for example,
thread_wait. Another common thread call is thread_yield, which allows a thread to voluntarily give
up the CPU to let another thread run.
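The calls named above are generic library names; in the POSIX thread library they correspond roughly to pthread_exit, pthread_join and sched_yield. A minimal sketch with two POSIX threads (compile with -pthread):

/* Two POSIX threads running the same function in one process. */
#include <stdio.h>
#include <pthread.h>

void *worker(void *arg)
{
    int id = *(int *)arg;              /* each thread has its own stack, so 'id' is private to it */
    printf("thread %d running\n", id);
    pthread_exit(NULL);                /* the library's equivalent of thread_exit */
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);   /* create two threads inside this process */
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);            /* the equivalent of thread_wait: block until t1 exits */
    pthread_join(t2, NULL);
    return 0;
}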

Table: Comparison between process and thread

Type of Threads
1. User Level Threads
User level threads perform threading operations in user space, i.e. these threads are created by a runtime library and cannot execute privileged instructions or access kernel primitives directly. These threads are transparent to the OS, i.e. the OS dispatches the multi-threaded process as a unit. This is therefore also called many-to-one thread mapping, because the OS maps all threads to a single execution context. The scheduling is not done by the OS because the kernel is not aware of the multiple threads of a process; user-level libraries perform the thread scheduling.
Advantages:
1. Does not require OS support to run therefore they are more portable.
2. Scheduling can be easily modifiable just by modifying the user level library.

3. Low overhead because interrupt processing and kernel invocation are not required.
Disadvantages:
1. Do not scale well on multiprocessor system.
2. Entire process blocks when any of its threads requests a blocking I/O operation.
3. After a user level thread blocks, the OS may dispatch a new process which may contain only low priority threads. Therefore user level threads also do not support system wide priority scheduling, so they can rarely be used in real time systems.
2. Kernel Level Threads
Kernel level threads attempt to address the limitations of user level threads by mapping each thread
to its own execution context. It is also called as one to one thread mapping. Each user thread is
provided with a kernel thread that OS can dispatch. Each kernel level thread also stores Thread
Specific Data (TSD) such as register contents, thread ID etc. for each thread in system.
Advantages
1. Improved performance due to concurrent execution.
2. Kernel manages the thread so priority scheduling can be done.
3. Scale well on multiprocessors.
Limitations
1. Less efficient due to scheduling and synchronization operations invoked by the kernel which
increases overhead.
2. Software that employ kernel level thread is often platform dependent and therefore portability is
limited.
3. Consume more resources than user level thread.

Table: Type of threads

Multithreading Models
Multithreading is the ability of an operating system to concurrently run multiple threads. According
to the number of user and kernel level threads a relationship can be defined between them.

1. Many-to-One Model

The many-to-one model maps many user level threads to one kernel level thread. Thread management is done by the thread library in user space. When a thread makes a blocking system call, the entire process is blocked. Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on a multiprocessor.
2. One-to-One Model
This model maps each user thread to a kernel thread. It allows another thread to run when a thread
makes a blocking system call, it also allows multiple threads to run in parallel on multiprocessor.

The only drawback to this model is that creating a user thread requires creating the corresponding
kernel thread and this overhead can burden the performance of an application. Therefore most
implementations of this model restrict the number of threads supported by the system.
Example: Linux, along with the family of Windows operating systems use this model.

3. Many-to-Many Model
This model maps many user threads to a smaller or equal number of kernel threads. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

One variation to the many-to-many model is that it maps many user threads to a smaller or equal
number of kernel threads and also allows a user thread to be bound to a kernel thread. This variation
is known as two-level model.
Example: operating systems such as IRIX, HP-UX, etc uses this model.

INTER PROCESS COMMUNICATION


To be very abstract, capability of an operating system that allows one process to communicate with
another process, running simultaneously on one or more computers is Inter Process Communication
(IPC). IPC enables controlling and sharing of data between several applications without interfering
with one another.
Inter Process Communication (IPC) is a set of programming interfaces that allows a programmer to
coordinate activities among different programs/processes that can run concurrently in an operating
system. This allows a program to handle many user requests at the same time. Since even a single

user request may result in multiple processes running in the operating system on the user’s behalf,
the processes need to communicate with each other and the IPC interfaces make this possible.
IPC is required in all multiprocessing systems, but it is not generally supported by single-process operating systems such as MS-DOS. There are two fundamental models of inter process communication which allow processes to exchange data and information.
Shared Memory Technique
Interprocess communication using the shared memory technique requires a region of shared memory which resides in the address space of the process creating the shared memory segment. In the shared memory technique, the processes which share information agree to remove the restriction on accessing each other's memory. Other processes that wish to communicate using this shared memory segment must attach it to their address space, and they can then exchange information by reading and writing data in the shared areas. One can imagine some part of memory being set aside by the operating system for use by different processes. The System V IPC interface describes the use of the shared memory mechanism as consisting of steps taken in order:
• The program first initializes the shared memory area. This is done by generating a shared memory identifier using a system call.
• Once the segment identifier is obtained, the shared memory segment has to be attached to some address in the process's address space.
• On success of both system calls, the program proceeds by invoking a function which, for example, reads keyboard inputs other than '\n' (the "Enter" key) and copies them into the shared memory. If the keyboard input is an alphanumeric character, the program stops reading inputs from the keyboard and the process terminates.
• Detaching is done automatically when the process exits, but the shared memory segment is not destroyed. This has to be done by invoking a system call; otherwise, the shared memory segment will persist in memory or in the swap space.
Let us take an example of the Producer Consumer problem to illustrate the concept of shared memory systems. A Producer process produces information that is consumed by the Consumer process.
Consider three people A, B and C. A produces some data which is consumed by B. After processing the data, B supplies it to C. Now consider a situation where B is consuming very slowly while C is consuming very fast. At some instant of time, C will finish its work and will have to sit idle. This leads to the Producer Consumer problem.

The solution to this problem is shared memory. To allow the Producer and Consumer processes to run concurrently, we must provide a buffer of items that can be filled by the producer and emptied by the consumer. This buffer resides in the region of memory that is shared by the producer and consumer processes.
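The sequence of steps listed above can be illustrated with a small sketch. This is only an illustration, assuming a System V style interface (shmget, shmat, shmdt); the key value, segment size and the text written are arbitrary placeholders, not part of the syllabus material.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* 1. Initialise: create (or locate) a 1 KB shared memory segment.
          The key 0x1234 is an arbitrary illustrative value.            */
    int shmid = shmget((key_t)0x1234, 1024, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* 2. Attach the segment to this process's address space.           */
    char *shared = (char *)shmat(shmid, NULL, 0);
    if (shared == (char *)-1) { perror("shmat"); return 1; }

    /* 3. Exchange information by writing into the shared area; a
          cooperating process that attaches the same key can read it.   */
    strcpy(shared, "data produced for the consumer");

    /* 4. Detach; the segment itself persists until explicitly removed. */
    shmdt(shared);
    /* shmctl(shmid, IPC_RMID, NULL); would destroy the segment          */
    return 0;
}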
Message Passing Technique
In the message passing technique, communication takes place by means of messages exchanged between the cooperating processes. This mechanism provides processes with the facility to communicate and synchronize their actions without sharing the same address space. It is particularly useful in distributed environments, where the communicating processes reside on different computers connected through a network. The most popular example of message passing is a chat program used on the World Wide Web, where different users communicate with each other by exchanging messages.

A message passing facility provides at least two operations: send message and receive message. If
two users want to communicate with each other via message passing mechanism, they must establish
a communication link between them. The link can be implemented in any of the three given ways:
1. Direct or Indirect Communication
Processes must have a way to communicate with each other. They can use either direct or indirect communication.
Under direct communication, each process that wants to communicate must explicitly mention the
name of the recipient or the sender of the communication.
For example:
Send (p, message) – send a message to p.
Receive (q, message) – Receive a message from q.
With indirect communication, messages are sent to and received from mailboxes or ports. Each mailbox has a unique identification.
For example:
Send (a, message) – send a message to mailbox ‘a’.
Receive (a, message) – receive a message from mailbox ‘a’.
2. Synchronous or Asynchronous Communication
Message passing may be done through either a blocking or a non-blocking mechanism, known respectively as synchronous and asynchronous communication (a small illustrative sketch follows the list below).
 Blocking send: The sending process is blocked until the message is received by the mailbox.
 Non blocking send: The sending process sends the message and resumes operation.
 Blocking receive: The receiver blocks until a message is available.
 Non blocking receive: The receiver retrieves either valid message or a null.
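As a small illustration of blocking send and receive (not taken from the syllabus text), an ordinary UNIX pipe can play the role of the communication link: write places a message into the link and read blocks until a message is available. The process roles and the message contents below are assumptions.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                                 /* fd[0] = read end, fd[1] = write end     */

    if (fork() == 0) {                        /* child plays the sender                  */
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);   /* send (blocks only if the pipe is full)  */
        _exit(0);
    }

    read(fd[0], buf, sizeof(buf));            /* receive: blocks until a message arrives */
    printf("received: %s\n", buf);
    return 0;
}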

Synchronization
A process is cooperating when it can affect or be affected by the execution of other processes. This cooperation can be achieved through either of the two inter process communication methodologies: shared memory or message passing. Process synchronization is required when one process must wait for another to complete some operation before proceeding. For example, one process (called a writer) may be writing data to a certain main memory area, while another process (a reader) may be reading data from that area and sending it to the printer. The reader and writer must be synchronized so that the writer does not overwrite data that the reader has not yet consumed.

Race Conditions
In operating systems, processes that are working together share some common storage (main
memory, file etc.) that each process can read and write.
A race condition is defined as a situation in which multiple threads or processes read and write a shared data item and the final result depends on the order of execution. Only one thread at a time should be allowed to examine and update the shared variable.
Consider the example of two threads that access a shared variable at the same time. The first thread reads the variable, and the second thread reads the same value from the variable. Then both threads perform their operations on the value, and whichever thread writes its value last to the shared variable wins: its value is preserved, because it overwrites the value that the other thread wrote.
Suppose there exists a variable A whose value is initially 100. Two processes P0 and P1 access this common variable as follows:
P0:
Read A;
A=A–100;
Write A;
P1:
Read A;
A=A+200;
Write A;
P0 and P1 will both read the value of A as 100 and try to update it. The execution of these two processes is expected to change the value of A to 200, which is what happens if P0 and P1 execute serially, e.g. P0 followed by P1. If, however, their accesses are interleaved (P0, P1, P0, and so on), the result can be arbitrary and is sometimes wrong.
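One possible interleaving (an illustrative trace, assuming P0 is preempted immediately after its read) makes the lost update concrete:

Step   P0                      P1                      A in memory
1      Read A (gets 100)                               100
2                              Read A (gets 100)       100
3      A = A - 100 (= 0)                               100
4      Write A                                         0
5                              A = A + 200 (= 300)     0
6                              Write A                 300

The final value of A is 300 instead of the expected 200 (with the opposite write order it would be 0); P0's update is simply lost.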
When all the individual functions in a process are properly ordered, all the functions execute successfully and in a timely manner. However, if the sequence of operations is thrown out of balance, a bottleneck is created. In the worst case it becomes impossible to execute the process sequentially at all: if the system needs to process the fifth function in the sequence before the first and second functions can be completed, the entire sequence must be aborted and re-established in the proper order. One common example of a race condition is the processing of data. If a system receives commands to read existing data while writing new data, this can lead to a conflict that causes the system to shut down in some manner. The system may display an error message if the amount of data being processed

placed an undue strain on available resources or the system may simply shut down. When this
happens, it is usually a good idea to reboot the system and begin the sequence again. If the amount
of data being processed is considerable, it may be better to allow the assimilation of the new data to
be completed before attempting to read any of the currently stored data. Many systems avoid the
potential for a race condition by setting priorities in the operational protocols. The priorities are
established to function well within the capabilities of the system and thus limit the ability of a race
condition to develop.
Exclusive and Non-Exclusive Locks
To control both read and write access to files, a system can use exclusive and non-exclusive locks. If a user wishes to read a file, a non-exclusive lock is used. Other users can also obtain non-exclusive locks to read the file simultaneously, but while a non-exclusive lock is held on a file, no user can write to that file.
Similarly, to write to a file, an exclusive lock is needed. While an exclusive lock is held, no other user can read or write that file.
The Critical Section Problem
A critical section is a part of a program that requires exclusive access to shared data. Each process has a segment of code, called a critical region (or critical section), in which the process may change common variables, update tables, write to a file, and so on. The most important requirement is that when one process is executing in its critical region, no other process is allowed to execute in its own critical region; that is, no two processes are ever executing in their critical sections at the same time.
In the past it was possible to implement this by using interrupts: by switching off interrupts, a process can guarantee itself uninterrupted access to shared data. This method, however, has some drawbacks:
1. Masking interrupts can be dangerous, since there is always a possibility that important interrupts may be missed.
2. It generally does not work in a multiprocessor environment, since disabling interrupts on one processor has no effect on the other processors.
Each process must request permission before entering its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section, as shown below.
do {
    entry section
    critical section
    exit section

    remainder section
} while (TRUE);
A solution to critical section problem must satisfy following requirements:
1. Mutual exclusion: Principle of mutual exclusion states that “If a process is executing in its critical
section, then no other processes can be executed in their critical sections”.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: There is a bound or limit on how long a process can be made to wait before it enters its critical section after requesting entry.
4. Fault tolerance: A process running outside its critical region should not block other processes from accessing the critical region.
5. No assumptions may be made about speeds or the number of CPUs.
Mutual Exclusion with Busy Waiting
A process that is about to enter its critical section must first check to see if any other processes are in
their critical sections. There are various proposed solutions for achieving mutual exclusion, so that
while a process is busy updating shared memory in its critical region no other processes will enter
the critical region.

Figure: Mutual exclusion using critical regions


We can achieve mutual exclusion with Busy-Waiting. Busy waiting is a term used in operating
system development and here especially in process synchronization. When two or more processes

want to enter the same critical section, something has to be done to prevent more than one process from entering. Busy waiting, or spinning, is a technique in which a process repeatedly executes a loop of code while waiting for an event to occur, such as keyboard input becoming available or a lock becoming free. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
Most of the early algorithms were based on busy waiting: when a process was not able to enter the critical section, it waited actively until it was allowed in. There are various proposals for achieving mutual exclusion so that, while one process is busy updating shared memory in its critical section, no other process will enter its own critical section and cause trouble.
Disabling Interrupts
As we know, the CPU switches between processes when clock interrupts occur. When a process enters its critical region it disables all interrupts, and it re-enables them just before leaving. While interrupts are disabled, no clock interrupt can occur and the CPU will not switch from one process to another, ensuring that while one process is executing in its critical region no other process can run.
while( true )
{
< disable interrupts >;
< critical region >;
< enable interrupts >;
< remainder >;
}
This approach is not good because the whole system will hang if a process turns interrupts off and never turns them on again, for example because it terminates abnormally after entering its critical section. Also, in a multiprocessor system, disabling interrupts affects only one processor; the other processors continue running and their processes can still enter the critical region.
Lock Variables
Lock is a shared variable whose initial value is 0. To enter the critical region a process must acquire the lock. To acquire the lock, the process first reads the lock variable to determine whether it is set, i.e. whether its value is 1. If the lock variable is set, the process will try to acquire the lock at some future time. If the lock variable is not set (i.e. 0), the process writes to the lock variable to set it, thereby preventing any other process from acquiring the lock or entering the critical section. The process is then able to proceed. When the process has completed, it releases the lock by writing a value to the lock variable to clear it, thereby allowing another process to acquire the lock.

do{
acquire lock
critical section
release lock
remainder section
} while (TRUE);
An acquire failure occurs when the lock has been acquired by a first process and a
second process desires the lock. The second process reads the value of the lock variable,
and determines that the lock is acquired by another process.
The problem with this potential solution is that reading the lock variable, comparing it with 0 and setting it are separate operations: a process may read the lock and find it to be 0, but before it can set the lock to 1, another process is scheduled, runs, sets the lock and enters its critical section. When the first process runs again, it will also set the lock and enter its critical section, so two processes are in their critical sections at the same time, violating mutual exclusion.
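The flawed protocol can be written out as a short sketch; this is only an illustration of why the test and the set must form one indivisible action, and the names used are assumptions, not a recommended implementation.

int lock = 0;                     /* shared lock variable: 0 = free, 1 = taken         */

void enter_region(void)
{
    while (lock == 1)             /* (a) read the lock and test it                      */
        ;                         /*     spin while some other process holds it         */
    /* race window: another process may be scheduled right here,
       also see lock == 0, and fall through as well                                     */
    lock = 1;                     /* (b) set the lock                                   */
}

void leave_region(void)
{
    lock = 0;                     /* clear the lock on leaving the critical region      */
}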

Strict Alternation
Busy waiting means continuously testing a variable until some value appears; a lock that uses busy waiting is called a spin lock. It should normally be avoided, since it wastes CPU time.
/* Process 0 (P0) */
while (true)
{
    while (turn != 0);      /* spin lock: not my turn */
    critical_section();
    turn = 1;               /* release: let process 1 in */
    remainder_section();
}

/* Process 1 (P1) */
while (true)
{
    while (turn != 1);      /* spin lock: not my turn */
    critical_section();
    turn = 0;               /* release: let process 0 in */
    remainder_section();
}
• This algorithm uses an integer variable turn which keeps track of whose turn it is to enter the critical region (CR).
• Initially, process 0 inspects turn, finds it to be 0, and enters its CR.
• Process 1 also finds it to be 0 and therefore sits in a tight loop, continually testing turn to see when it becomes 1.
• When process 0 leaves the CR, it sets turn to 1, allowing process 1 to enter its CR.
• Suppose that process 1 finishes its CR quickly, so that both processes are in their non-critical regions, with turn set to 0.
• Process 0 executes its whole loop quickly, exiting its CR and setting turn to 1.
• At this point turn is 1 and both processes are executing in their non-critical regions.
• Process 0 finishes its non-critical region and goes back to the top of its loop.
• Unfortunately, it is not permitted to enter its CR now: turn is 1 and process 1 is still busy with its non-critical region.
• It hangs in its while loop until process 1 sets turn to 0.
• This algorithm does avoid all races, but it violates the fault-tolerance condition: a process running outside its critical region blocks another process from entering.

Test and Set Lock (TSL) Instruction


Separate hardware support is also available to prevent two processes from entering the critical section simultaneously. Many computers, particularly those with multiple processors, provide a special instruction
TSL RX, LOCK
This reads the content of the memory word lock into register RX and then stores a non-zero value at the memory address lock. The operation is indivisible: no other processor can access the memory word until the instruction is finished. The CPU executing the instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done. Lock is a shared variable used to coordinate access to the shared memory. Whenever a process needs to enter the critical region, it checks the value of lock: if lock is 0, the process can enter the critical region by setting lock to 1.
Implementation of TSL Instruction:
Enter
Ins1. TSL RX, LOCK
Ins2. CMP RX, #0
Ins3. JNE Enter
Exit
Ins4. MOV LOCK, #0
RET
The above instruction sequence shows the use of the TSL instruction in programming.
Enter module:
Instruction 1 uses the TSL instruction to copy the content of the shared variable lock into the RX register (and set lock). The value of register RX is then compared with 0 in Instruction 2.
• If the value is not 0, the lock was already set, so control jumps back to 'Enter' and the procedure is repeated (Instruction 3).
• If the value is 0, the subprogram returns with the lock now set and the process may enter its critical region.
Exit module:
After finishing the execution, the process leaves the critical region by simply moving 0 back into the lock variable.
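Where the TSL mnemonic is not directly available to the programmer, the same enter/exit idea can be sketched with the C11 atomic_flag type, whose test-and-set operation is guaranteed to be atomic. This is an illustrative equivalent of the assembly fragment above, not part of it.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;        /* shared lock, initially clear            */

void enter_region(void)
{
    /* atomically set the flag and obtain its previous value; keep
       spinning while another process or thread already holds it */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void leave_region(void)
{
    atomic_flag_clear(&lock);               /* the equivalent of MOV LOCK, #0          */
}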

Sleep and Wake up Calls


As we have seen, busy waiting can be wasteful. Processes waiting to enter their critical
sections waste processor time checking to see if they can proceed. A better solution to
the mutual exclusion problem, which can be implemented with the addition of some new
primitives, would be to block processes when they are denied access to their critical
sections. Two primitives, Sleep and Wakeup, are often used to implement blocking in
mutual exclusion.
The two system calls used are:
• Sleep : It is a system call that causes the caller to block, that is, be suspended until
some other process wakes it up.
• Wakeup : It is a system call that wakes up the process. The wake up call has one
parameter, the process to be awakened.
One example where the sleep and wakeup calls are used is the Producer-Consumer or Bounded Buffer problem, explained in a later section.
Mutexes: Mutual Exclusion
We are interested in allowing only one process or thread access to shared data at any given time. To serialize access to this shared data, all processes except one must be excluded at any moment. Suppose processes A and B are simultaneously trying to access the shared data: if A is trying to modify the data, B must be excluded from doing so, and vice versa. This is called 'Mutual Exclusion'. Mutual exclusion is informally called mutex.
Mutexes can be implemented using locks. Consider the following outline, in which each process or thread obtains locked access to the shared data:
Get_Mutex (m);
// access shared data
Release_Mutex (m);
This protocol is meant to ensure that only one process at a time gets past Get_Mutex. All other processes or threads are made to wait until the running process calls Release_Mutex to release the lock. Since mutexes are simple, they can easily be implemented in user space if a TSL instruction is available.
When more processes are involved, some modifications to this algorithm become necessary. The key point is that if a second process tries to obtain the mutex while another process already holds it, it gets caught in a loop that does not terminate until the other process

releases the mutex. This solution causes busy waiting: the program actively executes an empty loop, wasting CPU cycles, rather than being moved out of the scheduling queue. It is also called a spin lock, since the process 'spins' in the loop while waiting.
With threads the situation is somewhat different, at least for user-level threads: there is no clock interrupt to stop a thread that runs too long. Consequently, a thread that tries to acquire a lock by busy waiting will loop forever and never acquire the lock, because it never allows any other thread to run and release the lock.
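On POSIX systems the Get_Mutex / Release_Mutex idea corresponds to the pthread mutex calls. The following minimal sketch is only an illustration; the shared counter and the thread function are assumed names.

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;                      /* data shared by all threads */

void *worker(void *arg)
{
    pthread_mutex_lock(&m);                  /* Get_Mutex(m)               */
    shared_counter++;                        /* critical section           */
    pthread_mutex_unlock(&m);                /* Release_Mutex(m)           */
    return NULL;
}

Unlike a spin lock, a thread blocked in pthread_mutex_lock is normally put to sleep by the system rather than left busy waiting.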
Semaphores
Semaphores are the classic method for restricting access to shared resources (e.g. storage) in a multiprocessing environment. A semaphore is an integer counting variable used to solve problems in which processes compete to access shared memory. The idea is that one part of the program increments the semaphore while another part decrements it, and the value of this variable dictates whether a program waits, continues, or does something special.
"A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: Wait() and Signal()." A semaphore can also be viewed as a pair composed of an integer counter and a queue (FIFO) of ids of processes waiting for synchronized access to the shared resource that the semaphore controls.
The counter is initialized to a positive value, and only two atomic operations can be performed on
the semaphore by a process:
Wait: a process performs a wait operation to tell the semaphore that it wants access to the shared resource. The operation decreases the semaphore counter by one; if the result is non-negative, the process is allowed to continue its execution immediately, otherwise the process is blocked and its id is entered in the queue. In a busy-waiting implementation the operation can be written as:
wait(S)
{
    while (S <= 0)
        ;   /* no-op: wait until S > 0 */
    S--;
}
Signal: a process performs a signal operation to inform the semaphore that it has finished using the shared resource. The operation increases the semaphore counter by one. If there are processes suspended on the semaphore, it wakes one of them, removing its id from the queue; if there are no suspended processes, the increased count simply records that the resource is free.
Signal (S)

{
S++;
}
Three features of semaphores are:
• A semaphore counter is not just a single value. It’s rather a set of one or more values, and the exact
number of values is specified upon semaphore creation.
• The creation of a semaphore is independent of its initialization. This is a fatal flaw, since we cannot
atomically create a new semaphore set and initialize all the values in the set.
• A semaphore must be explicitly destroyed when the shared resource is no longer necessary, since
the system wide total number of semaphore sets is constant. Moreover, semaphore counters are
always nonnegative, a zero causing the waiting process to be put to sleep.
Usage: Operating systems often distinguish between counting and binary semaphores. The value of
a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can
range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they
are locks that provide mutual exclusion.
We can use binary semaphores to deal with the critical section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as follows:
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore, thereby decrementing the count. When a process releases a resource, it performs a signal() operation, incrementing the count. When the count for the semaphore reaches 0, all the resources are in use; after that, processes that wish to use a resource will block until the count becomes greater than 0.
Implementation
It is easy to see how a semaphore can be used to enforce mutual exclusion on a shared resource. A
semaphore is assigned to the resource. It is shared among all processes that need to access the

resource, and its counter is initialized to 1. A process then waits on the semaphore upon entering the critical section for the resource, and signals on leaving it. The first process will get access; if another process arrives while the former is still in the critical section, it will be blocked, and so will any further processes. In an implementation where the counter is allowed to go negative, the absolute value of the counter equals the number of processes waiting to enter the critical section. Every process leaving the critical section lets a waiting process use the resource via the signal operation of the semaphore.
wait(s)
Semaphore s;
{
    while (s == 0)
        ;            /* busy wait until s > 0 */
    s = s - 1;
}

signal(s)
Semaphore s;
{
    s = s + 1;
}

Init(s, v)
Semaphore s;
int v;
{
    s = v;
}
In the above example, wait and signal are two functions which respectively decrement and increment the semaphore variable 's'. No other process can access the semaphore while wait or signal is executing.
The value of a semaphore is the number of units of the resource that are free. If there is only one resource, a "binary semaphore" with values 0 or 1 is used.
Example: let the semaphore value initially be 1.
If process P1 wants to enter its critical section, it performs wait, and the value of s is decremented to 0.
With the value at 0, P1 enters its critical section, and it signals after leaving its critical section, incrementing the value of s back to 1. If another process P2 wants to enter its critical section while P1 is still inside, P2 sees the shared semaphore value 0 when it performs its wait operation.
P2 therefore continues to loop in the while until P1 executes signal(s), so only one process executes in the critical section at a time. Operating on a System V semaphore set is done by using the semop system call.
Another simple example of semaphore use is reading and writing via buffers. Here we can count the number of items currently present in the buffer; when the buffer becomes full, the process that is filling it is made to wait until space in the buffer becomes available.
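A minimal sketch of this buffer-counting idea, assuming POSIX semaphores (sem_wait/sem_post) and a buffer of N slots; the names, the buffer size and the use of a separate mutex semaphore are illustrative assumptions.

#include <semaphore.h>

#define N 8                        /* buffer capacity (illustrative)        */
int buffer[N];
int in = 0, out = 0;

sem_t empty;                       /* counts free slots, initialised to N   */
sem_t full;                        /* counts filled slots, initialised to 0 */
sem_t mutex;                       /* binary semaphore guarding the buffer  */

void producer(int item)
{
    sem_wait(&empty);              /* wait while the buffer is full         */
    sem_wait(&mutex);
    buffer[in] = item;             /* critical section: insert the item     */
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);               /* signal: one more filled slot          */
}

int consumer(void)
{
    sem_wait(&full);               /* wait while the buffer is empty        */
    sem_wait(&mutex);
    int item = buffer[out];        /* critical section: remove the item     */
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);              /* signal: one more free slot            */
    return item;
}

/* At start-up: sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1); */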

Basic Concepts of CPU Scheduling

CPU scheduling is the basis of multiprogramming. The main aim of multiprogramming is to keep the CPU as busy as possible; to that end, the OS allocates the CPU to another process while the previous process waits for an I/O operation. In a simple computer system, by contrast, the CPU sits idle until the I/O request is granted. By switching the CPU among processes, the operating system can make the computer more productive. Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use, and the CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating system design.
CPU and I/O Bound
Nearly all processes alternate bursts of computing with (disk) I/O requests, as shown in figure.
Typically the CPU runs for a while without stopping, then a system call is made to read from a file
or write to a file. When the system call completes, the CPU computes again until it needs more data
or has to write more data, and so on. Note that some I/O activities count as computing. For example,
when the CPU copies bits to a video RAM to update the screen, it is computing, not doing I/O,
because the CPU is in use. I/O in this sense is when a process enters the blocked state waiting for an
external device to complete its work.

Bursts of CPU usage alternate with periods of waiting for I/O


(a) A CPU bound process (b) An I/O bound process
The important thing to notice about figure is that some processes, such as the one in figure (a), spend
most of their time computing, while others, such as the one in figure (b), spend most of their time
waiting for I/O. The former are called CPU bound, the latter are called I/O bound. CPU bound
processes typically have long CPU bursts and thus infrequent I/O waits, whereas I/O bound
processes have short CPU bursts and thus frequent I/O waits. Note that the key factor is the length of
the CPU burst, not the length of the I/O burst. I/O bound processes are I/O bound because they do
not compute much between I/O requests, not because they have especially long I/O requests. It takes
the same time to read a disk block no matter how much or how little time it takes to process the data

after they arrive. It is worth noting that as CPUs get faster, processes tend to get more I/O bound.
This effect occurs because CPUs are improving much faster than disks. As a consequence, the
scheduling of I/O bound processes is likely to become a more important subject in the future. The
basic idea here is that if an I/O bound process wants to run, it should get a chance quickly so it can
issue its disk request and keep the disk busy.
Process Schedulers
The scheduler is a component of an operating system that determines which process should be run
and when. CPU scheduling is the process of selecting a process and allocating the processor to the
selected process for execution. There are 3 distinct types of schedulers: a long-term scheduler (also
known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler
and a short-term scheduler (also known as a dispatcher). The names suggest the relative frequency
with which these functions are performed. The perfect CPU scheduler would:
• Minimize latency: response or job completion time.
• Maximize throughput: jobs completed per unit time.
• Maximize utilization: keep the CPU and I/O devices busy.
• Ensure fairness: everyone makes progress, no one starves.

Queueing diagram for CPU scheduling


1. Long-term Scheduler
The long-term scheduler decides which jobs or processes are to be admitted to the ready queue.
When an attempt is made to execute a program, its admission to the set of currently executing
processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates
what processes are to run on a system and the degree of concurrency to be supported at any one time, i.e. how many processes are to be executed concurrently and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get
enough CPU time to finish their tasks. Without proper real time scheduling, modern GUI interfaces
would seem sluggish.
Long-term scheduling is also important in large-scale systems such as batch processing systems,
computer clusters, super computers etc.

2. Mid-term Scheduler
The mid-term scheduler, present in all systems with virtual memory, temporarily removes processes
from main memory and places them on secondary memory (such as a disk drive) or vice versa. This
is commonly referred to as "swapping out" or "swapping in". The mid-term scheduler may decide to swap out a process which has not been active for some time, which has a low priority, which is page faulting frequently, or which is taking up a large amount of memory, in order to free up main memory for other processes. It swaps the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource.
3. Short-term Scheduler
The short-term scheduler (also known as the dispatcher) decides which of the ready, in-memory processes is to be executed next following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more
frequently than the long-term or mid-term schedulers. A scheduling decision will at a minimum have
to be made after every time slice and these are very short. This scheduler can be preemptive,
implying that it is capable of forcibly removing processes from a CPU when it decides to allocate
that CPU to another process, or non-preemptive, in case the scheduler is unable to “force” processes
off the CPU.
Dispatcher module gives control of the CPU to the process selected by the short term scheduler, this
involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program.
Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

Dispatcher

States of a Process

Schedulers and process state transition


Scheduling Criteria / Goals / Performance Metrics
Many algorithms are available to schedule the CPU. We need some criteria by which to select the algorithm that best suits the circumstances. Some of the major criteria are:
1. CPU utilization – keep the CPU as busy as possible
2. Throughput – number of processes that complete their execution per time unit
3. Turnaround time – amount of time to execute a particular process
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
There are varieties of possibilities to choose a process to run:
1. When a new process is created, a decision needs to be made whether to run the parent process or
the child process.
2. When a process exits, that process can no longer run, so some other process must be chosen from
the set of ready processes.
3. When process switches from running to waiting, it could be because of input/output request, or
wait for child to terminate, or wait for synchronization operation to complete.
4. When an input/output interrupt occurs, if the interrupt came from an input/output device that has
now completed its work, some other process that was blocked waiting for the input/ output may now
be ready to run. It is up to the scheduler to decide that if the newly ready process should run, the
process that was running at the time of the interrupt should continue or some third process should
run.
Scheduling Algorithms
Scheduling is the method by which thread, process or data flows are given access to system
resources. Scheduling can be divided into following categories:
1. Non Preemptive
A non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or until it voluntarily releases the CPU. Even if it runs for hours, it will not be forcibly suspended. In effect, no scheduling decisions are made during clock interrupts: after clock interrupt processing has been completed, the process that was running before the interrupt is always resumed.

Non-preemptive algorithms are designed so that once a process enters the running state, it is not removed from the processor until it has completed its service time or it voluntarily releases the CPU.
2. Preemptive
A preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed
time. If it is still running at the end of the time interval, it is suspended and the scheduler picks
another process to run (if one is available). Doing preemptive scheduling requires having a clock
interrupt occur at the end of the time interval to give control of the CPU back to the scheduler.
Preemptive algorithms are driven by the notion of prioritized computation. The process with the
highest priority should always be the one currently using the processor. If a process is currently
using the processor and a new process with a higher priority enters the ready list, the process on the
processor should be removed and returned to the ready list until it is once again the highest-priority
process in the system.
Priorities may be assigned automatically by the system or they may be assigned externally. They
may be static or they may be dynamic.
• Static priorities do not change. Static priority mechanisms are easy to implement and have
relatively low overhead. They are not, however, responsive to changes in environment, changes that
might make it desirable to adjust a priority.
• Dynamic priority mechanisms are responsive to changes. The initial priority assigned to a process
may have only a short duration after which it is adjusted to a more appropriate value. Dynamic
priority schemes are more complex to implement and have greater overhead than static schemes. The
overhead is hopefully justified by the increased responsiveness of the system.
3. Cooperative
When computer usage evolved from batch mode to interactive mode, plain multiprogramming was no longer a suitable approach: each user wanted to see a program running as if it were the only program in the computer. The approach that was eventually supported by many computer operating systems is known as cooperative multitasking or cooperative scheduling.
4. Non Cooperative
Non-cooperative schedulers use a strategy that maximizes efficiency while fairness is ensured at the system level, ignoring application characteristics. The tasks of a given application all have the same computation and communication requirements, but those requirements can vary from one application to another.
First Come First Served (FCFS)
This is a non-preemptive scheduling algorithm. The FIFO strategy assigns priority to processes in the order in which they request the processor: the process that requests the CPU first is allocated the CPU first. This is easily implemented with a FIFO (First In First Out) queue for managing the tasks; as processes come in, they are put at the end of the queue, and as the CPU finishes each task it removes it from the front of the queue and moves on to the next one. Let's take three processes that arrive at the same time in this order:
Process    CPU Time Needed (ms)

P1 24
P2 3
P3 3
P1 starts at time 0 and ends at time 24.
P2 starts at time 24 and ends at time 27.
P3 starts at time 27 and ends at time 30.
Thus, P2 has to wait 24 milliseconds to start and P3 has to wait 27 milliseconds. The average waiting
time here is:

First Come First Served


P1‘s waiting time = 0
P2‘s waiting time = 24
P3‘s waiting time = 27
Average waiting time is the sum of waiting times of all the processes divided by number of
processes.
Average waiting time: (0 + 24 + 27)/3 = 17 milliseconds.
Turnaround time:
It is computed by subtracting the time the process entered the system from the time it terminated.
Therefore we can say that
Turnaround time = Burst time + Waiting time
Process Turnaround time
P1 24 + 0 = 24
P2 3 + 24 = 27
P3 3 + 27 = 30
Average turnaround time:
= (24 + 27 + 30)/3 = 27 milliseconds.
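The arithmetic of this example can be reproduced with a short program. The sketch below assumes all processes arrive at time 0 and are served in array order, mirroring the numbers above.

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                 /* burst times of P1, P2, P3    */
    int n = 3, wait = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];     /* turnaround = waiting + burst */
        printf("P%d: waiting %d, turnaround %d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];                     /* next process starts here     */
    }
    printf("average waiting %.1f, average turnaround %.1f\n",
           total_wait / n, total_tat / n);    /* prints 17.0 and 27.0         */
    return 0;
}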
Limitations:
If we just change the order to P2, then P3, then P1, then:
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3 milliseconds.
Clearly, the average waiting time under a purely first-in first-out system is often going to be poor if
one task is significantly longer than the others.

Change Order of First Come First Served


So the first idea that comes to mind after seeing this is to have the shortest task go first, i.e. shortest-job-first scheduling. This is similar to the idea above, except that as each job comes in, it is sorted into the queue based on its size. In shortest-job-first (SJF), the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst. SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. SJF is optimal in the sense that it gives the minimum average waiting time for a given set of processes. SJF scheduling can be classified into two schemes: non-preemptive and preemptive.
Non-Preemptive
Once the CPU is given to a process, it cannot be preempted until it completes its CPU burst. Let's take four processes that arrive at the same time in this order:
Process CPU Time Needed (ms)
P1 6
P2 8
P3 7
P4 3

Gantt chart:

Non preemptive SJF without arrival time

P1‘s waiting time = 3 P2‘s waiting time = 16


P3‘s waiting time = 9 P4‘s waiting time = 0

Average waiting time : = (3 + 16 + 9 +0)/4 = 7 milliseconds


Turnaround time:
Turnaround time = Burst time + Waiting time

Process Turnaround time


P1 6 + 3 = 9
P2 8 + 16 = 24
P3 7 + 9 = 16
P4 3 + 0 = 3
Average turnaround time:
= (9 + 24 + 16 + 3)/4 = 13 milliseconds.
When the processes do not come at the same time, i.e. the arrival time of each process is different
then the arrival time plays an important role in the scheduling.
Let’s take four processes that arrive one after another in this order:
Gantt Chart:

Figure 4.8 Non preemptive Shortest Job First


P1‘s waiting time = 0
P2‘s waiting time = 6
P3‘s waiting time = 3
P4‘s waiting time = 7
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Turnaround time:
Turnaround time = Burst time + Waiting time

Average turnaround time:


= (7 + 10 + 4 + 11)/4 = 8 milliseconds

Preemptive :-
If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, a preemptive SJF algorithm will preempt the currently executing process. This scheme is known as Shortest-Remaining-Time-First (SRTF).
Consider the example:

Gantt Chart:

Preemptive Shortest Job First

P1‘s waiting time = 9


P2‘s waiting time = 1
P3‘s waiting time = 0
P4‘s waiting time = 2
Average waiting time = (9 + 1 + 0 +2)/4 = 3

Turnaround time:
Turnaround time = Burst time + Waiting time

Average turnaround time:


= (16 + 5 + 1 + 6)/4 = 7 milliseconds.
Limitations:
The difficulty with this algorithm, is to know which incoming process is indeed shorter than another.
There is no way to know the length of the next CPU burst (the amount of time a given process will
need), so this type of scheduling is largely impossible.
Round Robin Scheduling
One of the oldest, simplest, fairest, and most widely used algorithms is round robin scheduling.
In this approach, a “time slice” is defined, which is a particular small unit of time. In each time slice,
the CPU executes the current process until the end of the time slice. If that process is done, it is
discarded and the next one in the queue is dealt with.
However, if the process isn't done, it is halted and put at the back of the queue, and the next process in line is served during the next time slice. Round robin scheduling typically gives a higher average turnaround time than SJF, but better response time.
For example, consider four processes that arrive at the same time in the given order, with a time quantum of 20:

P1's waiting time = 57 + 24 = 81
P2's waiting time = 20
P3's waiting time = 37 + 40 + 17 = 94
P4's waiting time = 57 + 40 = 97
Average waiting time
= (81 + 20 + 94 + 97)/4 = 73 milliseconds
Turnaround time: Turnaround time = Burst time + Waiting time.

Average turnaround time:


= (134 + 37 + 162 + 121)/4 = 113.5 milliseconds.
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-
sharing environments in which the system needs to guarantee reasonable response times for
interactive users.
The most interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates First Come First Served (FCFS).
Limitations:
This method vastly slows down short processes, because they have to share the CPU time with other
processes instead of just finishing up quickly.

Solution:
A good rule of thumb is to make the length of the time slice such that about 80% of processes can finish within one time slice. This way, only the longer processes contend for extra CPU time and the short jobs aren't stuck at the back of the queue. There are many approaches to solving the problem of CPU scheduling; your operating system likely uses one, or a combination, of the above methods for scheduling the CPU time of the programs you are running.
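A small sketch of the round-robin mechanics (assuming all processes arrive at time 0, with burst times and quantum chosen only for illustration) shows how the remaining burst times shrink slice by slice:

#include <stdio.h>

#define MAXP 16

/* simulate round robin for processes that all arrive at time 0 */
void round_robin(const int burst[], int n, int quantum)
{
    int remaining[MAXP], time = 0, done = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;                 /* already finished        */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                                   /* run for one time slice  */
            remaining[i] -= slice;
            if (remaining[i] == 0) {                         /* finished in this slice  */
                printf("P%d completes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
}

int main(void)
{
    int burst[] = {24, 3, 3};                                /* illustrative bursts     */
    round_robin(burst, 3, 4);                                /* quantum of 4 ms         */
    return 0;
}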
Priority Scheduling
Round robin scheduling makes the implicit assumption that all processes are equally important. In priority scheduling, each process has a priority associated with it, and as each process arrives in the queue, it is sorted in based on its priority so that processes with higher priority are dealt with first.
It should be noted that equal-priority processes are scheduled in FCFS order.

To prevent high-priority processes from running indefinitely, the scheduler may decrease the priority
of the currently running process at each clock tick (i.e., at each clock interrupt). If this action causes
its priority to drop below that of the next highest process, a process switch occurs. Alternatively,
each process may be assigned a maximum time quantum that it is allowed to run. When this
quantum is used up, the next highest priority process is given a chance to run.
For example consider the following set of processes with given priorities and burst time assumed to
be arrived at 0

P1‘s waiting time = 16


P2‘s waiting time = 14
P3‘s waiting time = 0
P4‘s waiting time = 24
Average waiting time = (16+14+0+24)/4 = 13.5 milliseconds
Turnaround time:
Turnaround time = Burst time + Waiting time

Average turnaround time:


= (22 + 16 + 14 + 30)/4 = 20.5 milliseconds.
A variation on this theme is multilevel queue scheduling, where each task is designated one of
several different levels of importance, and the most important ones always run first. This is quite
effective at scheduling jobs for a CPU, assuming that the operating system has the sense to properly
assign priorities to the tasks.
Limitations:
The problem occurs when the operating system gives a particular task a very low priority, so it sits in the queue for a large amount of time without being dealt with by the CPU. If this process is something the user needs, there can be a very long wait; this situation is known as "Starvation" or "Infinite Blocking".
Solution:
Many operating systems use a technique called "aging", in which a low-priority process slowly gains priority over time as it sits in the queue. Even if a process's priority is initially low, it is thereby guaranteed to execute eventually.
Multilevel Queue Scheduling
This algorithm is created for situations in which processes are easily classified into different groups.
Ready queue is partitioned into separate queues and each queue has its own scheduling algorithm.
For example, separate queues might be used for:
• Foreground (interactive) processes : Round Robin
• Background (batch) processes : FCFS
These two types of processes have different response time requirements and so may have different
scheduling needs. The processes are permanently assigned to one queue, generally based on some
property of the processes, such as memory size, process priority or process type.
Scheduling must be done between the queues, which is commonly implemented as fixed-priority
preemptive scheduling.
Fixed Priority Scheduling
Fixed priority scheduling (i.e., serve all processes from the foreground queue first, then from the background queue) may cause starvation.
Let us take an example with five queues:
• System processes
• Interactive processes
• Interactive editing processes
• Batch processes
• Student processes
Each queue has absolute priority over lower-priority queues. No process in the lower priority queue
could run unless the queues for higher-priority systems were all empty. Like, if an interactive
process entered the ready queue while a batch process was running, the batch process would be
preempted.

Another possibility is to time-slice among the queues:

Time Slice
A time slice is a short interval of time allotted to each user or program in a multitasking or timesharing system. Time slices are typically measured in milliseconds.
The scheduler runs once every time slice to choose the next process to run. If the time slice is too short, the scheduler itself consumes too much processing time; if it is too long, processes may not be able to respond to external events quickly enough.

Multilevel Feedback-Queue Scheduling


Before the multilevel feedback queue scheduling algorithm, only the plain multilevel queue algorithm was used, which is not very flexible because a process stays in the queue it was assigned to. Multilevel feedback queue scheduling also uses separate queues for handling processes, but it automatically adjusts the priority of a process: a process may be moved down to a lower-priority queue or switched back to a higher-priority queue. For example, one queue may implement the round robin algorithm while another uses first come first served; the initial allocation may be based on the type of process, i.e. whether the process is CPU bound or I/O bound. This type of scheduling allows a process to move between the various queues, and the idea can also be used to implement aging: a process that takes too much CPU time is moved to a lower-priority queue, while a process that waits too long for service is moved to a higher-priority queue.
Multilevel-feedback-queue scheduler defined by the following parameters:
1. Number of queues.
2. Scheduling algorithms for each queue.
3. Method used to determine when to upgrade a process.
4. Method used to determine when to demote a process.

5. Method used to determine which queue a process will enter when that process needs service.
Example of multilevel feedback-queue scheduling (a simulation sketch follows the list):
Consider three queues:
– Q0 : RR with time quantum 8 ms
– Q1 : RR with time quantum 16 ms
– Q2 : FCFS
• A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
• At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
• At Q2 the job is served FCFS.
• Each queue could have a different scheduling discipline or time quantum.
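As a rough illustration, the sketch below simulates this three-queue configuration in Python. It is only a sketch, not part of the prescribed material: the job names and burst times are made up, all jobs are assumed to arrive at time 0, and preemption of a lower queue by a new arrival is not modelled.

from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR, 8 ms), Q1 (RR, 16 ms), Q2 (FCFS).
    `jobs` maps a job name to its total CPU burst in ms; all jobs arrive at time 0."""
    queues = [deque(), deque(), deque()]
    for name, burst in jobs.items():
        queues[0].append((name, burst))                      # every new job enters Q0
    schedule = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = remaining if level == 2 else min(quanta[level], remaining)
        schedule.append((name, level, run))                  # run the job for `run` ms
        remaining -= run
        if remaining > 0:                                    # quantum exhausted: demote
            queues[level + 1].append((name, remaining))
    return schedule

print(mlfq({"A": 30, "B": 6, "C": 20}))
# [('A', 0, 8), ('B', 0, 6), ('C', 0, 8), ('A', 1, 16), ('C', 1, 12), ('A', 2, 6)]

Running it shows job A being demoted twice (Q0 to Q1 to Q2), while the short job B completes within its first 8 ms quantum.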

Fair Share Scheduling


Assume we have a stream of jobs arriving for execution. When multiple processes need CPU time, operating systems frequently allot CPU time slices on a round-robin, equal-time basis to processes with the same priority. For example, if four processes are concurrently executing, the scheduler will logically divide the available CPU cycles such that each process gets 25% of the whole (100% / 4 = 25%). This is not to say that each process will get 25% of the CPU time all at once. Most operating systems define a time slice, something like 50 msec (an arbitrary number which may vary), and give each process a time slice in turn. In this illustrative case, each process gets 5 time slices every second on a round-robin basis. The incoming jobs may also be categorized into abstract groups, for which a fair share allocation of the processor should be defined. Fair-share scheduling is a scheduling strategy for computer operating systems in which CPU usage is equally distributed among system users or groups, as opposed to equal distribution among processes. The fair-share scheduler uses the term share to describe the relative importance of one workload versus another.
For example, if four users (A,B,C,D) are concurrently executing one process each, the scheduler will
logically divide the available CPU cycles such that each user gets 25% of the whole (100% / 4 =
25%). If user B starts a second process, each user will still receive 25% of the total cycles, but both
of user B’s processes will now use 12.5%. On the other hand, if a new user E starts a process on the
system, the scheduler will reapportion the available CPU cycles such that each user gets 20% of the
whole (100% / 5 = 20%).
Another layer of abstraction allows us to partition users into groups, and apply the fair share
algorithm to the groups as well. In this case, the available CPU cycles are divided first among the
groups, then among the users within the groups, and then among the processes for that user. For
example, if there are three groups (1,2,3) containing three, two, and four users respectively, the
available CPU cycles will be distributed as follows:
• 100% / 3 groups = 33.3% per group
• Group 1: (33.3% / 3 users) = 11.1% per user
• Group 2: (33.3% / 2 users) = 16.7% per user
• Group 3: (33.3% / 4 users) = 8.3% per user
Now, for example, if user 1 of group 1 has two processes, the 11.1% of CPU time for that user is divided between those two processes. One common method of logically implementing the fair-share scheduling strategy is to recursively apply the round-robin scheduling strategy at each level of abstraction (processes, users, groups, etc.). The time quantum required by round robin is arbitrary, as any equal division of time will produce the same results. In the above method the CPU time distribution is symmetric, i.e. at the first level each group has an equal share of the CPU time, at the next level each user within a group has an equal share, and so on. Another way of implementing fair-share scheduling is to make the CPU time distribution asymmetric. For example, suppose group 1 is assigned 3 shares, group 2 is assigned 1 share and group 3 is assigned 2 shares. The fair-share scheduler would assign 3/(3+1+2) = 3/6 = 50% of the entire system resources to group 1. Group 2 would receive 1/(3+1+2) = 1/6 = 16.7%, while group 3 would get 2/(3+1+2) = 2/6 = 33.3% of the system resources.
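The percentages worked out above can be reproduced with a small helper function. This is an illustrative sketch only; the group and user names are hypothetical, and the second call uses the 3:1:2 shares of the asymmetric example.

def fair_share(groups, shares=None):
    """Return each user's CPU percentage under hierarchical fair-share scheduling.
    `groups` maps group name -> list of users; `shares` optionally maps
    group name -> relative weight (equal weights if omitted)."""
    shares = shares or {g: 1 for g in groups}
    total = sum(shares.values())
    result = {}
    for g, users in groups.items():
        group_pct = 100.0 * shares[g] / total        # first divide among groups
        for u in users:
            result[u] = group_pct / len(users)       # then equally among the group's users
    return result

groups = {"G1": ["u1", "u2", "u3"], "G2": ["u4", "u5"], "G3": ["u6", "u7", "u8", "u9"]}
print(fair_share(groups))                                        # 11.1%, 16.7%, 8.3% per user
print(fair_share(groups, shares={"G1": 3, "G2": 1, "G3": 2}))    # 50% / 16.7% / 33.3% per group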
DEADLOCKS

Introduction
“Crisis and deadlocks when they occur have at least this advantage that they force us to think.”-
Jawaharlal Nehru

Previously, computers ran only one program at a time, and all of the resources of the system were available to that one program. Later, multiprogramming operating systems were developed that can run multiple programs at once. Programs were required to specify in advance what resources they needed so that they could avoid conflicts with other programs running at the same time. Eventually, some operating systems offered dynamic allocation of resources, i.e. programs could request further allocations of resources after they had begun running. This led to the problem of deadlock. Here is the simplest example, illustrated by the following diagram, which shows allocated resources with bold lines and queued requests with dashed lines:
Program 1 requests resource A and receives it.
Program 2 requests resource B and receives it.
Program 1 requests resource B and is queued up, pending the release of B.
Program 2 requests resource A and is queued up, pending the release of A.

Resources allocated and queued in a system causing deadlock


Now, neither of the programs can proceed until the other program releases a resource. The operating
system does not know what action to take. At this point, the only alternative is to abort (stop) one of
the programs.
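The same two-program scenario can be reproduced with two threads and two locks. The demonstration below is hypothetical and not part of the course text: each thread acquires one lock, sleeps briefly so the other thread can acquire the second lock, and then blocks forever requesting the lock the other thread holds.

import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()    # resources A and B

def program1():
    with lock_a:                       # Program 1 requests resource A and receives it
        time.sleep(0.1)                # give Program 2 time to grab resource B
        with lock_b:                   # queued up, pending the release of B
            print("Program 1 acquired A and B")

def program2():
    with lock_b:                       # Program 2 requests resource B and receives it
        time.sleep(0.1)
        with lock_a:                   # queued up, pending the release of A
            print("Program 2 acquired A and B")

# Daemon threads let the script exit even though the workers never finish.
t1 = threading.Thread(target=program1, daemon=True)
t2 = threading.Thread(target=program2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked" if t1.is_alive() and t2.is_alive() else "finished")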
A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set. In other words, each member of the set of deadlocked processes is waiting for a resource that can be released only by another deadlocked process. None of the processes can run, none can release any resources, and none of them can be awakened. The number of processes and the kind of resources possessed and requested are unimportant. The simplest example of deadlock is where process 1 has been allocated a non-sharable resource A, say a tape drive, and process 2 has been allocated a non-sharable resource B, say a printer. Now suppose that, with only these two processes in the system, process 1 needs resource B (the printer) to proceed and process 2 needs resource A (the tape drive) to proceed. Each process will block the other process, and all the other useful processes in the system. This situation is termed deadlock. The system is in a deadlock state because each process holds a resource being requested by the other process, and neither process is willing to release the resource it holds. Consider the European road rule which says: on minor roads one should always wait for traffic coming from the right. If four cars arrive simultaneously at a crossroad then, according to the rule, all of them must wait for each other and none of them can ever move. This situation is called "deadlock".

A Deadlock
"A deadlock is a situation wherein two or more competing actions are waiting for the other to finish, and thus neither ever does." It is often compared to the "chicken or the egg" paradox.
System Model
A system model consists of many resources, which are shared or distributed among several competing processes. Memory, CPU, disk space, printers and tapes are examples of resources. When a system has two CPUs, we say that there are two instances of the CPU resource type. Similarly, in a network we may have ten printers, and we say that there are ten instances of the printer type. In such situations we are not concerned with which instance of the requested resource services the
request. When a process is executing, it requests a resource before using it and must release the resource after using it. A process may make as many requests as it needs to carry out its assigned task, but it cannot request more resources than the maximum number available in the system. A process may utilize a resource in the following sequence:
1. Request: The process requests the necessary resources. If the resources are not free, the process has to wait until they are free so that it can acquire control of them.
2. Use: The process operates on (uses) the acquired resource to carry out the assigned task.
3. Release: The process releases the resource when its operation on the acquired resource is complete.
In the request and release steps, a process makes system calls such as disk read/write, printing, memory allocation, etc. Therefore, it is necessary to make sure that there is no conflict, i.e. a situation where two processes acquire the same resource.
Resources
Resources may be either physical or logical. Examples of physical resources are printers, tape drives, memory space and CPU cycles, while files, semaphores and monitors are categorized as logical resources. Resources are of two types:
1. Preemptable resource
2. Nonpreemptable resource
1. Preemptable resource: A preemptable resource is one that can be taken away from the process
owning it with no ill effects. Memory is an example of preemptable resource.
Consider for example, a system with 32 MB of user memory, one printer, and two 32-MB processes
that each want to print something. Process A requests and gets the printer, then starts to compute the
values to print. Before it has finished with the computation, it exceeds its time quantum and is
swapped out. Process B now runs and tries, unsuccessfully, to acquire the printer. Potentially, we
now have a deadlock situation, because A has the printer and B has the memory, and neither can
proceed without the resource held by the other. Fortunately, it is possible to preempt (take away) the
memory from B by swapping it out and swapping A in. Now A can run, do its printing, and then
release the printer. No deadlock occurs.
2. Non preemptable resource: A nonpreemptable resource in contrast, is one that cannot be taken
away from its current owner without causing the computation to fail. If a process has begun to burn a
CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result
in a garbled CD. CD recorders are not preemptable at an arbitrary moment.
Following are the three major operations performed on a resource:
General pattern for resource allocation


Request: To use any resource, a process must request it first. If the resource is not available at that moment, the requesting process is forced to wait. How requests are queued and processed depends on the system calls involved.
Use: Once the requested resource is available to the process, it should be utilized to its full efficiency. A resource that is currently in use can be taken back only if it is preemptable.
Release: After a process has used a resource, it must release it as soon as possible for the use of other processes. It is the job of the operating system to make sure that processes get their respective resources on time as per their priorities. A minimal sketch of this request-use-release pattern follows.
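The sketch below assumes a pool of two identical printers represented by a counting semaphore; the names used are illustrative.

import threading

printers = threading.Semaphore(2)      # two instances of the "printer" resource type

def print_job(doc):
    printers.acquire()                 # Request: wait until an instance is free
    try:
        print(f"printing {doc}")       # Use: operate on the acquired resource
    finally:
        printers.release()             # Release: always hand the instance back

print_job("report.pdf")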
Deadlock Characterization
In a deadlock, processes never finish executing and system resources are tied up, preventing other jobs from starting.
Conditions for Deadlock
Deadlock occurs when a number of processes are waiting for events that can be caused only by another waiting process. The following are the essential requirements for deadlock; all four conditions must be present for a deadlock to occur, and if any one of them is absent there is no deadlock:
1. Mutual exclusion: Each resource is either assigned to exactly one process or is available, i.e. the resources are non-sharable. Only one process at a time can use the resource; if another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: Process holding resources can request additional resources, i.e. a process is
holding some resources and waiting to acquire additional resources that are currently being held by
other processes.
3. No preemption: Resources cannot be pre-empted, i.e. the processes can not be forced to give up
the resources they are holding before completion.
4. Circular waiting: There must be a set of processes P1, P2,…Pn. P1 is waiting for a resource or
signal from P2, P2 is waiting for P3…. and Pn is waiting for P1.
Deadlock Modelling
The above four conditions of deadlock can be modelled using a directed graph. The graph has two kinds of nodes: processes, shown as circles, and resources, shown as squares. A process that has acquired a resource is shown with an arrow (edge) from the resource to the process, as in Figure 7.4. A process that has requested a resource but has not yet been assigned it is modelled with an arrow from the process to the resource, as in Figure 7.5. If such edges in the graph create a cycle, there is a deadlock. This representation is called a "Resource Allocation Graph".

In Figure 7.6 we can see a deadlock: process P3 is waiting for resource R3, which is currently held by process P4. Process P4 is waiting for resource R4 and is not releasing resource R3. Both processes
will wait forever, so a cycle is formed. A cycle in the graph means that there is a deadlock involving the processes and resources on it (assuming there is one resource of each kind). In this example, the cycle is P3 → R3 → P4 → R4 → P3.

Deadlock
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. Consider the resource allocation graph shown in the figure below.

Resource allocation graph

The figure depicts the following situation.
• The sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
• Resource instances:
One instance of resource type R1
Two instances of resource type R2
One instance of resource type R3
Three instances of resource type R4.
• Process states:
Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type
R1.
Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
Process P3 is holding an instance of R3.

Resource allocation graph with a deadlock


In the figure, suppose process P3 now requests an instance of resource type R2. Since no instance of R2 is available, a request edge P3 → R2 is added to the graph. At this point two cycles exist in the system, as shown in the figure:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2 and P3 are deadlocked. Now consider the resource allocation graph shown in the next figure.
In this graph, we also have a cycle:
P1 → R1 → P3 → R2 → P1
However, there is no deadlock.
Deadlock Handling
The deadlock problem can be handled in one of three ways.
• Protocol: We can use a protocol to prevent or avoid deadlock, ensuring that the system will never enter a deadlock state.
• Detect and Recover: We can allow the system to enter a deadlock state, detect it, and recover.
• Ignore the Deadlock: We can ignore the problem altogether and pretend that deadlocks never occur in the system.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme.
• Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.
• Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, it can decide for each request whether or not the process should wait. To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
Deadlock detection and recovery is a pragmatic solution to the problem. We assume deadlock is unlikely, and rather than spending resources trying to prevent or avoid it, we detect it and recover from it when it occurs.
Deadlock recovery involves two steps. The first is deadlock detection; this is essentially finding a cycle in a graph of resource requests. This is not too hard, but not fast either. Fortunately, we do not have to check for deadlock after each resource allocation; instead we can check periodically, say every few seconds. The second step, once a deadlock is discovered, is to figure out how to break it. This involves preempting a resource, which might mean cancelling a process and starting it over.
• Ignore the Deadlock: This is the simplest method of dealing with deadlock. We ignore deadlocks and pretend we are totally unaware of them.
Deadlock Prevention
Deadlock prevention consists of falsifying one or more of the necessary conditions using static
resource allocation policies so that deadlocks are completely eliminated. (Refer 7.4.1).
Mutual Exclusion Prevention
If no resource were ever assigned exclusively to a single process, we would never have deadlocks. However, it is not always possible to share the resources being waited for (e.g. a printer, a CD drive, etc.); if a resource can be shared, there is no reason to wait for it. Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
Hold and Wait Prevention
The "hold and wait" condition may be removed by requiring processes to request all the resources they will need before starting up, or before embarking upon a particular set of operations. The most common solution is to allocate all required resources to a process before it starts execution. Since the number of resources required by each process is known in advance, there will be no deadlock. However, processes often do not know which resources they will require until they have started execution, so this advance knowledge is frequently difficult to obtain. Another problem is that resources are not used optimally with this approach. A milder way to weaken this condition is to allow a process to request resources only when it has none: a process may request some resources and use them, but before it can request any additional resources it must release all the resources it currently holds.
No Preemption
As a solution to this problem we can use this approach: If a process that is holding some resources
requests another resource that cannot be immediately allocated to it, then all resources currently
being held are released (preempted). The preempted resources are added to the list of resources for
which the process is waiting. The process will be restarted only when it can regain its old resources,
as well as the new ones that it is requesting. Approaches that relax the no-preemption condition include lock-free and wait-free algorithms and optimistic concurrency control. Lock-free and wait-free algorithms ensure that processes competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion: a non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. Optimistic concurrency control algorithms are used in relational databases and are referred to as optimistic locking, a reference to the non-exclusive locks that are created on the database.
Circular Wait Prevention
The circular wait problem can be eliminated in several ways. One way is simply to say that a process is entitled to only a single resource at a time: if it wants another resource, it must first free the one it holds.
Another way is to provide a global numbering of all resources and to require that each process requests resources only in increasing order of that numbering. Under this rule no cycle of waits can ever form.

Circular wait prevention


Suppose there are two resources i and j and two processes A and B. We can get a deadlock only if A requests resource j and B requests resource i. Assuming i and j are distinct resources, they have different numbers. If i > j, then A is not allowed to request j because j is lower than what A already holds. If i < j, then B is not allowed to request i because i is lower than what B already holds. Either way, deadlock is impossible.
Algorithms that avoid circular waits include "disable interrupts during critical sections" and "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of a resource has been used to determine the ordering). A small sketch of the ordering rule follows.
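A minimal sketch of the global-numbering rule, assuming each lock carries a rank and a helper always acquires a set of locks in increasing rank (the class and helper names are illustrative, not from any standard library):

import threading

class RankedLock:
    """A lock tagged with its global resource number."""
    def __init__(self, rank):
        self.rank = rank
        self.lock = threading.Lock()

def acquire_in_order(*locks):
    """Acquire locks in increasing rank so a circular wait can never form."""
    for l in sorted(locks, key=lambda l: l.rank):
        l.lock.acquire()

def release_all(*locks):
    for l in locks:
        l.lock.release()

r1, r2 = RankedLock(1), RankedLock(2)

# Every caller ends up taking r1 before r2, regardless of the order it names them in.
acquire_in_order(r2, r1)
release_all(r1, r2)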
Deadlock Avoidance
This approach anticipates deadlock before it actually occurs. It employs an algorithm to assess the possibility that deadlock could occur and acts accordingly. The method differs from deadlock prevention, which guarantees that deadlock cannot occur by statically denying one of its necessary conditions. In deadlock avoidance, we attempt to falsify one or more of the necessary conditions in a dynamic way by keeping track of the current state and the possible future conditions. The idea is to let the necessary conditions prevail as long as they do not cause a deadlock, but to falsify them as soon as a deadlock becomes a possibility in the immediate future. As a result, deadlock avoidance leads to better resource utilization than prevention.
If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being
careful when resources are allocated. Perhaps the most famous deadlock-avoidance algorithm is the Banker's algorithm, so named because the process is analogous to that used by a banker in deciding whether a loan can safely be made. The algorithm, developed by Edsger Dijkstra, tests for a safe state by simulating the allocation of the maximum possible amounts of resources and then checking whether the allocation could lead to deadlock for the other pending activities. This simulation is done before the actual allocation is made. The Banker's algorithm seeks to prevent deadlock by becoming involved in the granting or denying of system resources. Each time a process needs a particular non-sharable resource, the request must be approved by the banker. The banker is a conservative lender: every time a process makes a request for a resource to be "loaned", the banker takes a careful look at the bank books and attempts to determine whether or not a deadlock state could possibly arise in the future if the loan request is approved. If the banker has enough free resources to guarantee that at least one process can terminate, it can then take the resources held by that process and add them to the free pool.
If, on the other hand, at any point in the reduction the banker cannot guarantee that any process will terminate because there are not enough free resources to meet the smallest claim, a state of deadlock can ensue. This is called an unsafe state. In this case the loan request is denied and the requesting process is usually blocked.
Safe and Unsafe States
The main algorithms of deadlock avoidance are based on the concepts of safe and unsafe states. Consider the example of a banker. In the table below, there are four customers, each of whom has been granted a credit line of a certain number of units. Even though the total number of units required by the customers is 22, the banker reserves only 10 units to service them.
Suppose that at a certain moment customer A has been granted 1 unit, B has been granted 1 unit, C has been granted 2 units and D has been granted 4 units, so the situation becomes:

The banker now has only 2 units left to service the customers. The state in Figure 7.12 is safe because, with 2 units left, the banker can delay any request except a request from customer C (because the remaining need of C is only 2 units), thus letting C finish and release all four of its units. With four units in hand, the banker can let either D or B have the necessary units, and so on.
Safe States: The key to a state being safe is that there is at least one way for all users to finish. "A state is safe if it is not deadlocked and there is a scheduling order in which all processes can complete their execution, even if they all ask for their maximum number of resources at once." The scheduling order is known as a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently available resources plus the resources held by all Pj with j < i. So we can say that "a system is in a safe state only if there exists a safe sequence". The safe sequence for the example in Figure 7.12 above may be <C, D, A, B>.
Consider what would happen if a request from B for one more unit were granted in the above table. We would have the following situation:

This is an unsafe state since, if all the customers, namely A, B, C and D, asked for their maximum loans, the banker could not satisfy any of them and we would have a deadlock.
Unsafe States: “An unsafe state is not deadlocked, but no such scheduling order exists in which all
the processes can complete their execution.” It may even work, if a process releases resources at the
right time.

The Banker’s Algorithm for Single Instance of Resources


Banker’s Algorithm assumes that as each process enters the system, it declares the maximum
number of instances of each resource type that it might ever require. It further assumes that if a
process is simultaneously allocated its maximum of each resource, then the process will terminate
without additional requests and release all of its allocated resources. These resources then become
available for allocation to other processes.
The steps included in Banker’s algorithm for single resource are:
1. Request granted ONLY if it results in a safe state.
2. If request results in an unsafe state, it is denied and user holds resources and waits until request is
eventually satisfied.
3. In finite time, all requests will be satisfied.
When a request occurs, the algorithm checks whether granting the resource leads to a safe state. If it does, the request is granted; otherwise it is postponed. To check whether a state is safe, the algorithm examines the number of available resources: if they are sufficient to satisfy some process, that process is assumed to run to completion and release its resources, the freed resources are added back, and the check is repeated for the remaining processes. If all processes can finish in this way, the state is safe and there is no deadlock.

The Banker’s Algorithm for Multiple Instances of Resources


The Banker’s algorithm is run by the operating system whenever a process requests resources. The
algorithm prevents deadlock by denying or postponing the request if it determines that accepting the
request could put the system in an unsafe state (one where deadlock could occur).
The above algorithm can also be generalized to handle multiple instances of each resource type. For
example, CD drive is a resource type and if you have 2 CD drives then you can say that there are 2
instances of CD drive.
When a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number should be less than the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If yes, the resources are allocated; otherwise, the
process must wait until some other process releases enough resources. Several data structures are
maintained to implement the Banker’s algorithm. Let n be the number of processes in the system and
m be the number of resource types.
• Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.
• Max: An n x m matrix defines the maximum demand of each process. If Max[i, j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i, j] = k, then Pi is currently allocated k instances of Rj.
• Need: An n x m matrix indicates the remaining resource need of each process. If Need[i, j] = k, then Pi may need k more instances of Rj to complete its task. Note that
Need[i, j] = Max[i, j] – Allocation[i, j]
Safety Algorithm
To find out whether the system is in a safe state or not, a safety algorithm is used.
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize: Work = Available,
and Finish[i] = false for i = 0, 1, …, n-1.
2. Find an index i such that both:
(a) Finish[i] = false
(b) Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi,
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
A direct transcription of these steps in code is given below.
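The transcription below is an illustrative sketch; the small state passed in at the end is made up and is not one of the tables from this chapter.

def is_safe(available, allocation, need):
    """Safety algorithm: return (True, safe_sequence) if the state is safe."""
    n, m = len(allocation), len(available)
    work = list(available)                        # step 1: Work = Available
    finish = [False] * n                          #         Finish[i] = false
    sequence = []
    while True:
        for i in range(n):                        # step 2: find i with Finish[i] false
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                                 # no such i: go to step 4
    return all(finish), sequence if all(finish) else []   # step 4

# Illustrative 3-process, 2-resource-type state (not one of the chapter's tables):
print(is_safe([1, 1], [[1, 0], [0, 1], [1, 1]], [[1, 1], [2, 0], [0, 2]]))
# -> (True, [0, 1, 2])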
Resource Request Algorithm
This algorithm determines whether a request can be granted. Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate requested resources to Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti


Needi = Needi – Requesti
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti and the old resource-allocation state is restored. A sketch of this request check appears below.
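The request check can be sketched in the same style. The snippet repeats a compact safety test so that it stays self-contained; the three-process state in the usage lines is illustrative only, not one of the chapter's tables.

def safe(avail, alloc, need):
    """Compact safety check: True if some order lets every process finish."""
    work, finish = list(avail), [False] * len(alloc)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = progress = True
    return all(finish)

def request_resources(i, request, avail, alloc, need):
    """Resource-request algorithm for process i: True if the request may be granted."""
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(r > a for r, a in zip(request, avail)):
        return False                                             # step 2: Pi must wait
    # Step 3: pretend to allocate, then keep the allocation only if the state is safe.
    new_avail = [a - r for a, r in zip(avail, request)]
    new_alloc = [row[:] for row in alloc]
    new_need = [row[:] for row in need]
    new_alloc[i] = [a + r for a, r in zip(alloc[i], request)]
    new_need[i] = [n - r for n, r in zip(need[i], request)]
    return safe(new_avail, new_alloc, new_need)

# Illustrative state: three processes, three resource types.
avail = [3, 3, 2]
alloc = [[0, 1, 0], [2, 0, 0], [2, 1, 1]]
need  = [[7, 4, 3], [1, 2, 2], [0, 1, 1]]
print(request_resources(1, [1, 0, 2], avail, alloc, need))       # True: the new state is safe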
For example, consider a system with four processes P1 through P4 and three resource types R1, R2 and R3. Resource type R1 has 5 instances, resource type R2 has 4 instances, and resource type R3 has 3 instances. Suppose that at time T0 the resource allocation is as follows:

For instance, consider the vector Available = (1, 1, 1) above. It means that at time T0 the system has 1 instance of R1, 1 instance of R2 and 1 instance of R3 free and available for allocation. Now if P1 requests the resources it needs for its completion, the operating system will allocate them because Request <= Available, according to the resource-request algorithm. The vector Available will then become (3, 1, 1) (as Work = Work + Allocation). The same follows for P2, then for P3, and then for P4. We can see that the system is currently in a safe state because the sequence <P1, P2, P3, P4> satisfies the safety criteria. For another example, consider a system with five processes P1 to P5 and three resource types R1 to R3. Resource type R1 has 10 instances, resource type R2 has 5 instances, and resource type R3 has 7 instances. Suppose that at time T0 the following snapshot of the system has been taken:
We claim that the system is currently in a safe state. Indeed, the sequence <P2, P4, P5, P3, P1> satisfies the safety criteria. Suppose now that process P2 requests one additional instance of resource type R1 and two instances of resource type R3, so Request2 = (1, 0, 2). To decide whether this request can be granted immediately, we first check that Request2 <= Available (i.e. (1, 0, 2) <= (3, 3, 2)), which is true. We then pretend that this request has been fulfilled and arrive at the following new state:
We must determine whether this new system state is safe. To do so we execute our safety algorithm
and find out that the sequence <P2, P4, P5, P1, P3> satisfies our safety requirement. Hence we can
immediately grant the request of process P2.
You should be able to see, however, that in this state a request for (3, 3, 0) by process P5 cannot be granted, since the resources are not available. A request for (1, 2, 0) by process P4 cannot be granted, since it is greater than the maximum need declared by that process. A request for (0, 2, 0) by process P1 cannot be granted even though the resources are available, since the resulting state is unsafe.
Deadlock Detection
A second technique is detection and recovery. When this technique is used, the system does not
attempt to prevent deadlocks from occurring. Instead, it lets them occur, tries to detect when this
happens, and then takes some action to recover after the fact.
Deadlock Detection with One Resource of Each Type
When only one resource of each type exists, we can construct a resource graph for the system as shown in the figure. If this graph contains one or more cycles, a deadlock exists; any process that is part of a cycle is deadlocked. If no cycles exist, the system is not deadlocked.
For example, consider a system with seven processes, P0 to P6, and six resources, R0 to R5. The state of which resources are currently owned and which ones are currently being requested is as follows:
1. Process P0 holds R0 and wants R1.
2. Process P1 holds nothing but wants R2.
3. Process P2 holds nothing but wants R1.
4. Process P3 holds R3 and wants R1 and R2.
5. Process P4 holds R2 and wants R4.
6. process P5 holds R5 and wants R1.
7. Process P6 holds R4 and wants R3.
We draw the graph for the above state. This graph contains one cycle, which can be seen by visual inspection; the cycle is shown in Figure 7.12. From this cycle, we can see that processes P3, P4 and P6 are all deadlocked. Processes P0, P2 and P5 are not deadlocked, because R1 can be allocated to any one of them, which then finishes and returns it. The other two can then take it in turn and also complete.
A formal algorithm for detecting such cycles operates by carrying out the following steps (a sketch in code is given after the steps):
1. For each node N in the graph, perform the following five steps with N as the starting node.
2. Initialize L to the empty list, and designate all the arcs as unmarked.
3. Add the current node to the end of L and check to see if the node now appears in L two times. If it does, the graph contains a cycle and the algorithm terminates.
4. From the given node, see if there are any unmarked outgoing arcs. If so, go to step 5; if not, go to step 6.
5. Pick an unmarked outgoing arc at random and mark it. Then follow it to the new current node and go to step 3.
6. We have now reached a dead end. Remove it from L and go back to the previous node, that is, the one that was current just before this one; make that one the current node and go to step 3. If this node is the initial node, the graph does not contain any cycle and the algorithm terminates.
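A sketch of this cycle search, written as a depth-first search over an adjacency-list encoding of the resource graph (a resource points to the process holding it, a process points to the resources it wants). The dictionary below encodes the seven-process example above; the encoding itself is illustrative.

def find_cycle(graph):
    """Return a list of nodes forming a cycle in a directed graph, or [] if none.
    `graph` maps each node (process or resource) to the list of nodes it points to."""
    def dfs(node, path, on_path, visited):
        visited.add(node)
        on_path.add(node)
        path.append(node)
        for nxt in graph.get(node, []):
            if nxt in on_path:                    # node reached twice: cycle found
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path, on_path, visited)
                if cycle:
                    return cycle
        on_path.discard(node)                     # dead end: back up to the previous node
        path.pop()
        return []

    visited = set()
    for start in graph:                           # try every node as the starting node
        if start not in visited:
            cycle = dfs(start, [], set(), visited)
            if cycle:
                return cycle
    return []

# Holds: resource -> process; wants: process -> resource (seven-process example).
graph = {
    "R0": ["P0"], "P0": ["R1"],
    "P1": ["R2"], "P2": ["R1"],
    "R3": ["P3"], "P3": ["R1", "R2"],
    "R2": ["P4"], "P4": ["R4"],
    "R5": ["P5"], "P5": ["R1"],
    "R4": ["P6"], "P6": ["R3"],
}
print(find_cycle(graph))   # ['R2', 'P4', 'R4', 'P6', 'R3', 'P3', 'R2']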
Deadlock Recovery
Deadlock management has a direct effect on making a reliable connection between processing nodes
in parallel computers. Once we have discovered that there is a deadlock, what can be done about it? One option is simply to reboot. A less drastic approach is to take back a resource from a process to break the cycle. If the resource is not preemptable, snatching it back may do irreparable harm to the process, but it is generally better to kill a process than to crash the whole system.
Recovery through Rollback
If we checkpoint a process from time to time, we can roll it back to the latest checkpoint, hopefully
to a time before it grabbed the resource in question. Database systems use checkpoints, as well as a
technique called logging, allowing them to run processes “backwards,” undoing everything they
have done. Each time the process performs an action, it writes a log record containing enough
information to undo the action.
Starvation can result from continual rollback if the same process is always chosen as the victim. We can avoid this problem by always choosing the youngest process in a cycle: after being rolled back enough times, a process will have grown old enough that it is no longer chosen as the victim.

Recovery through Killing Process


The simplest way to break a deadlock is to kill one of the processes involved. If deadlock recovery involves killing a process altogether and restarting it, it is important to mark the "starting time" of the restarted process so that it will look older than the new processes that have been started since then. It is best to kill processes that can be re-executed from the beginning with no side effects.

Recovery through Preemption


In some cases it is possible to temporarily take away resources that are currently owned by a process and assign them to another running process. The ability to take an allocated resource away from a process depends on the nature of the resource: it can be snatched only if it is preemptable.

Solved Examples

Example 1. Two concurrent processes P1 and P2 want to use two resources R1 and R2 in a mutually
exclusive manner. Initially R1 and R2 are free. The programs executed by the two processes are
given below :
Program for P1 :
S1 : While (R1 is busy) do no-op;
S2 : Set R1 ← Busy;
S3 : While (R2 is busy) do no-op;
S4 : Set R2 ← Busy;
S5 : Use R1 and R2;
S6 : Set R1 ← Free;
S7 : Set R2 ← Free;
Program for P2 :
Q1 : While (R2 is busy) do no-op;
Q2 : Set R2 ← Busy;
Q3 : While (R1 is busy) do no-op;
Q4 : Set R1 ← Busy;
Q5 : Use R1 and R2;
Q6 : Set R2 ← Free;
Q7 : Set R1 ← Free;
(a) Is mutual exclusion guaranteed for R1 and R2? If not, show a possible interleaving of the
statements of P1 and P2 such that mutual exclusion is violated (i.e., both P1 and P2 use R1 or R2 at
the same time).
(b) Can deadlock occur in the above program ? If yes, show a possible interleaving of the statement
of P1 and P2 leading to deadlock.
Solution :
(a) Yes, mutual exclusion is guaranteed for R1 and R2. For P2 to reach Q5 it must find R1 free at Q3, which means P1 has either not yet executed S2 or has already completed its use and released R1 at S6. In the first case, P1 will subsequently find R2 busy at S3 (P2 set it busy at Q2) and must wait until P2 finishes using the resources and releases R2. A symmetric argument applies to P1 reaching S5. Hence P1 and P2 can never use R1 and R2 at the same time.
(b) Yes, deadlock can occur. Consider the interleaving S1, S2, Q1, Q2: P1 marks R1 busy and P2 marks R2 busy. Now P1 loops forever at S3 waiting for R2, while P2 loops forever at Q3 waiting for R1, so neither process can proceed.
Example 2. Consider the following snapshot of the system:

Process P2 needs 1 (available is 5)


Process P3 needs 2 (available is 8)
The safe sequence is <P1, P2, P3>.
Example 3. A system with following processes and resources exists :
(a) Check whether the system is in a safe state.

(b) Process P1 requests one more instance of resource type X and two instances of resource type Z. Can the request be granted?

(a) To check for a safe state, we apply the Banker's and safety algorithms. Using these, we first compute the Need matrix.
The Need matrix is calculated as Needi = Maxi – Allocationi
Therefore, Need:

Now find i such that

Needi <= Work and Finish[i] = False
Initially Finish[i] = False for all i
and Work = Available = (3 3 2)
So i = 1 satisfies the condition,
as Need of P1 (1, 2, 2) <= Work (3 3 2)
Now Work = Work + Allocation1
= (3 3 2) + (2 0 0)
= (5 3 2)
The next value of i which satisfies
Needi <= Work and Finish[i] = False is 3,
since Need3 (0 1 1) <= Work (5 3 2)
Proceeding in the same fashion, we get the safe sequence
< P1, P3, P4, P2, P0 >
Hence the system is in a safe state.
(b) Check Requesti <= Available
i.e., (1 0 2) <= (3 3 2)
which is true.
Assume the resources are granted to P1.


The new state of the system will be

Example 4. Is it possible to have deadlock when there is only one process ? Explain.
Solution: Yes. If a process is given the task of waiting for an event to occur, and the system includes no provision for signalling that event, then we have a one-process deadlock.
By contrast, the classic two-process case looks as follows. Consider two processes P1 and P2 and two resources A and B, and the following situation:
(a) A is allocated to P1.
(b) P1 then requests B.
(c) B is allocated to P2.
(d) P2 then requests A.
(e) B will be released only when P2 gets A, and A will be released only when P1 gets B. The figure shows the resource allocation graph:
Example 5. Determine whether the state in the table below is safe or unsafe.

Solution: Yes, the system is in a safe state.

Allocating both available resources to P2 allows P2 to complete, leaving 1 R1 and 2 R2 instances free.
This state then allows P4 to complete, leaving 3 R1 and 2 R2 instances.
Now P1 may run, leaving 4 R1 and 4 R2 instances. Finally, P3 completes its execution.