RTOS Course File 2024-25

The document outlines the course file for 'Real Time Operating Systems' (PE 522 EC) for the VIII semester, detailing the department's vision and mission, course description, prerequisites, marks distribution, and course outcomes. It includes a comprehensive syllabus covering various topics such as operating systems, scheduling algorithms, concurrency, VxWorks, and UNIX kernel, along with a course plan and assessment methods. Additionally, it highlights the program outcomes and specific outcomes relevant to the course, ensuring alignment with industry requirements.


COURSE FILE CONTENTS

Sl. No.  Topic                                                    Page No.
1.   Department Vision and Mission                                3
2.   Course Description                                           3
3.   Course Overview                                              3
4.   Course Pre-requisites                                        4
5.   Marks Distribution                                           4
6.   POs and PSOs                                                 4
7.   Course Outcomes (COs)                                        5
8.   CO Mapping with POs and PSOs                                 5
9.   Syllabus, Textbooks and Reference Books                      6
10.  Gaps in Syllabus                                             7
11.  Course Plan / Lesson Plan                                    8
12.  Lecture Notes
     Unit-I    Operating Systems                                  11
     Unit-II   Hard versus Soft Real-Time Systems                 28
     Unit-III  Concurrency                                        48
     Unit-IV   Elementary Concepts of VxWorks                     61
     Unit-V    File System, UNIX Kernel                           68
13.  Unit-wise Question Bank
     a. Short Answer Questions                                    81
     b. Long Answer Questions
14.  Previous University Question Papers                          87
15.  Unit-wise Assignment Questions
16.  Internal Question Paper with Key                             89
17.  Content Beyond Syllabus                                      102
18.  Methodology Used to Identify Weak and Bright Students        111
      Support Extended to Weak Students                          111
      Efforts to Engage Bright Students                          111

CERTIFICATE

I, the undersigned, have completed the course allotted to me as shown below.

Sl. No.  Semester  Name of the Subject          Course ID  Total Units
1        VIII      REAL TIME OPERATING SYSTEMS  PE 522 EC  5

Date: 06.03.2025
Academic Year: 2024-2025

Prepared by:
1. Mr. K. Shravan Kumar

Verifying authority:
1. Head of the Department: …………………………………..

2.

3.

PRINCIPAL

MATRUSRI ENGINEERING COLLEGE
Saidabad, Hyderabad-500 059.
(Approved by AICTE & Affiliated to Osmania University)
ELECTRONICS AND COMMUNICATION ENGINEERING

DEPARTMENT VISION

To become a reputed Centre of learning in Electronics and Communication and transform the students into accomplished professionals.

DEPARTMENT MISSION

1. To provide the learning ambience to nurture the young minds with theoretical
and practical knowledge to produce employable and competent engineers.
2. To provide a strong foundation in fundamentals of electronics and
communication engineering to make students explore advances in research
for higher learning.
3. To inculcate awareness for societal needs, continuous learning and
professional practices.
4. To imbibe team spirit and leadership qualities among students.

COURSE DESCRIPTOR

Course Title: REAL TIME OPERATING SYSTEMS
Course Code: PE 522 EC
Programme: BE
Semester: VIII
Course Type: Professional Elective
Regulation: -
Course Structure: Theory - Lectures: 3, Tutorials: -, Credits: 3; Practical - Laboratory: -, Credits: -
Course Faculty: Mr. K. Shravan Kumar

I. COURSE OVERVIEW:
An RTOS is an OS that supports applications that must meet time constraints while producing logically correct results, and it provides the features needed to support real-time applications. Unit one describes the services, functions, and evolution of operating systems. Unit two describes various scheduling algorithms. Unit three covers the principles of concurrency and various deadlock strategies. Unit four describes the functions and applications of VxWorks. Unit five describes the UNIX kernel, UNIX signals, and POSIX standards.

II. COURSE PRE-REQUISITES:

Level  Course Code  Semester  Prerequisites          Credits
UG     PC701EC      VII       Embedded Systems (ES)  3

III. MARKS DISTRIBUTION:

Subject                      SEE Examination  CIA Examination  Total Marks
REAL TIME OPERATING SYSTEMS  70               30               100

IV. PROGRAM OUTCOMES (POs):

The students will be able to:


PO1  Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO2  Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3  Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental considerations.
PO4  Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
PO5  Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.
PO6  The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
PO7  Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable development.
PO8  Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
PO9  Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as, being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
PO11 Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.

V. PROGRAM SPECIFIC OUTCOMES (PSOs):

The students will be able to:


PSO1  Professional Competence: Apply the knowledge of Electronics and Communication Engineering principles in different domains like VLSI, Signal Processing, Communication, and Embedded Systems.
PSO2  Technical Skills: Design and implement products using state-of-the-art hardware and software tools, and thereby provide simple solutions to complex problems.

VI. COURSE OUTCOMES (COs):

The course should enable the students to:

CO1 Classify various types of kernels and operating systems.

CO2 Analyze various scheduling algorithms related to RTOS

CO3 Summarize the inter process communication tools

CO4 Understand the elementary concepts of VxWorks

CO5 Enumerate the fundamental concepts of UNIX operating system

VII. MAPPING COURSE OUTCOMES (COs) with POs and PSOs:


(3 = High; 2 = Medium; 1 = Low)

COs  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1   2   2   2   2   -   -   -   -   -   -    -    2    2    2
CO2   2   -   2   -   -   1   -   -   -   -    -    2    2    2
CO3   2   -   2   2   2   -   -   -   -   -    -    1    2    2
CO4   2   2   2   2   2   1   -   -   -   -    -    1    2    2
CO5   2   2   2   2   2   1   -   -   -   -    -    1    2    2

VIII. SYLLABUS:

UNIT I (6 Hrs)
Operating Systems, Computer-System Organization, Computer-System Architecture, Operating-System Structure, Operating-System Operations, Process Management, Memory Management, Storage Management, Protection and Security, Kernel Data Structures, Computing Environments, Open-Source Operating Systems, Introduction to Real-Time Operating Systems.

UNIT II (7 Hrs)
Hard versus Soft Real-Time Systems: Jobs and Processors, release time, deadlines, and timing constraints, hard and soft timing constraints, hard real-time systems. Uniprocessor Scheduling: Types of scheduling, scheduling algorithms: FCFS, SJF, Priority, Round Robin, UNIX multi-level feedback queue scheduling, thread scheduling, multiprocessor scheduling concept, real-time scheduling concept.

UNIT III (8 Hrs)
Concurrency: Principles of Concurrency, Mutual Exclusion: H/W Support, Software Approaches, Semaphores and Mutexes, Message Passing, Monitors. Classical Problems of Synchronization: Readers-Writers Problem, Producer-Consumer Problem, Dining Philosophers Problem. Deadlock: Principles of Deadlock, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, Integrated Deadlock Strategies.

UNIT IV (6 Hrs)
Elementary Concepts of VxWorks: Multitasking, Task State Transition, Task Control: Task Creation and Activation, Task Stack, Task Names and IDs, Task Options, Task Information, Task Deletion and Deletion Safety. Memory Management: Virtual-to-Physical Address Mapping. Comparison of RTOS: VxWorks, μC/OS-II and RT Linux for Embedded Applications.

UNIT V (9 Hrs)
UNIX Kernel: File System, Concepts of Process, Concurrent Execution and Interrupts. Process Management: Forks and Execution. Basic-Level Programming with System Calls, Shell Programming and Filters, UNIX Signals, POSIX Standards.

TEXT BOOKS:
1. Andrew S. Tanenbaum, "Modern Operating Systems," 4/e, Pearson Education, 2014.
2. Jane W. S. Liu, "Real Time Systems," 1/e, Pearson Education, Asia, 2002.
3. Jean J. Labrosse, "Embedded Systems Building Blocks: Complete and Ready-to-Use Modules in C," 2/e, CRC Press, 1999.

REFERENCES:
1. Karim Yaghmour, Jon Masters, Gilad Ben-Yossef, Philippe Gerum, "Building Embedded Linux Systems," 2/e, O'Reilly Media, 2008.
2. Wind River Systems, "VxWorks Programmer's Guide 5.5," Wind River Systems Inc., 2002.

IX. GAPS IN THE SYLLABUS - TO MEET INDUSTRY / PROFESSION
REQUIREMENTS:

S. No.  Description                  Proposed Actions  Relevance with POs    Relevance with PSOs
1       Issues in Real-Time Design   E-Material        PO1, PO3, PO5, PO11   PSO1, PSO2
2       Asynchronous Communication   E-Material        PO1, PO3, PO5, PO11   PSO1, PSO2

X. COURSE PLAN / LECTURE PLAN:

Lecture No. | Topics to be covered | PPT/BB/OHP/e-material | No. of Hrs | Relevant COs | Text Book / Reference Book
1  | Structures of OS (Monolithic, Microkernel, Layered, Exo-kernel and Hybrid kernel structures) | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
2  | Operating system objectives and functions | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
3  | Virtual computers | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
4  | Interaction of OS and hardware architecture | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
5  | Evolution of operating systems | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
6  | Batch, multiprogramming, multitasking, multiuser, parallel, distributed and real-time OS | e-material | 1 | CO1 | Andrew S. Tanenbaum
7  | Jobs and processors, release time | BB/PPT | 1 | CO2 | Jane W. S. Liu
8  | Deadline and timing constraints, hard and soft timing constraints | BB/PPT | 1 | CO2 | Jane W. S. Liu
9  | Hard real-time systems | BB/PPT | 1 | CO2 | Jane W. S. Liu
10 | Uniprocessor scheduling, types of scheduling | BB/PPT | 1 | CO2 | Jane W. S. Liu
11 | Scheduling algorithms: FCFS, SJF, Priority | BB/PPT | 1 | CO2 | Jane W. S. Liu
12 | Round Robin, UNIX multi-level feedback queue scheduling, thread scheduling | BB/PPT | 1 | CO2 | Jane W. S. Liu
13 | Multiprocessor scheduling concept, real-time scheduling concept | BB/PPT | 1 | CO2 | Jane W. S. Liu
14 | Concurrency: principles of concurrency, mutual exclusion H/W support | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
15 | Software approaches, semaphores and mutexes | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
16 | Message passing, monitors | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
17 | Classical problems of synchronization: Readers-Writers problem | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
18 | Producer-Consumer problem, Dining Philosophers problem | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
19 | Deadlock: principles of deadlock, deadlock prevention, deadlock avoidance | BB/PPT | 1 | CO3 | Andrew S. Tanenbaum
20 | Integrated deadlock strategies | BB/PPT | 2 | CO3 | Andrew S. Tanenbaum
21 | Multitasking, task state transition, task control: task creation and activation | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
22 | Task stack, task names and IDs | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
23 | Task options, task information | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
24 | Task deletion and deletion safety | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
25 | Memory management: virtual-to-physical address mapping | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
26 | Comparison of RTOS: VxWorks, μC/OS-II and RT Linux for embedded applications | BB/PPT | 1 | CO4 | VxWorks Programmer's Guide 5.5
28 | UNIX kernel | BB/PPT | 1 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly
29 | File system, concept of processes | BB/PPT | 1 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly
30 | Concurrent execution and interrupts | BB/PPT | 1 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly
31 | Process management: forks and execution | BB/PPT | 1 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly
32 | Basic-level programming with system calls | BB/PPT | 2 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly
33 | Shell programming and filters | BB/PPT | 1 | CO5 | Building Embedded Linux Systems, 2/e, O'Reilly

LECTURE NOTES

UNIT-I

 Operating Systems

 Computer-system Organization

 Computer-system Architecture

 Operating-system Structure

 Operating-system Operations

 Protection and Security

 Kernel Data Structures

 Computing Environments

 Open-source Operating Systems

 Introduction to Real Time Operating Systems.

UNIT-I
Operating Systems
What is an operating system?
• A program that acts as an intermediary between a user of a
computer and the computer hardware
Operating system goals:
• Execute user programs and make solving user problems easier
• Make the computer system convenient to use
• Use the computer hardware in an efficient manner

Computer System Structure


Computer system can be divided into four components:
• Hardware – provides basic computing resources
CPU, memory, I/O devices
• Operating system - Controls and coordinates use of hardware
among various applications and users
• Application programs – define the ways in which the system
resources are used to solve the computing problems of the users
Word processors, compilers, web browsers, database systems,
video games
• Users
People, machines, other computers

What Operating Systems Do
Depends on the user's point of view
• Users want convenience, ease of use and good performance
Don’t care about resource utilization
• But shared computer such as mainframe or minicomputer must
keep all users happy
Operating system is a resource allocator and control program
making efficient use of HW and managing execution of user
programs
• Users of dedicated systems such as workstations have dedicated
resources but frequently use shared resources from servers
Mobile devices like smartphones and tablets are resource poor,
optimized for usability and battery life
• Mobile user interfaces such as touch screens, voice recognition
• Some computers have little or no user interface, such as embedded
computers in devices and automobiles
Run primarily without user intervention

Defining Operating Systems


• OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair
resource use.
• OS is a control program
Controls execution of programs to prevent errors and improper
use of the computer.
• No universally accepted definition
• "Everything a vendor ships when you order an operating system" is a good approximation
But varies wildly
• "The one program running at all times on the computer" is the kernel, part of the operating system
• Everything else is either
A system program (ships with the operating system, but not part of the kernel), or
An application program, all programs not associated with the operating system
• Today's OSes for general purpose and mobile computing also include middleware – a set of software frameworks that provide additional services to application developers such as databases, multimedia, graphics

Computer Startup.
Bootstrap Program is Loaded at Power Up or Reboot
• Typically stored in ROM, EPROM, generally known as firmware
• Initializes all aspects of system
• Loads operating system kernel and starts execution

Computer System Organization


Computer-system operation
• One or more CPUs, device controllers connect through common
bus providing access to shared memory

• Concurrent execution of CPUs and devices competing for memory


cycles

• I/O devices and the CPU can execute concurrently


• Each device controller is in charge of a particular device type
• Each device controller has a local buffer
• Each device controller type has an operating system device driver
to manage it
• CPU moves data from/to main memory to/from local buffers
• I/O is from the device to local buffer of controller
• Device controller informs CPU that it has finished its operation by
causing an interrupt
Common Functions of Interrupts
• Interrupt transfers control to the interrupt service routine
generally, through the interrupt vector, which contains the
addresses of all the service routines
• Interrupt architecture must save the address of the interrupted
instruction
• A trap or exception is a software-generated interrupt caused either
by an error or a user request
• An operating system is interrupt driven
Interrupt Handling
• The operating system preserves the state of the CPU by storing the
registers and the program counter
• Determines which type of interrupt has occurred:
• Polling
• Vectored interrupt system
• Separate segments of code determine what action should be taken
for each type of interrupt
Interrupt Timeline

I/O Structure
Two methods for handling I/O
• Synchronous: after I/O starts, control returns to the user program only upon I/O completion
• Asynchronous: after I/O starts, control returns to the user program without waiting for I/O completion
Storage Structure

• Main memory – only large storage media that the CPU can access
directly
• Random access
• Typically, volatile

• Typically, random-access memory in the form of Dynamic
Random-access Memory (DRAM)
• Secondary storage – extension of main memory that provides large
nonvolatile storage capacity
• Hard Disk Drives (HDD) – rigid metal or glass platters covered with
magnetic recording material
• Disk surface is logically divided into tracks, which are
subdivided into sectors
• The disk controller determines the logical interaction between
the device and the computer
• Non-volatile memory (NVM) devices– faster than hard disks,
nonvolatile
• Various technologies
• Becoming more popular as capacity and performance
increases, price drops
Storage Hierarchy

• Storage systems organized in hierarchy


• Speed
• Cost
• Volatility
• Caching – copying information into faster storage system; main
memory can be viewed as a cache for secondary storage
• Device Driver for each device controller to manage I/O
• Provides uniform interface between controller and kernel

Caching
• Important principle, performed at many levels in a computer (in
hardware, operating system, software)
• Information in use copied from slower to faster storage temporarily

• Faster storage (cache) checked first to determine if information is
there
• If it is, information used directly from the cache (fast)
• If not, data copied to cache and used there
• Cache smaller than storage being cached
• Cache management important design problem
• Cache size and replacement policy
Computer-System Architecture
• Most systems use a single general-purpose processor
• Most systems have special-purpose processors as well
• Multiprocessors systems growing in use and importance
• Also known as parallel systems, tightly-coupled systems
• Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault
tolerance
• Two types:
1. Asymmetric Multiprocessing – each processor is assigned
a specific task.
2. Symmetric Multiprocessing – each processor performs all
tasks

How a Modern Computer Works

A von Neumann architecture

Symmetric Multiprocessing Architecture

Asymmetric Multiprocessing Architecture

Dual-Core Design
• Multi-chip and multicore
• Systems containing all chips
• Chassis containing multiple separate systems

Clustered Systems
 Like multiprocessor systems, but multiple systems working
together
 Usually sharing storage via a storage-area network (SAN)
 Provides a high-availability service which survives failures
 Asymmetric clustering has one machine in hot-standby mode
 Symmetric clustering has multiple nodes running applications,
monitoring each other
o Some clusters are for high-performance computing (HPC)
 Applications must be written to use parallelization
o Some have distributed lock manager (DLM) to avoid
conflicting operations

Operating System Structure


 Multiprogramming needed for efficiency
o Single user cannot keep CPU and I/O devices always busy
o Multiprogramming organizes jobs (code and data), so CPU
always has one to execute
o A subset of total jobs in system is kept in memory
o One job selected and run via job scheduling
o When it must wait (for I/O for example), OS switches to
another job
 Timesharing (multitasking) is logical extension in which CPU
switches jobs so frequently that users can interact with each job
while it is running, creating interactive computing
o Response time should be < 1 second
o Each user has at least one program executing in memory -
>process
o If several jobs ready to run at the same time ->CPU scheduling
o If processes don't fit in memory, swapping moves them in and
out to run
o Virtual memory allows execution of processes not completely
in memory

Memory Layout of Multiprogram System:

Operating-System Operations
 Interrupt driven by hardware
 Software error or request creates an exception or trap (e.g., division by zero, request for an operating system service)
 Other process problems include infinite loop, processes modifying
each other or the operating system
 Dual-mode operation allows OS to protect itself and other system
components
User mode and kernel mode
Mode bit provided by hardware
 Provides ability to distinguish when system is running
user code or kernel code
• When a user is running -> mode bit is “user”
• When kernel code is executing ->mode bit is
“kernel”
 Some instructions designated as privileged, only
executable in kernel mode
 System call changes mode to kernel, return from call
resets it to user
Transition from User to Kernel Mode:

Timer to prevent infinite loop (or process hogging resources)
 Timer is set to interrupt the computer after some time period
 Keep a counter that is decremented by the physical clock
 Operating system sets the counter (privileged instruction)
 When the counter reaches zero, generate an interrupt
 Set up before scheduling a process, to regain control or terminate a program that exceeds its allotted time
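
The countdown mechanism above can be sketched in a few lines of C. This is only an illustration: the names start_time_slice(), timer_isr(), and schedule() are placeholders, not a real kernel API.

    /* Sketch of a countdown timer used to regain control from a process.
     * All names here are illustrative placeholders, not a real kernel API. */
    static volatile int ticks_left;

    void schedule(void) { /* stub: pick the next process to run */ }

    void start_time_slice(int ticks)   /* privileged: kernel sets the counter */
    {
        ticks_left = ticks;
    }

    void timer_isr(void)               /* hardware invokes this on every clock tick */
    {
        if (--ticks_left == 0)
            schedule();                /* counter reached zero: OS regains control */
    }

    int main(void)
    {
        start_time_slice(3);
        timer_isr(); timer_isr(); timer_isr();   /* after 3 ticks, schedule() runs */
        return 0;
    }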
Process Management
 A process is a program in execution. It is a unit of work within the
system. Program is a passive entity; process is an active entity.
 Process needs resources to accomplish its task
 CPU, memory, I/O, files
 Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location
of next instruction to execute
 Process executes instructions sequentially, one at a time, until
completion
 Multi-threaded process has one program counter per thread
 Typically, system has many processes, some user, some operating
system running concurrently on one or more CPUs
 Concurrency by multiplexing the CPUs among the processes /
threads
Process Management Activities
 The operating system is responsible for the following activities in
connection with process management:
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling
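
Process creation and deletion can be made concrete with a minimal UNIX sketch (fork and exec are covered in detail in Unit V); this assumes a POSIX system:

    /* Minimal POSIX example: create a child process, let it terminate, and
     * reclaim it, illustrating the create/delete activities listed above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                /* create a new process */
        if (pid == 0) {                    /* child: a new active entity */
            printf("child %d running\n", (int)getpid());
            exit(0);                       /* terminate; OS reclaims resources */
        }
        wait(NULL);                        /* parent suspends until child exits */
        return 0;
    }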
Memory Management
 To execute a program all (or part) of the instructions must be in
memory
 All (or part) of the data that is needed by the program must be in
memory
 Memory management determines what is in memory and when
o Optimizing CPU utilization and computer response to users
 Memory management activities
o Keeping track of which parts of memory are currently being
used and by whom
o Deciding which processes (or parts thereof) and data to move
into and out of memory
o Allocating and deallocating memory space as needed

File-system Management
 OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by a device (e.g., disk drive, tape drive)
 Varying properties include access speed, capacity, data-
transfer rate, access method (sequential or random)
 File-System management
o Files usually organized into directories
o Access control on most systems to determine who can access
what
o OS activities include
 Creating and deleting files and directories
 Primitives to manipulate files and directories
 Mapping files onto secondary storage
 Backup files onto stable (non-volatile) storage media
Mass-Storage Management
 Usually, disks used to store data that does not fit in main memory
or data that must be kept for a “long” period of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and
its algorithms
 OS activities
o Mounting and unmounting
o Free-space management
o Storage allocation
o Disk scheduling
o Partitioning
o Protection
Characteristics of Various Types of Storage

Migration of data “A” from Disk to Register


 Multitasking environments must be careful to use most recent value,
no matter where it is stored in the storage hierarchy.

 Multiprocessor environment must provide cache coherency in


hardware such that all CPUs have the most recent value in their
cache
 Distributed environment situation even more complex
o Several copies of a datum can exist
I/O Subsystem

 One purpose of OS is to hide peculiarities of hardware devices from


the user
 I/O subsystem responsible for
o Memory management of I/O including
buffering (storing data temporarily while it is being transferred),
caching (storing parts of data in faster storage for performance),
spooling (the overlapping of output of one job with input of other
jobs)
o General device-driver interface
o Drivers for specific hardware devices

Protection and Security


 Protection – any mechanism for controlling access of processes or
users to resources defined by the OS
 Security – defense of the system against internal and external
attacks
o Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
 Systems generally first distinguish among users, to determine who
can do what
o User identities (user IDs, security IDs) include name and
associated number, one per user
o User ID then associated with all files, processes of that user to
determine access control
o Group identifier (group ID) allows set of users to be defined
and controls managed, then also associated with each process,
file
o Privilege escalation allows user to change to effective ID with
more rights
Computing Environments
 Traditional
 Mobile
 Client Server
 Peer-to-Peer
 Cloud computing
 Real-time Embedded
Traditional
 Stand-alone general-purpose machines
 But blurred as most systems interconnect with others (i.e., the
Internet)
 Portals provide web access to internal systems
 Network computers (thin clients) are like Web terminals
 Mobile computers interconnect via wireless networks
 Networking found everywhere – even home systems use firewalls
to protect home computers from Internet attacks
Mobile
 Handheld smartphones, tablets, etc.
o What is the functional difference between them and a
“traditional” laptop?
 Extra feature – more OS features (GPS, gyroscope)
 Allows new types of apps like augmented reality
 Use IEEE 802.11 wireless, or cellular data networks for connectivity
 Leaders are Apple iOS and Google Android
Client Server
 Client-Server Computing
• Dumb terminals supplanted by smart PCs
• Many systems now servers, responding to requests generated by
clients
 Compute-server system provides an interface to client
to request services (i.e., database)
 File-server system provides interface for clients to store
and retrieve files
Peer-to-Peer
• Another model of distributed system
• P2P does not distinguish clients and servers
– Instead, all nodes are considered peers
– May each act as client, server or both
– Node must join P2P network
• Registers its service with central lookup service on
network, or
• Broadcast request for service and respond to requests
for service via discovery protocol
– Examples include Napster and Gnutella, Voice over IP
(VoIP) such as Skype

Cloud Computing
• Delivers computing, storage, even apps as a service across a
network
• Logical extension of virtualization because it uses virtualization as
the base for its functionality.
– Amazon EC2(Elastic Compute Cloud) has thousands of
servers, millions of virtual machines, petabytes of storage
available across the Internet, pay based on usage
• Many types
– Public cloud – available via Internet to anyone willing to pay
– Private cloud – run by a company for the company’s own
use
– Hybrid cloud – includes both public and private cloud
components
– Software as a Service (SaaS) – one or more applications
available via the Internet (i.e., word processor)
– Platform as a Service (PaaS) – software stack ready for
application use via the Internet (i.e., a database server)
– Infrastructure as a Service (IaaS) – servers or storage
available over Internet (i.e., storage available for backup use)
• Cloud computing environments composed of traditional OSes, plus
VMMs, plus cloud management tools
– Internet connectivity requires security like firewalls
– Load balancers spread traffic across multiple applications

Real-Time Embedded Systems


• Real-time embedded systems most prevalent form of computers
– Vary considerably: special purpose, limited purpose OS, real-time OS
– Use expanding
• Many other special computing environments as well
– Some have OSes, some perform tasks without an OS
• Real-time OS has well-defined fixed time constraints
– Processing must be done within constraint
– Correct operation only if constraints met

Free and Open-Source Operating Systems


• Operating systems made available in source-code format rather than
just binary closed-source and proprietary
• Counter to the copy protection and Digital Rights Management
(DRM) movement
• Started by Free Software Foundation (FSF), which has “copyleft”
GNU Public License (GPL)

– Free software and open-source software are two different ideas
championed by different groups of people
• https://www.gnu.org/philosophy/open-source-misses-the-point.en.html
• Examples include GNU/Linux and BSD UNIX (including core of Mac
OS X), and many more
• Can use a VMM like VMware Player (free on Windows) or VirtualBox (open source and free on many platforms - http://www.virtualbox.com)
– Use to run guest operating systems for exploration

Real Time operating System


A real-time operating system is used when there are rigid (strict) time requirements on the operation of a processor or the flow of data.

It is often used as a control device in a dedicated application. Systems that control scientific experiments, medical imaging systems, and industrial control systems are real-time systems.

These applications also include some home appliance systems, weapon systems, and automobile engine fuel-injection systems.

A real-time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail.

Since meeting strict deadlines is crucial in real-time systems, sometimes the operating system is simply a library linked in with the application programs.

There are two types of real-time operating systems:

Hard real-time systems
This type of system guarantees that critical tasks complete on time. Many of these are found in industrial process control, avionics, military, and similar application areas. This goal requires that all delays in the system be bounded.

Soft real-time systems
In a soft real-time system, missing an occasional deadline, while not desirable, is acceptable and does not cause any permanent damage. Digital audio or multimedia systems fall in this category.

An example of a real-time operating system is μC/OS.

UNIT-II

 Jobs & Processors

 Release time, Deadlines, Hard & Soft timing constraints

 Types of Scheduling Algorithms

 Multilevel feedback queue scheduling

 Thread Scheduling

 Multiprocessor Scheduling Concept

 Real Time Scheduling Concept

UNIT-II

Jobs and Processors


Each unit of work that is scheduled and executed by the system is a job. Set of
related jobs which jointly provides some system function is called task.
Computation of a control law is a job, so is the computation of FFT (Fast Fourier
Transform) of sensor data or transmission of data packet or file retrieval and so
on. Rather than using different verbs (E.g. Compute, transmit etc.) for different
types of jobs, we say job executes or is executed by system The jobs mentioned
above executes on CPU, network or disk. These all resources are called processor,
except in the case where we wanted to be specific.
Release Time
The release time of a job is the instant of time at which the job becomes available for execution. The job can be scheduled and executed at any time at or after its release time, whenever its data and control dependency conditions are met.
 Example: Consider a system which monitors and controls several furnaces. After it is initialized (at t = 0), it samples and reads the sensors every 100 msec. Suppose the first control-law computation begins at 20 msec. The release times of the control-law computation jobs J0, J1, ..., Jk are then 20 + k*100 msec, for k = 0, 1, 2, .... We say that jobs have no release time if all the jobs are released when the system begins execution.
Deadlines
The deadline of a job is the instant of time by which its execution is required to be completed. In the previous example, each control-law computation job must be completed by the release time of the next job.
 So their deadlines are 120 msec, 220 msec, and so on. It is often more natural to state the timing requirement of a job in terms of its response time, i.e., the time from its release time to the instant when it completes.
 The maximum allowable response time of a job is its relative deadline.
In the above example, the relative deadline of every control-law computation job is 100 msec. The deadline of a job, sometimes also called its absolute deadline, is equal to its release time plus its relative deadline.
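
The furnace example can be checked with a short C sketch that simply evaluates the release time 20 + k*100 and the absolute deadline (release time plus the 100 msec relative deadline) for the first few jobs:

    /* Release times and absolute deadlines for the control-law jobs above. */
    #include <stdio.h>

    int main(void)
    {
        const int first_release = 20, period = 100, rel_deadline = 100;
        for (int k = 0; k < 3; k++) {
            int release  = first_release + k * period;  /* r_k = 20 + 100k */
            int deadline = release + rel_deadline;      /* d_k = r_k + 100 */
            printf("J%d: release = %d ms, deadline = %d ms\n", k, release, deadline);
        }
        return 0;    /* prints deadlines 120, 220, 320 ms, as in the text */
    }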

Timing Constraints
Constraints imposed on the timing behavior of a job are its timing constraints. The timing constraint of a job can be specified in terms of its release time and its relative or absolute deadline.
Hard and Soft Timing Constraints
It is common to divide timing constraints into two types: hard and soft. The classification is based on the functional criticality of jobs, the usefulness of late results, and the deterministic or probabilistic nature of the constraints.

Common definition
 A timing constraint or deadline is hard if the failure to meet it is considered
to be a fatal fault.
 A hard deadline is imposed on a job if the late result produced by the job
after deadline may have disastrous consequences.
 A few misses of soft deadlines do no serious harm; only the system's overall performance decreases.
 The distinction between hard and soft timing constraints can also be made in terms of the usefulness of the result as a function of tardiness.
 The tardiness of a job measures how late it completes relative to its deadline (completion time - deadline). The usefulness of the result produced by a soft real-time job decreases gradually as tardiness increases.
 The usefulness of the result of a hard real-time job falls abruptly, and may even become negative, when tardiness becomes larger than 0.
 The question of whether a timing constraint is hard or soft depends on the seriousness of the consequences of missing it.
Scheduling Algorithms
Most Operating Systems today use very similar CPU time scheduling
algorithms, all based on the same basic ideas, but with Operating System-
specific adaptations and extensions.

CPU scheduling is the task of selecting a waiting process from the ready
queue and allocating the CPU to it. The CPU is allocated to the selected
process by the dispatcher.

A CPU scheduling algorithm should try to maximize the following:

1. CPU utilization
2. Throughput
A CPU scheduling algorithm should try to minimize the following:

1. Turnaround time
2. Waiting time
3. Response time

(1) Explain FCFS, Round Robin, Shortest Job First, Shortest Remaining Job First
and Priority Scheduling algorithms with illustration.
I FCFS (First Come First Serve):
 Selection criteria:
The process that requests first is served first. It means that processes are
served in the exact order of their arrival.
 Decision Mode:
Non preemptive: Once a process is selected, it runs until it is blocked
for an I/O or some event, or it is terminated.
 Implementation:

This strategy can be easily implemented by using a FIFO (First In, First Out) queue. When the CPU becomes free, the process at the first position in the queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times and the time required to complete execution are given in the following table. All time values are in milliseconds.

Process  Arrival Time (T0)  CPU Burst Time (∆T)
P0       0                  10
P1       1                  6
P2       3                  2
P3       5                  4

 Gantt Chart:

| P0 | P1 | P2 | P3 |
0    10   16   18   22

 Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present. So the next process, P1, is selected from the ready queue and allowed to run till it completes. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              10                10             0
P1       1                  6               16                15             9
P2       3                  2               18                15             13
P3       5                  4               22                17             13

Average Turnaround Time = (10 + 15 + 15 + 17) / 4 = 57 / 4 = 14.25 ms
Average Waiting Time = (0 + 9 + 13 + 13) / 4 = 35 / 4 = 8.75 ms

 Advantages:
 Simple, fair, no starvation.
 Easy to understand, easy to implement.
 Disadvantages:
 Not efficient. Average waiting time is too high.
 Convoy effect is possible. All small I/O bound processes wait for
one big CPU bound process to acquire CPU.
 CPU utilization may be less efficient especially when a CPU bound
process is running with many I/O bound processes.
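
The FCFS statistics above can be reproduced with a short C sketch. The data comes from the worked example, and the processes are assumed to be listed in arrival order:

    /* FCFS: run each process to completion in arrival order and compute
     * turnaround and waiting times for the example above. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 3, 5}, burst[] = {10, 6, 2, 4};
        int n = 4, clock = 0;
        double tat_sum = 0, wt_sum = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i]) clock = arrival[i]; /* CPU idles until arrival */
            clock += burst[i];                          /* run to completion */
            int tat = clock - arrival[i];               /* TAT = T1 - T0 */
            int wt  = tat - burst[i];                   /* WT = TAT - burst */
            tat_sum += tat; wt_sum += wt;
            printf("P%d: finish=%d TAT=%d WT=%d\n", i, clock, tat, wt);
        }
        printf("avg TAT = %.2f ms, avg WT = %.2f ms\n", tat_sum / n, wt_sum / n);
        return 0;    /* prints 14.25 ms and 8.75 ms, as in the table */
    }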

II Shortest Job First (SJF):


 Selection Criteria:
The process, that requires shortest time to complete execution, is
served first.
 Decision Mode:
Non preemptive: Once a process is selected, it runs until either it is
blocked for an I/O or some event, or it is terminated.
 Implementation:
This strategy can be implemented by using sorted FIFO queue. All
processes in a queue are sorted in ascending order based on their
required CPU bursts. When CPU becomes free, a process from the first
position in a queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times and the time required to complete execution are given in the following table. All time values are in milliseconds.

Process  Arrival Time (T0)  CPU Burst Time (∆T)
P0       0                  10
P1       1                  6
P2       3                  2
P3       5                  4

 Gantt Chart:

| P0 | P2 | P3 | P1 |
0    10   12   16   22

 Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present. So the process with the shortest CPU burst, P2, is selected and allowed to run till it completes. Whenever more than one process is available, this type of decision is taken. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              10                10             0
P1       1                  6               22                21             15
P2       3                  2               12                9              7
P3       5                  4               16                11             7

Average Turnaround Time = (10 + 21 + 9 + 11) / 4 = 51 / 4 = 12.75 ms
Average Waiting Time = (0 + 15 + 7 + 7) / 4 = 29 / 4 = 7.25 ms
 Advantages:
 Less waiting time.
 Good response for short processes.
 Disadvantages:
 It is difficult to estimate time required to complete execution.
 Starvation is possible for long process. Long process may wait
forever.

III Shortest Remaining Time Next (SRTN):


 Selection criteria:
The process, whose remaining run time is shortest, is served first. This is a
preemptive version of SJF scheduling.
 Decision Mode:
Preemptive: When a new process arrives, its total time is compared to
the current process remaining run time. If the new job needs less time to
finish than the current process, the current process is suspended and
the new job is started.
 Implementation:
This strategy can also be implemented by using a sorted FIFO queue. All processes in the queue are sorted in ascending order of their remaining run time. When the CPU becomes free, the process at the first position in the queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times and the time required to complete execution are given in the following table. All time values are in milliseconds.

Process  Arrival Time (T0)  CPU Burst Time (∆T)
P0       0                  10
P1       1                  6
P2       3                  2
P3       5                  4

 Gantt Chart:

| P0 | P1 | P2 | P1 | P3 | P0 |
0    1    3    5    9    13   22

 Initially only process P0 is present, and it is allowed to run. But when P1 arrives, it has the shortest remaining run time, so P0 is preempted and P1 is allowed to run. Whenever a new process arrives or the current process blocks, this type of decision is taken. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              22                22             12
P1       1                  6               9                 8              2
P2       3                  2               5                 2              0
P3       5                  4               13                8              4

Average Turnaround Time = (22 + 8 + 2 + 8) / 4 = 40 / 4 = 10 ms
Average Waiting Time = (12 + 2 + 0 + 4) / 4 = 18 / 4 = 4.5 ms

 Advantages:
 Less waiting time.
 Quite good response for short processes.
 Disadvantages:
 Again, it is difficult to estimate remaining time necessary to complete
execution.
 Starvation is possible for long process. Long process may wait
forever.
 Context switch overhead is there.
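
SRTN can be simulated tick by tick; the following C sketch reproduces the example above by always running the arrived, unfinished process with the least remaining time:

    /* Millisecond-by-millisecond SRTN simulation of the example above. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 3, 5}, burst[] = {10, 6, 2, 4};
        int remaining[] = {10, 6, 2, 4}, finish[4], n = 4, done = 0;

        for (int t = 0; done < n; t++) {
            int pick = -1;                 /* arrived, unfinished, least remaining */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) continue;        /* CPU idle this tick */
            if (--remaining[pick] == 0) {  /* run pick for one tick */
                finish[pick] = t + 1;
                done++;
            }
        }
        for (int i = 0; i < n; i++)
            printf("P%d: TAT=%d WT=%d\n", i, finish[i] - arrival[i],
                   finish[i] - arrival[i] - burst[i]);
        return 0;    /* TATs 22, 8, 2, 8 and WTs 12, 2, 0, 4, as in the table */
    }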

IV Round Robin:
 Selection Criteria:
 Each selected process is assigned a time interval, called time quantum or
time slice. Process is allowed to run only for this time interval. Here,
two things are possible: First, Process is either blocked or terminated
before the quantum has elapsed. In this case the CPU switching is done
and another process is scheduled to run. Second, Process needs CPU
burst longer than time quantum. In this case, process is running at the
end of the time quantum. Now, it will be preempted and moved to the
end of the queue. CPU will be allocated to another process. Here, length
of time quantum is critical to determine.
 Decision Mode:
Preemptive:
 Implementation:
This strategy can be implemented by using circular FIFO queue. If any
process comes, or process releases CPU, or process is preempted. It is
moved to the end of the queue. When CPU becomes free, a process from
the first position in a queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times and the time required to complete execution are given in the following table. All time values are in milliseconds. The time quantum is 4 ms, and the context-switch overhead is 1 ms.

Process  Arrival Time (T0)  CPU Burst Time (∆T)
P0       0                  10
P1       1                  6
P2       3                  2
P3       5                  4

 Gantt Chart (CS = 1 ms context switch):

| P0 | CS | P1 | CS | P2 | CS | P0 | CS | P3 | CS | P1 | CS | P0 |
0    4    5    9    10   12   13   17   18   22   23   25   26   28

 At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run. At 12 ms, process P2 voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              28                28             18
P1       1                  6               25                24             18
P2       3                  2               12                9              7
P3       5                  4               22                17             13

Average Turnaround Time = (28 + 24 + 9 + 17) / 4 = 78 / 4 = 19.5 ms
Average Waiting Time = (18 + 18 + 7 + 13) / 4 = 56 / 4 = 14 ms

 Advantages:
 One of the oldest, simplest, fairest and most widely used
algorithms.
 Disadvantages:
 Context switch overhead is there.
 Determination of time quantum is too critical. If it is too short, it
causes frequent context switches and lowers CPU efficiency. If it is
too long, it causes poor response for short interactive process.
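
The Round Robin example, with its 4 ms quantum and 1 ms context-switch overhead, can be checked with the following C sketch built around a simple FIFO ready queue:

    /* Round Robin with a 4 ms quantum and 1 ms context-switch overhead,
     * reproducing the Gantt chart of the example above. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 3, 5}, remaining[] = {10, 6, 2, 4};
        const int n = 4, quantum = 4, cs = 1;
        int queue[64], head = 0, tail = 0, queued[4] = {1, 0, 0, 0};
        int clock = 0, done = 0;

        queue[tail++] = 0;                   /* P0 is ready at t = 0 */
        while (done < n) {
            int p = queue[head++];
            int slice = remaining[p] < quantum ? remaining[p] : quantum;
            clock += slice;
            remaining[p] -= slice;
            for (int i = 0; i < n; i++)      /* enqueue newly arrived processes */
                if (!queued[i] && arrival[i] <= clock) {
                    queue[tail++] = i;
                    queued[i] = 1;
                }
            if (remaining[p] > 0)
                queue[tail++] = p;           /* preempted: back of the queue */
            else {
                printf("P%d finishes at %d ms\n", p, clock);
                if (++done == n) break;
            }
            clock += cs;                     /* 1 ms context-switch overhead */
        }
        return 0;    /* finish times 12, 22, 25, 28 ms, as in the table */
    }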

V Non-Preemptive Priority Scheduling:


 Selection criteria:
The process, that has highest priority, is served first.
 Decision Mode:
Non-Preemptive: Once a process is selected, it runs until it blocks for
an I/O or some event, or it terminates.
 Implementation:
This strategy can be implemented by using a sorted FIFO queue. All processes in the queue are sorted based on their priority, with the highest-priority process at the front end. When the CPU becomes free, the process at the first position in the queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times, the time required to complete execution, and their priorities are given in the following table. All time values are in milliseconds, and a smaller value means a higher priority.

Process  Arrival Time (T0)  CPU Burst Time (∆T)  Priority
P0       0                  10                   5
P1       1                  6                    4
P2       3                  2                    2
P3       5                  4                    0

Here, process priorities are in this order: P3 > P2 > P1 > P0.

 Gantt Chart:

| P0 | P3 | P2 | P1 |
0    10   14   16   22

 Initially only process P0 is present, and it is allowed to run. But when P0 completes, all other processes are present. So the process with the highest priority, P3, is selected and allowed to run till it completes. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              10                10             0
P1       1                  6               22                21             15
P2       3                  2               16                13             11
P3       5                  4               14                9              5

Average Turnaround Time = (10 + 21 + 13 + 9) / 4 = 53 / 4 = 13.25 ms
Average Waiting Time = (0 + 15 + 11 + 5) / 4 = 31 / 4 = 7.75 ms
 Advantages:

 Priority is considered. Critical processes can get even better
response time.
 Disadvantages:
 Starvation is possible for low priority processes. It can be
overcome by using technique called ‘Aging’.
 Aging: gradually increases the priority of processes that wait in the
system for a
long time.

Preemptive Priority Scheduling:


 Selection criteria:
The process, that has highest priority, is served first.
 Decision Mode:
Preemptive: When a new process arrives, its priority is compared
with current process priority. If the new job has higher priority than
the current, the current process is suspended and new job is
started.
 Implementation:
This strategy can be implemented by using sorted FIFO queue. All
processes in a queue are sorted based on priority with highest priority
process at front end. When CPU becomes free, a process from the first
position in a queue is selected to run.
 Example:
Consider the following set of four processes. Their arrival times, the time required to complete execution, and their priorities are given in the following table. All time values are in milliseconds, and a smaller value means a higher priority.

Process  Arrival Time (T0)  CPU Burst Time (∆T)  Priority
P0       0                  10                   5
P1       1                  6                    4
P2       3                  2                    2
P3       5                  4                    0

Here, process priorities are in this order: P3 > P2 > P1 > P0.

 Gantt Chart:

| P0 | P1 | P2 | P3 | P1 | P0 |
0    1    3    5    9    13   22

 Initially only process P0 is present, and it is allowed to run. But when P1 arrives, it has higher priority, so P0 is preempted and P1 is allowed to run. This procedure is repeated till all processes complete their execution.

 Statistics:

Process  Arrival Time (T0)  CPU Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
P0       0                  10              22                22             12
P1       1                  6               13                12             6
P2       3                  2               5                 2              0
P3       5                  4               9                 4              0

Average Turnaround Time = (22 + 12 + 2 + 4) / 4 = 40 / 4 = 10 ms
Average Waiting Time = (12 + 6 + 0 + 0) / 4 = 18 / 4 = 4.5 ms

 Advantages:
 Priority is considered. Critical processes can get even better
response time.
 Disadvantages:
 Starvation is possible for low priority processes. It can be
overcome by using technique called ‘Aging’.
 Aging: gradually increases the priority of processes that wait in the
system for a long time.
 Context switch overhead is there.

(2) Five batch jobs A to E arrive at same time. They have estimated running times
10,6,2,4 and 8 minutes. Their priorities are 3,5,2,1 and 4 respectively with 5
being highest priority. For each of the following algorithm determine mean
process turnaround time. Ignore process swapping overhead.
Round Robin, Priority Scheduling, FCFS, SJF.

Process  Running Time (∆T, min)  Priority
A        10                      3
B        6                       5
C        2                       2
D        4                       1
E        8                       4

First Come First Served:

| A | B | C | D | E |
0   10   16   18   22   30

Process  Arrival Time (T0)  Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
A        0                  10          10                10             0
B        0                  6           16                16             10
C        0                  2           18                18             16
D        0                  4           22                22             18
E        0                  8           30                30             22

Average Turnaround Time = (10 + 16 + 18 + 22 + 30) / 5 = 96 / 5 = 19.2 min
Average Waiting Time = (0 + 10 + 16 + 18 + 22) / 5 = 66 / 5 = 13.2 min

Shortest Job First:

| C | D | B | E | A |
0   2   6   12   20   30

Process  Arrival Time (T0)  Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
A        0                  10          30                30             20
B        0                  6           12                12             6
C        0                  2           2                 2              0
D        0                  4           6                 6              2
E        0                  8           20                20             12

Average Turnaround Time = (30 + 12 + 2 + 6 + 20) / 5 = 70 / 5 = 14 min
Average Waiting Time = (20 + 6 + 0 + 2 + 12) / 5 = 40 / 5 = 8 min

Priority Scheduling:

| B | E | A | C | D |
0   6   14   24   26   30

Process  Arrival Time (T0)  Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
A        0                  10          24                24             14
B        0                  6           6                 6              0
C        0                  2           26                26             24
D        0                  4           30                30             26
E        0                  8           14                14             6

Average Turnaround Time = (24 + 6 + 26 + 30 + 14) / 5 = 100 / 5 = 20 min
Average Waiting Time = (14 + 0 + 24 + 26 + 6) / 5 = 70 / 5 = 14 min

Round Robin (time slice / quantum = 2 min):

| A | B | C | D | E | A | B | D | E | A | B | E | A | E | A |
0   2   4   6   8   10  12  14  16  18  20  22  24  26  28  30

Process  Arrival Time (T0)  Burst (∆T)  Finish Time (T1)  TAT = T1 - T0  WT = TAT - ∆T
A        0                  10          30                30             20
B        0                  6           22                22             16
C        0                  2           6                 6              4
D        0                  4           16                16             12
E        0                  8           28                28             20

Average Turnaround Time = (30 + 22 + 6 + 16 + 28) / 5 = 102 / 5 = 20.4 min
Average Waiting Time = (20 + 16 + 4 + 12 + 20) / 5 = 72 / 5 = 14.4 min

(3) Suppose that the following processes arrive for the execution at the times
indicated. Each process will run the listed amount of time. Assume
preemptive scheduling.
Process  Arrival Time (ms)  Burst Time (ms)
P1       0.0                8
P2       0.4                4
P3       1.0                1
What is the turnaround time for these processes with Shortest Job First
scheduling algorithm?
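
Worked out with the preemptive SJF (shortest-remaining-time) rule illustrated earlier: P1 runs 0.0-0.4 ms; P2 arrives and preempts (4 < 7.6 remaining) and runs 0.4-1.0; P3 arrives and preempts (1 < 3.4 remaining) and runs 1.0-2.0; P2 resumes 2.0-5.4; P1 resumes 5.4-13.0. Turnaround times: P1 = 13.0 - 0.0 = 13.0 ms, P2 = 5.4 - 0.4 = 5.0 ms, P3 = 2.0 - 1.0 = 1.0 ms, so the average turnaround time is (13.0 + 5.0 + 1.0) / 3 ≈ 6.33 ms.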

Multilevel Feedback Queue Scheduling


In this CPU schedule a process is allowed to move between queues. If a process
uses too much CPU time, it will be moved to a lower priority queue. This scheme
leaves I/O bound and interactive processes in the higher priority queues.
Similarly, a process that waits too long in a lower priority queue may be moved to
a higher priority queue

Figure Multi-level Feedback Queue Scheduling
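
The movement between queues can be sketched in C as below. The demotion and aging rules, and the names on_quantum_expired() and on_aging_tick(), are illustrative rather than taken from any particular kernel:

    /* Sketch of multilevel feedback: exhaust your quantum and you drop a
     * level; wait too long and aging promotes you. Names are illustrative. */
    #include <stdio.h>

    #define LEVELS 3

    struct task { int level; int wait_time; };

    void on_quantum_expired(struct task *t)   /* used too much CPU: demote */
    {
        if (t->level < LEVELS - 1)
            t->level++;
    }

    void on_aging_tick(struct task *t, int threshold)  /* waited too long: promote */
    {
        if (++t->wait_time >= threshold && t->level > 0) {
            t->level--;
            t->wait_time = 0;
        }
    }

    int main(void)
    {
        struct task t = {0, 0};
        on_quantum_expired(&t);               /* consumed its full quantum */
        printf("level after demotion: %d\n", t.level);   /* prints 1 */
        for (int i = 0; i < 5; i++)
            on_aging_tick(&t, 5);             /* 5 ticks of waiting */
        printf("level after aging: %d\n", t.level);      /* back to 0 */
        return 0;
    }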

Thread Scheduling
The main approaches of threading scheduling are:
1. Load sharing
2. Gang scheduling
3. Dedicated processor assignment and Dynamic scheduling
Load Sharing
Processes are not assigned to a particular processor. A global queue of ready threads
is maintained and each processor, when idle select a thread from the queue.
There are three versions of load sharing are these are:
1. First come first served
2. Smallest number of threads first
3. Preemptive smallest number of threads first
First come first served: when a job arrives each of its threads is placed
consecutively at the end of the shared queue. When a processor becomes
idle it picks the next ready thread, which it executes until completion or
blocking.
Smallest number of threads first: The shared ready queue is organized as
a priority queue with highest priority given to threads from jobs with the
smallest number of unscheduled threads. Jobs of equal priority are ordered
according to which job arrives first.
Preemptive smallest number of threads first: Highest priority is given to jobs with the smallest number of incomplete threads.

Advantages of load sharing are:


1. The load is distributed evenly across the processors assuring that no
processor is idle while work is available to do.
2. No centralized scheduler is required
3. The global queue can be organized and accessed by using any of the
schemes.
Disadvantages of load sharing are:
1. The central queue occupies a region of memory that must be accessed in a manner that enforces mutual exclusion.
2. Preempted threads are unlikely to resume execution on the same processor.
3. If all threads are treated as a common pool of threads, it is unlikely that all the threads of a program will gain access to processors at the same time.
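
The global ready queue at the heart of load sharing can be sketched in C; the mutex makes the mutual-exclusion cost of disadvantage 1 concrete. The queue contents and names here are illustrative:

    /* Load sharing: each idle processor pulls the next ready thread from a
     * single global queue, protected by a mutex. */
    #include <pthread.h>
    #include <stdio.h>

    static int queue[16] = {1, 2, 3, 4};     /* ready thread IDs (illustrative) */
    static int head = 0, count = 4;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    int next_ready_thread(void)              /* called by a processor when idle */
    {
        int id = -1;
        pthread_mutex_lock(&lock);           /* shared queue needs mutual exclusion */
        if (count > 0) { id = queue[head++]; count--; }
        pthread_mutex_unlock(&lock);
        return id;                           /* -1 means no work available */
    }

    int main(void)
    {
        int id;
        while ((id = next_ready_thread()) != -1)
            printf("dispatch thread %d\n", id);
        return 0;
    }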
Gang Scheduling
1. If closely related processes execute in parallel, synchronization
blocking may be reduced.
2. Set of related threads of scheduled to run on a set of processors.

3. Gang scheduling has three parts.
(a) Groups of related threads are scheduled as a unit, a gang
(b) All members of a gang run simultaneously on different timeshared
CPUs.
(c) All gang members start and end their time slices together.
4. The trick that makes gang scheduling work is that all CPUs are scheduled synchronously. This means that time is divided into discrete quanta.
5. An example of how gang scheduling works is given in the table below. Here you have a multiprocessor with six CPUs being used by five processes, A through E, with a total of 24 ready threads.
Table: Gang Scheduling

                     CPU
Time slot   0    1    2    3    4    5
0           A0   A1   A2   A3   A4   A5
1           B0   B1   B2   C0   C1   C2
2           D0   D1   D2   D3   D4   E0
3           E1   E2   E3   E4   E5   E6
4           A0   A1   A2   A3   A4   A5
5           B0   B1   B2   C0   C1   C2
6           D0   D1   D2   D3   D4   E0
7           E1   E2   E3   E4   E5   E6

(a) During time slot 0, threads A0 through A5 are scheduled and run
(b) During time slot 1, threads B0, B1, B2, C0, C1, C2 are scheduled
and run
(c) During time slot 2, D’s five threads and E0 get to run
(d) The remaining six threads belonging to process E run in the time
slot 3. Then the cycle repeats, with slot 4 being the same as slot 0
and so on.
(e) Gang scheduling is useful for applications where performance
severely degrades when any part of the application is not running.

Dedicated Processor Assignment
1. When an application is scheduled, each of its threads is assigned to a
processor.
2. Some processors may be idle, and there is no multiprogramming of
processors.
3. Provides implicit scheduling defined by the assignment of threads to
processors. For the duration of program execution, each program
is allocated a set of processors equal in number to the number of
threads in the program. Processors are chosen from the available
pool.
Dynamic Scheduling
1. The number of threads in a process is altered dynamically by the
application.
2. The operating system and the application are involved in making
scheduling decisions. The OS is responsible for partitioning the
processors among the jobs.
3. The operating system adjusts the load to improve utilization:
(a) Assign idle processors.
(b) New arrivals may be assigned to a processor that is used by
a job currently using more than one processor.
(c) Hold the request until a processor is available.
(d) New arrivals will be given a processor before existing
running applications.

Multiprocessor scheduling.
The scheduling policy for a multiprocessor system usually embodies a mixture of
several criteria.
Issues in Multiprocessor Scheduling
Solutions to the scheduling problem come in two general forms: algorithms and scheduling systems.
Algorithms concentrate on policy while scheduling systems provide mechanism to
implement the algorithms. Some scheduling systems run outside the operating
system kernel, while others are part of a tightly-integrated distributed or parallel
operating system.
Distributed systems communicate via message-passing, while parallel systems use
shared memory. A task is the unit of computation in computing systems, and a job
consists of one or more cooperating tasks. Global scheduling involves assigning a
task to a particular processor within the system.
Local scheduling determines which of the set of available tasks at a processor runs
next on that processor. Task migration can change the global mapping by moving
a task to a new processor. If you have several jobs, each composed of many tasks,
you can either assign several processors to a single job, or you can assign several
tasks to a single processor. The former is known as space sharing, and the latter is
called time sharing.
Global scheduling is often used to perform load sharing. Load sharing allows busy
processors to off-load some of their work to less busy processors. Load balancing is
a special case of load sharing, in which the goal is to keep the load even across all
processors. Sender-initiated load sharing occurs when busy processors try to find
idle processors to off-load some work. Receiver- initiated load sharing occurs when
idle processors seek busy processors. It is now accepted wisdom that full load
balancing is generally not worth doing, as the small gain in execution time over
simpler load sharing is more than offset by the effort expended in maintaining the
balanced load.
As the system runs, new tasks arrive while old tasks complete execution (or are
served). If the arrival rate is greater than the service rate then the system is said to
be unstable. If tasks are serviced at least as fast as they arrive, the system is said
to be stable. If the arrival rate is just slightly less than the service rate for a system,
an unstable scheduling policy can push the system into instability. A stable policy
will never make a stable system unstable.
Figure Multiprocessor Queueing: admitted tasks wait in per-processor ready queues (RQ) and are released on completion.

Real-time Scheduling
Real-time systems design is an increasingly important topic in systems research
communities as well as the software industries. Real-time applications and their
requirements can be found in almost every area of operating systems and
networking research. An incomplete list of such domains includes distributed
systems, embedded systems, network protocol processing, aircraft design,
spacecraft design..., and the list goes on.

One of the most important responsibilities of a real-time system is to schedule
tasks according to their deadlines in order to guarantee that all real-time activities
achieve the required service level. Many scheduling algorithms exist for a variety
of task models, but fundamental to many of these are the earliest deadline first
(EDF) and rate-monotonic (RM) scheduling policies.

A schedule for a set of tasks is said to be feasible if a proof exists that every task
instance in the set will complete processing by its associated deadline. Also, a
task set is schedulable if there exists a feasible schedule for the set.
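To make these policies concrete, here is a minimal C sketch (an illustration, not taken from the course texts) of the classical Liu and Layland utilization tests: a periodic task set with execution times Ci and periods Ti is feasible under EDF iff U = sum(Ci/Ti) <= 1, and is guaranteed feasible under RM if U <= n(2^(1/n) - 1). The task set values in main() are illustrative assumptions.

#include <stdio.h>
#include <math.h>

/* Utilization-based schedulability tests for periodic tasks.
   C[i] = worst-case execution time, T[i] = period of task i. */
static double utilization(const double C[], const double T[], int n)
{
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];
    return U;
}

int edf_feasible(const double C[], const double T[], int n)
{
    return utilization(C, T, n) <= 1.0;          /* EDF: feasible iff U <= 1 */
}

int rm_feasible(const double C[], const double T[], int n)
{
    /* RM sufficient bound: U <= n(2^(1/n) - 1), about 0.693 for large n */
    return utilization(C, T, n) <= n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    double C[] = {1.0, 2.0, 3.0};                /* assumed task set */
    double T[] = {4.0, 8.0, 12.0};               /* U = 0.75 */
    printf("EDF feasible: %d, RM bound met: %d\n",
           edf_feasible(C, T, 3), rm_feasible(C, T, 3));
    return 0;
}

Note that the RM bound is only sufficient: a task set that fails it may still be schedulable, which an exact response-time analysis would reveal.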

UNIT-III
 Principles of concurrency

 Mutual Exclusion, Semaphores & Mutex

 Release time, Deadlines, Hard & Soft timing constraints

 Classical problems of synchronization

 Principles of Deadlock, Deadlock Prevention

 Deadlock Avoidance, Deadlock detection, An integrated Deadlock strategy

UNIT-III
Concurrency in Operating Systems

Process management in operating systems can be classified broadly into three
categories:
 Multiprogramming involves multiple processes on a system with a single processor.
 Multiprocessing involves multiple processes on a system with multiple processors.
 Distributed processing involves multiple processes on multiple systems.
All of these involve cooperation, competition, and communication between
processes that either run simultaneously or are interleaved in arbitrary ways
to give the appearance of running simultaneously.
Concurrent processing is thus central to operating systems and their design.

Principles and Problems in Concurrency

Concurrency is the interleaving of processes in time to give the appearance of
simultaneous execution. Thus, it differs from parallelism, which offers genuine
simultaneous execution. However, the issues and difficulties raised by the two
overlap to a large extent:
• sharing global resources safely is difficult;
• optimal allocation of resources is difficult;
• locating programming errors can be difficult, because the contexts in which errors
occur cannot always be reproduced easily.
Parallelism also introduces the issue that different processors may run at
different speeds, but again, this problem is mirrored in concurrency because
different processes progress at different rates.

A Simple Example

The fundamental problem in concurrency is processes interfering with each other


while accessing a shared global resource. This can be illustrated with a
surprisingly simple example:

chin = getchar();
chout = chin;
putchar(chout);

Imagine two processes P1 and P2 both executing this code at the “same” time,
with the following interleaving due to multi-programming.

1. P1 enters this code, but is interrupted after reading the character x into
chin.

2. P2 enters this code, and runs it to completion, reading and displaying the
character y.

3. P1 is resumed, but chin now contains the character y, so P1 displays the


wrong character.

The essence of the problem is the shared global variable chin.

P1 sets chin, but this value is subsequently overwritten during the execution of P2. The
general solution is to allow only one process at a time to enter the code that accesses
chin: such code is often called a critical section. When one process is inside a critical
section of code, other processes must be prevented from entering that section. This
requirement is known as mutual exclusion.

Mutual Exclusion
Mutual exclusion is in many ways the fundamental issue in concurrency. It is the
requirement that when a process P is accessing a shared resource R, no other process
should be able to access R until P has finished with R. Examples of such resources
include files, I/O devices such as printers, and shared data structures.

There are essentially three approaches to implementing mutual exclusion.

• Leave the responsibility with the processes themselves: this is the basis of most
software approaches. These approaches are usually highly error-prone and carry
high overheads.

• Allow access to shared resources only through special-purpose machine


instructions: i.e. a hardware approach. These approaches are faster but still do not
offer a complete solution to the problem, e.g. they cannot guarantee the absence of
deadlock and starvation.

• Provide support through the operating system, or through the programming


language. We shall outline three approaches in this category: semaphores,
monitors, and message passing.

Semaphores
The fundamental idea of semaphores is that processes "communicate" via global
counters that are initialized to a positive integer and that can be accessed only
through two atomic operations:
semSignal(x) increments the value of the semaphore x.
semWait(x) tests the value of the semaphore x: if x > 0, the process decrements x
and continues; if x = 0, the process is blocked until some other process performs a
semSignal, then it proceeds as above.

A critical code section is then protected by bracketing it between these two
operations:

semWait(x);
/* critical section */
semSignal(x);

In general, the number of processes that can execute this critical section
simultaneously is determined by the initial value given to x. If more than this number
try to enter the critical section, the excess processes will be blocked until some
processes exit. Most often, semaphores are initialized to one.
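The semWait/semSignal pair maps directly onto POSIX semaphores, where sem_wait plays the role of semWait and sem_post plays semSignal. The following is a minimal sketch under that assumption; the worker thread and the shared counter are illustrative:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t sem;                        /* the semaphore x */
int shared_counter = 0;           /* the shared global resource */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);           /* semWait(x): enter critical section */
        shared_counter++;
        sem_post(&sem);           /* semSignal(x): leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);         /* initialized to one: mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* 200000 expected */
    return 0;
}

Initializing the semaphore to a value greater than one would let that many threads into the section at once, exactly as described above.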

Monitors
The principal problem with semaphores is that calls to semaphore operations tend to
be distributed across a program, and therefore these sorts of programs can be
difficult to get correct, and very difficult indeed to prove correct! Monitors address
this problem by imposing a higher-level structure on accesses to semaphore
variables.

A monitor is essentially an object (in the Java sense) which has the semaphore
variables as internal (private) data and the semaphore operations as (public)
operations. Mutual exclusion is provided by allowing only one process to execute
the monitor’s code at any given time.

Monitors are significantly easier to validate than “bare” semaphores for at least
two reasons:

• all synchronisation code is confined to the monitor; and

• once the monitor is correct, any number of processes sharing the resource will
operate correctly
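C has no monitor construct, but the discipline can be approximated by keeping the lock and the data private to a structure and taking the lock in every public operation. The sketch below illustrates that idea; the counter type and operation names are assumptions for illustration, and condition variables are omitted:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;         /* internal (private) synchronisation data */
    int value;                    /* the protected data */
} counter_monitor_t;

void counter_init(counter_monitor_t *m)
{
    pthread_mutex_init(&m->lock, NULL);
    m->value = 0;
}

void counter_increment(counter_monitor_t *m)
{
    pthread_mutex_lock(&m->lock); /* only one process runs monitor code */
    m->value++;
    pthread_mutex_unlock(&m->lock);
}

int counter_read(counter_monitor_t *m)
{
    pthread_mutex_lock(&m->lock);
    int v = m->value;
    pthread_mutex_unlock(&m->lock);
    return v;
}

int main(void)
{
    counter_monitor_t m;
    counter_init(&m);
    counter_increment(&m);
    return counter_read(&m) == 1 ? 0 : 1;
}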

Message Passing
With an approach based on message passing, processes operate in isolation from
each other (i.e. they do not share data), and they exchange information where
necessary by the sending and receiving of messages. Synchronization between
processes is defined by the blocking policy attached to the sending and receiving of
messages.

The most common combination is

Non-blocking sends: When a process sends a message, it continues executing


without waiting for the receiving process.

Blocking receives: When a process attempts to receive a message, it blocks until
the message is available.

With this blocking policy, mutual exclusion can be achieved for a set of processes
that share a mailbox, box. Some number of messages (usually one) is sent to box
initially by the system; then each process executes the following code when it wants
to enter the critical section:

receive(box);
/* critical section */
send(box);

The similarities to the semaphore approach are obvious.
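As an illustration, the mailbox idea can be sketched with POSIX message queues: one token message circulates, receive() maps to mq_receive and send() to mq_send. The queue name /token_box and the one-byte token are illustrative assumptions:

#include <stdio.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 1, .mq_msgsize = 1 };
    mqd_t box = mq_open("/token_box", O_CREAT | O_RDWR, 0600, &attr);
    if (box == (mqd_t)-1)
        return 1;

    char token = 't';
    mq_send(box, &token, 1, 0);              /* the system places one token */

    char buf[1];
    mq_receive(box, buf, sizeof buf, NULL);  /* blocks until the token is free */
    /* ... critical section ... */
    mq_send(box, buf, 1, 0);                 /* release the token */

    mq_close(box);
    mq_unlink("/token_box");
    return 0;
}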

Producer-Consumer Problem using Semaphores


The solution to the producer-consumer problem uses three semaphores, namely
full, empty and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that
are full, 'empty' for counting the number of slots that are empty, and the
semaphore 'mutex' to make sure that the producer and consumer do not access
the modifiable shared section of the buffer simultaneously.

1. Set full buffer slots to 0, i.e., semaphore full = 0.
2. Set empty buffer slots to N, i.e., semaphore empty = N.
3. For control access to the critical section, set mutex to 1, i.e., semaphore mutex = 1.

Producer()
    WHILE (true)
        produce-Item();
        P(empty);
        P(mutex);
        enter-Item();
        V(mutex);
        V(full);

Consumer()
    WHILE (true)
        P(full);
        P(mutex);
        remove-Item();
        V(mutex);
        V(empty);
        consume-Item(Item);
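The same three-semaphore scheme can be written as runnable C with POSIX threads; the buffer size N, the item values, and the loop counts below are illustrative assumptions:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 8                       /* assumed buffer size */
static int buffer[N];
static int in = 0, out = 0;
static sem_t full, empty, mutex;

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);         /* P(empty): wait for a free slot */
        sem_wait(&mutex);         /* P(mutex): enter critical section */
        buffer[in] = item;        /* enter-Item() */
        in = (in + 1) % N;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&full);          /* V(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);          /* P(full): wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];   /* remove-Item() */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);         /* V(empty): one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}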
A semaphore is a hardware or software tag variable whose value indicates the status
of a common resource. Its purpose is to lock the resource being used. A process
which needs the resource will check the semaphore to determine the status of
the resource before deciding how to proceed. In multitasking operating
systems, activities are synchronized by using semaphore techniques.
A semaphore is a mechanism to resolve resource conflicts by telling resource
seekers the state of the sought resources, achieving mutually exclusive access
to resources. Often a semaphore operates as a type of mutually exclusive counter
(such as a mutex) where it holds a number of access keys to the resources. A process
that seeks the resources must obtain one of those access keys
before it proceeds further to utilize the resource. If there is no such key
available to the process, it has to wait for the current resource user to release the
key.
A semaphore could have the value 0, indicating that no wakeups were saved, or
some positive value if one or more wakeups were pending.
A semaphore s is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations, wait and signal. These operations were
originally termed P (for wait: to test) and V (for signal: to increment).
The classical definition of wait in pseudocode is:
wait(s)
{
    while (s <= 0)
        ; /* no-op */
    s--;
}
The classical definition of signal in pseudocode is:
signal(s)
{
    s++;
}
Modifications to the integer value of the semaphore in the wait and signal operations
must be executed indivisibly.
That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.

The Dining Philosophers


(Note: a pure semaphore solution to the related readers-writers problem can let
writers starve, because readers arriving after a now-waiting writer can still
access the database to read; if enough readers continually trickle in and "keep
the database in a read state", that waiting writer will never get to write.)

The following SR-language sketch outlines a dining-philosophers solution built
from a philosopher resource and a dining_server resource:

resource philosopher
    import dining_server
body philosopher(i : int; dcap : cap dining_server; thinking, eating : int)
    write("philosopher", i, "alive, max think eat delays", thinking, eating)
    procedure think()
        var napping : int
        napping := int(random(1000*thinking))
        writes("age=", age(), ", philosopher ", i, " thinking for ", napping, " ms\n")
        nap(napping)
    end think
    procedure eat()
        var napping : int
        napping := int(random(1000*eating))
        writes("age=", age(), ", philosopher ", i, " eating for ", napping, " ms\n")
        nap(napping)
    end eat
    process phil

resource dining_server
    op take_forks(i : int), put_forks(i : int)
body dining_server(num_phil : int)
    write("dining server for", num_phil, "philosophers is alive")
    sem mutex := 1
    type states = enum(thinking, hungry, eating)
    var state[1:num_phil] : states := ([num_phil] thinking)
    sem phil[1:num_phil] := ([num_phil] 0)
    procedure left(i : int) returns lft : int
        if i = 1 -> lft := num_phil [] else -> lft := i-1 fi
    end left
    procedure right(i : int) returns rgh : int
        if i = num_phil -> rgh := 1 [] else -> rgh := i+1 fi
    end right
    procedure test(i : int)
        if state[i] = hungry and state[left(i)] ~= eating and state[right(i)] ~= eating ->
            state[i] := eating
            V(phil[i])
        fi
    end test

Deadlock
Deadlock occurs when you have a set of processes [not necessarily all the
processes in the system], each holding some resources, each requesting some
resources, and none of them is able to obtain what it needs, i.e. to make
progress. Those processes are deadlocked because all the processes are
waiting. None of them will ever cause any of the events that could wake up any
of the other members of the set, and all the processes continue to wait forever.
For this model, I assume that processes have only a single thread and that
there are no interrupts possible to wake up a blocked process. The no-
interrupts condition is needed to prevent an otherwise deadlocked process
from being awakened by, say, an alarm, and then causing events that release
other processes in the set. In most cases, the event that each process is waiting
for is the release of some resource currently possessed by another member of
the set. In other words, each member of the set of deadlocked processes is
waiting for a resource that is owned by another deadlocked process. None of
the processes can run, none of them can release any resources, and none of
them can be awakened. The number of processes and the number and kind of
resources possessed and requested are unimportant. This result holds for any
kind of resource, including both hardware and software.

Figure Processes in a Deadlock Situation: Process A owns resource Y and waits for resource X, while Process B owns resource X and waits for resource Y.

Deadlock Characterization
Deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Resources are used in mutual exclusion.
2. Resources are acquired piecemeal (i.e. not all the resources that are
needed to complete an activity are obtained at the same time in a single
indivisible action).
3. Resources are not preempted (i.e. a process does not take away resources
being held by another process).
4. Resources are not spontaneously given up by a process until it has
satisfied all its outstanding requests for resources (i.e. a process,
finding that it cannot obtain some needed resource, does not give up the
resources that it is currently holding).
Handling of Deadlocks

There are several ways to address the problem of deadlock in an operating


system.
1. Prevent
2. Avoid
3. Detection and recovery
4. Ignore

Deadlock Prevention
Deadlocks can be prevented by ensuring that at least one of the following
four necessary conditions does not hold:
1. Mutual exclusion: Removing the mutual exclusion condition means
that no process may have exclusive access to a resource. This proves
impossible for resources that cannot be spooled, and even with
spooled resources deadlock could still occur. Algorithms that avoid
mutual exclusion are called non-blocking synchronization algorithms.
2. Hold and wait: The “hold and wait” conditions may be removed by
requiring processes to request all the resources they will need before
starting up (or before embarking upon a particular set of operations);
this advance knowledge is frequently difficult to satisfy and, in any
case, is an inefficient use of resources. Another way is to require
processes to release all their resources before requesting all the
resources they will need. This too is often impractical. (Such
algorithms, such as serializing tokens, are known as the all-or-none
algorithms.)
3. No preemption: A “no preemption” (lockout) condition may also be
difficult or impossible to avoid as a process has to be able to have a
resource for a certain amount of time, or the processing outcome may
be inconsistent or thrashing may occur. However, inability to enforce
preemption may interfere with a priority algorithm
4. Circular wait: Algorithms that avoid circular waits include "disable
interrupts during critical sections", and "use a hierarchy to determine
a partial ordering of resources" (where no obvious hierarchy exists,
even the memory address of resources has been used to determine
ordering; see the sketch after this list), and Dijkstra's solution.
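A minimal C sketch of the partial-ordering idea, assuming two POSIX mutexes: every thread acquires the pair in ascending address order, so no circular wait can form.

#include <pthread.h>

/* Acquire two locks in a fixed global order (here, by memory address). */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a < b) {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

int main(void)
{
    pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
    lock_pair(&m1, &m2);          /* every caller locks in the same order */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return 0;
}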
Deadlock Avoidance
Deadlock Avoidance, assuming that you are in a safe state (i.e. a state from
which there is a sequence of allocations and releases of resources that allows
all processes to terminate) and you are requested certain resources, simulates
the allocation of those resources and determines if the resultant state is safe.
If it is safe the request is satisfied, otherwise it is delayed until it becomes
safe.
The Banker's Algorithm is used to determine if a request can be satisfied. It
requires knowledge of who the competing processes are and what their
resource needs are. Deadlock avoidance is essentially not used in
distributed systems.

Deadlock Detection and Recovery


Often neither deadlock avoidance nor deadlock prevention may be used.
Instead deadlock detection and recovery are used by employing an algorithm
that tracks resource allocation and process states, and rolls back and
restarts one or more of the processes in order to remove the deadlock.
Detecting a deadlock that has already occurred is easily possible since the
resources that each process has locked and/or currently requested are
known to the resource scheduler or OS.
Detecting the possibility of a deadlock before it occurs is much more difficult
and is, in fact, generally undecidable, because the halting problem can be
rephrased as a deadlock scenario. However, in specific environments, using
specific means of locking resources, deadlock detection may be decidable. In
the general case, it is not possible to distinguish between algorithms that are
merely waiting for a very unlikely set of circumstances to occur and
algorithms that will never finish because of deadlock.

Ignore Deadlock

In the Ostrich Algorithm it is hoped that deadlock doesn’t happen. In general,


this is a reasonable strategy. Deadlock is unlikely to occur very often; a
system can run for years without deadlock occurring. If the operating system
has a deadlock prevention or detection system in place, this will have a
negative impact on performance (slow the system down) because whenever a
process or thread requests a resource, the system will have to check whether
granting this request could cause a potential deadlock situation.
If deadlock does occur, it may be necessary to bring the system down, or at
least manually kill a number of processes, but even that is not an extreme
solution in most situations.

An Integrated Deadlock Strategies


The Banker’s Algorithm for Detecting/Preventing Deadlocks
Banker’s Algorithm for Single Resource
This is modeled on the way a small-town banker might deal with customers’
lines of credit. In the course of conducting business, our banker would
naturally observe that customers rarely draw their credit lines to their limits.
This, of course, suggests the idea of extending more credit than the amount
the banker actually has in her coffers.

Suppose we start with the following situation

Customer           Credit Used   Credit Line
Andy                    0              6
Barb                    0              5
Marv                    0              4
Sue                     0              7
Funds Available        10
Max Commitment         22

Our banker has 10 credits to lend, but a possible liability of 22. Her job is
to keep enough in reserve so that ultimately each customer can be satisfied
over time: that is, that each customer will be able to access his full credit
line, just not all at the same time. Suppose, after a while, the bank's credit
line book shows:
Customer           Credit Used   Credit Line
Andy                    1              6
Barb                    1              5
Marv                    2              4
Sue                     4              7
Funds Available         2
Max Commitment         22

Eight credits have been allocated to the various customers; two remain.
The question then is: Does a way exist such that each customer can be
satisfied? Can each be allowed their maximum credit line in some
sequence? We presume that, once a customer has been allocated up to
his limit, the banker can delay the others until that customer repays his
loan, at which point the credits become available to the remaining
customers. If we arrive at a state where no customer can get his maximum
because not enough credits remain, then a deadlock could occur, because
the first customer to ask to draw his credit to its maximum would be
denied, and all would have to wait.
To determine whether such a sequence exists, the banker finds the
customer closest to his limit and checks whether the remaining credits
will get him to that limit. If so, the banker assumes that that loan is
repaid, and proceeds to the customer next closest to his limit, and so
on. If all can be granted their full credit line, the condition is safe.
In this case, Marv is closest to his limit: assume his loan is repaid. This
frees up 4 credits. After Marv, Barb is closest to her limit (actually, she’s
tied with Sue, but it makes no difference) and 3 of the 4 freed from Marv
could be used to award her maximum. Assume her loan is repaid; we
have now freed 6 credits. Sue is next, and her situation is identical to
Barb’s, so assume her loan is repaid. We have freed enough credits (6) to
grant Andy his limit; thus, this state safe
Suppose, however, that the banker proceeded to award Barb one more
credit after the credit book arrived at the state immediately above:

Customer           Credit Used   Credit Line
Andy                    1              6
Barb                    2              5
Marv                    2              4
Sue                     4              7
Funds Available         1
Max Commitment         22

Now it’s easy to see that the remaining credit could do no good toward
getting anyone to their maximum.

So, to recap, the banker's algorithm looks at each request as it occurs,
and tests if granting it will lead to a safe state. If not, the request is
delayed. To test for a safe state, the banker checks to see if enough
resources will remain after granting the request to satisfy the customer
closest to his maximum. If so, that loan is assumed repaid, the next
customer is checked, and so on. If all loans can be repaid, then the request
leads to a safe state, and can be granted. In this case, we see that if Barb
is awarded another credit, Marv, who is closest to his maximum, cannot
be awarded enough credits; hence Barb's request can't be granted, as it
will lead to an unsafe state.
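The safety test described above can be sketched in C for the single-resource case; the customer data mirrors the example tables, and the function and variable names are illustrative:

#include <stdio.h>

#define NCUST 4                   /* Andy, Barb, Marv, Sue */

/* Returns 1 if some repayment order lets every customer reach
   their credit line, i.e. the state is safe. */
int is_safe(const int used[], const int line[], int available)
{
    int done[NCUST] = {0};
    int finished = 0;
    while (finished < NCUST) {
        int progress = 0;
        for (int i = 0; i < NCUST; i++) {
            /* customer i can reach their limit with what remains */
            if (!done[i] && line[i] - used[i] <= available) {
                available += used[i];    /* assume the loan is repaid */
                done[i] = 1;
                finished++;
                progress = 1;
            }
        }
        if (!progress)
            return 0;                    /* nobody can finish: unsafe */
    }
    return 1;
}

int main(void)
{
    int line[NCUST]  = {6, 5, 4, 7};
    int used1[NCUST] = {1, 1, 2, 4};     /* the safe state above */
    int used2[NCUST] = {1, 2, 2, 4};     /* after granting Barb one more */
    printf("available=2: %s\n", is_safe(used1, line, 2) ? "safe" : "unsafe");
    printf("available=1: %s\n", is_safe(used2, line, 1) ? "safe" : "unsafe");
    return 0;
}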

UNIT-IV

 Multitasking

 Task State Transition, Task Control, Task Creation and

Task Activation

 Task names and ID

 Task Deletion and Deletion safety

 Memory management

 Comparison of RTOS-Vx works-uC/OS II

 RT Linux for Embedded Applications

UNIT-IV

Introduction

VxWorks is a real-time operating system developed as proprietary software


by Wind River Systems. It is a high-performance, Unix-like, scalable RTOS,
and supports ARM, Pentium, Intel XScale, SuperH and other popular
processors for embedded system design. VxWorks design is hierarchical and
well suited for hard real time applications. It supports kernel mode
execution of tasks for fast execution of application codes. VxWorks is
supported with powerful development tools that make it easy and efficient
to use. Many simulation tools, time-performance analysis tools, debug and
test tools are provided, making VxWorks as an RTOS that supports
development of almost any embedded application, providing a complete
solution for all stages of the design cycle. The latest version VxWorks 6.9 is
the first commercial-grade RTOS to fully support both 32-bit and 64-bit
processing on Intel Architecture.

Basic Features: Some important features of VxWorks RTOS are,

1. Multitasking environment using standard POSIX scheduler


2. Ability to run two concurrent Operating systems on a single
processing layer
3. Multiple file systems and systems that enable advanced
multimedia functionality
4. Synchronization using a full range of IPC options
5. Different context saving mechanisms for the tasks and ISRs
6. Virtual IO devices including pipes and sockets
7. Virtual memory management functions
8. Power management functions to enhance the ability to control
power consumption
9. Interconnecting functions that support large number of protocols.
10. Pipe drivers for IPCs
11. Network transparent sockets
12. Networks drivers for shared memory and Ethernet
13. RAM disk drivers for memory-resident files
14. Processor abstraction layer to enable application system design by
user when using new versions of processor architecture

Architecture

VxWorks was initially a development and network environment for VRTX
(Versatile Real-Time Executive). Later, Wind River Systems developed their
own microkernel, so VxWorks has been of "client-server" architecture from the
beginning. The heart of the VxWorks run-time system is the wind
microkernel. This microkernel supports a full range of real-time features
including multitasking, scheduling, inter-task
synchronization/communication and memory management. All the other
functionality is implemented as processes.

VxWorks is highly scalable. By including or excluding various modules,


VxWorks can be configured for the use in small embedded system with
tough memory constraints to complex systems where more functions are
needed. Furthermore, individual modules themselves are scalable.
Individual functions may be removed from the library, or specific kernel
synchronization objects may be omitted if they are not required by the
application.

Task management

The VxWorks real-time kernel provides a basic multitasking environment.


VxWorks offers both POSIX and a proprietary scheduling mechanism (wind
scheduling). Both preemptive priority and round-robin scheduling
mechanism are available. The difference between POSIX and wind
scheduling is that wind scheduling applies the scheduling algorithm on a
system wide basis, whereas POSIX scheduling algorithms are applied on a
process-by-process basis.

TASK STATES
In VxWorks, the states encountered by a task are of 8 different types:
1. Suspended: idle state just after creation or state where execution is
inhibited.
2. Ready: waiting for running and CPU access in case scheduled by the
scheduler but not waiting for a message through IPC.
3. Pending: The task is blocked as it waits for a message from the IPC or from
a resource; only then will the CPU be able to process further.
4. Delayed: sent to sleep for a certain time interval.
5. Delayed + suspended: delayed and then suspended if it is not pre-empted
during the delay period.
6. Pended for an IPC [Inter process Communication] + suspended: Pended
and then suspended if the blocked state does not change.
7. Pended for an IPC + delayed: Pended and then pre-empted after the delayed
time interval.
8. Pended for an IPC + suspended + delayed: Pended and suspended after
delayed time interval.
Kernel library functions are included in the header files 'vxWorks.h' and
'kernelLib.h'. Task and system library functions are included in 'taskLib.h'
and 'sysLib.h'. User task priorities are between 101 and 255; the lowest
priority corresponds to the highest priority number (255). System tasks have
priorities from 0 to 99. For user tasks, the highest priority is 100 by default.

TASK CREATION AND ACTIVATION, TASK NAME IDS, TASK OPTIONS, TASK DELETION
1. Task spawn function: It is used for creating and activating a task. The prototype is:
int taskId = taskSpawn(name, priority, options, stackSize, entryPt, arg1, arg2, ..., arg10);
Task suspending and resuming functions:
taskSuspend(taskId): inhibits the execution of the task identified by taskId.
taskResume(taskId): resumes the execution of the task identified by taskId.
taskRestart(taskId): first terminates a task and then spawns it again with its
original assigned arguments.
2. Task deletion and deletion protection functions:
taskDelete(taskId): permanently inhibits the execution of the task
identified by taskId and cancels the allocation of the memory blocks for the
task stack and TCB.
Many times, each task should itself execute the codes for the following:
1. Memory de-allocation.
2. Ensure that the waiting task gets the desired IPC.
3. Close a file which was opened before.
4. Delete child tasks when the parent task executes the exit() function.
3. Delaying a task to let a lower priority task get access:
'int sysClkRateGet()' returns the frequency of the system ticks. Therefore,
to delay by 0.25 seconds, the function taskDelay(sysClkRateGet()/4) is used.
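A minimal sketch of spawning and delaying a task, assuming a classic VxWorks 5.x-style API; the task name tBlink, its priority, stack size, and the LED entry point are illustrative assumptions:

#include <vxWorks.h>
#include <taskLib.h>
#include <sysLib.h>
#include <stdio.h>

/* Illustrative entry point: toggles a (pretend) LED four times a second. */
void blinkTask(int ledId)
{
    for (;;) {
        printf("LED %d toggled\n", ledId);
        taskDelay(sysClkRateGet() / 4);   /* sleep for 0.25 s of system ticks */
    }
}

void startBlink(void)
{
    int tid = taskSpawn("tBlink", 120, 0, 4096,   /* name, priority, options, stack */
                        (FUNCPTR)blinkTask,
                        1, 0, 0, 0, 0, 0, 0, 0, 0, 0);   /* arg1..arg10 */
    if (tid == ERROR)
        printf("taskSpawn failed\n");
}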

Memory management
In VxWorks, all systems and all application tasks share the same address
space. This means that faulty applications could accidentally access system
resources and compromise the stability of the entire system. An optional tool
named VxVMI is available that can be used to allow each task to have its own
address space. Default physical page size used is 8KB. Virtual memory
support is available with VxVMI tool. VxWorks does not offer privilege
protection. The privilege level is always 0 (supervisor mode).

Interrupts:
To achieve the fastest possible response to external interrupts, interrupt
service routines (ISRs) in VxWorks run in a special context outside of any
thread's context, so that there are no thread context switches involved. The
C function that the user attaches to an interrupt vector is not the actual ISR.
Interrupts cannot directly vector to C functions.
The ISR's address is stored in the interrupt vector table and is called directly
from the hardware. The ISR performs some initial work (e.g. saving registers
and setting up stack) and then calls the C function that was attached by the
user. For this reason, we use the term interrupt handler (instead of ISR) to
designate the user installed C handler function. VxWorks uses an ISR design
that is different from a task design. The features of the ISR in VxWorks are:
1. ISRs have the highest priorities and can pre-empt any running task.
2. An ISR inhibits the execution of tasks till return.
3. An ISR does not execute like a task and does not have regular task
context.
4. An ISR should not use mutex semaphore.
5. ISR should just write the required data at the memory or buffer.
ISR should not use floating-point functions as these take longer time to
execute

Performance

1. Real-time performance: Capable of dealing with the most demanding


time constraints, VxWorks is a high-performance RTOS tuned for both
determinism and responsiveness.

2. Reliability: A high-reliability RTOS, VxWorks provides certification


evidence required by strict security standards. Even for non-safety-critical
systems, VxWorks is counted on to run forever, error free.

3. Scalability: An indispensable RTOS foundation for very small-scale


devices, large-scale networking systems, and everything in between,
VxWorks is the first RTOS to provide full 64-bit processing to support
the ever-growing data requirements for embedded real- time systems.
VxWorks is scalable in terms of memory footprint and functionality so
that it can be tuned as per the requirements of the project.

4. Interrupt latencies: The time elapsed between the execution of the


last instruction of the interrupted thread and the first instruction in
the interrupt handler is Interrupt latency. The time needed to go from
the last instruction in the interrupt handler to the next task scheduled
to run is Interrupt dispatch latency. VxWorks 5.3.1 exhibits an
interrupt latency of 1.4 to 2.6 micro seconds and a dispatch latency

of 1.6 to 2.4 micro seconds.
5. Priority inheritance: VxWorks has a priority inheritance mechanism
that exhibits
an optimal performance, which is essential for an RTOS
6. Footprint: VxWorks has a completely configurable and tunable small
memory footprint for today's memory-constrained systems. The user
can control how much of the operating system he needs.
Applications:
VxWorks RTOS is widely used in the market, for a great variety of
applications. Its reliability makes it a popular choice for safety critical
applications. VxWorks has been successfully used in both military and
civilian avionics, including the Apache Attack Helicopter, Boeing 787, 747-8
and Airbus A400M. It is also used in on ground avionic systems such as in
both civilian and military Radar stations. Another safety critical application
that entrusts VxWorks is BMW's iDrive system. However, VxWorks is also
widely used in non-safety-critical applications where performance is at a
premium. The Xerox Phaser, a PostScript printer, is controlled by a VxWorks-
powered platform. Linksys wireless routers use VxWorks for operating their
switches.

VxWorks has been used in several space applications. In Space crafts, where
design challenges are greatly increased by the need of extremely low power
consumption and lack of access to regular maintenance, VxWorks RTOS can
be chosen as the operating system for On Board Computer [OBC]. For
example, ‘Clementine’ launched in 1994 is running VxWorks 5.1 on a MIPS-
based CPU responsible for the Star Tracker and image processing algorithms.
The ‘Spirit’ and ‘Opportunity’ Mars Exploration Rovers were installed with
VxWorks. VxWorks is also used as operating system in several industrial
robots and distributed control systems.

Comparison between VxWorks and RT Linux


Since RT Linux is based on regular Linux kernel and VxWorks is a real-time
system from the beginning the handling of interrupts are managed in
different ways. When there is an interrupt in RT Linux for the regular kernel,
it is first the real-time kernel that gets it. Then the interrupt is analyzed by
the real-time kernel and then queued to the regular kernel. When there is no
real-time tasks to run, the regular kernel reads the queue and can handle
the interrupt.
In VxWorks when there is an interrupt that has low priority, the interrupt is
placed on a queue. When the kernel is not running any real-time processes,
the interrupt is processed. So the big difference is the communication
between the two kernels in RT Linux. VxWorks can receive the interrupts and
handle them directly. RT Linux on the other hand has to go through both
kernels if there is a low priority interrupt through a queue. But if there is a
real-time interrupt RT Linux handles it by itself without the regular kernel.
Another difference is that VxWorks handles all its interrupts in an interrupt-
context separated from the regular task-context, which lowers the interrupt
latency. But in general, the real-time handling of the two operating systems
is the same.

UNIT-V

 File System

 Concepts of Process

 Concurrent execution & Interrupts

 Process management-forks

 Basic level programming with system calls

 Unix signals

UNIT-V
Kernel

The kernel is a computer program at the core of a computer's operating system with
complete control over everything in the system. It is an integral part of any operating
system. It is the "portion of the operating system code that is always resident in
memory”. It facilitates interactions between hardware and software components. On
most systems, it is one of the first programs loaded on start-up (after the bootloader).
It handles the rest of start up as well as input/output requests from software,
translating them into data-processing instructions for the central processing unit. It
handles memory and peripherals like keyboards, monitors, printers, and speakers

A kernel connects the application software to the hardware of a computer.


Unix Kernel
A Unix kernel — the core or key components of the operating system — consists of
many kernel subsystems like process management, scheduling, file management,
device management and network management, memory management, dealing with
interrupts from hardware devices.
 Concurrency: As Unix is a multiprocessing OS, many processes run
concurrently to improve the performance of the system.
 Virtual memory (VM): Memory management subsystem implements the
virtual memory concept and users need not worry about the executable
program size and the RAM size.
 Paging: It is a technique to minimize the internal as well as the external
fragmentation in the physical memory.
 Virtual file system (VFS): A VFS is a file system used to help the user to hide
the different file systems complexities. A user can use the same standard file
system related calls to access different file systems.

File System
Unix file system is a logical method of organizing and storing large amounts of
information in a way that makes it easy to manage. A file is a smallest unit in which
the information is stored. Unix file system has several important features. All data in
Unix is organized into files. All files are organized into directories. These directories
are organized into a tree-like structure called the file system.

Files in Unix System are organized into multi-level hierarchy structure known as a
directory tree. At the very top of the file system is a directory called “root” which is
represented by a “/”. All other files are “descendants” of root.

Directories or Files and their description


 /: The slash / character alone denotes the root of the filesystem tree.
 /bin: Stands for "binaries" and contains certain fundamental utilities,
such as ls or cp, which are generally needed by all users.
 /boot: Contains all the files that are required for a successful booting
process.
 /dev: Stands for "devices". Contains file representations of peripheral
devices and pseudo-devices.
 /etc: Contains system-wide configuration files and system databases.
Originally also contained "dangerous maintenance utilities" such as
init, but these have typically been moved to /sbin or elsewhere.
 /home : Contains the home directories for the users.
 /lib : Contains system libraries, and some critical files such as kernel
modules or device drivers.
 /media : Default mount point for removable devices, such as USB
sticks, media players, etc.
 /mnt : Stands for “mount”. Contains filesystem mount points. These
are used, for example, if the system uses multiple hard disks or hard
disk partitions. It is also often used for remote (network) filesystems,
CD-ROM/DVD drives, and so on.

 /proc : procfs virtual filesystem showing information about processes
as files.
 /root : The home directory for the superuser “root” – that is, the
system administrator. This account’s home directory is usually on the
initial filesystem, and hence not in /home (which may be a mount point
for another filesystem) in case specific maintenance needs to be
performed, during which other filesystems are not available. Such a case
could occur, for example, if a hard disk drive suffers physical failures
and cannot be properly mounted.
 /tmp : A place for temporary files. Many systems clear this directory
upon startup; it might have tmpfs mounted atop it, in which case its
contents do not survive a reboot, or it might be explicitly cleared by a
startup script at boot time.
 /usr : Originally the directory holding user home directories,its use
has changed. It now holds executables, libraries, and shared resources
that are not system critical, like the X Window System, KDE, Perl, etc.
However, on some Unix systems, some user accounts may still have a
home directory that is a direct subdirectory of /usr, such as the default
as in Minix. (on modern systems, these user accounts are often related
to server or system use, and not directly used by a person).
 /usr/bin : This directory stores all binary programs distributed with
the operating system not residing in /bin, /sbin or (rarely) /etc.
 /usr/include : Stores the development headers used throughout the
system. Header files are mostly used by the #include directive in C/C++
programming language.
 /usr/lib : Stores the required libraries and data files for programs
stored within /usr or elsewhere.
 /var : A short for “variable.” A place for files that may change often –
especially in size, for example e-mail sent to users on the system, or
process-ID lock files.
 /var/log : Contains system log files.
 /var/mail : The place where all the incoming mails are stored. Users
(other than root) can access their own mail only. Often, this directory is
a symbolic link to /var/spool/mail.
 /var/spool : Spool directory. Contains print jobs, mail spools and
other queued tasks.
 /var/tmp : A place for temporary files which should be preserved
between system reboots.

Types of Unix files – The UNIX file system contains several different types
of files

1. Ordinary files – An ordinary file is a file on the system that contains data,
text, or program instructions.
 Used to store your information, such as some text you have written or
an image you have drawn. This is the type of file that you usually work
with.
 Always located within/under a directory file.
 Do not contain other files.
 In long-format output of ls -l, this type of file is specified by the “-”
symbol.
2. Directories – Directories store both special and ordinary files. For users
familiar with Windows or Mac OS, UNIX directories are equivalent to folders.
A directory file contains an entry for every file and subdirectory that it
houses. If you have 10 files in a directory, there will be 10 entries in the
directory. Each entry has two components:
(1) the filename, and (2) a unique identification number for the file or directory
(called the inode number). Directories are branching points in the hierarchical
tree, used to organize groups of files. They may contain ordinary files, special
files or other directories, and never contain "real" information which you
would work with (such as text); basically, they are just used for organizing
files. All files are descendants of the root directory (named /) located at the
top of the tree. In long-format output of ls -l, this type of file is specified by
the "d" symbol.
3. Special Files – Used to represent a real physical device such as a printer,
tape drive or terminal, used for Input/Ouput (I/O) operations. Device or
special files are used for device Input/Output(I/O) on UNIX and Linux
systems. They appear in a file system just like an ordinary file or a directory.
On UNIX systems there are two flavors of special files for each device,
character special files and block special files:
 When a character special file is used for device Input/Output(I/O), data
is transferred one character at a time. This type of access is called raw
device access.
 When a block special file is used for device Input/Output(I/O), data is
transferred in large fixed-size blocks. This type of access is called block
device access.
For terminal devices, it’s one character at a time. For disk devices though,
raw access means reading or writing in whole chunks of data – blocks, which
are native to your disk.
 In long-format output of ls -l, character special files are marked by the
“c” symbol.
 In long-format output of ls -l, block special files are marked by the “b”
symbol.
4. Pipes – UNIX allows you to link commands together using a pipe. The pipe
acts a temporary file which only exists to hold data from one command until
it is read by another. A Unix pipe provides a one-way flow of data. The output
or result of the first command sequence is used as the input to the second
command sequence. To make a pipe, put a vertical bar (|) on the command
line between two commands. For example: who | wc -l
In long-format output of ls –l, named pipes are marked by the “p” symbol

5. Sockets – A Unix socket (or Inter-process communication socket) is a
special file which allows for advanced inter-process communication. A Unix
Socket is used in a client-server application framework. In essence, it is a
stream of data, very similar to network stream (and network sockets), but all
the transactions are local to the filesystem.
In long-format output of ls -l, Unix sockets are marked by “s” symbol.
6. Symbolic Link – Symbolic link is used for referencing some other file of
the file system. Symbolic link is also known as soft link. It contains a text
form of the path to the file it references. To an end user, symbolic link will
appear to have its own name, but when you try reading or writing data to
this file, it will instead reference these operations to the file it points to. If we
delete the soft link itself, the data file would still be there. If we delete the
source file or move it to a different location, symbolic file will not function
properly.
In long-format output of ls –l, Symbolic link are marked by the “l” symbol
(that’s a lower-case L).

PROCESS MANAGEMENT
Every process in a UNIX system has the following attributes:
 some code (a.k.a. text)
 some data
 a stack
 a unique process ID number (PID)
When UNIX is first started, there’s only one visible process in the system.
This process is called “init”, and it has a process ID of 1. The only way to
create a new process in UNIX is to duplicate an existing process, so “init” is
the ancestor of all subsequent processes.

PROCESS STATES
Every process in the system can be in one of six states. The six possible
states are
1) Running, which means that the process is currently using the CPU.
2) Runnable, which means that the process can make use of the CPU
as soon as it
becomes available.
3) Sleeping, which means that the process is waiting for an event to
occur. For example, if a process executes a "read()" system call, it
sleeps until the I/O request completes.
4) Suspended, which means that the process has been "frozen" by a
signal such as SIGSTOP. It will resume only when sent a SIGCONT
signal. For example, a Control-Z from the keyboard suspends all of
the processes in the foreground job.
5) Idle, which means that the process is being created by a "fork()"
system call and is not yet runnable.
6) Zombified, which means that the process has terminated but has
not yet returned its exit code to its parent.
A process remains a zombie until its parent accepts its return code using
the "wait()" system call.

PROCESS COMPOSITION
Every process is composed of several different pieces:
 a code area, which contains the executable(text) portion of a
process
 a data area, which is used by a process to contain static data
 a stack area, which is used by a process to store temporary data
 a user area, which holds housekeeping information about a
process
 page tables, which are used by the memory management system

User Area
Every process in the system has some associated “housekeeping”
information that is used by the kernel for process management. This
information is stored in a data structure called a user area. Every process
has its own user area. User areas are created in the kernel’s data region
and are only accessible by the kernel; user processes may not access their
user areas.
Fields within a process’ user area include:
 a record of how the process should react to each kind of signal
 a record of the process’ open file descriptors
 a record of how much CPU time the process has used recently
The Process Table
There is a single kernel data structure of fixed size called the process table
that contains one entry for every process in the system. The process table is
created in the kernel’s data region and is accessible only by the kernel.
Each entry contains the following information about each process:
 its process ID(PID) and parent process ID(PPID)
 its real and effective user ID(UID) and group ID(GID)
 its state (running, runnable, sleeping, suspended, idle, or
zombified)
 the location of its code, data, stack, and user areas
 a list of all pending signals
PROCESS MANAGEMENT
When a process duplicates, the parent and child processes are virtually
identical (except for aspects like PIDs, PPIDs, and runtimes); the child’s
code, data, and stack are a copy of the parents, and the processes even
continue to execute the same code.

A child process may replace its code with that of another executable file, thereby
differentiating itself from its parent. When "init" starts executing, it quickly
duplicates several times. Each of the duplicate child processes then replaces
its code from the executable file called "getty", which is responsible for
handling user logins.


PROCESS MANAGEMENT SYSTEM CALLS
Name      Function
fork      duplicates a process
getpid    obtains a process' ID number
getppid   obtains a parent process' ID number
exit      terminates a process
wait      waits for a child process
exec      replaces the code, data, and stack of a process

Creating a New Process: fork ()


A process may duplicate itself by using “fork ()”, which works like this:
System Call: pid_t fork(void)
“Fork ()” causes a process to duplicate.
The child process is an almost-exact duplicate of the original parent process;
it inherits a copy of its parent's code, data, stack, open file descriptors, and
signal table. However, the parent and child processes have different process ID
numbers and parent process ID numbers.
If “fork ()” succeeds, it returns the PID of the child to the parent process and
returns a value of 0 to the child process.

A process may obtain its own process ID and parent process ID numbers by using
the "getpid()" and "getppid()" system calls, respectively.
Here's a synopsis of these system calls:
System Call: pid_t getpid(void)
pid_t getppid(void)

“getpid()” and “getppid()” return a process’ID number and parent process’


ID number, respectively.
The parent process ID number of PID 1 (i.e., “init”) is 1.

cat myfork.c ---> list the program.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pid;
    printf("I'm the original process with PID %d and PPID %d.\n",
           getpid(), getppid());
    pid = fork(); /* Duplicate. Child and parent continue from here */
    if (pid != 0) /* pid is non-zero, so I must be the parent */
    {
        printf("I'm the parent process with PID %d and PPID %d.\n",
               getpid(), getppid());
        printf("My child's PID is %d\n", pid);
    }
    else /* pid is zero, so I must be the child */
    {
        printf("I'm the child process with PID %d and PPID %d.\n",
               getpid(), getppid());
    }
    printf("PID %d terminates.\n", getpid()); /* Both processes execute this */
    return 0;
}

$ myfork ---> run the program.
I'm the original process with PID 13292 and PPID 13273.
I'm the parent process with PID 13292 and PPID 13273.
My child's PID is 13293.
I'm the child process with PID 13293 and PPID 13292.
PID 13293 terminates. ---> child terminates.
PID 13292 terminates. ---> parent terminates.
- The PPID of the parent process refers to the PID of the shell that
executed the "myfork" program.
WARNING:
It is dangerous for a parent process to terminate without waiting for the death
of its child. The only reason our program doesn't wait for its child to terminate
is that we haven't yet described the "wait()" system call!
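For completeness, a minimal sketch of the parent reaping its child with "wait()"; the child's exit code of 42 is an illustrative assumption:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                        /* child */
        printf("child %d running\n", (int)getpid());
        exit(42);                          /* return code collected by parent */
    } else {                               /* parent */
        int status;
        pid_t done = wait(&status);        /* blocks until the child dies */
        if (WIFEXITED(status))
            printf("child %d exited with code %d\n",
                   (int)done, WEXITSTATUS(status));
    }
    return 0;
}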

Basic level of programming using System call


The interface between the operating system and the user programs is defined
by the set of system calls that the operating system provides.
 The system calls available in the interface vary from operating
system to operating system.
 Any single-CPU computer can execute only one instruction at a time.
 If a process is running a user program in user mode and needs a system
service, such as reading data from a file, it has to execute a trap or
system call instruction to transfer control to the operating system.
 The operating system then figures out what the calling process wants by
inspecting the parameters.
 Then it carries out the system call and returns control to the
instruction following the system call.

Following steps describe how a system call is handled by an operating
system.
To understand how OS handles system calls, let us take an example of read
system call.
Read system call has three parameters: the first one specifying the file, the second
one pointing to the buffer, and the third one giving the number of bytes to read.
 Like nearly all system calls, it is invoked from C programs by calling a
library procedure with the same name as the system call: read.
 A call from a C program might look like this:
count = read (fd, buffer, nbytes);
 The system call returns the number of bytes actually read in count.
 This value is normally the same as nbytes, but may be smaller if, for
example, end-of-file is encountered while reading.
 If the system call cannot be carried out, either due to an invalid
parameter or a disk error, count is set to -1, and the error number is
put in a global variable, errno.
 Programs should always check the results of a system call to see if an
error occurred.
 System calls are performed in a series of steps.
 To make this concept clearer, let us examine the read call discussed
above.
 In preparation for calling the read library procedure, which actually
makes the read system call, the calling program first pushes the
parameters onto the stack, as shown in steps 1-3 in Fig.
 The first and third parameters are called by value, but the second
parameter is passed by reference, meaning that the address of the
buffer (indicated by &) is passed, not the contents of the buffer.
 Then comes the actual call to the library procedure (step 4). This
instruction is the normal procedure call instruction used to call all
procedures.
 The library procedure, possibly written in assembly language, typically
puts the system call number in a place where the operating system
 expects it, such as a register (step 5).

Figure The 11 steps in making the system call read(fd, buffer, nbytes).

 Then it executes a TRAP instruction to switch from user mode to kernel


modeand start execution at a fixed address within the kernel (step 6).
 The kernel code that starts examines the system call number and then
dispatches to the correct system call handler, usually via a table of
pointers to system call handlers indexed on system call number (step
7).
 At that point the system calls handler runs (step 8).

Process management
Call                                      Description
pid = fork()                              Create a child process identical to the parent
pid = waitpid(pid, &statloc, options)     Wait for a child to terminate
s = execve(name, argv, environp)          Replace a process' core image
exit(status)                              Terminate process execution and return status

File management
Call                                      Description
fd = open(file, how, ...)                 Open a file for reading, writing, or both
s = close(fd)                             Close an open file
n = read(fd, buffer, nbytes)              Read data from a file into a buffer
n = write(fd, buffer, nbytes)             Write data from a buffer into a file
position = lseek(fd, offset, whence)      Move the file pointer
s = stat(name, &buf)                      Get a file's status information

Directory and file system management
Call                                      Description
s = mkdir(name, mode)                     Create a new directory
s = rmdir(name)                           Remove an empty directory
s = link(name1, name2)                    Create a new entry, name2, pointing to name1
s = unlink(name)                          Remove a directory entry
s = mount(special, name, flag)            Mount a file system
s = umount(special)                       Unmount a file system

Miscellaneous
Call                                      Description
s = chdir(dirname)                        Change the working directory
s = chmod(name, mode)                     Change a file's protection bits
s = kill(pid, signal)                     Send a signal to a process
seconds = time(&seconds)                  Get the elapsed time since Jan. 1, 1970

Table Some of the major POSIX system calls.

 Once the system call handler has completed its work, control may be
returned to the user-space library procedure at the instruction following the
TRAP instruction (step 9).
 This procedure then returns to the user program in the way that procedure
calls usually return (step 10).
 To finish the job, the user program has to clean up the stack, as it does
after any procedure call (step 11).
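
As a hedged illustration of the convention just described (count as the return value, errno on failure), a minimal C program using read might look like the sketch below; the file name "data.txt" and the 512-byte buffer are arbitrary assumptions, not part of the course example.

/* Illustrative sketch of the read() error-checking convention. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buffer[512];
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) {
        printf("open failed: %s\n", strerror(errno));
        return 1;
    }

    ssize_t count = read(fd, buffer, sizeof(buffer));
    if (count < 0)                          /* error: -1 returned, errno set */
        printf("read failed: %s\n", strerror(errno));
    else
        printf("read %zd bytes\n", count);  /* may be < nbytes at end-of-file */

    close(fd);
    return 0;
}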

UNIX SIGNALS
Value Name Description
01 SIGHUP Hang up; sent to process when kernel assumes that the user
of that process is doing no useful work
02 SIGINT Interrupt
03 SIGQUIT Quit; sent by user to induce halting of process and
production of core dump
04 SIGILL Illegal instruction
05 SIGTRAP Trace trap; triggers the execution of code for process tracing
06 SIGIOT IOT instruction
07 SIGEMT EMT instruction
08 SIGFPE Floating-point exception
09 SIGKILL Kill; terminate process
10 SIGBUS Bus error
11 SIGSEGV Segmentation violation; process attempts to access location
outside its virtual address space
12 SIGSYS Bad argument to system call
13 SIGPIPE Write on a pipe that has no readers attached to it
14 SIGALRM Alarm clock; issued when a process wishes to receive a signal
after a period of time
15 SIGTERM Software termination
16 SIGUSR1 User-defined signal 1
17 SIGUSR2 User-defined signal 2
18 SIGCHLD Death of a child
19 SIGPWR Power failure
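
To make the table concrete, the following hedged C sketch (illustrative, not part of the syllabus text) installs a handler for SIGINT (value 02 above) using the standard signal() interface:

/* Minimal sketch: catch SIGINT and exit cleanly. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;            /* just record the signal; keep handlers small */
}

int main(void)
{
    signal(SIGINT, on_sigint); /* register the handler */
    while (!got_sigint)
        pause();               /* sleep until a signal arrives */
    printf("Caught SIGINT, exiting cleanly\n");
    return 0;
}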
QUESTION BANK
UNIT -I
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Define an Operating system.                                  L1   CO1
2     Why is the Operating System viewed as a resource allocator
      and control program?                                         L2   CO1
3     What is the Kernel?                                          L1   CO1
4     What are Batch systems?                                      L1   CO1
5     Discuss the advantages of multilevel feedback queue
      scheduling.                                                  L2   CO1
6     What do you mean by Time-sharing systems?                    L1   CO1
7     Describe multiprocessor systems and give their advantages.
      What are the different types of multiprocessing?             L2   CO1
8     Discuss the fundamental goals of OS.                         L2   CO1

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Explain the various types of computer systems.               L2   CO1
2     Explain how protection is provided for the hardware
      resources by the operating system.                           L5   CO1
3     What are the system components of an operating system?
      Explain them.                                                L1   CO1
4     Explain briefly the functions of OS.                         L2   CO1
5     Discuss monolithic and layered kernel structures of OS.      L2   CO1
6     Describe in brief the interaction of the operating system
      with the underlying hardware.                                L2   CO1
7     What is RTOS? Differentiate between general-purpose OS
      and RTOS.                                                    L1   CO1


UNIT –II

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What is the use of a job and a process?                      L1   CO2
2     What is a process?                                           L1   CO2
3     Define release time and deadlines.                           L1   CO2
4     Define CPU scheduling.                                       L1   CO2
5     Discuss about timing constraints.                            L2   CO2
6     Describe hard real-time systems.                             L2   CO2
7     What is the Real-Time Scheduling concept?                    L1   CO2

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Discuss the various scheduling algorithms.                   L2   CO2
2     Explain the FCFS and Shortest Job First scheduling
      algorithms.                                                  L2   CO2
3     Explain the Round Robin scheduling algorithm.                L2   CO2
4     Analyze the UNIX multilevel feedback queue scheduling.       L4   CO2
5     Explain Thread Scheduling.                                   L2   CO2
6     Compare multiple-processor scheduling and real-time
      scheduling.                                                  L4   CO2


UNIT -III

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What is the critical section problem?                        L1   CO3
2     Explain the requirements that a solution to the critical
      section problem must satisfy.                                L2   CO3
3     What are the principles of concurrency?                      L1   CO3
4     List two hardware instructions and their definitions
      which can be used for implementing mutual exclusion.         L1   CO3
5     What are Semaphores?                                         L1   CO3
6     Define CPU scheduling.                                       L1   CO3
7     What is Deadlock?                                            L1   CO3
8     What is message passing?                                     L1   CO3
9     What is Mutual Exclusion?                                    L1   CO3

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Explain the implementation of the Producer/Consumer
      problem using monitors.                                      L2   CO3
2     Examine the classic problems of synchronization.             L4   CO3
3     Give a detailed description of deadlocks and their
      characterization.                                            L1   CO3
4     Write in detail about deadlock avoidance.                    L2   CO3
5     Explain the Banker's algorithm for deadlock avoidance.       L2   CO3
6     Describe deadlock detection.                                 L2   CO3
7     Explain the dining philosophers problem and the strategy
      adopted.                                                     L2   CO3
8     Explain the Reader-Writer problem in detail.                 L2   CO3

UNIT –IV

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What is multitasking?                                        L1   CO4
2     What are Task names and IDs?                                 L1   CO4
3     What is a task state transition?                             L1   CO4
4     How is a task created and activated in VxWorks?              L1   CO4
5     Explain Task Deletion and Task Deletion safety.              L2   CO4
6     Explain the architecture of VxWorks.                         L2   CO4
7     What are the performance metrics of VxWorks?                 L1   CO4

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Discuss in detail memory management in VxWorks.              L2   CO4
2     Explain in detail virtual-to-physical address mapping
      in VxWorks.                                                  L2   CO4
3     Compare RTOS and VxWorks.                                    L4   CO4
4     Compare RTOS and uCOS-II.                                    L4   CO4
5     Explain how RT Linux can be used for embedded
      applications.                                                L2   CO4
6     Explain the features of VxWorks.                             L2   CO4

UNIT -V

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What is the Unix Kernel?                                     L1   CO5
2     Explain the concept of a process.                            L2   CO5
3     What is concurrent execution of processes?                   L1   CO5
4     List the process states in Unix.                             L1   CO5
5     Explain interrupts in Unix.                                  L1   CO5
6     Explain the file system.                                     L2   CO5
7     What are the special files and ordinary files in the Unix
      system?                                                      L1   CO5
8     What is the use of pipes and sockets in Unix?                L1   CO5

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Explain process management in Unix.                          L2   CO5
2     Explain the concept of parent and fork processes in Unix.    L5   CO5
3     What is process execution in Unix?                           L1   CO5
4     Explain system calls with basic-level programming.           L2   CO5
5     Explain shell programming and filters.                       L2   CO5
6     Explain the file system.                                     L2   CO5
7     Discuss various Unix signals.                                L2   CO5
8     Explain in detail the POSIX standards.                       L2   CO5

ASSIGNMENT QUESTIONS
UNIT -I & II

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What is an operating system? Explain the abstract view of
      the components of a computer system.                         L1   CO1
2     Explain the view of the OS as a Resource Manager.            L2   CO1
3     State the objectives and functions of the OS.                L1   CO1
4     What is a process? Give the difference between a process
      and a program.                                               L1   CO2
5     What is a process state? Explain the state transition
      diagram.                                                     L1   CO2
6     What is a thread? Explain the thread structure.              L1   CO2
7     What is multiprogramming?                                    L1   CO2

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Explain different types of operating system structures.      L2   CO1
2     Explain the evolution of OS.                                 L5   CO1
3     List different services provided by the operating system.    L1   CO1
4     Explain the FCFS scheduling algorithm.                       L2   CO2
5     Explain the Shortest Job First scheduling algorithm.         L2   CO2
6     Explain thread scheduling.                                   L2   CO2
7     Discuss the multiprocessor concept and the real-time
      scheduling concept.                                          L2   CO2

UNIT –III, IV & V

PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     What are the problems of synchronization?                    L1   CO3
2     What is a Semaphore?                                         L1   CO3
3     Define deadlock.                                             L1   CO3
4     What is message passing?                                     L1   CO3
5     Explain multitasking.                                        L1   CO4
6     What is the significance of the taskSpawn function?          L1   CO4
7     What is the significance of the taskDelete function?         L1   CO4
8     What is a file system?                                       L1   CO5
9     Explain the function of pipes and sockets.                   L2   CO5
10    What are POSIX standards?                                    L2   CO5

PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION                                                     BL   CO
1     Explain the Dining Philosophers Problem.                     L2   CO3
2     Explain deadlock detection and avoidance.                    L2   CO3
3     Explain the Producer-Consumer problem.                       L2   CO3
4     Analyze memory management in VxWorks.                        L4   CO4
5     Discuss the comparison of RTOS and VxWorks.                  L2   CO4
6     Describe RT Linux for embedded applications.                 L2   CO4
7     Explain process management and execution.                    L2   CO5
8     Analyze system programming with system calls.                L4   CO5
9     Discuss UNIX signals.                                        L2   CO5

PREVIOUS UNIVERSITY QUESTION PAPERS

(Attach image of 05 old University Question Papers)

INTERNAL QUESTION PAPERS WITH KEY
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)

Department of Electronics and Communication Engineering


B.E. VIII-SEM Internal Examination-1, APR 2025
Duration: 1Hr Marks: 20 M Date: 29-04-2025(AN) Section: A, B
Subject: REAL TIME OPERATING SYSTEMS
Name of the faculty: Dr. K. Ashok Kumar, Mr. K. Shravan Kumar
PART-A
NOTE: Answer all the following Questions 3 * 2 = 6 Marks
S.No. Questions Marks BL CO
1 What are operating system goals? 2M L1 CO1
2 Define Deadline 2M L1 CO2
3 What is semaphore? 2M L1 CO3
PART-B
NOTE: Answer any two of the following Questions 2 * 7 = 14 Marks
4 Explain in detail about computer system 7M L2 CO1
architecture.
5 Make use of a diagram to describe the First-Come, 7M L3 CO2
First-Served (FCFS) scheduling algorithm.
6 Explain Reader Writers problem. 7M L2 CO3

[Charts] Bloom's level wise Marks Distribution: Level 1 – 6 marks, Level 2 – 14 marks, Level 3 – 7 marks. Course Outcome wise Marks Distribution: CO 1 – 9 marks, CO 2 – 9 marks, CO 3 – 9 marks.

BL- Bloom's Taxonomy Levels [1-Remember, 2-Understand, 3-Apply, 4-Analyze, 5-Evaluate, 6-Create]

CO- Course Outcomes.
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)

Department of Electronics and Communication Engineering


B.E. VIII-SEM Internal Examination-1, APR 2025
Duration: 1Hr Marks: 20 M Date: 29-04-2025(FN) Section: A, B
Subject: REAL TIME OPERATING SYSTEMS
Name of the faculty: Dr. K. Ashok Kumar, Mr. K. Shravan Kumar
KEY PAPER

PART-A

1. What are operating system goals?

Ans: Operating system goals:

 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner
2. Define Deadline
Ans: The deadline of the job is the instant of time by which its execution is required
to be accomplished.

3. What is semaphore?
Ans: Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for process
synchronization.
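
As an illustrative sketch (the POSIX semaphore API is one concrete realization of these operations; it is an assumption here, not part of the expected answer), wait and signal map onto sem_wait() and sem_post():

/* Binary semaphore guarding a critical section. */
#include <semaphore.h>

sem_t s;

void worker(void)
{
    sem_wait(&s);   /* wait (P): decrement; block if the count is 0 */
    /* critical section: at most one worker is here at a time */
    sem_post(&s);   /* signal (V): increment; wake one blocked waiter */
}

int main(void)
{
    sem_init(&s, 0, 1);   /* initial value 1 => binary semaphore */
    worker();
    sem_destroy(&s);
    return 0;
}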

PART-B

4. Explain in detail about computer system architecture.


Ans: Most systems use a single general-purpose processor

 Most systems have special-purpose processors as well


Multiprocessor systems are growing in use and importance. They are also known
as parallel systems or tightly-coupled systems.

Advantages include:

 Increased throughput
 Economy of scale
 Increased reliability – graceful degradation or fault tolerance
Two types:

 Asymmetric Multiprocessing – each processor is assigned a specific task.
 Symmetric Multiprocessing – each processor performs all tasks

Dual-Core Design

 Multi-chip and multicore
 Systems containing all chips
o Chassis containing multiple separate systems

Clustered Systems

 Like multiprocessor systems, but multiple systems working together


o Usually sharing storage via a storage-area network (SAN)
o Provides a high-availability service which survives failures
 Asymmetric clustering has one machine in hot-standby mode
 Symmetric clustering has multiple nodes running applications,
monitoring each other
 Some clusters are for high-performance computing (HPC)
o Applications must be written to use parallelization
 Some have distributed lock manager (DLM) to avoid conflicting operations

5. Make use of a diagram to describe the First-Come, First-Served (FCFS)
scheduling algorithm.
Ans: FCFS (First Come First Serve):

 Selection criteria: The process that requests first is served first. It means
that processes are served in the exact order of their arrival.
 Decision Mode: Non preemptive: Once a process is selected, it runs until it is
blocked for an I/O or some event, or it is terminated.
 Implementation: This strategy can be easily implemented by using FIFO
queue, FIFO means First in First Out. When CPU becomes free, a process
from the first position in a queue is selected to run.
 Example: Consider the following set of three processes and the CPU burst
time required by each, given in the following table. Consider all time values
in milliseconds.
Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order: P1, P2, P3


The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17

Suppose that the processes arrive in the order: P2, P3, P1

The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30
Waiting time for P1 = 6; P2 = 0; P3 = 3

Average waiting time: (6 + 0 + 3)/3 = 3

Much better than previous case

Convoy effect - short process behind long process

Consider one CPU-bound and many I/O-bound processes

6. Explain Reader Writers problem.


Ans: The Readers Writers Problem:

• In this problem there are some processes (called readers) that only read the
shared data, and never change it, and there are other processes
(called writers) who may change the data in addition to reading, or instead of
reading it.
• There are various types of readers-writers problems, most centered on the
relative priorities of readers and writers.
• The main complexity in this problem comes from allowing more than one
reader to access the data at the same time while writers still require
exclusive access.
The Problem Statement:

• There is a shared resource which should be accessed by multiple processes.


There are two types of processes in this context.
• They are reader and writer. Any number of readers can read from the shared
resource simultaneously, but only one writer can write to the shared resource.
• When a writer is writing data to the resource, no other process can access the
resource.
• A writer cannot write to the resource if there are non-zero number of readers
accessing the resource at that time.
The Solution:

 From the above problem statement, it is evident that readers have higher
priority than writer. If a writer wants to write to the resource, it must wait
until there are no readers currently accessing that resource.
 Here, we use one mutex m and a semaphore w. An integer variable
read_count is used to maintain the number of readers currently accessing
the resource. The variable read_count is initialized to 0. A value of 1 is
given initially to m and w.
 Instead of having the process to acquire lock on the shared resource, we use
the mutex m to make the process to acquire and release lock whenever it is
updating the read count variable.
 The code for the writer process looks like this:

while (TRUE)
{
    wait(w);              /* writer gets exclusive access */

    /* perform the write operation */

    signal(w);            /* release the resource */
}

 And the code for the reader process looks like this:

while (TRUE)
{
    /* acquire lock */
    wait(m);
    read_count++;
    if (read_count == 1)
        wait(w);          /* first reader locks out writers */
    /* release lock */
    signal(m);

    /* perform the reading operation */

    /* acquire lock */
    wait(m);
    read_count--;
    if (read_count == 0)
        signal(w);        /* last reader lets writers in */
    /* release lock */
    signal(m);
}
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)

Department of Electronics and Communication Engineering


B.E. VIII-SEM Internal Examination-2, JUNE 2025
Duration: 1Hr Marks: 20 M Date: 10-06-2025 (FN) Section: A, B
Subject: REAL TIME OPERATING SYSTEMS
Name of the faculty: Dr K. Ashok Kumar, Mr. K. Shravan Kumar
PART-A
NOTE: Answer all the following Questions 3 * 2 = 6 Marks
S.No. Questions Marks BL CO
1 Define Mutex. 2M L2 CO3
2 Write the features of VxWorks. 2M L2 CO4
3 What is Unix Kernel? 2M L2 CO5
PART-B
NOTE: Answer any two of the following Questions 2 * 7 = 14 Marks
4 Explain about deadlock avoidance with resource 7M L2 CO3
allocation graphs.
5 Explain the process of Task creation and activation in 7M L3 CO4
VxWorks. Highlight the role of Task names and IDs.
6 (a) Discuss concurrent execution and interrupts. 3M L2 CO5

(b) Explain basic level programming with system 4M L3 CO5


calls.

[Charts] Bloom's level wise Marks Distribution: Level 2 – 16 marks, Level 3 – 11 marks. Course Outcome wise Marks Distribution: CO 3 – 9 marks, CO 4 – 9 marks, CO 5 – 9 marks.

BL- Bloom's Taxonomy Levels [1-Remember, 2-Understand, 3-Apply, 4-Analyze, 5-Evaluate, 6-Create]

CO- Course Outcomes.
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)

Department of Electronics and Communication Engineering


B.E. VIII-SEM Internal Examination-2, JUNE 2025
Duration: 1Hr Marks: 20 M Date: 10-06-2025 (FN) Section: A, B
Subject: REAL TIME OPERATING SYSTEMS
Name of the faculty: Dr K. Ashok Kumar, Mr. K. Shravan Kumar
Key Paper

PART-A

1. Define Mutex.
A mutex (mutual exclusion) is a synchronization mechanism in computer
programming that allows multiple threads to access a shared resource, but
only one thread at a time. It's a type of lock that protects data and other
resources from concurrent access. Essentially, a mutex ensures that a critical
section of code, where shared resources are manipulated, is only executed by
one thread at a time, preventing race conditions and data corruption.
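
A minimal hedged sketch using the POSIX threads API (one common realization of a mutex; the counter and iteration count below are arbitrary choices):

/* Two threads increment a shared counter, serialized by a mutex. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread past this point */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the lock */
    return 0;
}

(Compile with -pthread; without the lock, concurrent increments could be lost to a race.)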

2. Write the features of VxWorks.


VxWorks is a real-time operating system (RTOS) known for its deterministic
behavior, multitasking, and support for various hardware architectures.

(i) Multitasking:
VxWorks allows multiple tasks to run concurrently, ensuring that time-critical
operations are executed promptly.

(ii) Preemptive and Round-Robin Scheduling:


It supports both preemptive scheduling, where tasks can interrupt each other,
and round-robin scheduling, where tasks are assigned time slices.

(iii) Interrupt Handling:


VxWorks provides mechanisms for efficient interrupt handling, enabling rapid
responses to external events.

(iv) Inter-Task Communication (IPC):


VxWorks offers various IPC methods like shared memory, message queues,
and semaphores to facilitate communication between tasks.

3. What is Unix Kernel?


The Unix kernel is the core of the Unix operating system, a program that
manages the hardware and software resources of a computer. It acts as an
interface between user programs and the underlying hardware, providing
essential services like process management, memory allocation, and file
system access.
PART-B
4. Explain about deadlock avoidance with resource allocation graphs.
Deadlock avoidance using a resource allocation graph (RAG) involves
preventing potential cycles in the graph, which can indicate a deadlock. This
is achieved by requiring all processes to claim their future resource requests
before any actual requests are made. By tracking these claim edges, the
system can ensure that granting a request won't lead to a circular wait.

(i) Claim Edges:


In the RAG, a claim edge (dashed line) represents a process's future request
for a resource. All claim edges must be present in the graph initially,
indicating the system knows all potential future requests.
(ii) Request Edges:
When a process requests a resource, a request edge is added (solid line).
(iii) Avoidance Algorithm:
The system can avoid deadlock by:
• Ensuring that all claim edges are present for a process before it can make any requests.
• Only allowing requests for resources that have already been claimed.
• Checking for cycles in the graph after granting a request. If a cycle is detected, the request is denied.
(iv) Cycle Detection:
The absence of cycles in the RAG indicates that the system is in a safe state,
meaning no deadlock is possible.
In essence, deadlock avoidance with RAGs focuses on proactive planning and
control of resource allocation to prevent the formation of circular wait
conditions. By ensuring that all future requests are known and that no
process is granted a resource that would create a cycle, the system can
maintain a safe state and avoid deadlocks.
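
The cycle check itself can be sketched in a few lines of C; the graph below is an illustrative assumption (a four-node circular wait), not a prescribed data structure:

/* Hedged sketch: DFS cycle detection on a small resource-allocation
 * graph stored as an adjacency matrix. */
#include <stdbool.h>
#include <stdio.h>

#define N 4

static bool dfs(int u, const int g[N][N], int state[N])
{
    state[u] = 1;                        /* 1 = on the current DFS path */
    for (int v = 0; v < N; v++) {
        if (!g[u][v]) continue;
        if (state[v] == 1) return true;  /* back edge => cycle => deny */
        if (state[v] == 0 && dfs(v, g, state)) return true;
    }
    state[u] = 2;                        /* 2 = fully explored */
    return false;
}

int main(void)
{
    /* P0 -> R0 -> P1 -> R1 -> P0 : a circular wait */
    int g[N][N] = {
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 1},
        {1, 0, 0, 0},
    };
    int state[N] = {0};
    printf(dfs(0, g, state) ? "cycle: request must be denied\n"
                            : "no cycle: safe to grant\n");
    return 0;
}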
5. Explain the process of Task creation and activation in Vx works.
Highlight the role of Task names and IDs.
In VxWorks, task creation and activation involve allocating memory, initializing a
Task Control Block (TCB), and activating the TCB. Task creation is initiated by a user
task using functions like taskSpawn(). Task names and IDs are crucial for identifying
and managing tasks. Task names are user-defined character strings for debugging
and identifying tasks, while task IDs are system-generated unique identifiers used
for task control functions.

1. Task Creation:
 When a task needs to be created, a user task uses
the taskSpawn() function (or similar).
 This function allocates memory for the task's stack and TCB (Task
Control Block).
 The TCB is initialized with information about the task, including its
priority, stack size, and entry point.
 The task's code is loaded into the memory space.
 Resources are allocated for the task, such as memory and access to
system objects.
2. Task Activation:
 After creation, the task needs to be activated to start executing.
 This involves initializing the task's context within its TCB.
 The TCB is placed onto the ready queue, where the scheduler will select
it for execution based on its priority.
3. Task Names and IDs:
 Task Names: Task names are character strings that can be used for
debugging and identifying tasks.
 Task IDs: Task IDs are unique system-generated identifiers assigned to
each task. They are used in many task control functions.
 Both names and IDs are essential for managing and interacting with
tasks in the VxWorks system.
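
A hedged VxWorks sketch of the creation call follows; the task name "tMyTask", the priority 100, and the 4 KB stack are illustrative assumptions, not values fixed by VxWorks:

/* Spawn a task and report its system-generated ID. */
#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>

void myEntry(int arg1, int arg2)   /* task entry point (assumed) */
{
    printf("task running: args %d %d\n", arg1, arg2);
}

void spawnExample(void)
{
    /* taskSpawn(name, priority, options, stackSize, entryPt, arg1..arg10) */
    int tid = taskSpawn("tMyTask", 100, 0, 4096,
                        (FUNCPTR)myEntry, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0);
    if (tid == ERROR)
        printf("taskSpawn failed\n");
    else
        printf("spawned \"tMyTask\" with task ID %#x\n", tid);
}

Here the name is the human-readable handle used for debugging, while the returned tid is the unique ID passed to task control calls such as taskDelete() or taskSuspend().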
6. (a) Discuss concurrent execution and interrupts.
Concurrent execution and interrupts are fundamental concepts in computing,
allowing systems to handle multiple tasks or processes seemingly
simultaneously. Concurrent execution enables a system to run multiple instructions
or processes at the same time, while interrupts provide a mechanism for a processor
to pause its current task and respond to external events or requests.

Concurrent Execution:

 Definition:
Concurrent execution refers to the ability of a system to perform multiple tasks or
processes in a way that they appear to be happening at the same time, even if they
are not physically happening simultaneously on a single core.

 Examples:
 Multiple applications running on a computer.
 Multiple processes running in an operating system.
 Threads within a single process executing concurrently.
 Benefits:
 Improved resource utilization and system efficiency.
 Enhanced responsiveness and user experience.
 Ability to handle background tasks without blocking the user
interface.
 Implementation:
 Multithreading: Allows a single process to execute multiple threads
concurrently.
 Operating Systems: Provide mechanisms for scheduling and managing
multiple processes, allowing them to be interleaved in time.
 Parallel processing: Multiple processors or cores can execute different
parts of a program simultaneously, truly achieving parallel execution.
Interrupts:

 Definition:
An interrupt is a signal that causes a processor to pause its current execution and
switch to a different code section, typically an Interrupt Service Routine (ISR), to
handle a specific event or request.

 Types:
 Hardware interrupts: Generated by external devices like keyboard,
mouse, or network card.
 Software interrupts: Generated by program code, often used to
call Operating System services.
 Purpose:
 Responsive systems: Interrupts allow systems to react to events in a
timely manner.
 Resource management: Interrupts enable efficient utilization of
resources by allowing the processor to switch between tasks.
 Multitasking: Interrupts are crucial for enabling multitasking in
operating systems.
 Example:
 A computer receiving a message from a network card (hardware
interrupt).
 A program calling a function from the operating system (software
interrupt).
Relationship between Concurrent Execution and Interrupts:

 Interrupts are often used to manage and enable concurrency in operating


systems and other systems.
 Multiprogramming or timesharing relies on interrupts (like timer interrupts)
to switch between different processes.
 Interrupts allow the processor to respond to events from different processes
concurrently, ensuring efficient execution of multiple tasks.
(b) Explain basic level programming with system calls.

At its core, basic-level programming with system calls involves using special
instructions (system calls) to request services from the operating system
kernel. These system calls act as an interface between a user program and the kernel,
allowing the program to interact with hardware and manage system resources.

Elaboration:

1. System Calls as an Interface:


System calls are the primary way for user programs to access and utilize the services
provided by the operating system kernel. They are the entry points for user programs
to request operations like creating files, reading from disks, sending data over
networks, or terminating processes.

2. Context Switching:
When a program makes a system call, the processor switches from user mode (where
the program typically runs) to kernel mode (where the kernel runs). This is known
as a context switch. The system call handler in the kernel then handles the requested
service.

3. Examples of System Calls:


 File Access: System calls like open(), read(), write(), and close() allow
programs to create, open, read, write to, and close files.
 Process Management: System calls like fork(), wait(), and exit() enable
programs to create new processes, wait for processes to terminate, and
terminate themselves, respectively.
 Networking: System calls like socket(), connect(), send(),
and recv() provide the foundation for network programming, allowing
programs to establish connections, send and receive data over
networks.
 Hardware Access: System calls can also be used to access hardware
devices like printers, scanners, and other peripherals.
4. How System Calls Work:
 Setting up Arguments: A user program prepares the necessary
arguments (e.g., file path, data buffer, network address) for a system
call.
 Executing the System Call: The program then executes a special
instruction (the system call instruction) that triggers a kernel
interrupt.
 Kernel Handling: The kernel intercepts the interrupt, saves the
program's state, and executes the corresponding system call handler.
 Returning to User Program: After the kernel completes the requested
service, it restores the program's state and returns control to the user
program.
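
A minimal hedged sketch tying these steps together (the file names and buffer size are arbitrary assumptions): the program copies one file to another using only the open(), read(), write(), and close() system calls listed above.

/* Copy "in.txt" to "out.txt" with raw system calls. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[1024];
    ssize_t n;

    int in = open("in.txt", O_RDONLY);
    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    /* each read()/write() pair goes through the four steps above */
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}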
5. Importance:
System calls are essential for building robust and versatile programs that can
interact with the operating system and utilize its resources effectively. They provide
a standardized interface for programs to access the kernel's capabilities.
CONTENT BEYOND SYLLABUS

Sl.No.  TOPIC                              Mode of Teaching  Relevance with POs and PSOs
1       Issues in real time system design  E material        PO1, PO3, PO5, PO11, PSO1, PSO2
2       Asynchronous Communication         E material        PO1, PO3, PO5, PO11, PSO1, PSO2
3       RTOS Applications                  E material        PO1, PO3, PO5, PO11, PSO1, PSO2

ISSUES IN REAL TIME DESIGN

Designing Realtime systems is a challenging task. Most of the challenge comes


from the fact that Realtime systems have to interact with real world entities.
These interactions can get fairly complex. A typical Realtime system might be
interacting with thousands of such entities at the same time. For example, a
telephone switching system routinely handles calls from tens of thousands of
subscribers. The system has to connect each call differently. Also, the exact
sequence of events in the call might vary a lot.

Realtime Response
Realtime systems have to respond to external interactions in a predetermined
amount of time. Successful completion of an operation depends upon the
correct and timely operation of the system. Design the hardware and the
software in the system to meet the
Realtime requirements. For example, a telephone switching system must feed
dial tone to thousands of subscribers within a recommended limit of one
second. To meet these requirements, the off-hook detection mechanism and
the software message communication involved have to work within the limited
time budget. The system has to meet these requirements for all the calls being
set up at any given time.

The designers have to focus very early on the Realtime response requirements.
During the architecture design phase, the hardware and software engineers
work together to select the right system architecture that will meet the
requirements. This involves deciding inter connectivity of the processors, link
speeds, processor speeds, etc. The main questions to be asked are:

Is the architecture suitable? If message communication involves too many


nodes, it is likely that the system may not be able to meet the Realtime
requirement due to even mild congestion. Thus, a simpler architecture has a
better chance of meeting the Realtime requirements.

Are the link speeds adequate? Generally, loading a link more than 40-50% is
a bad idea. A higher link utilization causes the queues to build up on different
nodes, thus causing variable amounts of delays in message communication.

Are the processing components powerful enough? A CPU with really high
utilization will lead to unpredictable Realtime behavior. Also, it is possible
that the high priority tasks in the system will starve the low priority tasks of
any CPU time. This can cause the low priority tasks to misbehave. As with
link, keep the peak CPU utilization below 50 %.

Is the Operating System suitable? Assign high priority to tasks that are
involved in processing Realtime critical events. Consider preemptive
scheduling if Realtime requirements are stringent. When choosing the
operating system, the interrupt latency and scheduling variance should be
verified.

Scheduling variance refers to the predictability in task scheduling times. For


example, a telephone switching system is expected to feed dialtone in less
than 500 ms. This would typically involve scheduling three to five tasks within
the stipulated time. Most operating systems would easily meet these numbers
as far as the mean dialtone delay is concerned. But general-purpose operating
systems would have much higher standard deviation in the dialtone numbers.

Interrupt Latency refers to the delay with which the operating system can
handle interrupts and schedule tasks to respond to the interrupt. Again, real-
time operating systems would have much lower interrupt latency.

Recovering from Failures

Realtime systems must function reliably in the event of failures. These failures can be
internal as well as external. The following sections discuss the issues involved in
handling these failures.

Internal Failures
Internal failures can be due to hardware and software failures in the system. The
different types of failures you would typically expect are:
Software Failures in a Task: Unlike desktop applications, Realtime
applications do not have the luxury of popping a dialog box and exiting on
detecting a failure. Design the tasks to safeguard against error conditions.
This becomes even more important in a Realtime system because sequence of
events can result in a large number of scenarios. It may not be possible to test
all the cases in the laboratory environment. Thus, apply defensive checks to
recover from error conditions. Also, some software error conditions might lead
to a task hitting a processor exception. In such cases, it might sometimes be
possible to just rollback the task to its previous saved state.

Processor Restart: Most Realtime systems are made up of multiple nodes. It
is not possible to bring down the complete system on failure of a single node;
thus, design the software to handle independent failure of any of the nodes.
This involves two activities:

1.Handling Processor Failure: When a processor fails, other processors have


to be notified about the failure. These processors will then abort any
interactions with the failed processor node. For example, if a control processor
fails, the telephone switch clears all calls involving that processor.

2.Recovering Context for the Failed Processor: When the failed processor
comes back up, it will have to recover all its lost context from other processors
in the system. There is always a chance of inconsistencies between different
processors in the system. In such cases, the system runs audits to resolve
any inconsistencies. Taking our switch example, once the control processor
comes up it will recover the status of subscriber ports from other processors.
To avoid any inconsistencies, the system initiates audits to crosscheck data-
structures on the different control processors.

Board Failure: Realtime systems are expected to recover from hardware


failures. The system should be able to detect and recover from board failures.
When a board fails, the system notifies the operator about it. Also, the
system should be able to switch in a spare for the failed board (if the board
has a spare).

Link Failure: Most of the communication in Realtime systems takes place over


links connecting the different processing nodes in the system. Again, the
system isolates a link failure and reroutes messages so that link failure does
not disturb the message communication.

External Failures

Realtime systems have to perform in the real world. Thus, they should recover
from failures in the external environment. Different types of failures that can
take place in the environment are:
Invalid Behavior of External Entities: When a Realtime system interacts with
external entities, it should be able to handle all possible failure conditions
from these entities. A good example of this is the way telephone switching
systems handle calls from subscribers. In this case, the system is interacting
with humans, so it should handle all kinds of failures, like:

Subscriber goes off hook but does not dial

Toddler playing with the phone!

Subscriber hangs up before completing dialing.

Inter Connectivity Failure: Many times, a Realtime system is distributed


across several locations. External links might connect these locations.
Handling of these conditions is similar to handling of internal link failures.
The major difference is that such failures might be for an extended duration
and many times it might not be possible to reroute the messages.

Working with Distributed Architectures

Most Realtime systems involve processing on several different nodes. The


system itself distributes the processing load among several processors. This
introduces several challenges in design:

Maintaining Consistency: Maintaining data-structure consistency is a


challenge when multiple processors are involved in feature execution.
Consistency is generally maintained by running data-structure audits.

Initializing the System: Initializing a system with multiple processors is far


more complicated than bringing up a single machine. In most systems the
software release is resident on the OMC. The node that is directly connected
to the OMC will initialize first. When this node finishes initialization, it will
initiate software downloads for the child nodes directly connected to it. This
process goes on in a hierarchical fashion till the complete system is initialized.

Inter-Processor Interfaces: One of the biggest headaches in Realtime systems


is defining and maintaining message interfaces. Defining of interfaces is
complicated by different byte ordering and padding rules in processors.
Maintenance of interfaces is complicated by backward compatibility issues.
For example, if a cellular system changes the air interface protocol for a new
breed of phones, it will still have to support interfaces with older phones.

Load Distribution: When multiple processors and links are involved in


message interactions distributing the load evenly can be a daunting task. If
the system has evenly balanced load, the capacity of the system can be
increased by adding more processors. Such systems are said to scale linearly
with increasing processing power. But often designers find themselves in a
position where a single processor or link becomes a bottleneck. This leads to
costly redesign of the features to improve system scalability.

Centralized Resource Allocation: Distributed systems may be running on


multiple processors, but they have to allocate resources from a shared pool.
Shared pool allocation is typically managed by a single processor allocating
resources from the shared pool. If the system is not designed carefully, the
shared resource allocator can become a bottleneck in achieving full system
capacity.

Asynchronous Communication

Remote procedure calls (RPC) are used in computer systems to simplify


software design. RPC allows a programmer to call procedures on a remote
machine with the same semantics as local procedure calls. RPCs really
simplify the design and development of conventional systems, but they are of
very limited use in Realtime systems. The main reason is that most
communication in the real world is asynchronous in nature, i.e. very few
message interactions can be classified into the query response paradigm that
works so well using RPCs.

Thus, most Realtime systems support state machine-based design where


multiple messages can be received in a single state. The next state is
determined by the contents of the received message. State machines provide
a very flexible mechanism to handle asynchronous message interactions. The
flexibility comes with its own complexities. We will be covering state machine
design issues in future additions to the Realtime Mantra.

Race Conditions and Timing

It is said that the three most important things in Realtime system design are
timing, timing and timing. A brief look at any protocol will underscore the
importance of timing. All the steps in a protocol are described with exact
timing specification for each stage. Most protocols will also specify how the
timing should vary with increasing load. Realtime systems deal with timing
issues by using timers. Timers are started to monitor the progress of events.
If the expected event takes place, the timer is stopped. If the expected event
does not take place, the timer will timeout and recovery action will be
triggered.
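
A hedged POSIX sketch of this timer pattern (the 5-second supervision period is an arbitrary assumption) uses alarm() and SIGALRM:

/* Start a supervision timer; recover if the expected event never arrives. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t timed_out = 0;

static void on_timeout(int signo) { (void)signo; timed_out = 1; }

int main(void)
{
    signal(SIGALRM, on_timeout);
    alarm(5);                 /* start a 5-second supervision timer */

    pause();                  /* stand-in for waiting on the expected event;
                                 in a real system the event would interrupt it */

    if (timed_out)
        printf("timeout: trigger recovery action\n");
    else
        alarm(0);             /* event arrived in time: stop the timer */
    return 0;
}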

A race condition occurs when the state of a resource depends on timing factors
that are not predictable. This is best explained with an example. Telephone
exchanges have two-way trunks which can be used by any of the two
exchanges connected by the trunk. The problem is that both ends can allocate
the trunk at more or less the same time, thus resulting in a race condition.
Here the same trunk has been allocated for an incoming and an outgoing call.
This race condition can be easily resolved by defining rules on who gets to
keep the resource when such a clash occurs. The race condition can be
avoided by requiring the two exchanges to work from different ends of the
pool. Thus, there will be no clashes under low load. Under high load race
conditions will be hit which will be resolved by the pre-defined rules.

A more conservative design would partition the two-way trunk pool into two
one-way pools. This would avoid the race condition but would fragment the
resource pool.

The main issue here is identifying race conditions. Most race conditions are
not as simple as this one. Some of them are subtle and can only be identified
by careful examination of the design.

RTOS APPLICATIONS

RTOS find applications in various fields of engineering and technology. Some


of them include Control systems, Image processing, Fault Tolerance systems
etc.

RTOS for control systems

Many real-time systems are embedded in sensors and actuators and function
as digital controllers. The term plant in the block diagram below refers to a
controlled system, for example, an engine, a brake, an aircraft, a patient. The
state of the plant is monitored by sensors and can be changed by actuators.
The real-time (computing) system estimates from the sensor readings the
current state of the plant and computes a control output based on the
difference between the current state and the desired state which is the
reference input.

Embedded RTOS for Voice over IP (VoIP)

Voice over IP (VOIP) uses the Internet Protocol (IP) to transmit voice as packets
over an IP network. So VOIP can be achieved on any data network that uses
IP, like Internet, Intranets and Local Area Networks (LAN). Here the voice
signal is digitized, compressed and converted to IP packets and then
transmitted over the IP network. It is an advancing technology that is used to
transmit voice over the internet or a local area network using internet protocol
(IP). This technology provides enhanced features such as low cost compared
to the traditional Public Switched Telephone Network (PSTN). VoIP system
costs as much as half the traditional PSTN system in the field of voice
transmission. This is because of the efficient use of bandwidth requiring
fewer long-distance trunks between switches. The voice over internet protocol
system is found to be the successful alternative to the traditional PSTN
communication system due to its advanced features. The voice signal is
processed through the internet-based network during the communication.
The conceptual diagram of VoIP system is shown in Fig. below.

The basic steps in derivation of the designed VoIP system are:

The original speech signal is fed into the system and speech samples are
taken from it.

The speech signal is then encoded with G.711a and Speex speech encoders,
which is the compressed version of the input signal. G.711a is the standard
used for the communication purpose and is a high bit rate Pulse Code
Modulation codec. It works at a sampling rate of 8 kHz and compresses the
16-bit audio samples into 8 bits. The Code Excited Linear
Prediction (CELP) Speex codec is an open-source codec developed for the
packet network and VoIP applications. The Speex supports three different
sampling rates narrowband (8 kHz), wideband (16 kHz) and ultra-wideband
(32 kHz).

The compressed signal is then packetized into VoIP packets to transfer it to


the IP network.

The speech signal is degraded due to the various network impairments


including delay, jitter and packet loss during VoIP communications. The
network impairments are introduced through the lab WANem emulator.

The degraded VoIP signal is depacketized and then decoded with G.711a and
Speex decoders.
The performance is evaluated with Perceptual Evaluation of Speech Quality
(PESQ) measurement defined by ITU-T recommendation P.862. After
comparing the degraded

signal with the original one, the PESQ measurement gives the subjective
measurement as Mean Opinion Scores (MOS) value from -0.5 to 4.5.

The VoIP signal is processed through various signal processing algorithms to
evaluate the performance of the system.
METHODOLOGY USED TO IDENTIFY WEAK AND BRIGHT
STUDENTS:
Weak and bright students are identified through analysis of student performance
data, using techniques such as social network analysis, educational data mining,
and prediction over the university information system.

a. SUPPORT TO WEAK STUDENTS:


1. Encourage and motivate
2. Repetition and revision
3. Remedial classes
b. EFFORTS TO ENGAGE BRIGHT STUDENTS:

1. Assignments on advanced related topics

2. Providing GATE material

3. E-material

END
