RTOS Course File 2024-25
CERTIFICATE
Verifying authority:
1. Head of the Department: …………………………………..
2.
3.
PRINCIPAL
MATRUSRI ENGINEERING COLLEGE
Saidabad, Hyderabad-500 059.
(Approved by AICTE & Affiliated to Osmania University)
ELECTRONICS AND COMMUNICATION ENGINEERING
DEPARTMENT VISION
DEPARTMENT MISSION
1. To provide the learning ambience to nurture the young minds with theoretical
and practical knowledge to produce employable and competent engineers.
2. To provide a strong foundation in fundamentals of electronics and
communication engineering to make students explore advances in research
for higher learning.
3. To inculcate awareness for societal needs, continuous learning and
professional practices.
4. To imbibe team spirit and leadership qualities among students.
COURSE DESCRIPTOR
Programme BE
Semester VIII
I. COURSE OVERVIEW:
An RTOS is an OS that supports applications that must meet time constraints
while providing logically correct results, and it provides the necessary
features to support real-time applications. Unit one describes the services,
functions, and evolution of operating systems. Unit two describes various
scheduling algorithms. Unit three describes the principles of concurrency and
various deadlock strategies. Unit four describes the functions and
applications of VxWorks. Unit five describes the Unix kernel, Unix signals,
and the POSIX standards.
II. COURSE PRE-REQUISITES:
Course Code | Level | Semester | Prerequisites | Credits
PC701EC | UG | VII | Embedded Systems (ES) | 3
PO10 Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as, being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
PO11 Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.
CO–PO/PSO mapping (columns PO1–PO12, PSO1, PSO2):
CO2: 2, -, 2, -, -, 1, -, -, -, -, -, 2, 2
CO3: 2, -, 2, 2, 2, -, -, -, -, -, -, 1, 2, 2
CO4: 2, 2, 2, 2, 2, 1, -, -, -, -, -, 1, 2, 2
CO5: 2, 2, 2, 2, 2, 1, -, -, -, -, -, 1, 2, 2
VIII. SYLLABUS:
IX. GAPS IN THE SYLLABUS - TO MEET INDUSTRY / PROFESSION
REQUIREMENTS:
Lecture No. | Topics to be covered | PPT/BB/OHP/e-material | No. of Hrs | Relevant COs | Text Book/Reference Book
1 | Structures of OS (Monolithic, Microkernel, Layered, Exo-kernel and Hybrid kernel structures) | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
2 | Operating system objectives and functions | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
3 | Virtual Computers | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
4 | Interaction of O.S. & hardware architecture | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
5 | Evolution of operating systems | BB/PPT | 1 | CO1 | Andrew S. Tanenbaum
6 | Batch, multiprogramming, multitasking, multiuser, parallel, distributed & real-time O.S. | e-material | 1 | CO1 | Andrew S. Tanenbaum
Lecture No. | Topics to be covered | PPT/BB/OHP/e-material | No. of Hrs | Relevant COs | Text Book/Reference Book
31 | Process management – forks and execution | BB/PPT | 1 | CO5 | Embedded Linux Systems, 2/e, O'
32 | Basic level programming with system calls | BB/PPT | 1 | CO5 | Embedded Linux Systems, 2/e, O'
33 | Shell programming and filters | BB/PPT | 1 | CO5 | Embedded Linux Systems, 2/e, O'
LECTURE NOTES
UNIT-I
Operating Systems
Computer-system Organization
Computer-system Architecture
Operating-system Structure
Operating-system Operations
Computing Environments
UNIT-I
Operating Systems
What is an operating system?
• A program that acts as an intermediary between a user of a
computer and the computer hardware
Operating system goals:
• Execute user programs and make solving user problems easier
• Make the computer system convenient to use
• Use the computer hardware in an efficient manner
What Operating Systems Do
The answer depends on the point of view:
• Users want convenience, ease of use and good performance; they don't care about resource utilization.
• But a shared computer such as a mainframe or minicomputer must keep all users happy; here the operating system is a resource allocator and control program, making efficient use of hardware and managing execution of user programs.
• Users of dedicated systems such as workstations have dedicated resources but frequently use shared resources from servers.
• Mobile devices like smartphones and tablets are resource poor, optimized for usability and battery life; mobile user interfaces include touch screens and voice recognition.
• Some computers have little or no user interface, such as embedded computers in devices and automobiles; these run primarily without user intervention.
Computer Startup.
Bootstrap Program is Loaded at Power Up or Reboot
• Typically stored in ROM, EPROM, generally known as firmware
• Initializes all aspects of system
• Loads operating system kernel and starts execution
I/O Structure
Two methods for handling I/O
• After I/O starts, control returns to user program only upon I/O
completion
• After I/O starts, control returns to user program without waiting
for I/O completion
Storage Structure
• Main memory – only large storage media that the CPU can access
directly
• Random access
• Typically, volatile
• Typically, random-access memory in the form of Dynamic
Random-access Memory (DRAM)
• Secondary storage – extension of main memory that provides large
nonvolatile storage capacity
• Hard Disk Drives (HDD) – rigid metal or glass platters covered with
magnetic recording material
• Disk surface is logically divided into tracks, which are
subdivided into sectors
• The disk controller determines the logical interaction between
the device and the computer
• Non-volatile memory (NVM) devices– faster than hard disks,
nonvolatile
• Various technologies
• Becoming more popular as capacity and performance increase and prices drop
Storage Hierarchy
Caching
• Important principle, performed at many levels in a computer (in
hardware, operating system, software)
• Information in use copied from slower to faster storage temporarily
• Faster storage (cache) checked first to determine if information is
there
• If it is, information used directly from the cache (fast)
• If not, data copied to cache and used there
• Cache smaller than storage being cached
• Cache management important design problem
• Cache size and replacement policy
Computer-System Architecture
• Most systems use a single general-purpose processor
• Most systems have special-purpose processors as well
• Multiprocessor systems growing in use and importance
• Also known as parallel systems, tightly-coupled systems
• Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault
tolerance
• Two types:
1. Asymmetric Multiprocessing – each processor is assigned
a specific task.
2. Symmetric Multiprocessing – each processor performs all
tasks
Symmetric Multiprocessing Architecture
Dual-Core Design
• Multi-chip and multicore designs
• Systems containing all cores on one chip, or a chassis containing multiple separate systems
Clustered Systems
Like multiprocessor systems, but multiple systems working
together
Usually sharing storage via a storage-area network (SAN)
Provides a high-availability service which survives failures
Asymmetric clustering has one machine in hot-standby mode
Symmetric clustering has multiple nodes running applications,
monitoring each other
o Some clusters are for high-performance computing (HPC)
Applications must be written to use parallelization
o Some have distributed lock manager (DLM) to avoid
conflicting operations
Memory Layout of Multiprogram System:
Operating-System Operations
Interrupt driven (by hardware and software)
A software error or request creates an exception or trap, e.g. division by zero or a request for an operating system service.
Other process problems include infinite loops and processes modifying each other or the operating system.
Dual-mode operation allows OS to protect itself and other system
components
User mode and kernel mode
Mode bit provided by hardware
Provides ability to distinguish when system is running
user code or kernel code
• When a user is running -> mode bit is “user”
• When kernel code is executing ->mode bit is
“kernel”
Some instructions designated as privileged, only
executable in kernel mode
System call changes mode to kernel, return from call
resets it to user
Transition from User to Kernel Mode:
A timer prevents infinite loops (or a process hogging resources):
The timer is set to interrupt the computer after some time period.
A counter is kept that is decremented by the physical clock.
The operating system sets the counter (a privileged instruction).
When the counter reaches zero, an interrupt is generated.
The timer is set up before scheduling a process, to regain control or to terminate a program that exceeds its allotted time.
Process Management
A process is a program in execution. It is a unit of work within the
system. Program is a passive entity; process is an active entity.
Process needs resources to accomplish its task
CPU, memory, I/O, files
Initialization data
Process termination requires reclaim of any reusable resources
Single-threaded process has one program counter specifying location
of next instruction to execute
Process executes instructions sequentially, one at a time, until
completion
Multi-threaded process has one program counter per thread
Typically, system has many processes, some user, some operating
system running concurrently on one or more CPUs
Concurrency by multiplexing the CPUs among the processes /
threads
Process Management Activities
The operating system is responsible for the following activities in
connection with process management:
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling
Memory Management
To execute a program all (or part) of the instructions must be in
memory
All (or part) of the data that is needed by the program must be in
memory
Memory management determines what is in memory and when
o Optimizing CPU utilization and computer response to users
Memory management activities
o Keeping track of which parts of memory are currently being
used and by whom
o Deciding which processes (or parts thereof) and data to move
into and out of memory
o Allocating and deallocating memory space as needed
File-system Management
OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by device (i.e., disk drive, tape
drive)
Varying properties include access speed, capacity, data-transfer rate, and access method (sequential or random)
File-System management
o Files usually organized into directories
o Access control on most systems to determine who can access
what
o OS activities include
Creating and deleting files and directories
Primitives to manipulate files and directories
Mapping files onto secondary storage
Backup files onto stable (non-volatile) storage media
Mass-Storage Management
Usually, disks used to store data that does not fit in main memory
or data that must be kept for a “long” period of time
Proper management is of central importance
Entire speed of computer operation hinges on disk subsystem and
its algorithms
OS activities
o Mounting and unmounting
o Free-space management
o Storage allocation
o Disk scheduling
o Partitioning
o Protection
Characteristics of Various Types of Storage
Cloud Computing
• Delivers computing, storage, even apps as a service across a
network
• Logical extension of virtualization because it uses virtualization as
the base for its functionality.
– Amazon EC2(Elastic Compute Cloud) has thousands of
servers, millions of virtual machines, petabytes of storage
available across the Internet, pay based on usage
• Many types
– Public cloud – available via Internet to anyone willing to pay
– Private cloud – run by a company for the company’s own
use
– Hybrid cloud – includes both public and private cloud
components
– Software as a Service (SaaS) – one or more applications
available via the Internet (i.e., word processor)
– Platform as a Service (PaaS) – software stack ready for
application use via the Internet (i.e., a database server)
– Infrastructure as a Service (IaaS) – servers or storage
available over Internet (i.e., storage available for backup use)
• Cloud computing environments composed of traditional OSes, plus
VMMs, plus cloud management tools
– Internet connectivity requires security like firewalls
– Load balancers spread traffic across multiple applications
– Free software and open-source software are two different ideas
championed by different groups of people
• https://www.gnu.org/philosophy/open-source-misses-the-point.en.html
• Examples include GNU/Linux and BSD UNIX (including core of Mac
OS X), and many more
• Can use VMM like VMware Player (Free on Windows), Virtualbox
(open source and free on many platforms -
http://www.virtualbox.com)
– Use to run guest operating systems for exploration
Real time operating system has well defined, fixed time constraints.
Processing must be done within defined constraints or the system will fail.
UNIT-II
Thread Scheduling
UNIT-II
Timing Constraints
Constraints imposed on the timing behavior of a job are called timing constraints. The timing constraint of a job can be specified in terms of its release time and relative or absolute deadlines.
Hard and Soft Timing Constraints
It is common to divide timing constraints into two types: hard and soft. The classification is based on the functional criticality of jobs, the usefulness of late results, and the deterministic or probabilistic nature of the constraints.
Common definition
A timing constraint or deadline is hard if the failure to meet it is considered to be a fatal fault. A hard deadline is imposed on a job if a late result produced after the deadline may have disastrous consequences.
A few misses of soft deadlines do no serious harm; only the system's overall performance decreases.
The distinction between hard and soft timing constraints can also be made in terms of the usefulness of the result as a function of tardiness. The tardiness of a job measures how late it completes relative to its deadline (completion time − deadline). The usefulness of a result produced by a soft real-time job decreases gradually as tardiness increases; the usefulness of a hard real-time result falls abruptly, and may even become negative, when tardiness grows larger than 0.
Whether a timing constraint is hard or soft depends on the seriousness of the job.
Scheduling Algorithms
Most Operating Systems today use very similar CPU time scheduling
algorithms, all based on the same basic ideas, but with Operating System-
specific adaptations and extensions.
CPU scheduling is the task of selecting a waiting process from the ready
queue and allocating the CPU to it. The CPU is allocated to the selected
process by the dispatcher.
A CPU scheduling algorithm should try to maximize the following:
1. CPU utilization
2. Throughput
A CPU scheduling algorithm should try to minimize the following:
1. Turnaround time
2. Waiting time
3. Response time
(1) Explain FCFS, Round Robin, Shortest Job First, Shortest Remaining Job First
and Priority Scheduling algorithms with illustration.
I FCFS (First Come First Serve):
Selection criteria:
The process that requests first is served first. It means that processes are
served in the exact order of their arrival.
Decision Mode:
Non preemptive: Once a process is selected, it runs until it is blocked
for an I/O or some event, or it is terminated.
Implementation:
This strategy can be easily implemented by using FIFO queue, FIFO means
First in First Out. When CPU becomes free, a process from the first position
in a queue is selected to run.
Example:
Consider the following set of four processes. Their arrival time and time
required to complete the execution are given in following table.
Consider all time values in milliseconds.
Process | Arrival Time (T0) | Time required for completion (∆T) (CPU Burst Time)
P0 | 0 | 10
P1 | 1 | 6
P2 | 3 | 2
P3 | 5 | 4
Gantt Chart:
| P0 | P1 | P2 | P3 |
0    10   16   18   22
Initially only process P0 is present and it is allowed to run. But, when P0
completes, all other processes are present. So, next process P1 from ready
queue is selected and
allowed to run till it completes. This procedure is repeated till all processes
completed their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (WT = TAT − ∆T)
P0 | 0 | 10 | 10 | 10 | 0
P1 | 1 | 6 | 16 | 15 | 9
P2 | 3 | 2 | 18 | 15 | 13
P3 | 5 | 4 | 22 | 17 | 13
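To make the bookkeeping concrete, here is a minimal C sketch (arrival and burst values copied from the table above; all names are illustrative, not part of any standard API) that reproduces the FCFS finish, turnaround and waiting times:

    #include <stdio.h>

    int main(void)
    {
        /* Arrival (T0) and burst (dT) times from the FCFS example */
        int arrival[] = {0, 1, 3, 5};
        int burst[]   = {10, 6, 2, 4};
        int time = 0;
        for (int i = 0; i < 4; i++) {
            if (time < arrival[i])    /* CPU idles until the process arrives */
                time = arrival[i];
            time += burst[i];         /* non-preemptive: run to completion */
            int tat = time - arrival[i];
            printf("P%d: finish=%d TAT=%d WT=%d\n", i, time, tat, tat - burst[i]);
        }
        return 0;
    }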
Advantages:
Simple, fair, no starvation.
Easy to understand, easy to implement.
Disadvantages:
Not efficient. Average waiting time is too high.
Convoy effect is possible. All small I/O bound processes wait for
one big CPU bound process to acquire CPU.
CPU utilization may be less efficient especially when a CPU bound
process is running with many I/O bound processes.
II SJF (Shortest Job First):
Gantt Chart:
| P0 | P2 | P3 | P1 |
0    10   12   16   22
Initially only process P0 is present and it is allowed to run. But when P0 completes, all other processes are present. So, the process with the shortest CPU burst, P2, is selected and allowed to run till it completes. Whenever more than one process is available, this type of decision is taken. This procedure is repeated till all processes complete their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (WT = TAT − ∆T)
P0 | 0 | 10 | 10 | 10 | 0
P1 | 1 | 6 | 22 | 21 | 15
P2 | 3 | 2 | 12 | 9 | 7
P3 | 5 | 4 | 16 | 11 | 7
III Shortest Remaining Job First (SRTF):
When the CPU becomes free, the process at the first position in the ready queue (the one with the shortest remaining time) is selected to run.
Example:
Consider the following set of four processes. Their arrival time and time
required to complete the execution are given in following table.
Consider all time values in milliseconds.
Process | Arrival Time (T0) | Time required for completion (∆T) (CPU Burst Time)
P0 | 0 | 10
P1 | 1 | 6
P2 | 3 | 2
P3 | 5 | 4
Gantt Chart:
| P0 | P1 | P2 | P1 | P3 | P0 |
0    1    3    5    9    13   22
Initially only process P0 is present and it is allowed to run. But, when P1
comes, it has shortest remaining run time. So, P0 is preempted and P1 is
allowed to run. Whenever new process comes or current process blocks,
such type of decision is taken. This procedure is repeated till all
processes complete their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (WT = TAT − ∆T)
P0 | 0 | 10 | 22 | 22 | 12
P1 | 1 | 6 | 9 | 8 | 2
P2 | 3 | 2 | 5 | 2 | 0
P3 | 5 | 4 | 13 | 8 | 4
Advantages:
Less waiting time.
Quite good response for short processes.
Disadvantages:
Again, it is difficult to estimate remaining time necessary to complete
execution.
Starvation is possible for long process. Long process may wait
forever.
Context switch overhead is there.
IV Round Robin:
Selection Criteria:
Each selected process is assigned a time interval, called time quantum or
time slice. Process is allowed to run only for this time interval. Here,
two things are possible: First, Process is either blocked or terminated
before the quantum has elapsed. In this case the CPU switching is done
and another process is scheduled to run. Second, Process needs CPU
burst longer than time quantum. In this case, process is running at the
end of the time quantum. Now, it will be preempted and moved to the
end of the queue. CPU will be allocated to another process. Here, length
of time quantum is critical to determine.
Decision Mode:
Preemptive:
Implementation:
This strategy can be implemented by using a circular FIFO queue. Whenever a process arrives, releases the CPU, or is preempted, it is moved to the end of the queue. When the CPU becomes free, the process at the first position in the queue is selected to run.
Example:
Consider the following set of four processes. Their arrival time and time
required to complete the execution are given in the following table.
All time values are in milliseconds. Consider that time quantum is of 4
ms, and context switch overhead is of 1 ms.
Gantt Chart (CS = 1 ms context switch):
| P0 | CS | P1 | CS | P2 | CS | P0 | CS | P3 | CS | P1 | CS | P0 |
0    4    5    9    10   12   13   17   18   22   23   25   26   28
At 4 ms, process P0 completes its time quantum, so it is preempted and another process P1 is allowed to run. At 12 ms, process P2 voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated till all processes complete their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (WT = TAT − ∆T)
P0 | 0 | 10 | 28 | 28 | 18
P1 | 1 | 6 | 25 | 24 | 18
P2 | 3 | 2 | 12 | 9 | 7
P3 | 5 | 4 | 22 | 17 | 13
Advantages:
One of the oldest, simplest, fairest and most widely used
algorithms.
Disadvantages:
Context switch overhead is there.
Determination of time quantum is too critical. If it is too short, it
causes frequent context switches and lowers CPU efficiency. If it is
too long, it causes poor response for short interactive process.
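The worked example can be checked mechanically. Below is a minimal C sketch of this round-robin queue (quantum 4 ms, 1 ms context-switch overhead, arrival/burst values from the example; the fixed queue bound and names are illustrative assumptions); it reproduces the finish times 28, 25, 12 and 22 ms:

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        int arrival[N]   = {0, 1, 3, 5};
        int remaining[N] = {10, 6, 2, 4};
        int finish[N] = {0}, enqueued[N] = {0};
        int queue[64], head = 0, tail = 0, time = 0;

        queue[tail++] = 0; enqueued[0] = 1;        /* P0 arrives at t = 0 */
        while (head < tail) {
            int p = queue[head++];
            int slice = remaining[p] < 4 ? remaining[p] : 4;
            time += slice;
            remaining[p] -= slice;
            for (int i = 0; i < N; i++)            /* admit newly arrived processes */
                if (!enqueued[i] && arrival[i] <= time) {
                    queue[tail++] = i; enqueued[i] = 1;
                }
            if (remaining[p] > 0)
                queue[tail++] = p;                 /* preempted: back of the queue */
            else
                finish[p] = time;
            if (head < tail)
                time += 1;                         /* 1 ms context-switch overhead */
        }
        for (int i = 0; i < N; i++)
            printf("P%d: finish=%d TAT=%d\n", i, finish[i], finish[i] - arrival[i]);
        return 0;
    }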
V Priority Scheduling (non-preemptive):
Gantt Chart:
| P0 | P3 | P2 | P1 |
0    10   14   16   22
Initially only process P0 is present and it is allowed to run. But when P0 completes, all other processes are present. So, the process with the highest priority, P3, is selected and allowed to run till it completes. This procedure is repeated till all processes complete their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (TAT − ∆T)
P0 | 0 | 10 | 10 | 10 | 0
P1 | 1 | 6 | 22 | 21 | 15
P2 | 3 | 2 | 16 | 13 | 11
P3 | 5 | 4 | 14 | 9 | 5
Advantages:
Priority is considered. Critical processes can get even better response time.
Disadvantages:
Starvation is possible for low priority processes. It can be
overcome by using technique called ‘Aging’.
Aging: gradually increases the priority of processes that wait in the system for a long time.
Priority Scheduling (preemptive):
Gantt chart:
| P0 | P1 | P2 | P3 | P1 | P0 |
0    1    3    5    9    13   22
Initially only process P0 is present and it is allowed to run. But when P1 comes, it has higher priority. So, P0 is preempted and P1 is allowed to run. This process is repeated till all processes complete their execution.
Statistics:
Process | Arrival Time (T0) | CPU Burst Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (TAT − ∆T)
P0 | 0 | 10 | 22 | 22 | 12
P1 | 1 | 6 | 13 | 12 | 6
P2 | 3 | 2 | 5 | 2 | 0
P3 | 5 | 4 | 9 | 4 | 0
Advantages:
Priority is considered. Critical processes can get even better
response time.
Disadvantages:
Starvation is possible for low priority processes. It can be
overcome by using technique called ‘Aging’.
Aging: gradually increases the priority of processes that wait in the
system for a long time.
Context switch overhead is there.
(2) Five batch jobs A to E arrive at same time. They have estimated running times
10,6,2,4 and 8 minutes. Their priorities are 3,5,2,1 and 4 respectively with 5
being highest priority. For each of the following algorithm determine mean
process turnaround time. Ignore process swapping overhead.
Round Robin, Priority Scheduling, FCFS, SJF.
Job | Running Time (min) | Priority
A | 10 | 3
B | 6 | 5
C | 2 | 2
D | 4 | 1
E | 8 | 4

FCFS:
| A | B | C | D | E |
0   10   16   18   22   30
Process | Arrival Time (T0) | Completion Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (TAT − ∆T)
A | 0 | 10 | 10 | 10 | 0
B | 0 | 6 | 16 | 16 | 10
C | 0 | 2 | 18 | 18 | 16
D | 0 | 4 | 22 | 22 | 18
E | 0 | 8 | 30 | 30 | 22
Shortest Job First:
| C | D | B | E | A |
0   2    6    12   20   30
Process | Arrival Time (T0) | Completion Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (TAT − ∆T)
A | 0 | 10 | 30 | 30 | 20
B | 0 | 6 | 12 | 12 | 6
C | 0 | 2 | 2 | 2 | 0
D | 0 | 4 | 6 | 6 | 2
E | 0 | 8 | 20 | 20 | 12
Priority:
| B | E | A | C | D |
0   6    14   24   26   30
Round Robin:
Time slice OR quantum time = 2 min.
| A | B | C | D | E | A | B | D | E | A | B | E | A | E | A |
0   2   4   6   8   10  12  14  16  18  20  22  24  26  28  30
Process | Arrival Time (T0) | Completion Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 − T0) | Waiting Time (TAT − ∆T)
A | 0 | 10 | 30 | 30 | 20
B | 0 | 6 | 22 | 22 | 16
C | 0 | 2 | 6 | 6 | 4
D | 0 | 4 | 16 | 16 | 12
E | 0 | 8 | 28 | 28 | 20
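From the four tables, the mean turnaround times requested by the question work out as:
FCFS: (10 + 16 + 18 + 22 + 30) / 5 = 19.2 min
SJF: (30 + 12 + 2 + 6 + 20) / 5 = 14.0 min
Priority: (24 + 6 + 26 + 30 + 14) / 5 = 20.0 min
Round Robin: (30 + 22 + 6 + 16 + 28) / 5 = 20.4 min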
(3) Suppose that the following processes arrive for the execution at the times
indicated. Each process will run the listed amount of time. Assume
preemptive scheduling.
Process | Arrival Time (ms) | Burst Time (ms)
P1 | 0.0 | 8
P2 | 0.4 | 4
P3 | 1.0 | 1
What is the turnaround time for these processes with Shortest Job First
scheduling algorithm?
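A worked solution, assuming "preemptive scheduling" here means shortest-remaining-time-first:
P1 runs 0.0–0.4; P2 arrives (burst 4 < 7.6 remaining) and runs 0.4–1.0; P3 arrives (burst 1 < 3.4 remaining) and runs 1.0–2.0; P2 resumes 2.0–5.4; P1 resumes 5.4–13.0.
Turnaround times: P1 = 13.0 − 0.0 = 13.0 ms, P2 = 5.4 − 0.4 = 5.0 ms, P3 = 2.0 − 1.0 = 1.0 ms; average = (13.0 + 5.0 + 1.0) / 3 ≈ 6.33 ms.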
Thread Scheduling
The main approaches to thread scheduling are:
1. Load sharing
2. Gang scheduling
3. Dedicated processor assignment and Dynamic scheduling
Load Sharing
Processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue. The three versions of load sharing are:
1. First come first served
2. Smallest number of threads first
3. Preemptive smallest number of threads first
First come first served: when a job arrives each of its threads is placed
consecutively at the end of the shared queue. When a processor becomes
idle it picks the next ready thread, which it executes until completion or
blocking.
Smallest number of threads first: The shared ready queue is organized as
a priority queue with highest priority given to threads from jobs with the
smallest number of unscheduled threads. Jobs of equal priority are ordered
according to which job arrives first.
Preemptive smallest number of threads first: Highest priority is given to jobs with the smallest number of incomplete threads.
3. Gang scheduling has three parts.
(a) Groups of related threads are scheduled as a unit, a gang
(b) All members of a gang run simultaneously on different timeshared
CPUs.
(c) All gang members start and end their time slices together.
4. The trick that makes gang scheduling work is that all CPU are
scheduled synchronously. This means that time is divided into
discrete quanta.
5. An example of how gang scheduling works is given in Table 5.1. Here you have a multiprocessor with six CPUs being used by five processes, A through E, with a total of 24 ready threads.
Table 5.1: Gang Scheduling
Time slot | CPU 0 | CPU 1 | CPU 2 | CPU 3 | CPU 4 | CPU 5
0 | A0 | A1 | A2 | A3 | A4 | A5
1 | B0 | B1 | B2 | C0 | C1 | C2
2 | D0 | D1 | D2 | D3 | D4 | E0
3 | E1 | E2 | E3 | E4 | E5 | E6
4 | A0 | A1 | A2 | A3 | A4 | A5
5 | B0 | B1 | B2 | C0 | C1 | C2
6 | D0 | D1 | D2 | D3 | D4 | E0
7 | E1 | E2 | E3 | E4 | E5 | E6
(a) During time slot 0, threads A0 through A5 are scheduled and run
(b) During time slot 1, threads B0, B1, B2, C0, C1, C2 are scheduled
and run
(c) During time slot 2, D’s five threads and E0 get to run
(d) The remaining six threads belonging to process E run in the time
slot 3. Then the cycle repeats, with slot 4 being the same as slot 0
and so on.
(e) Gang scheduling is useful for applications where performance
severely degrades when any part of the application is not running.
Dedicated Processor Assignment
1. When application is scheduled, its threads are assigned to a
processor.
2. Some processor may be idle and no multiprogramming of
processors.
3. Provides implicit scheduling defined by assignment of threads to
processors. For the duration of program execution, each program
is allocated a set of processors equal in number to the number of
threads in the program. Processors are chosen from the available
pool.
Dynamic Scheduling
1. Number of threads in a process are altered dynamically by the
application.
2. Operating system and the application are involved in making
scheduling decisions. The OS is responsible for partitioning the
processors among the jobs.
3. Operating system adjusts load to improve the use:
(a) Assign idle processors.
(b) New arrivals may be assigned to a processor that is used by
a job currently using more than one processor.
(c) Hold request until processor is available
(d) New arrivals will be given a processor before existing
running applications.
Multiprocessor scheduling.
The scheduling policy for a multiprocessor system usually embodies a mixture of
several of these criteria. Issues in Multiprocessor Scheduling Solutions to the
scheduling problem come in two general forms: algorithms and scheduling systems.
Algorithms concentrate on policy while scheduling systems provide mechanism to
implement the algorithms. Some scheduling systems run outside the operating
system kernel, while others are part of a tightly-integrated distributed or parallel
operating system.
Distributed systems communicate via message-passing, while parallel systems use
shared memory. A task is the unit of computation in computing systems, and a job
consists of one or more cooperating tasks. Global scheduling involves assigning a
task to a particular processor within the system.
Local scheduling determines which of the set of available tasks at a processor runs
next on that processor. Task migration can change the global mapping by moving
a task to a new processor. If you have several jobs, each composed of many tasks,
you can either assign several processors to a single job, or you can assign several
tasks to a single processor. The former is known as space sharing, and the latter is
called time sharing.
Global scheduling is often used to perform load sharing. Load sharing allows busy
processors to off-load some of their work to less busy processors. Load balancing is
a special case of load sharing, in which the goal is to keep the load even across all
processors. Sender-initiated load sharing occurs when busy processors try to find
idle processors to off-load some work. Receiver- initiated load sharing occurs when
idle processors seek busy processors. It is now accepted wisdom that full load
balancing is generally not worth doing, as the small gain in execution time over
simpler load sharing is more than offset by the effort expended in maintaining the
balanced load.
As the system runs, new tasks arrive while old tasks complete execution (or are
served). If the arrival rate is greater than the service rate then the system is said to
be unstable. If tasks are serviced as least as fast as they arrive, the system is said
to be stable. If the arrival rate is just slightly less than the service rate for a system,
an unstable scheduling policy can push the system into instability. A stable policy
will never make a stable system unstable.
Figure: Multiprocessor queueing; each processor has its own ready queue (RQ), and processes are admitted to a queue and later released on completion.
Real-time Scheduling
Real-time systems design is an increasingly important topic in systems research
communities as well as the software industries. Real-time applications and their
requirements can be found in almost every area of operating systems and
networking research. An incomplete list of such domains includes distributed
systems, embedded systems, network protocol processing, aircraft design,
spacecraft design..., and the list goes on.
A schedule for a set of tasks is said to be feasible if a proof exists that every task
instance in the set will complete processing by its associated deadline. Also, a
task set is schedulable if there exists a feasible schedule for the set.
UNIT-III
Principles of concurrency
Deadlock strategies
UNIT-III
Concurrency in Operating Systems
The issues and difficulties raised by multiprogramming and by true parallelism overlap to a large extent:
• sharing global resources safely is difficult;
• locating programming errors can be difficult, because the contexts in which errors occur cannot always be reproduced easily.
Parallelism also introduces the issue that different processors may run at different speeds.
A Simple Example
Consider this fragment, which echoes one typed character (the read into chin is implied by the discussion below):
chin = getchar();
chout = chin;
putchar(chout);
Imagine two processes P1 and P2 both executing this code at the “same” time,
with the following interleaving due to multi-programming.
1. P1 enters this code, but is interrupted after reading the character x into
chin.
2. P2 enters this code, and runs it to completion, reading and displaying the
character y.
P1 sets chin, but this write is subsequently lost during the execution of P2. The general solution is to allow only one process at a time to enter the code that accesses chin: such code is often called a critical section. When one process is inside a critical section of code, other processes must be prevented from entering that section. This requirement is known as mutual exclusion.
Mutual Exclusion
Mutual exclusion is in many ways the fundamental issue in concurrency. It is the
requirement that when a process P is accessing a shared resource R, no other process
should be able to access R until P has finished with R. Examples of such resources
include files, I/O devices such as printers, and shared data structures.
• Leave the responsibility with the processes themselves: this is the basis of most
software approaches. These approaches are usually highly error-prone and carry
high overheads.
Semaphores
The fundamental idea of semaphores is that processes "communicate" via global counters that are initialized to a positive integer and that can be accessed only through two atomic operations:
semSignal(x) increments the value of the semaphore x.
semWait(x) tests the value of the semaphore x: if x > 0, the process decrements x and continues; if x = 0, the process is blocked until some other process performs a semSignal, then it proceeds as above.
A critical section is then guarded as follows:
semWait(x);
/* critical section */
semSignal(x);
In general, the number of processes that can execute this critical section simultaneously is determined by the initial value given to x. If more than this number try to enter the critical section, the excess processes will be blocked until some processes exit. Most often, semaphores are initialized to one.
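As a concrete illustration in C, POSIX semaphores provide the same two operations: sem_wait corresponds to semWait and sem_post to semSignal. With the initial value 1, at most one thread is inside the critical section at a time (the counter and names are illustrative):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t x;
    int shared_counter = 0;

    void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&x);          /* semWait(x): decrement or block */
            shared_counter++;      /* critical section */
            sem_post(&x);          /* semSignal(x): increment, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&x, 0, 1);        /* initial value 1 => mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);   /* always 200000 */
        return 0;
    }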
Monitors
The principal problem with semaphores is that calls to semaphore operations tend to
be distributed across a program, and therefore these sorts of programs can be
difficult to get correct, and very difficult indeed to prove correct! Monitors address
this problem by imposing a higher-level structure on accesses to semaphore
variables.
A monitor is essentially an object (in the Java sense) which has the semaphore
variables as internal (private) data and the semaphore operations as (public)
operations. Mutual exclusion is provided by allowing only one process to execute
the monitor’s code at any given time.
Monitors are significantly easier to validate than “bare” semaphores for at least
two reasons:
• once the monitor is correct, any number of processes sharing the resource will
operate correctly
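As an assumed illustration (not code from the course text), a monitor can be approximated in C with a pthreads mutex plus a condition variable: the mutex makes the monitor's operations mutually exclusive, and the condition variable lets a caller wait inside the monitor. All names are illustrative:

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
        int value;
    } monitor_t;

    void monitor_deposit(monitor_t *m, int amount)
    {
        pthread_mutex_lock(&m->lock);      /* enter the monitor */
        m->value += amount;
        pthread_cond_signal(&m->nonzero);  /* wake one waiting caller */
        pthread_mutex_unlock(&m->lock);    /* leave the monitor */
    }

    int monitor_withdraw(monitor_t *m, int amount)
    {
        pthread_mutex_lock(&m->lock);
        while (m->value < amount)          /* wait atomically releases the lock */
            pthread_cond_wait(&m->nonzero, &m->lock);
        m->value -= amount;
        pthread_mutex_unlock(&m->lock);
        return amount;
    }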
Message Passing
With an approach based on message passing, processes operate in isolation from
each other (i.e. they do not share data), and they exchange information where
necessary by the sending and receiving of messages. Synchronization between
processes is defined by the blocking policy attached to the sending and receiving of
messages.
Blocking receive: when a process attempts to receive a message, it blocks until the message is available.
With this blocking policy, mutual exclusion can be achieved for a set of processes that share a mailbox box. Some number of messages (usually one) is sent to box initially by the system; then each process executes the following code when it wants to enter the critical section:
receive(box);
/* critical section */
send(box);
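As an assumed sketch of the same token discipline using POSIX message queues (the queue name /box and the one-byte token are illustrative; error checking omitted):

    #include <mqueue.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 1, .mq_msgsize = 1 };
        mqd_t box = mq_open("/box", O_CREAT | O_RDWR, 0600, &attr);
        char token = 't';
        unsigned prio;

        mq_send(box, &token, 1, 0);        /* the system primes the mailbox */

        mq_receive(box, &token, attr.mq_msgsize, &prio);  /* receive(box): blocks if empty */
        puts("in critical section");
        mq_send(box, &token, 1, 0);        /* send(box): release the token */

        mq_close(box);
        mq_unlink("/box");
        return 0;
    }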
Producer()
WHILE (true)
    produce-Item();
    P(empty);
    P(mutex);
    enter-Item();
    V(mutex);
    V(full);

Consumer()
WHILE (true)
    P(full);
    P(mutex);
    remove-Item();
    V(mutex);
    V(empty);
    consume-Item(Item);
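A runnable C rendering of this bounded-buffer pseudocode using POSIX semaphores, where P maps to sem_wait and V to sem_post (buffer size and item count are illustrative assumptions):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define SLOTS 4
    #define ITEMS 16

    int buffer[SLOTS];
    int in = 0, out = 0;
    sem_t empty_slots, full_slots, mutex;

    void *producer(void *arg)
    {
        (void)arg;
        for (int item = 0; item < ITEMS; item++) {
            sem_wait(&empty_slots);         /* P(empty) */
            sem_wait(&mutex);               /* P(mutex) */
            buffer[in] = item;              /* enter-Item */
            in = (in + 1) % SLOTS;
            sem_post(&mutex);               /* V(mutex) */
            sem_post(&full_slots);          /* V(full) */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full_slots);          /* P(full) */
            sem_wait(&mutex);               /* P(mutex) */
            int item = buffer[out];         /* remove-Item */
            out = (out + 1) % SLOTS;
            sem_post(&mutex);               /* V(mutex) */
            sem_post(&empty_slots);         /* V(empty) */
            printf("consumed %d\n", item);  /* consume-Item */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty_slots, 0, SLOTS);
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }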
A semaphore is hardware or a software tag variable whose value indicates the status
of a common resource. Its purpose is to lock the resource being used. A process
which needs the resource will check the semaphore for determining the status of
the resource followed by the decision for proceeding. In multitasking operating
systems, the activities are synchronized by using the semaphore techniques.
A semaphore is a mechanism to resolve resource conflicts by telling resource seekers the state of the sought resources, achieving mutually exclusive access to them. Often a semaphore operates as a kind of mutual-exclusion counter (such as a mutex) where it holds a number of access keys to the resources. A process that seeks the resources must obtain one of those access keys before it proceeds further to utilize the resource. If no such key is available, the process has to wait for the current resource user to release one.
A semaphore could have the value 0, indicating that no wakeups were saved, or
some positive values if one or more wakeups were pending.
A semaphore s is an integer variable that apart from initialization, is accessed only
through two standard atomic operations, wait and signal. These operations were originally termed P (for wait, to test) and V (for signal, to increment).
The classical definition of wait in pseudocode is:
wait(s)
{
    while (s <= 0)
        ;        /* busy wait */
    s--;
}
The classical definition of signal in pseudocode is:
signal(s)
{
    s++;
}
Sketch (fragment) of a dining-philosophers solution in the SR language:
procedure think()
    napping := int(random(1000*thinking))
end think
procedure eat()
    napping := int(random(1000*eating))
end eat
process phil ...
resource dining_server
    op take_forks(i : int), put_forks(i : int)
body dining_server(num_phil : int)
    ...
end
Deadlock
Deadlock occurs when you have a set of processes [not necessarily all the
processes in the system], each holding some resources, each requesting some
resources, and none of them is able to obtain what it needs, i.e. to make
progress. Those processes are deadlocked because all the processes are
waiting. None of them will ever cause any of the events that could wake up any
of the other members of the set, and all the processes continue to wait forever.
For this model, I assume that processes have only a single thread and that
there are no interrupts possible to wake up a blocked process. The no-interrupts condition is needed to prevent an otherwise deadlocked process
from being awakened by, say, an alarm, and then causing events that release
other processes in the set. In most cases, the event that each process is waiting
for is the release of some resource currently possessed by another member of
the set. In other words, each member of the set of deadlocked processes is
waiting for a resource that is owned by another deadlocked process. None of
the processes can run, none of them can release any resources, and none of
them can be awakened. The number of processes and the number and kind of
resources possessed and requested are unimportant. This result holds for any
kind of resource, including both hardware and software.
Figure: Process A owns resource Y, while process B is waiting for Y.
Deadlock Characterization
Deadlock situation can arise if the following four conditions hold
simultaneously in a system:
1. Resources are used in mutual exclusion.
2. Resources are acquired piecemeal (i.e. not all the resources that are
needed to complete an activity are obtained at the same time in a single
indivisible action).
3. Resources are not preempted (i.e. a process does not take away resources
being held by another process).
4. Resources are not spontaneously given up by a process until it has satisfied all its outstanding requests for resources (i.e. a process that cannot obtain some needed resource does not give up the resources it is currently holding).
Handling of Deadlocks
Deadlock Prevention
Deadlocks can be prevented by ensuring that at least one of the following four conditions does not hold:
1. Mutual exclusion: Removing the mutual exclusion condition means
that no process may have exclusive access to a resource. This proves
impossible for resources that cannot be spooled, and even with
spooled resources deadlock could still occur. Algorithms that avoid
mutual exclusion are called non-blocking synchronization algorithms.
2. Hold and wait: The “hold and wait” conditions may be removed by
requiring processes to request all the resources they will need before
starting up (or before embarking upon a particular set of operations);
this advance knowledge is frequently difficult to satisfy and, in any
case, is an inefficient use of resources. Another way is to require
processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
3. No preemption: A “no preemption” (lockout) condition may also be
difficult or impossible to avoid as a process has to be able to have a
resource for a certain amount of time, or the processing outcome may
be inconsistent or thrashing may occur. However, inability to enforce
preemption may interfere with a priority algorithm
4. Circular wait: Algorithms that avoid circular waits include "disable interrupts during critical sections", and "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering; see the sketch below) and Dijkstra's solution.
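For instance, a hedged C sketch of the resource-ordering rule (the helper name is an assumption; ordering by lock address stands in for any fixed hierarchy):

    #include <pthread.h>

    /* If every thread acquires any pair of locks in one global order
       (here, by address), a cycle in the wait-for graph, and hence a
       circular wait, can never form. */
    static void lock_pair_in_order(pthread_mutex_t *a, pthread_mutex_t *b)
    {
        if (a > b) { pthread_mutex_t *t = a; a = b; b = t; }
        pthread_mutex_lock(a);   /* every thread takes the "lower" lock first */
        pthread_mutex_lock(b);   /* so no two threads can wait on each other */
    }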
Deadlock Avoidance
Deadlock Avoidance, assuming that you are in a safe state (i.e. a state from
which there is a sequence of allocations and releases of resources that allows
all processes to terminate) and you are requested certain resources, simulates
the allocation of those resources and determines if the resultant state is safe.
If it is safe the request is satisfied, otherwise it is delayed until it becomes
safe.
The Banker's Algorithm is used to determine if a request can be satisfied. It requires knowledge of who the competing transactions are and what their resource needs are. Deadlock avoidance is essentially not used in distributed systems.
Ignore Deadlock
Our banker has 10 credits to lend, but a possible liability of 22. Her job is to keep enough in reserve so that ultimately each customer can be satisfied over time: that is, that each customer will be able to access his full credit line, just not all at the same time. Suppose, after a while, the bank's credit line book shows:
Customer | Credit Used | Credit Line
Andy | 1 | 6
Barb | 1 | 5
Marv | 2 | 4
Sue | 4 | 7
Funds Available: 2
Max Commitment: 22
Eight credits have been allocated to the various customers; two remain.
The question then is: Does a way exist such that each customer can be
satisfied? Can each be allowed their maximum credit line in some
sequence? We presume that, once a customer has been allocated up to
his limit, the banker can delay the others until that customer repays his
loan, at which point the credits become available to the remaining
customers. If we arrive at a state where no customer can get his maximum
because not enough credits remain, then a deadlock could occur, because
the first customer to ask to draw his credit to its maximum would be
denied, and all would have to wait.
To determine whether such a sequence exists, the banker finds the customer closest to his limit: if the remaining credits will get him to that limit, the banker then assumes that that loan is repaid, and proceeds to the customer next closest to his limit, and so on. If all can be granted a full credit, the condition is safe.
In this case, Marv is closest to his limit: assume his loan is repaid. This
frees up 4 credits. After Marv, Barb is closest to her limit (actually, she’s
tied with Sue, but it makes no difference) and 3 of the 4 freed from Marv
could be used to award her maximum. Assume her loan is repaid; we
have now freed 6 credits. Sue is next, and her situation is identical to
Barb’s, so assume her loan is repaid. We have freed enough credits (6) to
grant Andy his limit; thus, this state is safe.
Suppose, however, that the banker proceeded to award Barb one more
credit after the credit book arrived at the state immediately above:
Now it’s easy to see that the remaining credit could do no good toward
getting anyone to their maximum.
So, to recap, the banker’s algorithm looks at each request as it occurs,
and tests if granting it will lead to a safe state. If not, the request is
delayed. To test for a safe state, the banker checks to see if enough
resources will remain after granting the request to satisfy the customer
closest to his maximum. If so, that loan is assumed repaid, and the next
customer checked, and so on. If all loans can be repaid, then the request
leads to a safe state, and can be granted. In this case, we see that if Barb
is awarded another credit, Marv, who is closest to his maximum, cannot
be awarded enough credits, hence Barb’s request can’t be granted —it will
lead to an unsafe state
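A minimal C sketch of this safety test, using the credit-book figures above (the output text is illustrative):

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        const char *name[N] = {"Andy", "Barb", "Marv", "Sue"};
        int used[N] = {1, 1, 2, 4};     /* credit already used */
        int line[N] = {6, 5, 4, 7};     /* credit line (maximum) */
        int available = 2;              /* funds available */
        int done[N] = {0};

        for (int freed = 0; freed < N; ) {
            int progress = 0;
            for (int i = 0; i < N; i++) {
                if (!done[i] && line[i] - used[i] <= available) {
                    available += used[i];   /* assume the loan is repaid */
                    done[i] = 1; freed++; progress = 1;
                    printf("%s can finish; available now %d\n", name[i], available);
                }
            }
            if (!progress) { puts("state is UNSAFE"); return 1; }
        }
        puts("state is SAFE");
        return 0;
    }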
UNIT-IV
Multitasking
Task Activation
Memory management
UNIT-IV
Introduction
Architecture
VxWorks is structured around a compact microkernel. This microkernel supports a full range of real-time features including multitasking, scheduling, inter-task synchronization/communication and memory management. All the other functionality is implemented as processes.
Task management
TASK STATES
In VxWorks, the states encountered by a task are of 8 different types:
1. Suspended: idle state just after creation or state where execution is
inhibited.
2. Ready: waiting for running and CPU access in case scheduled by the
scheduler but not waiting for a message through IPC.
3. Pending: The task is blocked as it waits for a message from the IPC or from
a resource; only then will the CPU be able to process further.
4. Delayed: sent to sleep for a certain time interval.
5. Delayed + suspended: delayed and then suspended if it is not pre-empted
during the delay period.
6. Pended for an IPC [Inter process Communication] + suspended: Pended
and then suspended if the blocked state does not change.
7. Pended for an IPC + delayed: Pended and then pre-empted after the delayed
time interval.
8. Pended for an IPC + suspended + delayed: Pended and suspended after
delayed time interval.
Kernel library functions are included in the header files ‘vxWorks.h’ and
‘kernelLib.h’. Task and system library functions are included in ‘taskLib.h’
and ‘sysLib.h’. User task priorities are between 101 and 255; the lowest priority means the task with the highest priority number (255). System tasks have priorities from 0 to 99. For user tasks, the default priority is 100.
Memory management
In VxWorks, all systems and all application tasks share the same address
space. This means that faulty applications could accidentally access system
resources and compromise the stability of the entire system. An optional tool
named VxVMI is available that can be used to allow each task to have its own
address space. Default physical page size used is 8KB. Virtual memory
support is available with VxVMI tool. VxWorks does not offer privilege
protection. The privilege level is always 0 (supervisor mode).
Interrupts:
To achieve the fastest possible response to external interrupts, interrupt
service routines (ISRs) in VxWorks run in a special context outside of any
thread's context, so that there are no thread context switches involved. The
C function that the user attaches to an interrupt vector is not the actual ISR.
Interrupts cannot directly vector to C functions.
The ISR's address is stored in the interrupt vector table and is called directly
from the hardware. The ISR performs some initial work (e.g. saving registers
and setting up stack) and then calls the C function that was attached by the
user. For this reason, we use the term interrupt handler (instead of ISR) to
designate the user installed C handler function. VxWorks uses an ISR design
that is different from a task design. The features of the ISR in VxWorks are:
1. ISRs have the highest priorities and can pre-empt any running task.
2. An ISR inhibits the execution of tasks till return.
3. An ISR does not execute like a task and does not have regular task
context.
4. An ISR should not use mutex semaphore.
5. An ISR should just write the required data to memory or a buffer.
6. An ISR should not use floating-point functions, as these take a longer time to execute.
A sketch of the resulting ISR-plus-task pattern follows below.
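This is an assumed example of the canonical deferred-work pattern (the vector number, priority, stack size and names are illustrative): the interrupt handler does minimal work and signals a task through a binary semaphore; semGive is safe to call from an ISR, while blocking calls and mutex semaphores are not.

    #include <vxWorks.h>
    #include <semLib.h>
    #include <taskLib.h>
    #include <intLib.h>
    #include <iv.h>

    LOCAL SEM_ID isrSem;

    void myIntHandler(int arg)
        {
        semGive(isrSem);                     /* signal only; no blocking, no floats */
        }

    void deferredWork(void)
        {
        for (;;)
            {
            semTake(isrSem, WAIT_FOREVER);   /* sleep until the ISR signals */
            /* process the data the ISR left in memory or a buffer */
            }
        }

    void start(void)
        {
        isrSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
        (void) intConnect(INUM_TO_IVEC(7), (VOIDFUNCPTR)myIntHandler, 0);
        taskSpawn("tDeferred", 110, 0, 4096, (FUNCPTR)deferredWork,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        }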
Performance
of 1.6 to 2.4 microseconds.
5. Priority inheritance: VxWorks has a priority inheritance mechanism that exhibits optimal performance, which is essential for an RTOS.
6. Footprint: VxWorks has a completely configurable and tunable small
memory footprint for today's memory-constrained systems. The user
can control how much of the operating system he needs.
Applications:
VxWorks RTOS is widely used in the market, for a great variety of
applications. Its reliability makes it a popular choice for safety critical
applications. VxWorks has been successfully used in both military and
civilian avionics, including the Apache Attack Helicopter, Boeing 787, 747-8
and Airbus A400M. It is also used in on ground avionic systems such as in
both civilian and military Radar stations. Another safety critical application
that trusts VxWorks is BMW's iDrive system. However, VxWorks is also widely used in non-safety-critical applications where performance is at a premium. The Xerox Phaser, a PostScript printer, is controlled by a VxWorks-powered platform. Linksys wireless routers use VxWorks for operating switches.
VxWorks has been used in several space applications. In spacecraft, where design challenges are greatly increased by the need for extremely low power consumption and lack of access to regular maintenance, VxWorks RTOS can be chosen as the operating system for the On-Board Computer [OBC]. For example, 'Clementine', launched in 1994, runs VxWorks 5.1 on a MIPS-based CPU responsible for the Star Tracker and image processing algorithms.
The ‘Spirit’ and ‘Opportunity’ Mars Exploration Rovers were installed with
VxWorks. VxWorks is also used as operating system in several industrial
robots and distributed control systems.
UNIT-V
File System
Concepts of Process
Process management-forks
Unix signals
UNIT-V
Kernel
The kernel is a computer program at the core of a computer's operating system with
complete control over everything in the system. It is an integral part of any operating
system. It is the "portion of the operating system code that is always resident in
memory”. It facilitates interactions between hardware and software components. On
most systems, it is one of the first programs loaded on start-up (after the bootloader).
It handles the rest of start up as well as input/output requests from software,
translating them into data-processing instructions for the central processing unit. It
handles memory and peripherals like keyboards, monitors, printers, and speakers
File System
Unix file system is a logical method of organizing and storing large amounts of
information in a way that makes it easy to manage. A file is the smallest unit in which information is stored. The Unix file system has several important features. All data in
Unix is organized into files. All files are organized into directories. These directories
are organized into a tree-like structure called the file system.
Files in Unix System are organized into multi-level hierarchy structure known as a
directory tree. At the very top of the file system is a directory called “root” which is
represented by a “/”. All other files are “descendants” of root.
/proc : procfs virtual filesystem showing information about processes
as files.
/root : The home directory for the superuser “root” – that is, the
system administrator. This account’s home directory is usually on the
initial filesystem, and hence not in /home (which may be a mount point
for another filesystem) in case specific maintenance needs to be
performed, during which other filesystems are not available. Such a case
could occur, for example, if a hard disk drive suffers physical failures
and cannot be properly mounted.
/tmp : A place for temporary files. Many systems clear this directory
upon startup; it might have tmpfs mounted atop it, in which case its
contents do not survive a reboot, or it might be explicitly cleared by a
startup script at boot time.
/usr : Originally the directory holding user home directories, its use
has changed. It now holds executables, libraries, and shared resources
that are not system critical, like the X Window System, KDE, Perl, etc.
However, on some Unix systems, some user accounts may still have a
home directory that is a direct subdirectory of /usr, such as the default
as in Minix. (on modern systems, these user accounts are often related
to server or system use, and not directly used by a person).
/usr/bin : This directory stores all binary programs distributed with
the operating system not residing in /bin, /sbin or (rarely) /etc.
/usr/include : Stores the development headers used throughout the
system. Header files are mostly used by the #include directive in C/C++
programming language.
/usr/lib : Stores the required libraries and data files for programs
stored within /usr or elsewhere.
/var : A short for “variable.” A place for files that may change often –
especially in size, for example e-mail sent to users on the system, or
process-ID lock files.
/var/log : Contains system log files.
/var/mail : The place where all the incoming mails are stored. Users
(other than root) can access their own mail only. Often, this directory is
a symbolic link to /var/spool/mail.
/var/spool : Spool directory. Contains print jobs, mail spools and
other queued tasks.
/var/tmp : A place for temporary files which should be preserved
between system reboots.
Types of Unix files – The UNIX file system contains several different types of files:
1. Ordinary files – An ordinary file is a file on the system that contains data,
text, or program instructions.
Used to store your information, such as some text you have written or
an image you have drawn. This is the type of file that you usually work
with.
Always located within/under a directory file.
Do not contain other files.
In long-format output of ls -l, this type of file is specified by the “-”
symbol.
2. Directories – Directories store both special and ordinary files. For users
familiar with Windows or Mac OS, UNIX directories are equivalent to folders.
A directory file contains an entry for every file and subdirectory that it houses. If you have 10 files in a directory, there will be 10 entries in the directory. Each entry has two components:
(1) the filename, and
(2) a unique identification number for the file or directory (called the inode number).
Directories are branching points in the hierarchical tree, used to organize groups of files. They may contain ordinary files, special files or other directories, but never contain "real" information which you would work with (such as text); basically, they are just used for organizing files. All files are descendants of the root directory (named /) located at the top of the tree.
In long-format output of ls -l, this type of file is specified by the "d" symbol.
3. Special Files – Used to represent a real physical device such as a printer,
tape drive or terminal, used for Input/Ouput (I/O) operations. Device or
special files are used for device Input/Output(I/O) on UNIX and Linux
systems. They appear in a file system just like an ordinary file or a directory.
On UNIX systems there are two flavors of special files for each device,
character special files and block special files:
When a character special file is used for device Input/Output(I/O), data
is transferred one character at a time. This type of access is called raw
device access.
When a block special file is used for device Input/Output(I/O), data is
transferred in large fixed-size blocks. This type of access is called block
device access.
For terminal devices, it’s one character at a time. For disk devices though,
raw access means reading or writing in whole chunks of data – blocks, which
are native to your disk.
In long-format output of ls -l, character special files are marked by the
“c” symbol.
In long-format output of ls -l, block special files are marked by the “b”
symbol.
4. Pipes – UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file which only exists to hold data from one command until it is read by another. A Unix pipe provides a one-way flow of data: the output or result of the first command sequence is used as the input to the second command sequence. To make a pipe, put a vertical bar (|) on the command line between two commands, for example: who | wc -l (a sketch of the underlying system calls follows below).
In long-format output of ls -l, named pipes are marked by the "p" symbol.
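A minimal sketch of the same one-way flow at the system-call level, using pipe(), fork(), read() and write() (the buffer size and message are illustrative):

    #include <unistd.h>
    #include <stdio.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                            /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                   /* child: the "reader" command */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child read: %s\n", buf);
            _exit(0);
        }
        close(fd[0]);                        /* parent: the "writer" command */
        write(fd[1], "hello", 5);
        close(fd[1]);                        /* EOF for the reader */
        wait(NULL);
        return 0;
    }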
5. Sockets – A Unix socket (or Inter-process communication socket) is a
special file which allows for advanced inter-process communication. A Unix
Socket is used in a client-server application framework. In essence, it is a
stream of data, very similar to network stream (and network sockets), but all
the transactions are local to the filesystem.
In long-format output of ls -l, Unix sockets are marked by “s” symbol.
6. Symbolic Link – A symbolic link is used for referencing some other file of
the file system. A symbolic link is also known as a soft link. It contains a text
form of the path to the file it references. To an end user, a symbolic link
appears to have its own name, but when you try reading or writing data to
this file, it redirects these operations to the file it points to. If we delete the
soft link itself, the data file is still there. However, if we delete the source file
or move it to a different location, the symbolic link will no longer function
properly.
In long-format output of ls -l, symbolic links are marked by the "l" symbol
(that's a lower-case L).
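The type letters above can also be obtained programmatically. The short C sketch
below is an illustrative addition (not part of the original notes): it uses the
POSIX lstat() call and the standard S_IS* test macros to print an ls -l style
type character for each path named on the command line.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Map a file mode to its ls -l type character. */
    static char type_char(mode_t m)
    {
        if (S_ISREG(m))  return '-';   /* ordinary file    */
        if (S_ISDIR(m))  return 'd';   /* directory        */
        if (S_ISCHR(m))  return 'c';   /* character device */
        if (S_ISBLK(m))  return 'b';   /* block device     */
        if (S_ISFIFO(m)) return 'p';   /* named pipe       */
        if (S_ISSOCK(m)) return 's';   /* socket           */
        if (S_ISLNK(m))  return 'l';   /* symbolic link    */
        return '?';
    }

    int main(int argc, char *argv[])
    {
        struct stat sb;
        for (int i = 1; i < argc; i++) {
            /* lstat() reports on a link itself rather than its target */
            if (lstat(argv[i], &sb) == -1) {
                perror(argv[i]);
                continue;
            }
            printf("%c %s\n", type_char(sb.st_mode), argv[i]);
        }
        return 0;
    }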
PROCESS MANAGEMENT
Every process in a UNIX system has the following attributes:
some code (a.k.a. text)
some data
a stack
a unique process ID number (PID)
When UNIX is first started, there’s only one visible process in the system.
This process is called “init”, and it has a process ID of 1. The only way to
create a new process in UNIX is to duplicate an existing process, so “init” is
the ancestor of all subsequent processes.
PROCESS STATES
Every process in the system can be in one of six states. The six possible
states are:
1) Running, which means that the process is currently using the CPU.
2) Runnable, which means that the process can make use of the CPU as soon
as it becomes available.
3) Sleeping, which means that the process is waiting for an event to occur.
For example, if a process executes a "read()" system call, it sleeps until
the I/O request completes.
4) Suspended, which means that the process has been "frozen" by a signal
such as SIGSTOP. It will resume only when sent a SIGCONT signal. For
example, a Control-Z from the keyboard suspends all of the processes in
the foreground job.
5) Idle, which means that the process is being created by a "fork()" system
call and is not yet runnable.
6) Zombified, which means that the process has terminated but has not yet
returned its exit code to its parent.
A process remains a zombie until its parent accepts its return code using
the "wait()" system call.
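To make the zombie state concrete, here is a minimal C sketch (an illustrative
addition, not from the original notes): the parent duplicates itself with fork(),
the child terminates with an exit code, and the child remains a zombie until the
parent accepts that code with wait().

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* duplicate the calling process */
        if (pid == -1) {
            perror("fork");
            exit(1);
        }
        if (pid == 0)
            exit(42);                  /* child: terminate with exit code 42 */

        /* parent: the child stays zombified until wait() collects its code */
        int status;
        wait(&status);
        if (WIFEXITED(status))
            printf("child %d exited with code %d\n",
                   (int)pid, WEXITSTATUS(status));
        return 0;
    }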
PROCESS COMPOSITION
Every process is composed of several different pieces:
a code area, which contains the executable(text) portion of a
process
a data area, which is used by a process to contain static data
a stack area, which is used by a process to store temporary data
a user area, which holds housekeeping information about a
process
page tables, which are used by the memory management system
User Area
Every process in the system has some associated “housekeeping”
information that is used by the kernel for process management. This
information is stored in a data structure called a user area. Every process
has its own user area. User areas are created in the kernel’s data region
and are only accessible by the kernel; user processes may not access their
user areas.
Fields within a process’ user area include:
a record of how the process should react to each kind of signal
a record of the process’ open file descriptors
a record of how much CPU time the process has used recently
The Process Table
There is a single kernel data structure of fixed size called the process table
that contains one entry for every process in the system. The process table is
created in the kernel’s data region and is accessible only by the kernel.
Each entry contains the following information about each process:
its process ID(PID) and parent process ID(PPID)
its real and effective user ID(UID) and group ID(GID)
its state (running, runnable, sleeping, suspended, idle, or
zombified)
the location of its code, data, stack, and user areas
a list of all pending signals
PROCESS MANAGEMENT
When a process duplicates, the parent and child processes are virtually
identical (except for aspects like PIDs, PPIDs, and runtimes); the child's
code, data, and stack are a copy of the parent's, and the processes even
continue to execute the same code.
A child process may replace its code with that of another executable file,
thereby differentiating itself from its parent. When "init" starts executing,
it quickly duplicates several times. Each of the duplicate child processes
then replaces its code with that of the executable file called "getty", which
is responsible for handling user logins.
PROCESS MANAGEMENT SYSTEM CALLS
Name      Function
fork      duplicates a process
getpid    obtains a process' ID number
getppid   obtains a parent process' ID number
exit      terminates a process
wait      waits for a child process
exec      replaces the code, data, and stack of a process
A process may obtain its own process ID and parent process ID numbers
by using the "getpid()" and "getppid()" system calls, respectively.
Here's a synopsis of these system calls:
System Call: pid_t getpid(void)
             pid_t getppid(void)
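A tiny illustrative usage of these two calls (an addition, not part of the
original notes):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* every process can ask the kernel for its own and its parent's PID */
        printf("my PID is %d, my parent's PID is %d\n",
               (int)getpid(), (int)getppid());
        return 0;
    }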
The following steps describe how a system call is handled by an operating
system. To understand how the OS handles system calls, let us take the read
system call as an example.
The read system call has three parameters: the first one specifying the file, the
second one pointing to the buffer, and the third one giving the number of bytes to read.
Like nearly all system calls, it is invoked from C programs by calling a
library procedure with the same name as the system call: read.
A call from a C program might look like this:
count = read(fd, buffer, nbytes);
The system call returns the number of bytes actually read in count.
This value is normally the same as nbytes, but may be smaller if, for
example, end-of-file is encountered while reading.
If the system call cannot be carried out, either due to an invalid
parameter or a disk error, count is set to -1, and the error number is
put in a global variable, errno.
Programs should always check the results of a system call to see if an
error occurred.
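A short, self-contained sketch of that error-checking pattern (an illustrative
addition; the file name example.txt is an arbitrary choice):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buffer[128];
        int fd = open("example.txt", O_RDONLY);
        if (fd == -1) {
            fprintf(stderr, "open: %s\n", strerror(errno));
            return 1;
        }
        ssize_t count = read(fd, buffer, sizeof buffer);
        if (count == -1)                         /* read failed; errno says why */
            fprintf(stderr, "read: %s\n", strerror(errno));
        else
            printf("read %zd bytes\n", count);   /* may be smaller at end-of-file */
        close(fd);
        return 0;
    }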
System calls are performed in a series of steps.
To make this concept clearer, let us examine the read call discussed
above.
In preparation for calling the read library procedure, which actually
makes the read system call, the calling program first pushes the
parameters onto the stack, as shown in steps 1-3 in Fig.
The first and third parameters are called by value, but the second
parameter is passed by reference, meaning that the address of the
buffer (indicated by &) is passed, not the contents of the buffer.
Then comes the actual call to the library procedure (step 4). This
instruction is the normal procedure call instruction used to call all
procedures.
The library procedure, possibly written in assembly language, typically
puts the system call number in a place where the operating system
expects it, such as a register (step 5).
Figure: The 11 steps in making the system call read(fd, buffer, nbytes).
Process management
Call                                    Description
pid = fork()                            Create a child process identical to the parent
pid = waitpid(pid, &statloc, options)   Wait for a child to terminate
s = execve(name, argv, environp)        Replace a process' core image
exit(status)                            Terminate process execution and return status
File management
Call                                    Description
fd = open(file, how, ...)               Open a file for reading, writing, or both
s = close(fd)                           Close an open file
n = read(fd, buffer, nbytes)            Read data from a file into a buffer
n = write(fd, buffer, nbytes)           Write data from a buffer into a file
position = lseek(fd, offset, whence)    Move the file pointer
s = stat(name, &buf)                    Get a file's status information
Directory and file system management
Call                                    Description
s = mkdir(name, mode)                   Create a new directory
s = rmdir(name)                         Remove an empty directory
s = link(name1, name2)                  Create a new entry, name2, pointing to name1
s = unlink(name)                        Remove a directory entry
s = mount(special, name, flag)          Mount a file system
s = umount(special)                     Unmount a file system
Miscellaneous
Call                                    Description
s = chdir(dirname)                      Change the working directory
s = chmod(name, mode)                   Change a file's protection bits
s = kill(pid, signal)                   Send a signal to a process
seconds = time(&seconds)                Get the elapsed time since Jan. 1, 1970
Table: Some of the major POSIX system calls.
Once the system call handler has completed its work, control may be
returned to the user-space library procedure at the instruction following the
TRAP instruction (step 9).
This procedure then returns to the user program in the usual way that
procedure calls return (step 10).
To finish the job, the user program has to clean up the stack, as it does
after any procedure call (step 11).
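As an illustrative exercise (not part of the original notes), several of the
POSIX calls listed in the table above can be combined into a minimal file-copy
program; the file names src.txt and dst.txt are arbitrary:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int in  = open("src.txt", O_RDONLY);
        int out = open("dst.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in == -1 || out == -1) {
            perror("open");
            return 1;
        }
        /* read a block and write it out until read() returns 0 at end-of-file */
        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, n) != n) {
                perror("write");
                return 1;
            }
        }
        close(in);
        close(out);
        return 0;
    }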
UNIX SIGNALS
Value Name Description
01 SIGHUP Hang up; sent to process when kernel assumes that the user
of that process is doing no useful work
02 SIGINT Interrupt
03 SIGQUIT Quit; sent by user to induce halting of process and
production of core dump
04 SIGILL Illegal instruction
05 SIGTRAP Trace trap; triggers the execution of code for process tracing
06 SIGIOT IOT instruction
07 SIGEMT EMT instruction
08 SIGFPE Floating-point exception
09 SIGKILL Kill; terminate process
10 SIGBUS Bus error
11 SIGSEGV Segmentation violation; process attempts to access location
outside its virtual address space
12 SIGSYS Bad argument to system call
13 SIGPIPE Write on a pipe that has no readers attached to it
14 SIGALRM Alarm clock; issued when a process wishes to receive a signal
after a period of time
15 SIGTERM Software termination
16 SIGUSR1 User-defined signal 1
17 SIGUSR2 User-defined signal 2
18 SIGCHLD Death of a child
19 SIGPWR Power failure
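To illustrate how a process reacts to one of these signals, here is a minimal
C sketch (an addition, not part of the original table) that installs a handler
for SIGINT using the POSIX sigaction() call:

    #include <stdio.h>
    #include <string.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    /* handler: only record that the signal arrived */
    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);   /* catch signal 02 (interrupt) */

        while (!got_sigint)
            pause();                    /* sleep until a signal is delivered */

        printf("caught SIGINT, exiting cleanly\n");
        return 0;
    }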
QUESTION BANK
UNIT-I
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Define an Operating system?  L1  CO1
2  Why is the Operating System viewed as a resource allocator & control program?  L2  CO1
3  What is the Kernel?  L1  CO1
4  What are Batch systems?  L1  CO1
5  Discuss the advantages of multilevel feedback queue scheduling.  L2  CO1
6  What do you mean by Time-sharing systems?  L1  CO1
7  Describe multiprocessor systems & give their advantages. What are the different types of multiprocessing?  L2  CO1
8  Discuss the fundamental goals of OS.  L2  CO1
PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Explain the various types of computer systems.  L2  CO1
UNIT-II
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What is the use of job & process?  L1  CO2
2  What is a process?  L1  CO2
PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Discuss about the various scheduling algorithms.  L2  CO2
UNIT-III
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What is the critical section problem?  L1  CO3
9  What is Mutual Exclusion?  L1  CO3
PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Explain the implementation of the producers/consumers problem using a monitor.  L2  CO3
2  Examine the classic problems of synchronization.  L4  CO3
UNIT-IV
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What is multitasking?  L1  CO4
PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Discuss in detail about memory management in VxWorks.  L2  CO4
2  Explain in detail about Virtual to Physical Address mapping in VxWorks.  L2  CO4
3  Compare RTOS and VxWorks.  L4  CO4
UNIT-V
PART-A: SHORT ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What is the Unix Kernel?  L1  CO5
7  What are the special files and ordinary files in the Unix system?  L1  CO5
8  What is the use of pipes and sockets in Unix?  L1  CO5
PART-B: LONG ANSWER QUESTIONS
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Explain process management in Unix.  L2  CO5
ASSIGNMENT QUESTIONS
UNIT-I & II
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What is an operating System? Explain the abstract view of the components of a computer system.  L1  CO1
2  Explain the view of the OS as a Resource Manager.  L2  CO1
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Explain different types of operating system structure.  L2  CO1
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  What are the problems of synchronization?  L1  CO3
2  What is a Semaphore?  L1  CO3
S.NO  QUESTION  Blooms Taxonomy Level  Course Outcome
1  Explain about the Dining Philosophers Problem.  L2  CO3
PREVIOUS UNIVERSITY QUESTION PAPERS
INTERNAL QUESTION PAPERS WITH KEY
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)
[Charts: result analysis of the internal test showing the distribution of marks across Bloom's taxonomy levels (Level 1, Level 2, Level 3) and course outcomes (CO 1, CO 2, CO 3).]
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)
PART-A
3. What is a semaphore?
Ans: Semaphores are integer variables that are used to solve the critical section
problem by means of two atomic operations, wait and signal, which are used for
process synchronization.
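As an illustrative sketch (not part of the answer key), the wait and signal
operations correspond to the POSIX calls sem_wait() and sem_post(); the shared
counter below is an arbitrary example of a resource protected by a semaphore
initialized to 1:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    static sem_t s;                    /* semaphore guarding the critical section */
    static int shared_counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        sem_wait(&s);                  /* wait: decrement, or block if zero */
        shared_counter++;              /* critical section */
        sem_post(&s);                  /* signal: increment and wake a waiter */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);            /* initial value 1 gives mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);
        sem_destroy(&s);
        return 0;
    }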
PART-B
Advantages include:
  Increased throughput
  Economy of scale
  Increased reliability – graceful degradation or fault tolerance
Two types:
  Asymmetric Multiprocessing – each processor is assigned a specific task
  Symmetric Multiprocessing – each processor performs all tasks
Other related designs include Dual-Core Design and Clustered Systems.
Some clusters are used for high-performance computing (HPC); applications must
be written to use parallelization. Some have a distributed lock manager (DLM)
to avoid conflicting operations.
Selection criteria: The process that requests first is served first. It means
that processes are served in the exact order of their arrival.
Decision mode: Non-preemptive. Once a process is selected, it runs until it is
blocked for an I/O or some event, or it is terminated.
Implementation: This strategy can be easily implemented by using a FIFO
(First In, First Out) queue. When the CPU becomes free, the process at the
first position in the queue is selected to run.
Example: Consider the following set of three processes. The time required by
each to complete execution is given in the following table; all time values
are in milliseconds.
Process  Burst Time
P1       24
P2       3
P3       3
If the processes are served in the order P1, P2, P3, the Gantt chart is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27 (average = 17 ms)
If the processes are served in the order P2, P3, P1, the Gantt chart is:
| P2 | P3 | P1 |
0    3    6    30
Waiting time for P1 = 6; P2 = 0; P3 = 3 (average = 3 ms)
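For illustration (not part of the original answer), the waiting-time arithmetic
can be reproduced in a few lines of C, assuming all processes arrive at time 0
and are served in the order P1, P2, P3:

    #include <stdio.h>

    int main(void)
    {
        /* burst times in service order P1, P2, P3 */
        int burst[] = {24, 3, 3};
        int n = 3, wait = 0, total = 0;

        for (int i = 0; i < n; i++) {
            /* each process waits for the bursts of everyone ahead of it */
            printf("P%d waits %d ms\n", i + 1, wait);
            total += wait;
            wait += burst[i];
        }
        printf("average waiting time = %.2f ms\n", (double)total / n);
        return 0;
    }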
• In this problem there are some processes (called readers) that only read the
shared data and never change it, and there are other processes (called writers)
which may change the data in addition to, or instead of, reading it.
• There are various types of readers-writers problems, most centered on the
relative priorities of readers and writers.
• The main complexity of this problem stems from allowing more than one reader
to access the data at the same time.
The Problem Statement:
From the above problem statement, it is evident that readers have higher
priority than writers: if a writer wants to write to the resource, it must wait
until there are no readers currently accessing that resource.
Here, we use one mutex m and a semaphore w. An integer variable read_count is
used to maintain the number of readers currently accessing the resource;
read_count is initialized to 0. A value of 1 is given initially to both m and w.
Instead of having each process acquire a lock on the shared resource itself, we
use the mutex m to acquire and release a lock whenever the read_count variable
is being updated.
The code for the writer process looks like this:
while (TRUE)
{
    wait(w);        // acquire the write semaphore
    /* perform the write operation */
    signal(w);      // release the write semaphore
}
And, the code for the reader process looks like this:
while (TRUE)
{
    // acquire lock before updating read_count
    wait(m);
    read_count++;
    if (read_count == 1)
        wait(w);    // the first reader locks out writers
    // release lock
    signal(m);
    /* perform the read operation */
    // acquire lock before updating read_count
    wait(m);
    read_count--;
    if (read_count == 0)
        signal(w);  // the last reader lets writers back in
    // release lock
    signal(m);
}
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)
[Charts: result analysis of the internal test showing the distribution of marks across Bloom's taxonomy levels (Level 2, Level 3) and course outcomes (CO 3, CO 4, CO 5).]
Subject Code: PE522EC AY:2024-25
MATRUSRI ENGINEERING COLLEGE
16-1-486, Saidabad, Hyderabad-500059
(Approved by AICTE, Affiliated to Osmania University)
PART-A
1. Define Mutex.
A mutex (mutual exclusion) is a synchronization mechanism in computer
programming that allows multiple threads to access a shared resource, but
only one thread at a time. It's a type of lock that protects data and other
resources from concurrent access. Essentially, a mutex ensures that a critical
section of code, where shared resources are manipulated, is only executed by
one thread at a time, preventing race conditions and data corruption.
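A minimal illustrative sketch (not part of the answer key) of a mutex protecting
a critical section, using POSIX threads; the shared counter is an arbitrary
stand-in for a shared resource:

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;               /* shared resource */

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);     /* only one thread enters at a time */
            counter++;                     /* critical section */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* 200000 with the mutex in place */
        return 0;
    }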
(i) Multitasking:
VxWorks allows multiple tasks to run concurrently, ensuring that time-critical
operations are executed promptly.
PART-B
4. Explain about deadlock avoidance with resource allocation graphs.
Deadlock avoidance using a resource allocation graph (RAG) involves
preventing potential cycles in the graph, which can indicate a deadlock. This
is achieved by requiring all processes to claim their future resource requests
before any actual requests are made. By tracking these claim edges, the
system can ensure that granting a request won't lead to a circular wait.
1. Task Creation:
When a task needs to be created, a user task calls the taskSpawn() function
(or a similar creation routine); a short usage sketch follows this list.
This function allocates memory for the task's stack and TCB (Task
Control Block).
The TCB is initialized with information about the task, including its
priority, stack size, and entry point.
The task's code is loaded into the memory space.
Resources are allocated for the task, such as memory and access to
system objects.
2. Task Activation:
After creation, the task needs to be activated to start executing.
This involves initializing the task's context within its TCB.
The TCB is placed onto the ready queue, where the scheduler will select
it for execution based on its priority.
3. Task Names and IDs:
Task Names: Task names are character strings that can be used for
debugging and identifying tasks.
Task IDs: Task IDs are unique system-generated identifiers assigned to
each task. They are used in many task control functions.
Both names and IDs are essential for managing and interacting with
tasks in the VxWorks system.
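For illustration only, a sketch of spawning a task follows. It assumes the
classic taskSpawn() interface; the task name "tHello", priority 100, and
4096-byte stack are arbitrary choices, and exact argument and error-return
conventions vary across VxWorks versions:

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <stdio.h>

    /* entry point for the new task */
    void helloTask(void)
    {
        printf("hello from task %d\n", (int)taskIdSelf());
    }

    void spawnExample(void)
    {
        /* name, priority (0 = highest), options, stack size, entry point,
           then up to ten integer arguments for the entry routine */
        int tid = taskSpawn("tHello", 100, 0, 4096, (FUNCPTR)helloTask,
                            0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        if (tid == ERROR)
            printf("taskSpawn failed\n");
    }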
6. (a) Discuss concurrent execution and interrupts.
Concurrent execution and interrupts are fundamental concepts in computing,
allowing systems to handle multiple tasks or processes seemingly
simultaneously. Concurrent execution enables a system to run multiple instructions
or processes at the same time, while interrupts provide a mechanism for a processor
to pause its current task and respond to external events or requests.
Concurrent Execution:
Definition:
Concurrent execution refers to the ability of a system to perform multiple tasks or
processes in a way that they appear to be happening at the same time, even if they
are not physically happening simultaneously on a single core.
Examples:
Multiple applications running on a computer.
Multiple processes running in an operating system.
Threads within a single process executing concurrently.
Benefits:
Improved resource utilization and system efficiency.
Enhanced responsiveness and user experience.
Ability to handle background tasks without blocking the user
interface.
Implementation:
Multithreading: Allows a single process to execute multiple threads
concurrently (see the sketch after this list).
Operating Systems: Provide mechanisms for scheduling and managing
multiple processes, allowing them to be interleaved in time.
Parallel processing: Multiple processors or cores can execute different
parts of a program simultaneously, truly achieving parallel execution.
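A brief multithreading sketch of concurrent execution (an illustrative
addition; the two task names are arbitrary):

    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>

    /* two independent workers whose output interleaves non-deterministically */
    static void *task(void *arg)
    {
        const char *name = arg;
        for (int i = 0; i < 3; i++) {
            printf("%s: step %d\n", name, i);
            usleep(1000);              /* yield the CPU so the tasks interleave */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, task, "taskA");
        pthread_create(&b, NULL, task, "taskB");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }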
Interrupts:
Definition:
An interrupt is a signal that causes a processor to pause its current execution and
switch to a different code section, typically an Interrupt Service Routine (ISR), to
handle a specific event or request.
Types:
Hardware interrupts: Generated by external devices like keyboard,
mouse, or network card.
Software interrupts: Generated by program code, often used to
call Operating System services.
Purpose:
Responsive systems: Interrupts allow systems to react to events in a
timely manner.
Resource management: Interrupts enable efficient utilization of
resources by allowing the processor to switch between tasks.
Multitasking: Interrupts are crucial for enabling multitasking in
operating systems.
Example:
A computer receiving a message from a network card (hardware
interrupt).
A program calling a function from the operating system (software
interrupt).
Relationship between Concurrent Execution and Interrupts:
(b) Explain basic level programming with system calls.
At its core, basic-level programming with system calls involves using special
instructions (system calls) to request services from the operating system
kernel. These system calls act as an interface between a user program and the kernel,
allowing the program to interact with hardware and manage system resources.
Elaboration:
2. Context Switching:
When a program makes a system call, the processor switches from user mode (where
the program typically runs) to kernel mode (where the kernel runs). This is known
as a context switch. The system call handler in the kernel then handles the requested
service.
5. Importance:
System calls are essential for building robust and versatile programs that can
interact with the operating system and utilize its resources effectively. They provide
a standardized interface for programs to access the kernel's capabilities.
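A minimal example of such basic-level programming (an illustrative addition):
the program below bypasses the stdio library's printf and calls the write()
system call directly on file descriptor 1, standard output.

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello via a raw system call\n";
        /* fd 1 is standard output; write() traps into the kernel */
        write(1, msg, strlen(msg));
        /* _exit() terminates the process via a system call, skipping
           the C library's cleanup (shown here only for illustration) */
        _exit(0);
    }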
CONTENT BEYOND SYLLABUS
Realtime Response
Realtime systems have to respond to external interactions in a predetermined
amount of time. Successful completion of an operation depends upon the
correct and timely operation of the system. The hardware and the software in
the system must be designed to meet the Realtime requirements. For example, a
telephone switching system must feed dial tone to thousands of subscribers
within a recommended limit of one second. To meet these requirements, the
off-hook detection mechanism and the software message communication involved
have to work within the limited time budget. The system has to meet these
requirements for all the calls being set up at any given time.
The designers have to focus very early on the Realtime response requirements.
During the architecture design phase, the hardware and software engineers
work together to select the right system architecture that will meet the
requirements. This involves deciding the interconnectivity of the processors,
link speeds, processor speeds, etc. The main questions to be asked are:
Is the architecture simple? A complicated architecture may miss the
requirement due to even mild congestion. Thus, a simpler architecture has a
better chance of meeting the Realtime requirements.
Are the link speeds adequate? Generally, loading a link more than 40-50% is
a bad idea. A higher link utilization causes the queues to build up on
different nodes, thus causing variable amounts of delay in message
communication.
Are the processing components powerful enough? A CPU with really high
utilization will lead to unpredictable Realtime behavior. Also, it is possible
that the high priority tasks in the system will starve the low priority tasks
of any CPU time. This can cause the low priority tasks to misbehave. As with
links, keep the peak CPU utilization below 50%.
Is the Operating System suitable? Assign high priority to tasks that are
involved in processing Realtime critical events. Consider preemptive
scheduling if Realtime requirements are stringent. When choosing the
operating system, the interrupt latency and scheduling variance should be
verified.
Interrupt Latency refers to the delay with which the operating system can
handle interrupts and schedule tasks to respond to the interrupt. Again,
real-time operating systems have much lower interrupt latency.
Realtime systems must function reliably in the event of failures. These
failures can be internal as well as external. The following sections discuss
the issues involved in handling these failures.
Internal Failures
Internal failures can be due to hardware and software failures in the system.
The different types of failures you would typically expect are:
Software Failures in a Task: Unlike desktop applications, Realtime
applications do not have the luxury of popping up a dialog box and exiting on
detecting a failure. The tasks must be designed to safeguard against error
conditions. This becomes even more important in a Realtime system because the
sequence of events can result in a large number of scenarios. It may not be
possible to test all the cases in the laboratory environment. Thus, apply
defensive checks to recover from error conditions. Also, some software error
conditions might lead to a task hitting a processor exception. In such cases,
it might sometimes be possible to just roll back the task to its previous
saved state.
2. Recovering Context for the Failed Processor: When the failed processor
comes back up, it will have to recover all its lost context from other
processors in the system. There is always a chance of inconsistencies between
different processors in the system. In such cases, the system runs audits to
resolve any inconsistencies. Taking our switch example, once the control
processor comes up it will recover the status of subscriber ports from other
processors. To avoid any inconsistencies, the system initiates audits to
cross-check data structures on the different control processors.
External Failures
Realtime systems have to perform in the real world. Thus, they should recover
from failures in the external environment. Different types of failures that
can take place in the environment are:
Invalid Behavior of External Entities: When a Realtime system interacts with
external entities, it should be able to handle all possible failure conditions
from these entities. A good example of this is the way telephone switching
systems handle calls from subscribers. In this case, the system is interacting
with humans, so it should handle all kinds of failures, like:
A design that does not scale can reach a position where a single processor or
link becomes a bottleneck. This leads to costly redesign of the features to
improve system scalability.
Asynchronous Communication
It is said that the three most important things in Realtime system design are
timing, timing and timing. A brief look at any protocol will underscore the
importance of timing. All the steps in a protocol are described with an exact
timing specification for each stage. Most protocols will also specify how the
timing should vary with increasing load. Realtime systems deal with timing
issues by using timers. Timers are started to monitor the progress of events.
If the expected event takes place, the timer is stopped. If the expected event
does not take place, the timer will time out and recovery action will be
triggered.
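A minimal sketch of that timer pattern using the POSIX alarm() call and the
SIGALRM signal (an illustrative addition; the 5-second timeout is arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t timed_out = 0;

    static void on_timeout(int signo)
    {
        (void)signo;
        timed_out = 1;          /* the expected event did not arrive in time */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_timeout;
        sigaction(SIGALRM, &sa, NULL);

        alarm(5);               /* start the timer: SIGALRM fires after 5 s */

        /* wait for the expected event; this sketch just sleeps for a signal */
        pause();

        if (timed_out)
            printf("timeout: triggering recovery action\n");
        else
            alarm(0);           /* event arrived in time: stop the timer */
        return 0;
    }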
A race condition occurs when the state of a resource depends on timing factors
that are not predictable. This is best explained with an example. Telephone
exchanges have two-way trunks which can be used by either of the two exchanges
connected by the trunk. The problem is that both ends can allocate the trunk
at more or less the same time, thus resulting in a race condition: the same
trunk is allocated for an incoming and an outgoing call.
This race condition can be easily resolved by defining rules on who gets to
keep the resource when such a clash occurs. The race condition can also be
avoided by requiring the two exchanges to work from different ends of the
pool. Thus, there will be no clashes under low load. Under high load, race
conditions will be hit, and these will be resolved by the pre-defined rules.
A more conservative design would partition the two-way trunk pool into two
one-way pools. This would avoid the race condition but would fragment the
resource pool.
The main issue here is identifying race conditions. Most race conditions are
not as simple as this one. Some of them are subtle and can only be identified
by careful examination of the design.
RTOS APPLICATIONS
Many real-time systems are embedded in sensors and actuators and function
as digital controllers. The term plant refers to a controlled system, for
example, an engine, a brake, an aircraft or a patient. The state of the plant
is monitored by sensors and can be changed by actuators. The real-time
(computing) system estimates the current state of the plant from the sensor
readings and computes a control output based on the difference between the
current state and the desired state, which is the reference input.
Voice over IP (VoIP) uses the Internet Protocol (IP) to transmit voice as
packets over an IP network. So VoIP can be achieved on any data network that
uses IP, like the Internet, intranets and Local Area Networks (LANs). Here the
voice signal is digitized, compressed and converted to IP packets and then
transmitted over the IP network. It is an advancing technology used to
transmit voice over the internet or a local area network using the Internet
Protocol (IP). This technology provides enhanced features such as low cost
compared to the traditional Public Switched Telephone Network (PSTN). A VoIP
system costs about half as much as the traditional PSTN system in the field of
voice transmission. This is because of the efficient use of bandwidth,
requiring fewer long-distance trunks between switches. The voice over internet
protocol system is found to be a successful alternative to the traditional
PSTN communication system due to its advanced features. The voice signal is
processed through the internet-based network during the communication.
[Figure: conceptual diagram of a VoIP system.]
The original speech signal is fed into the system and speech samples are taken
from it.
The speech signal is then encoded with the G.711a and Speex speech encoders,
producing a compressed version of the input signal. G.711a is the standard
used for communication purposes and is a high bit rate Pulse Code Modulation
codec. It works at a sampling rate of 8 kHz and compresses 16-bit audio
samples into 8 bits. The Code Excited Linear Prediction (CELP) based Speex
codec is an open-source codec developed for packet networks and VoIP
applications. Speex supports three different sampling rates: narrowband
(8 kHz), wideband (16 kHz) and ultra-wideband (32 kHz).
The degraded VoIP signal is depacketized and then decoded with the G.711a and
Speex decoders.
The performance is evaluated with the Perceptual Evaluation of Speech Quality
(PESQ) measurement defined by ITU-T recommendation P.862. After comparing the
degraded signal with the original one, the PESQ measurement gives a subjective
measure as a Mean Opinion Score (MOS) value from -0.5 to 4.5.
METHODOLOGY USED TO IDENTIFY WEAK AND BRIGHT STUDENTS:
Keywords: student performance; social network analysis; educational data
mining; prediction; university information system.
3. E-material
END