CS3451 Introduction to Operating Systems - 2 Marks
UNIT I INTRODUCTION
PART – A
1. List and briefly define the four main elements of a computer? [R]
Processor – Controls the operation of the computer & performs its data processing functions.
Main memory – Stores data & programs. It is volatile.
I/O modules – Move data between the computer & its external environment, such as disks, communication equipment & terminals.
System Bus – Provides for communication among processors, main memory & I/O modules.
9. What is the distinction between spatial locality & temporal locality? [An]
Temporal locality refers to the reuse of specific data and/or resources within
relatively small time durations.
Spatial locality refers to the use of data elements within relatively close storage
locations.
Sequential locality, a special case of spatial locality, occurs when data elements
are arranged and accessed linearly, e.g., traversing the elements in a one-
dimensional array.
13. List the Services of operating system function. (NOV/DEC 2013) [R]
1. Program development
2. Program execution
3. User Interface
4. I/O Operations
5. File system Manipulation
6. Communication
7. Error Detection
8. Resource allocation
9. Accounting
10. Security
21. What does the CPU do when there are no user programs to run?
(NOV/DEC 2011) [R]
Even though there are no application programs running, the operating system is still running, so the CPU still has work to process; when there is truly nothing to do, it executes an idle loop (or a halt instruction) until an interrupt makes some process ready.
22. Describe the actions taken by a kernel to context-switch between
processes. [An] (NOV/DEC 2013)
In general, the operating system must save the state of the currently running
process and restore the state of the process scheduled to be run next. Saving the
state of a process typically includes the values of all the CPU registers in addition
to memory allocation. Context switches must also perform many architecture-
specific operations, including flushing data and instruction caches.
The boot (bootstrap) program loads the operating system into memory to start up the computer. This arrangement is known as bootstrapping.
29. Why API s need to be used rather than system calls?(APR/MAY 2015)
[An]
System calls are much slower than APIs (library calls), since each system call requires a switch into the OS (which then serves the system call). Most details of the OS interface are hidden from the programmer by the API and managed by the run-time support library (a set of functions built into libraries included with the compiler).
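For illustration (not part of the original answer), the same output can be produced either through the C library API or through the write() system-call wrapper on a POSIX system; the library call hides the trap into the kernel and adds buffering:

#include <stdio.h>      /* C library API                       */
#include <string.h>
#include <unistd.h>     /* write() system-call wrapper          */

int main(void)
{
    const char *msg = "hello\n";

    printf("hello\n");                       /* API: buffered, portable, no direct trap */
    write(STDOUT_FILENO, msg, strlen(msg));  /* system call: traps into the kernel      */
    return 0;
}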
30. Describe the actions taken by a thread library to context switch between
user-level threads. [An]
The user-level threads are known only within a given process. To context switch,
we only need to save the thread-specific context: the program counter, CPU
registers, and the thread-specific stack pointer
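As a concrete illustration (a sketch, not part of the original answer), the POSIX <ucontext.h> interface lets a user-level thread library save and restore exactly this per-thread state with getcontext()/makecontext()/swapcontext():

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[16384];              /* thread-specific stack */

static void thread_func(void)
{
    printf("user-level thread running\n");
    swapcontext(&thread_ctx, &main_ctx);      /* save PC/registers/SP, resume main */
}

int main(void)
{
    getcontext(&thread_ctx);                          /* initialise the context        */
    thread_ctx.uc_stack.ss_sp = thread_stack;         /* give the thread its own stack */
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link = &main_ctx;                   /* where to go if it returns     */
    makecontext(&thread_ctx, thread_func, 0);

    swapcontext(&main_ctx, &thread_ctx);              /* context switch: save main, run thread */
    printf("back in main\n");
    return 0;
}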
DMA (direct memory access) is a hardware mechanism that can move data to/from memory without using CPU instructions for every word transferred.
For instance, suppose a hardware device (let's say a PCI sound card) needs audio data to play back. The CPU configures the DMA controller, giving it a start address, a destination, and the number of bytes to copy. The transfer then occurs while the CPU does something else, instead of spoon-feeding the audio device.
DMA can be quite complex (scatter-gather, etc.) and varies by bus type and system.
34. Write the difference between Batch systems and Time sharing systems.
(Nov/Dec 2015) [An]
In batch systems, jobs are collected and executed one after another without user interaction, so the CPU is kept busy and is rarely idle. Time-sharing operating systems allow many users to share the computer simultaneously by rapidly switching the CPU among their jobs; time-sharing systems therefore require some form of memory management and protection.
35. How does an interrupt differ from a trap or what is the difference
between trap and interrupt? (NOV/DEC 2016) [An] (APR/MAY 2018)
An interrupt handler is called to deal with the cause of the interrupt; control is
then returned to the interrupted context and instruction. A trap is a software-
generated interrupt. An interrupt can be used to signal the completion of an I/O
to obviate the need for device polling.
Multicore systems have a single processor with multiple processing units. These
processing units are termed cores. On the other hand, multiprocessor systems
have two or more processors. A multiprocessor system is much more reliable
than a multicore system but a multiprocessor system has a complex configuration
compared to a multicore system. Both multicore processors and multiprocessors are used to speed up the computing process of the system.
UNIT II PROCESS MANAGEMENT
PART – A
1. Define Process?[R]
A Process can be thought of as a program in execution. A process will need
certain resources such as CPU time, memory, files & I/O devices to accomplish its
task.
Draw & briefly explain the process states?[U] or Name and draw five
different process states with proper definition. (NOV/DEC 2017)
New- The process is being created.
Running – Instructions are being executed
Waiting – The process is waiting for some event to occur
Ready – The process is waiting to be assigned a processor
Terminated – The process has finished execution.
2. What is process control block? List out the data field associated with PCB.
(APR/MAY2015)[R]
Each process is represented in the operating system by a process control block (PCB), also called a task control block. The data fields associated with a PCB include: process state, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information.
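A simplified sketch of a PCB as a C structure; the field names and layout are illustrative only, not taken from any real kernel:

#include <stdint.h>
#include <stdio.h>

/* Illustrative process control block */
struct pcb {
    int       pid;                 /* process identifier                 */
    int       state;               /* new, ready, running, waiting, ...  */
    uint64_t  program_counter;     /* saved PC                           */
    uint64_t  registers[16];       /* saved CPU registers                */
    int       priority;            /* CPU-scheduling information         */
    void     *page_table;          /* memory-management information      */
    long      cpu_time_used;       /* accounting information             */
    int       open_files[16];      /* I/O status information             */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = 0 };
    printf("pid %d, PCB size %zu bytes\n", p.pid, sizeof p);
    return 0;
}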
3. What is meant by context switching?[R]
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch.
4. Define co- operating process and independent process.[R]
Independent process:
A process is independent if it cannot affect or be affected by the other
processes executing in the system.
A process that does not share data with any other process is independent.
Cooperating process:
A process is co-operating if it can affect or be affected by other processes
executing in the system.
Any process that shares data with any other process is cooperating.
The benefits of multithreaded programming can be broken down into four major categories:
Responsiveness
Resource sharing
Economy
Scalability (utilization of multiprocessor architectures)
6. What is a thread?[R]
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares its code section, data section, and other operating-system resources (such as open files) with the other threads belonging to the same process.
14. What are the differences between user-level threads and kernel-level threads? (MAY/JUNE 2012) (MAY/JUNE 2016) (NOV/DEC 2015)[An]
User threads
User threads are supported above the kernel and are implemented by a thread library at the user level. Thread creation and scheduling are done in user space, without kernel intervention; therefore they are fast to create and manage. However, a blocking system call made by one thread will cause the entire process to block.
Kernel threads
Kernel threads are supported directly by the operating system. Thread creation, scheduling and management are done by the operating system, so they are slower to create and manage than user threads. However, if a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
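For example, the POSIX Pthreads library exposes thread creation and management through calls such as pthread_create() and pthread_join(); a minimal sketch:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("thread %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    pthread_create(&tid, NULL, worker, &id);  /* thread creation via the library */
    pthread_join(tid, NULL);                  /* wait for the thread to finish   */
    return 0;
}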
17. What are the different ways in which a thread can be cancelled?[An]
Cancellation of a target thread may occur in two different scenarios:
• Asynchronous cancellation: One thread immediately terminates the target
thread.
• Deferred cancellation: The target thread can periodically check if it should
terminate, allowing the target thread an opportunity to terminate itself in an
orderly fashion.
20. What are the requirements that a solution to the critical section
problem must satisfy?[R]
A solution must satisfy three requirements:
1. Mutual exclusion
2. Progress
3. Bounded waiting
24. Give two hardware instructions and their definitions which can be used
for implementing mutual exclusion.[An]
TestAndSet
#include <stdbool.h>

/* Returns the old value of *target and sets it to true.
   The hardware executes this as one atomic (non-interruptible) instruction. */
bool TestAndSet (bool *target)
{
    bool rv = *target;
    *target = true;
    return rv;
}

Swap
/* Atomically exchanges the contents of the two words a and b. */
void Swap (bool *a, bool *b)
{
    bool temp = *a;
    *a = *b;
    *b = temp;
}
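A runnable sketch of the same idea (an illustration, not part of the original answer) using C11's atomic_flag, which gives a hardware-backed test-and-set; two threads busy-wait on the flag to protect a shared counter:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* test-and-set flag, initially clear */
static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock))
            ;                                 /* busy wait (spin) until the lock is free */
        counter++;                            /* critical section                        */
        atomic_flag_clear(&lock);             /* release the lock                        */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);        /* 200000 when mutual exclusion holds */
    return 0;
}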
26. How may mutual exclusion be violated if the signal and wait
operations are not executed atomically? (MAY/JUNE 2014)[An]
A wait operation atomically decrements the value associated with a semaphore. Suppose two wait operations are executed on a semaphore when its value is 1. If the two operations are not performed atomically, then it is possible that both operations proceed to decrement the semaphore value, thereby violating mutual exclusion.
27. Define CPU scheduling.[R]
CPU scheduling is the process of switching the CPU among various processes. CPU
scheduling is the basis of multi programmed operating systems. By switching the
CPU among processes, the operating system can make the computer more
productive.
34. Write the four situations under which CPU scheduling decisions take
place
(MAY/JUNE 2014) [R]
CPU scheduling decisions take place under one of four conditions:
When a process switches from the running state to the waiting state, such
as for an I/O request or invocation of the wait ( ) system call.
When a process switches from the running state to the ready state, for
example in response to an interrupt.
When a process switches from the waiting state to the ready state, say at
completion of I/O or a return from wait ( ).
When a process terminates.
37. What are conditions under which a deadlock situation may arise?
(MAY/JUNE 2009 , MAY/JUNE 2012, MAY/JUNE 2013) (NOV/DEC 2013) [R]
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
a. Mutual exclusion
b. Hold and wait
c. No pre-emption
d. Circular wait
40. What are the methods for handling deadlocks? (APRIL/MAY 2011)[R]
The deadlock problem can be dealt with in one of the three ways:
1. Use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlock state.
2. Allow the system to enter the deadlock state, detect it and then recover.
3. Ignore the problem altogether, and pretend that deadlocks never occur in the system.
system.
41. How does real-time scheduling differ from normal scheduling?
(NOV/DEC 2012) [R]
In normal scheduling, there are two types of processes: user processes and kernel processes. Kernel processes have time constraints, whereas user processes do not.
In an RTOS, all processes are kernel processes, and time constraints must be strictly followed. All processes/tasks (the terms can be used interchangeably) are priority-based, and meeting time constraints is essential for the system to run correctly.
43. What is the concept behind strong semaphore and spinlock? (NOV/DEC
2015) [R]
A spinlock is one possible implementation of a lock, namely one that is
implemented by busy waiting ("spinning"). A semaphore is a generalization of a
lock (or, the other way around, a lock is a special case of a semaphore). Usually,
but not necessarily, spinlocks are only valid within one process whereas
semaphores can be used to synchronize between different processes, too.
A semaphore has a counter and will allow itself to be acquired by one or several threads, depending on the value posted to it and (in some implementations) on its maximum allowable value. A strong semaphore additionally keeps its blocked processes in a FIFO queue, so the process that has waited longest is released first, which rules out starvation.
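A brief POSIX sketch of the distinction: a counting semaphore (<semaphore.h>) can admit several threads at once, while a spinlock (<pthread.h>) admits exactly one and busy-waits; the thread count and semaphore value below are arbitrary:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem;                    /* counting semaphore           */
static pthread_spinlock_t spin;      /* spinlock (busy-waiting lock) */
static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&sem);                          /* may block; decrements the count  */
    pthread_spin_lock(&spin);                /* spins until the lock is free     */
    counter++;                               /* critical section                 */
    pthread_spin_unlock(&spin);
    sem_post(&sem);                          /* increments the count             */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&sem, 0, 2);                    /* at most 2 threads admitted at once */
    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);

    printf("counter = %d\n", counter);
    sem_destroy(&sem);
    pthread_spin_destroy(&spin);
    return 0;
}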
43. What is the meaning of the term busy waiting? (May/Jun 2016) [R]
Busy waiting means that a process waits for a condition to be satisfied in a tight loop without relinquishing the processor. Alternatively, a process could wait by relinquishing the processor, blocking on the condition, and waiting to be awakened at some appropriate time in the future.
44. Distinguish between CPU-bounded and I/O bounded processes
(NOV/DEC 2016) [An]
CPU Bound means the rate at which process progresses is limited by the speed of
the CPU. A task that performs calculations on a small set of numbers, for example
multiplying small matrices, is likely to be CPU bound.
I/O Bound means the rate at which a process progresses is limited by the speed
of the I/O subsystem. A task that processes data from disk, for example, counting
the number of lines in a file is likely to be I/O bound.
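An illustrative comparison in C (the file name data.txt is hypothetical):

#include <stdio.h>

int main(void)
{
    /* CPU-bound: progress limited by processor speed */
    double sum = 0.0;
    for (long i = 1; i < 50000000L; i++)
        sum += 1.0 / i;

    /* I/O-bound: progress limited by the I/O subsystem (counting lines in a file) */
    FILE *fp = fopen("data.txt", "r");   /* "data.txt" is a hypothetical input file */
    long lines = 0;
    int c;
    if (fp) {
        while ((c = fgetc(fp)) != EOF)
            if (c == '\n') lines++;
        fclose(fp);
    }

    printf("sum = %f, lines = %ld\n", sum, lines);
    return 0;
}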
45. What resources are required to create threads (NOV/DEC 2016) [R]
When a thread is created, it does not require many new resources of its own: it shares the resources (such as memory) of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space.
Challenges in multicore programming:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
UNIT III MEMORY MANAGEMENT
1. Why are page sizes always powers of 2? [An]
Recall that paging is implemented by breaking up an address into a page number and an offset. It is most efficient to break the address into X page bits and Y offset bits, rather than perform arithmetic on the address to calculate the page number and offset. Because each bit position represents a power of 2, splitting an address between bits results in a page size that is a power of 2.
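For example, with 4 KB pages (2^12 bytes) the page number and offset are obtained with a shift and a mask rather than division; a small sketch assuming 32-bit logical addresses:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                 /* 2^12 bytes */
#define OFFSET_BITS 12u
#define OFFSET_MASK (PAGE_SIZE - 1u)

int main(void)
{
    uint32_t logical = 0x0001A3F7u;               /* example logical address */
    uint32_t page    = logical >> OFFSET_BITS;    /* X page bits             */
    uint32_t offset  = logical & OFFSET_MASK;     /* Y offset bits           */
    printf("page = %u, offset = %u\n", page, offset);
    return 0;
}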
10. List the strategies for managing free memory in kernel? [R]
1. Buddy System
2. Slab Allocation
Buddy System: The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. Memory is allocated in power-of-2 units, so each request is rounded up to, and satisfied by, the next power of 2.
Slab Allocation: A slab is made up of one or more physically contiguous pages, and a cache consists of one or more slabs. The slab allocator uses caches to store kernel objects.
15. What do you mean by thrashing? (APR/MAY 2015) [R] (APR/MAY 2019)
Thrashing is the coincidence of high page traffic and low CPU utilization.
23. What is the difference between demand paging and pure demand paging? [R]
In demand paging, a page is not loaded into main memory until it is needed. In pure demand paging, not even a single page is loaded into memory initially; hence pure demand paging causes a page fault on the very first reference.
26. What are the common strategies to select a free hole from a set of
available holes? [R]
The most common strategies are
A. First fit B. Best fit C. Worst fit
30. What are the various page replacement algorithms used for page
replacement?[R]
FIFO page replacement
Optimal page replacement
LRU page replacement
LRU-approximation page replacement (second chance / clock)
Counting-based page replacement (LFU, MFU)
32. What are the major problems to implement demand paging? [R]
The two major problems in implementing demand paging are developing a frame-allocation algorithm and a page-replacement algorithm.
To limit the effect of thrashing we can use a local replacement algorithm. With local replacement, if a process starts thrashing it cannot steal frames from another process and cause the latter to thrash as well. The problem is not entirely solved, however: the effective access time will increase even for a process that is not thrashing.
The Global Descriptor Table or GDT is a data structure used by Intel x86-family
processors starting with the 80286 in order to define the characteristics of the
various memory areas used during program execution, including the base
address, the size and access privileges like executability and writability. These
memory areas are called segments.
The Local Descriptor Table (LDT) is a memory table used in the x86
architecture in protected mode and contains memory segment descriptors: start
in linear memory, size, executability, writability, access privilege, actual presence
in memory, etc.
The LDT is supposed to contain memory segments which are private to a
specific program, while the GDT is supposed to contain global segments.
The LDT is the sibling of the Global Descriptor Table (GDT) and defines up to 8192 memory segments accessible to programs:
Unlike the GDT, the zeroth entry is a valid entry, and can be used like any
other LDT entry.
Unlike the GDT, the LDT cannot be used to store certain system entries:
TSSs or LDTs.
38. Define demand paging in memory management. What are the steps
required to handle a page fault in demand paging. (Nov/Dec 2015) [R]
A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory; instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced. While executing, if the program references a page that is not available in main memory (because it was swapped out earlier), the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which brings the demanded page back into memory.
Steps to handle a page fault: check the process's internal table to see whether the reference was valid; if it was invalid, terminate the process; otherwise find a free frame, schedule a disk operation to read the desired page into that frame, update the internal table and the page table, and restart the instruction that caused the fault.
39. How does the system detect thrashing? (May/Jun 2016) [An]
The system detects thrashing by evaluating CPU utilization against the degree of multiprogramming: CPU utilization that stays low while the degree of multiprogramming is high (equivalently, a page-fault rate that stays above an acceptable upper bound) indicates thrashing.
40. Name two differences between logical and physical addresses. (May/Jun
2016) [R]
A logical address does not refer to an actual existing address; rather, it refers to
an abstract address in an abstract address space. Contrast this with a physical
address that refers to an actual physical address in memory. A logical address is
generated by the CPU and is translated into a physical address by the memory
management unit(MMU). Therefore, physical addresses are generated by the
MMU.
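A small sketch of the MMU's job using a simple relocation (base) register and a limit register; the register values and logical address below are illustrative:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t base = 14000, limit = 3000;     /* illustrative register values   */
    uint32_t logical = 346;                  /* CPU-generated logical address  */

    if (logical < limit)
        printf("physical address = %u\n", base + logical);  /* MMU relocation  */
    else
        printf("trap: addressing error\n");                 /* protection check */
    return 0;
}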
42. What is the purpose of paging the page tables (NOV/DEC 2016)[R]
A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are used by the accessing process, while physical addresses are used by the hardware, or more specifically, by the RAM subsystem. For a large logical address space the page table itself becomes too large to store in one contiguous block of memory, so the page table is itself paged (hierarchical paging), allowing it to be stored non-contiguously and brought into memory only as needed.
43. What is the difference between a user-level instruction and a privilege
instruction? Which of the following instruction should be privileged and
only allowed to execute in kernel mode?[An]
(a) Load a value from a memory address to a general-purpose register
(b) Set a new value in the program counter (PC) register
(c)Turn off interrupts (APR/MAY 2017)
A user-level instruction can be executed in user mode, whereas a privileged instruction can be executed only in kernel mode; an attempt to execute a privileged instruction in user mode traps to the operating system. Of the three, (c) turning off interrupts must be a privileged instruction; (a) and (b) are ordinary user-level operations.
44. Will the optimal page replacement algorithm suffer from Belady's anomaly?
Justify your answer (APR/MAY 2017)[An]
In computer storage, Belady's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. It is commonly experienced with the first-in first-out (FIFO) page replacement algorithm. The optimal algorithm does not suffer from Belady's anomaly, because it is a stack algorithm: the set of pages held in n frames is always a subset of the set held in n + 1 frames, so adding frames can never increase the number of faults.
45. Write about swapping. Let us assume the user process is of size 1MB and the
backing store is a standard hard disk with a transfer rate of 5MBPS. Calculate the
transfer time. [E] (NOV/DEC 2017)
Swapping: A process can be swapped temporarily out of memory to a backing
store(SWAP OUT)and then brought back into memory for continued
execution(SWAP IN).
Let us assume the user process is of size 1MB & the backing store is a standard
hard disk with a transfer rate of 5MBPS.
Transfer time = 1000KB/5000KB per second
= 1/5 sec = 200ms
46. Consider the following page-reference string : [E] (NOV/DEC 2017)
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.
How many page faults would occur, and what is the page-fault ratio, for the FIFO
page replacement algorithm? Assume there are four frames.
The number of page faults = 12 (every reference is to a new page).
The page-fault ratio = 12/12 = 1 (i.e., every reference causes a fault).
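A short FIFO simulation (a sketch added for illustration) confirms the count: with four frames, every reference in this string is to a new page, so all 12 references fault:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    int refs[] = {1,2,3,4,5,6,7,8,9,10,11,12};
    int n = sizeof refs / sizeof refs[0];
    int frames[4] = {-1,-1,-1,-1};           /* four empty frames      */
    int next = 0, faults = 0;                /* FIFO replacement index */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int f = 0; f < 4; f++)
            if (frames[f] == refs[i]) { hit = true; break; }
        if (!hit) {                          /* page fault: replace the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % 4;
            faults++;
        }
    }
    printf("page faults = %d out of %d references\n", faults, n);
    return 0;
}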
48. What is the counting based page replacement algorithm? [U] (APR/MAY
2018)
In this algorithm, we keep a counter of the number of references that have been made to each page. Two schemes are used:
i) Least Frequently Used (LFU) page replacement: the page with the smallest count is replaced.
ii) Most Frequently Used (MFU) page replacement: based on the argument that the page with the smallest count was probably just brought in and has yet to be used, so the page with the largest count is replaced.
What are the physical addresses for the logical addresses 3400 and 0110?
(APR/MAY 2019)
a. Illegal reference; traps to the operating system.
b. 2300 + 0110 = 2410
50. What is thrashing? and how to resolve this problem? (APR/MAY 2019)
In a virtual storage system (an operating system that manages its logical storage
or memory in units called pages), thrashing is a condition in which excessive
paging
operations are taking place. A system that is thrashing can be perceived as either
a very slow system or one that has come to a halt.
One of the recommended ways to eliminate thrashing is to add more memory to
main memory. Another way of resolving the issue of thrashing is by adjusting the
size of the swap file.
The attributes of a file include:
Name
Type
Location
Size
Protection
Time
Date
User identification
3) What are the various file operations? (NOV/DEC 2012, APRIL/MAY 2015)
[R]
The six basic file operations are:
Creating a file
Writing a file
Reading a file
Repositioning within a file
Deleting a file
Truncating a file
4) What are all the pieces of information associated with an open file? [R]
Several pieces of information are associated with an open file which may be:
File pointer
File open count
Disk location of the file
Access rights
8) What are the most common schemes for defining the logical structure of
a directory? [R] (MAY/JUNE 2012)
The most common schemes for defining the logical structure of directory
Single-Level Directory
Two-level Directory
Tree-Structured Directories
Acyclic-Graph Directories
General Graph Directory
The file system is composed of many different levels. Each level in the design uses the features of the lower levels to create new features for use by higher levels.
i. Application programs
ii. Logical file system
iii. File-organization module
iv. Basic file system
v. I/O control vi Devices
The time taken by the head to move to the appropriate cylinder or track is called the seek time. Once the head is at the right track, it must wait until the desired block rotates under the read-write head; this delay is called the latency time (rotational latency).
Indexed allocation
16) What are the advantages of Contiguous allocation? [R]
The advantages are:
Both sequential and direct access are supported.
The number of disk seeks required is minimal, so access is fast.
20) How can the index blocks be implemented in the indexed allocation
scheme? [R]
The index block can be implemented as follows
Linked scheme
Multilevel scheme
Combined scheme
A typical file control block (FCB) contains:
File permissions
File dates (create, access, write)
File owner, group, ACL
File size
File data blocks (or pointers to the data blocks)
29) What are the information contained in a boot control block and
partition control block? (MAY/JUNE 2014) [R]
Boot control block:
Contains information needed by the system to boot an operating system from that partition. If the disk does not contain an operating system, this block can be empty. It is typically the first block of a partition. In UFS, this is called the boot block.
Partition Control block:
Contains partition details, such as number of blocks in the partition, size of the
blocks, free block count and free block pointers, and free FCB count and FCB
pointers.
C-SCAN Scheduling
35) What is the need for disk scheduling? (NOV/DEC 2012) [R]
In operating systems, seek time is very important. Since all device requests are
linked in queues, the seek time is increased causing the system to slow down.
Disk Scheduling Algorithms are used to reduce the total seek time of any
request.
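For instance, the total head movement under FCFS can be computed directly from the request queue; the cylinder queue and initial head position below are hypothetical:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};   /* hypothetical cylinder requests     */
    int head = 53, total = 0;                            /* hypothetical initial head position */
    int n = sizeof queue / sizeof queue[0];

    for (int i = 0; i < n; i++) {            /* FCFS: service requests in arrival order */
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("total head movement (FCFS) = %d cylinders\n", total);
    return 0;
}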
42) What Characteristics determine the disk access speed? (MAY /JUNE
2012) [R]
Seek time
Rotational latency
Command processing time
Settle time
43) Give the importance of Swap space Management. [R] (NOV/DEC 2012,
APR/MAY2010, NOV/DEC 2010)
Swap-space management: Swap-space management is a low-level task of the operating system. The main goal of the design and implementation of swap space is to provide the best throughput for the virtual memory system.
Swap-space use: The operating system needs to release sufficient main memory
to bring in a process that is ready to execute. Operating system uses this swap
space in various ways. Paging systems may simply store pages that have been
pushed out of main memory. The UNIX operating system allows the use of multiple swap spaces. These swap spaces are usually put on separate disks, so the load placed on the I/O system by paging and swapping can be spread over the system's I/O devices.
Swap-space location: Swap space can reside in one of two places: it can be carved out of the normal file system, or it can be placed in a separate raw disk partition.
44) Write three basic functions which are provided by the hardware clocks
and timers. (APRIL/MAY 2011) [R]
Hardware clocks and timers provide three basic functions: give the current time, give the elapsed time, and set a timer to trigger an operation at a given time. Typical RTOS interfaces to these functions include:
OSTickInit()
OSTimeSet()
OSTimeGet()
49) Draw the diagram for interrupt driven I/O cycle? (NOV/DEC 2014) [Ap]
51) Identify the two important functions of Virtual File System (VFS) layer
in the concept of file system implementation. (Nov/Dec 2015) [R]
The VFS layer serves two important functions:
1. It separates file-system-generic operations from their implementation by defining a clean VFS interface, so several different file systems can coexist and be accessed uniformly.
2. It provides a mechanism for uniquely representing a file throughout a network (for example, via the vnode), so local and remote files are handled in the same way.
Linux VFS provides a set of common functionalities for each file system, using function pointers accessed through a table. The same functionality is accessed through the same table position for all file system types, though the actual functions pointed to by the pointers may be file-system-specific. Common operations provided include open( ), read( ), write( ), and mmap( ).
Therefore we can seek over 195 tracks (about 4% of the disk) during an average rotational latency.
59) Enlist different types of directory structure. [R](NOV/DEC 2017)
There are many types of directory structure in an operating system. They are as follows:
1) Single Level Directory
2) Two Level Directory
3) Tree Structured Directory
4) Acyclic Graph Directory
5) General Graph Directory
60) State the typical bad-sector transactions. [R] (APR/MAY 2018)
Typical bad-sector transactions might be as follows:
The OS tries to read a logical block.
The controller calculates the ECC and finds that the sector is bad. It reports this finding to the OS.
The next time the system is rebooted, a special command is run to tell the SCSI controller to replace the bad sector with a spare.
After that, whenever the system requests that logical block, the request is translated into the replacement sector's address by the controller.
61) What is the advantage of bit vector approach in free space management
? [R] (APR/MAY 2018)
Advantages of bit vector approach:
Relatively simple
Efficient to find the first free blocks or n consecutive free blocks on the disk
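A small sketch of why the scan is efficient: words that are all 0 can be skipped wholesale, and the first 1 bit in the first non-zero word gives the first free block (using the convention that bit = 1 means the block is free; the bitmap contents are illustrative):

#include <stdint.h>
#include <stdio.h>

int first_free_block(const uint32_t *bitmap, int nwords)
{
    for (int w = 0; w < nwords; w++) {
        if (bitmap[w] == 0)                 /* all 32 blocks in this word are allocated */
            continue;
        for (int b = 0; b < 32; b++)
            if (bitmap[w] & (1u << b))      /* bit = 1 means the block is free          */
                return w * 32 + b;
    }
    return -1;                              /* no free block */
}

int main(void)
{
    uint32_t bitmap[3] = {0x0, 0x0, 0x00000100};   /* illustrative free-space bit vector */
    printf("first free block = %d\n", first_free_block(bitmap, 3));
    return 0;
}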
5) Define DNS[R]
The Domain Name System (DNS) provides host-name-to-network-address
translations for the entire Internet. Before DNS became widespread, files
containing the same information were sent via e-mail or ftp between all
networked hosts.
6) What is virtualization? [R]
Virtualization, in computing, refers to the act of creating a virtual (rather than
actual) version of something, including but not limited to a virtual computer
hardware platform, operating system (OS), storage device, or computer network
resources.
7) What is pluggable authentication modules [R]
The pluggable authentication modules (PAM) system is based on a shared library
that can be used by any system component that needs to authenticate users. An
implementation of this system is available under Linux. PAM allows
authentication modules to be loaded on demand as specified in a system-wide
configuration file. If a new authentication mechanism is added at a later date, it
can be added to the configuration file, and all system components will
immediately be able to take advantage of it. PAM modules can specify
authentication methods, account restrictions, session setup functions, and
password-changing functions (so that, when users change their passwords, all the
necessary authentication mechanisms can be updated at once).
12) Enumerate the requirements for Linux system administrator. Brief any
one.
(Nov/Dec 2015) [An]
1. While specific knowledge is a boon, most hiring managers require that you
possess basic knowledge about all aspects of Linux. For example, a little
knowledge about Solaris, BSD or various flavors of Linux never hurt anyone!
2. Knowledge in at least one of the upper tier scripting language is a must. You
have options before you, for instance, Python, Perl, Ruby or more, but you need to
make yourself proficient in at least one of them.
3. Experience is welcome, but you at least need to have some hands-on
experience of system management, system setup and managing Linux or Solaris
based servers as well as configuring them.
17) Mention any two features of Linux file systems. (APR/MAY 2017)[U]
Portable
Open Source
Multi user
Multi Programming
18) Enlist the advantages of using kernel modules in Linux (APR/MAY
2017)[U]
Loading and unloading a module is much more flexible and faster than
recompiling a kernel and rebooting.
You can try different options each time you load a module. Most drivers
that handle hardware will take options for I/O addresses, IRQ or DMA
numbers, plus more esoteric options like full or half duplex. When you have
problems getting a card to run correctly, the ability to try different options
can save hours.
Makes it easier to maintain multiple machines on a single kernel base.
20) What is handle? How does a process obtain a handle? [U] (APR/MAY
2018)
The handle is an integer value in the operating system assigned to the associated
process when the process was started. The system uses this handle to keep track
of process attributes. An application can obtain a handle to a process that can be
used as a parameter to many process-information and control functions.
24. Which layer of iOS contains fundamental system services for apps?
The Core Services layer contains the fundamental system services that all applications use. It provides an abstraction over the services provided in the Core OS layer, gives fundamental access to iOS services, and consists of the following components:
Collections
Address Book
Networking
File Access
SQLite
Core Location
Net Services
Threading
Preferences
URL Utilities